Science in the Humanities: Obtaining a True Text


As I was reading Kelemen’s Textual Editing and Criticism, I was struck by two things: I would not want to be the person diagramming manuscripts, and textual editing is intriguingly scientific. The Oxford Dictionary of English defines the scientific method as “a method of procedure that has characterized natural science since the 17th century, consisting in systematic observation, measurement, and experiment, and the formulation, testing, and modification of hypotheses.”[1] Focusing just on the Lachmannian method and cladistics, we can see a blend of science and the humanities. In its basic form, the Lachmannian method consists of constructing a family tree that shows similarities and differences among texts, eliminating manuscripts that are essentially duplicate copies, reconstructing the archetype, and emending the reconstructed text where there are no clear answers.[2]


Figure 1. Basic family tree using Lachmannian method.

Cladistics, broken down, is a system that uses computer technology to track down the archetype by creating multiple family trees and then choosing the most suitable one (usually the one that is closest to all of the witnesses, or documents, of a text).[3] As illustrated in the figure below, there are multiple clades, which represent the witnesses or documents of a text. After the discrepancies between witnesses are examined, a hyparchetype is inferred, which is then compared with other hyparchetypes to point toward an archetype, or original text.


Figure 2. Basic outline for family tree in cladistics.
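The “choose the tree closest to all of the witnesses” step can be illustrated with a toy sketch. This is not any editor’s actual software, and the witness readings below are invented for illustration (loosely echoing the “wun/dini” vs. “wun/dmi” debate discussed later); the sketch only shows the general idea of scoring candidate reconstructions by their total distance to the surviving witnesses and picking the closest.

```python
def distance(a, b):
    """Count word positions where two readings differ."""
    return sum(x != y for x, y in zip(a.split(), b.split()))

def best_candidate(candidates, witnesses):
    """Pick the candidate reading with the smallest total distance
    to all witnesses -- a crude stand-in for choosing the family
    tree 'closest to all of the witnesses'."""
    return min(candidates, key=lambda c: sum(distance(c, w) for w in witnesses))

# Hypothetical witness readings of a single line (invented for illustration).
witnesses = [
    "tha wun dini on wealle",
    "tha wun dmi on wealle",
    "tha wun dini on walle",
]
# Here the candidate archetypes are simply the distinct witness readings.
print(best_candidate(witnesses, witnesses))  # → tha wun dini on wealle
```

Real stemmatic software works on whole collations rather than single lines, of course, but the sketch makes the essay’s point concrete: the computer can rank candidates by agreement, yet a human still chose the distance measure and the candidate pool.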

Both of these methods observe the various documents for a text, measure how close they are to one another, experiment with which seems to be the best text, and re-test to find a different text if there is no agreement on which one is best. I think the argument can be made for textual editing belonging to science, but here is a more interesting conundrum: do these scientific methods help textual editors obtain a “truer” text?

Though there is potential for the people behind reconstructed texts to skew the results, combining humanities-based methods with scientific ones introduces the objective act of gathering data from texts and codifying them. But then the question is “are these results accurate when people are still involved?” Textual editors working with the Lachmannian method or cladistics have no choice but to use their discretion, because they often work with manuscripts that no longer exist or exist only in a damaged state. For instance, the Beowulf manuscript dates back to about the tenth century and has been damaged; there is not even agreement on when it was written.[4] R. D. Fulk writes on the many discrepancies in the Beowulf text: “although paleographical concerns must remain paramount in the interpretation of the remains of damaged words and letters, other considerations must not be ignored, including uneven reliability of the testimony of the first modern witnesses to the text.”[5] Despite recent technology that has allowed editors to examine images of the manuscript under ultraviolet light and to access those images electronically, there are plenty of discrepancies that Fulk highlights.
Just one example of a disagreement in spelling is whether the manuscript says “wun/dini” or “wun/dmi.” Of five editors who have worked on Beowulf, one could not decide, three agree that it is “wun/dini,” and another decides that it is “wun/dmi.”[6] There is conjecture about what the manuscript actually says: under both ultraviolet and normal light the text appears to read “wun/dini,” which could be an archaism from Old English, or the text may simply have faded.[7] Regardless of this disagreement, most editors seem to agree that the word needs to be emended, just not on how it should be emended.[8] This presents a case where the scientific method provides data on the text, but the editor’s decision is what determines the text, especially since the text relies on the editor to decide whether there is an “n” or an “m.” This could change the meaning of the text, and the scientific method could not help the text escape the editor’s decision.

Another issue with stemmatics/cladistics, aside from working with texts that rely on the editor’s decision, is its methodological weaknesses. For example, cladistics only allows for two branches at a time in the family tree, whereas there could be many more than two copies made from a hyparchetype.[9] This is where the “human judgment of the copyist comes into play,” which results in the same problem discussed earlier.[10] In addition to the limited structures for family trees, the computer is unable to process versions of a text that may be similar yet unrelated.[11] This is a flaw in methodology, supporting the claim that using the scientific method cannot help us obtain a “true” text. However, these weaknesses are fully acknowledged: “In building its reconstruction, it acknowledges, moreover, that ‘no such work,’ as Tanselle puts it, ‘is ever definitive’.…it admits a kind of failure up front and in a sense, this is part of its strength and part of its scientific character. Its scientific approach offers not the original text…but a clear demarcation of the limits of our knowledge.”[12] This acknowledgment is similar to the limitations section of a research study: the researcher acknowledges the limits, but that does not take away from the results that were produced. There is value in the work that goes into creating critical editions even if the methods for producing an original text are not perfect, though this still may not be a way to overcome the subjectivity of an editor’s decision.

Despite these methodological flaws in textual criticism, the scientific method does have value in strengthening the argument for a reconstructed text. The flawed human element is backed up by multiple datasets showing how closely a “true” text has been achieved; anything short of a séance with the author will only give editors support for choosing a certain text. We may never be able to reconstruct a text perfectly, but we can come close and have the data to support it. We can produce a lot of good data using these advanced methods in textual editing, but anything beyond the data that points toward the best text is the editor’s speculation about the original. So while it is not futile to use these methods, it is important to remember that these critical editions are the editor’s arguments for the text that they think represents the original. The blend of science and humanities can take us far in our search for a true text; we just need to remember that it is an ongoing journey.


  1. “scientific method.” Oxford Dictionary of English, accessed March 16, 2015.
  2. Erick Kelemen, Textual Editing and Criticism: An Introduction (New York: W. W. Norton & Company, 2009), 84-85.
  3. Ibid., 96-97.
  4. Kenneth Sisam, “The ‘Beowulf’ Manuscript,” Modern Language Review (1916): 336.
  5. R. D. Fulk, “Contested Readings in the ‘Beowulf’ Manuscript,” Review of English Studies 56, no. 224 (2005): 192.
  6. Ibid., 195.
  7. Ibid., 196.
  8. Ibid.
  9. Erick Kelemen, Textual Editing and Criticism, 98.
  10. Ibid., 101.
  11. Ibid., 98.
  12. Ibid., 101.


Fulk, R. D. “Contested Readings in the ‘Beowulf’ Manuscript.” Review of English Studies 56, no. 224 (2005): 192-223.

Kelemen, Erick. Textual Editing and Criticism: An Introduction. New York: W. W. Norton & Company, 2009.

“scientific method.” Oxford Dictionary of English. Edited by Angus Stevenson. Oxford University Press, 2010.

Sisam, Kenneth. “The ‘Beowulf’ Manuscript.” Modern Language Review 11, no. 3 (1916): 335-37.




  1. Thanks for your post, Angie! Your research question is so intriguing: if we view textual editing as a science, how can we determine whether this science can find or determine the truest version of a text? This is such a fascinating question because so much subjectivity is involved; the scientific method(s) used by textual editors seems subjective to some degree because it is conducted by individuals who often have to make choices about which elements are most “correct,” not to mention how subjective the notion of a “true” text is! Based on the discussions we have had in our class meetings, there are many variables and many contradictory—yet equally valid—arguments for which text is best or closest to authorial intent. While the methods outlined in the Kelemen chapter that you discuss certainly seem to give us a chance at understanding the various stages of a text’s history and making informed decisions about how the author (and various editors over time) intended for a text to be received, I think you are correct to point out that textual editing is nevertheless a human process with human potential for errors and biases.

    I can relate to your story about conducting psychology experiments as an undergrad (it is definitely difficult to obtain statistical significance and to increase the power of your experiment when participants cannot often be bothered to…well, participate!), and your discussion makes me wonder if the idea of a larger sample size could apply to textual editors’ search for the truest text. That is, textual editors already seem to strive for the most representative (and often large) sample size of copies of a text in order for their collations to yield the most accurate results. Leaving out a set of copies/manuscripts/drafts/editions might, after all, omit important information about the text’s history from the results. However, it seems possible that it would also be helpful for textual editors to strive to eliminate the “human problem” you mention by increasing the sample size of textual editors themselves. Perhaps if a number of textual editors could collaborate when examining a text’s history, the group would be better able to pinpoint the author’s intent or the truest copy of the text. Of course, subjectivity would still affect the process. But if a group of editors is able to come to a consensus about which text is truest, perhaps that group decision can be viewed as somewhat less subjective than a decision made by a single textual editor.


  2. Angie,
    Your post is so intriguing, and you pose such an important question to the textual editing process: “does using these scientific methods in textual editing help obtain a ‘truer’ text?” Certainly the methods by which some textual editors go about determining the “best text” can be classified as “scientific,” but even the sciences, as you aptly note, consist of people creating, recording, and interpreting each experiment. Even computer code, which we might assume operates objectively, is first crafted by software engineers. You are sure to remind us of this human element, and I agree with you when you claim “the scientific method cannot eliminate the human problem in humanities, especially since textual editing is a human process.”

    Your post reminded me of the recent “distant reading” trend (and debate) in literary studies. Distant reading claims that analyzing a text using scientific methods (testing hypotheses, writing computer code, quantifying particular elements of a genre) will help give scholars a “truer” understanding of literature and the scope of literary studies. Opponents of distant reading claim that these scholars are not, in fact, reading these books but are instead relying on computers and quantitative data to make qualitative conclusions. However, I think that your point about the science behind textual editing (the “human element”) can also be applied to this distant reading trend. Whether we’re compiling textual editions or analyzing literature through data-mining software, data (quantitative or qualitative) still needs to be interpreted by a reader. Thanks for your analysis!


  3. Linda Wetherall May 14, 2015 — 1:07 am

    The thesis of your post really resonated with me because both of my parents are scientists, so I witness the melding of English- and science-oriented ways of thinking in my household on a daily basis. By using science to examine a humanities quandary, you achieved a truly fascinating effect that made your post so enjoyable to read.

    This notion of “achieving the true text” is a complex one that I doubt will ever be truly solved, because it is based on opinion rather than the fact that science depends upon. However, the scientific approach does offer data and evidence about what “should be” the true text according to that field, which adds another layer to this already complicated debate. Your post also makes the strengths and downfalls of both fields much clearer. Science can certainly add to the debate, but it cannot provide an absolute answer to a debate in the humanities, since the humanities rely on so many aspects other than observable fact.

    This is the intellectual tension I witness in my house every time we have a debate or a discussion about social issues. We never reach an absolute answer or resting point in these debates; yet my parents give me a unique perspective I would never have considered before, and I hope I do the same for them, which is, in my opinion, much more rewarding than an absolute answer would be.

    Thank you so much for a wonderful post.

    Linda Wetherall


  4. Arianne Peterson May 14, 2015 — 6:46 pm

    Angie, this topic seems really important right now, as technology is enabling more innovations in quantitative literary analysis, and I appreciate your sharing it with us. It seems to me that we haven’t quite grasped the kinds of scientific tools that would really produce great insights in this kind of literary application. Based on your piece, borrowing the Lachmannian method and cladistics from the life sciences isn’t quite the right fit for textual history. As everyone in this discussion acknowledges, scientific methods will never bring us to a completely firm, objective truth; I feel like we’ve only just begun to understand how to study texts in a quantifiable way. The opportunities for creative, scientific approaches to textual history seem almost limitless if we could step outside traditional, discipline-defined modes of thinking. For example, I really like Pearl’s idea of increasing the sample size of editors to enhance the accuracy of a textual study. I was also inspired by Grace’s really exciting work, presented at the April graduate conference, on quantifiably tracing an author’s gendered use of emotion within a text. This is an exciting time to be studying literature, because there is so much room for the invention and application of new methodologies, and those of us who are new to the field may have a creative advantage in not being limited by pre-existing methodological expectations.

    I am really interested in the idea of a methodology for exploring textual history that would trace and display the connections among different versions of a text as the object of study themselves—rather than just finding one objectively “best” text. Using the Beowulf example, wouldn’t it be most useful to future readers if we could find an efficient, elegant way to represent the “wun/dini or wun/dmi” debate as an open question, rather than a decided issue? Across all academic disciplines, it seems like such a taboo to admit that sometimes, we just don’t know the answer; therefore we don’t yet have many scientific methods that allow for that kind of conclusion.

