
"What Does Digital Methodology Mean and How Can it Contribute to Humanities' Scholarship?" by Gabriel Egan

I hope we can all agree that the Humanities is the study of human culture and history, certainly encompassing the topics of languages and literatures, philosophy, and history, and perhaps also including laws, politics, religion, geography, and various forms of art. We do not need to pin down a precise definition of the Humanities and I doubt that we could all agree on one, so long as we agree that the Humanities is not the Sciences, where they do things very differently from us. That is an important distinction for what I wish to talk about, since looked at in one way the Digital Humanities can seem like an encroachment of the Sciences upon the Humanities. I hope to show that at various points the Humanities versus Sciences distinction that I am starting with breaks down. I must confess that my thinking about this topic, and all my examples this evening, come from the textual side of the Humanities and that I ignore the non-verbal. The only excuse I can offer is that I do not really understand the non-verbal.

    Although aspects of Humanities scholarship go back to the ancient Greeks and Romans, I locate the distinctive methodologies of the Humanities in the technological developments of the European Renaissance that started in Italy in the fourteenth century and ended some time in the seventeenth century. I use the term 'Renaissance' self-consciously because I do not believe that this term anachronistically overstates the early modern period's break from the preceding late Middle Ages. For my purposes, there was such a break, because I am concerned with the development of writing technology. I broadly accept Elizabeth Eisenstein's claim, in The Printing Press as an Agent of Change, published in 1979, that the Renaissance was a distinctly textual phenomenon and a distinctly technological one. The technology in question is of course printing using movable type, first developed here in Mainz by the man after whom this university is named, Johannes Gutenberg.

    As a Marxist, I consider the advancement of technologies -- and especially the technologies of writing -- to be fundamental to human historical progress. Technology has increased the amount of writing in circulation by lowering the cost of creating, disseminating, and consuming it. This is inherently emancipatory technology, although of course (like writing itself) it can be put to evil ends. From this technological point of view, there are just two central revolutions in the past two thousand years: the invention of printing by movable type by Gutenberg, and almost exactly 500 years later the invention of the computer. Both technologies lowered the cost of writing by several orders of magnitude.

    Looked at as machines for making writing more cheaply and widely available, the printing press and the computer are essentially equivalent and they are equally worthy of our attention as Humanists. But aside from lowering the cost of writing, the printing press did not substantially change what we can do with writing. Of course, lowering the cost of something can of itself be highly disruptive. The economics of text creation and dissemination using computers are, right now, as disruptive to the habits of mind accustomed to the economics of print publication as those of print publication were to the habits of mind accustomed to manuscript culture. In the world of scholarly publication, the disruption is being felt in the Europe-wide Plan S to make all research that is paid for by the citizens of Europe free at the point of reading. In the fifteenth century, printing made writing much cheaper than it had been and today the computer is doing the same in a similarly disruptive way. But computers also allow us to do things with texts that cannot be done with printed works, and that is not true of the previous change from manuscript to print culture. To explore the new things that computers allow us to do, we must consider the origins of the computer.

    In 1945 John von Neumann came up with a revolutionary innovation in the design of the newly invented digital calculating machines (Von Neumann 1945). Virtually every digital computer since then has been a "Von Neumann" machine and nothing essential has changed: they have simply become faster each year. Von Neumann's revolutionary idea was to put the data being worked upon and the instructions doing the working into the same storage medium, the same memory space, rather than keeping them separate. In this view, instructions are just data -- specifically, data about what the machine should do next -- and conversely data can be expressed as instructions, so for example instead of storing the number pi as a constant it can be algorithmically calculated afresh each time it is needed. With Von Neumann's abstraction of data and instructions into interchangeable numbers, the promise of the Universal Machines described theoretically in Alan Turing's celebrated paper of 1937, "On Computable Numbers", was realized (Priestley 2011, 126-30; Turing 1937).

    The use of the same storage medium for data and instructions means that the interpretation of a set of binary digits inside the computer is utterly dependent on its context [SLIDE]. Inside a computer a pattern of binary digits -- stored as electrical charges or magnetic polarities or holes in a punched card and so on -- such as 0100 0011 has the meaning of the decimal number 67 (say, a person's age in years) in one context, but in another context it is an instruction to the processor (say, to move data from one part of the processor to another), and in another context again it will be the upper-case letter "C". The last context seldom arose in the earliest computers because they seldom held or processed textual data: computers were built by mathematicians for processing numbers. But it became apparent quite soon after the Second World War that for entering into the computer the instructions and data needed for mathematical calculations, and for getting the results back, [SLIDE] the easiest method would be to adapt the existing technology of the teleprinter. This is essentially an electric typewriter that converts keystrokes into binary-coded pulses a computer can receive, and converts the pulses the computer sends back into printed characters.
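
    Here is a minimal sketch of that context-dependence in Python (my own illustration; no historical machine is being quoted). The same eight bits are read in different ways:

byte = 0b01000011          # the bit pattern 0100 0011

print(byte)                # read as an unsigned integer: 67
print(chr(byte))           # read as a character code: 'C'
# Read as an instruction, the very same number would be decoded by a
# processor as an opcode (for example a register-to-register move on
# some machines); which operation it names depends entirely on the
# instruction set, not on the bits themselves.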

    [SLIDE] Mathematicians using computers in the 1940s also soon realized that their existing notations for expressing the problems they wanted computers to work upon were especially poor for communicating with a machine. [SLIDE] The high-level computer programming language Fortran was invented by John W. Backus in the 1950s specifically for Formula Translation, for expressing mathematical formulas using just the characters available on the keyboard of an electric typewriter. Fortran's essential feature was that it turned the shorthand symbolic language of mathematicians into the everyday language of verbs and nouns and conditionals: READ, DO, GO TO, STOP, IF, and so on. In developing high-level programming languages, computer scientists had to find ways to replicate in a computer the processes by which a human reader parses sentences. Backus turned to the notation that Noam Chomsky invented to represent what he called context-free grammars that model how speakers produce natural language (Chomsky 1959). [SLIDE] The Backus-Naur notation that is now used to express the syntax and generative rules of all computer languages is nothing more than a rewriting of Chomsky's notation. Moreover, Noam Chomsky's hierarchy of language grammars (Chomsky 1956), published in 1956, is compulsory reading in computer science because it describes the hardware requirements -- cognitive or electronic -- needed for the processing of the increasingly complex computer languages that were developed in the 1960s and 1970s. This brings me to my first main point in this talk. The concerns of computer science and the Humanities do not merely overlap, they are intertwined. Our attempts to make sense of language have been essential to the progress of computing technologies over the past 70 years.
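
    To make the kinship concrete, here is a toy context-free grammar written out in Python (my own illustration, not Backus's or Chomsky's notation, and the symbols and words are invented for the example). Each rule expands a symbol into alternatives, in the spirit of Backus-Naur rules, and the phrase symbols reappear inside their own expansions:

import random

# A toy grammar: each symbol maps to its possible expansions.
grammar = {
    "SENTENCE":    [["NOUN-PHRASE", "VERB-PHRASE"]],
    "NOUN-PHRASE": [["the", "NOUN"],
                    ["the", "NOUN", "that", "VERB-PHRASE"]],
    "VERB-PHRASE": [["VERB"], ["VERB", "NOUN-PHRASE"]],
    "NOUN":        [["scholar"], ["machine"], ["text"]],
    "VERB":        [["reads"], ["parses"], ["prints"]],
}

def generate(symbol):
    """Expand a symbol into a list of words, recursively."""
    if symbol not in grammar:          # a terminal word: return it as-is
        return [symbol]
    expansion = random.choice(grammar[symbol])
    words = []
    for part in expansion:
        words.extend(generate(part))   # phrases may contain phrases
    return words

print(" ".join(generate("SENTENCE")))  # e.g. "the scholar reads the text"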

* * *

    Yet computing began as an activity for mathematical research rather than language research. The 1930s were a time of severe shock and disappointment as two mathematicians, Kurt Gödel and Alan Turing, independently showed that much of what had been hoped for the subject's development was not going to be possible. The obstacle was the phenomenon of self-referentiality. In philosophy and language studies it is well known that sentences can refer to other sentences, as with [SLIDE] "The following sentence is false", which can be made to 'point' to obviously false sentences such as [SLIDE] "Rabbits are a type of fish" and [SLIDE] "The Earth is flat". [SLIDE] But if the succeeding sentence refers back to the first one, it is possible to create self-referential loops of paradox. [SLIDE] If the first sentence here is true, then the second sentence is false, but if the second sentence is false then what it claims, that the first sentence is true, is false, and hence the first sentence is false. Thus if the first sentence is true it must be false. These loops are easily avoided in language. What Kurt Gödel showed in his Incompleteness Theorem is that such paradoxical loops are unavoidable in any system of mathematics powerful enough to make useful assertions about, for example, the natural numbers.

    To see why, consider how we might want to define a concept such as 'ancestry' [SLIDE], as in the assertion that 'Person A is an ancestor of Person B'. This is the sort of relationship that historians deal with all the time and if we have a large digital representation of a group of people we will want some way of defining the relationship precisely so that a machine may look for it in our data. We could try listing all the conditions under which it would be true to say that 'Person A is an ancestor of Person B'. It is true if Person A is a parent of Person B, or if Person A is a parent of a parent of Person B, or if Person A is a parent of a parent of a parent of Person B, and so on. This definition is necessarily incomplete, as shown by the three dots, meaning "and so on", in my slide. A better definition would use the notion of self-reference, which in computer science is called recursion: Person A is an ancestor of Person B if A is a parent of B or an ancestor of a parent of B. I will say that again because it is tricky. Person A is an ancestor of Person B if A is a parent of B or an ancestor of a parent of B. [SLIDE] We can define the notion of being an ancestor in terms of itself so long as we have a terminating case, and we can write it as a 'function', a set of simple steps taken in turn that result in the answer 'True' or 'False' [SLIDE]. Here are the steps. If the person we want to find the ancestor of, Person B, is [SLIDE] "Adam" (the first human) then our test returns the answer "False" since Adam has no ancestors. But otherwise [SLIDE] we ask if A is the parent of B. If not, we ask [SLIDE] if A is an ancestor of B's parent. Notice that the definition of 'ancestor' itself invokes the notion of 'ancestor'. To find out if A is an ancestor of the parent of B, we re-enter the same "Ancestor()" function but this time with a new value for B: the original B's parent. The second time through the function, we ask if A is the parent of this new-B (that is, whether A is original-B's grandparent). If not, we ask if A is an ancestor of new-B's parent. And so we enter the function again, using as new-new-B the parent of new-B, or original-B's grandparent. Again we ask if A is the parent of B -- now we are asking if A is original-B's great-grandparent -- and if not we ask if A is an ancestor of new-new-B, and round we go again. Recursion is a way of looping around, asking the same question time and time again as we step our way up through the family tree. This process terminates when we find an ancestor or we reach Adam at the top of the tree having not found an ancestor. Our function will always terminate with an answer.
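
    For anyone who would like to see those steps as working code, here is a minimal Python rendering of the Ancestor() function (my own sketch; the family tree is invented for the example and, for simplicity, it records a single parent per person):

# A toy lineage: each person maps to one recorded parent (None for Adam).
parent_of = {
    "Adam": None,
    "Seth": "Adam",
    "Enos": "Seth",
    "Cainan": "Enos",
}

def ancestor(a, b):
    """Return True if person a is an ancestor of person b."""
    if parent_of[b] is None:             # b is Adam: nobody is his ancestor
        return False
    if parent_of[b] == a:                # a is b's parent
        return True
    return ancestor(a, parent_of[b])     # else ask: is a an ancestor of b's parent?

print(ancestor("Adam", "Cainan"))   # True
print(ancestor("Cainan", "Seth"))   # False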

    Recursion is a powerful tool for defining relationships and it is widely used in mathematics and computer science because many everyday relationships simply are recursive. If you substitute an organization's hierarchy chart for the family tree, the Ancestor() function will tell you if two people are in the same chain-of-command. [SLIDE] In Noam Chomsky's transformational generative grammar, recursion is the reason that we can generate natural language sentences of any desired length and complexity: a Verb-Phrase may itself consist of Verb-Phrases, a Noun-Phrase may consist of Noun-Phrases. The genetic instructions for making biological structures such as trees also use recursion, as does almost anything that displays such self-similarity, where the structure of the smaller parts mirrors the structure of the larger parts. But in mathematics this self-referring recursion can be problematic. [SLIDE] Suppose we have a list of functions that do simple things to numbers. So the 'Double-it' function multiplies its input by two, the 'Square-it' function multiplies its input by itself, and so on. We can put these functions into a list in, say, the order of how big the computer program would be that implements each function. Any function of this kind that we care to define would find a place somewhere in the list, according to how long its computer program would be, so that the list is infinite and accommodates all possible functions.

    Consider the new function One-more-than(x) [SLIDE] which takes its input x and finds the function in the list that has the index value x, calculates its result, and adds one to it. So, if x is 2, One-more-than(2) refers us to function #2, [SLIDE] the Square-it function, which returns 4 to which One-more-than adds 1 to make 5 [SLIDE]. If x is 500, One-more-than(500) refers us to function #500 [SLIDE], the Deduct-a-fifth function, which returns 400 to which One-more-than adds 1 to make 401 [SLIDE]. Our list of functions is meant to include all possible functions, so there must be a place in it somewhere for the function One-more-than. But on reflection, there cannot be a place for it in the list. [SLIDE] Suppose its place, on account of the length of its implementation in software, is #605, so we put it there in the list. Now we can ask 'What is the value of One-more-than(605)?' [SLIDE] We get the answer by looking up function #605 in our list, which is the One-more-than function itself, and adding one to it [SLIDE]. But that is recursion without any terminating condition, so either we stop here and say that the result is a contradiction -- no value can equal itself plus one -- or we actually run the One-more-than function again on the value 605 and again get an answer that requires us to run the One-more-than function again, and so on forever. In principle the answer to One-more-than(605) is nonsensical, and as practical recursive software it runs forever without terminating. [BLANK SLIDE]
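
    The self-defeating character of One-more-than can be seen in a short Python sketch (my own illustration; the list of functions here is of course only a fragment, with index numbers taken from the examples above where the talk gives them and invented otherwise):

# A fragment of the imagined numbered list of functions.
functions = {
    1: lambda x: x * 2,           # Double-it
    2: lambda x: x * x,           # Square-it
    500: lambda x: x - x // 5,    # Deduct-a-fifth
}

def one_more_than(x):
    """Look up function number x, apply it to x, and add one."""
    return functions[x](x) + 1

print(one_more_than(2))     # Square-it(2) + 1 = 5
print(one_more_than(500))   # Deduct-a-fifth(500) + 1 = 401

# The paradox: if one_more_than itself were entry number 605, then
#     functions[605] = one_more_than
#     one_more_than(605)
# would have to return its own result plus one. Run as code, it simply
# calls itself forever; as mathematics, no value can exceed itself by one.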

    In 1931, Kurt Gödel showed that a function such as One-more-than cannot be included in an ordered list of functions without creating a paradox, a result that was disastrous for the development of mathematics along the lines that most mathematicians were hoping it would take (Gödel 1931). The key to this was showing that mathematical statements can themselves be encoded as numbers, as in our list of functions in which One-more-than occupies place 605. In 1937 Alan Turing explored the same problem using imaginary computing machines and showed that they could not overcome the limitation Gödel had identified, and that a host of related practical problems arise from Gödel's insight, for example that we cannot create a general-purpose set of instructions (a program, we would say) for testing whether another program will ever finish its execution rather than looping forever (Turing 1937). The key to Turing's insight was that a computer program, a set of verbal instructions, is just a number and hence can be the input to another computer program. At the very origins of computing, then, was a set of insights about the interchangeability of numbers and verbal statements.
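
    The argument about testing for termination can itself be sketched in a few lines of Python (again my own illustration, not Turing's notation; the function halts() is purely hypothetical, which is the point):

def contrary(program):
    """A program we could write if a universal halting tester existed."""
    if halts(program, program):   # hypothetical: would program(program) finish?
        while True:               # if the tester says yes, loop forever
            pass
    else:
        return                    # if the tester says no, finish at once

# Feeding contrary to itself, contrary(contrary) would have to halt exactly
# when the tester predicts it will not, and loop exactly when the tester
# predicts it will halt. The contradiction shows that halts() cannot be
# written as a general-purpose program.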

    And yet, as the new computing machines imagined by Turing were being assembled in reality, John von Neumann proposed that they should use a single memory space that is shared by data and instructions. Knowing the chaos that could ensue from data being able to refer to themselves and from instructions being able to refer to themselves -- that is, the chaos that may arise from the self-referentiality of data and instructions -- this seems a remarkably incautious proposal. It turned out to be an idea of genius. Because the instructions in a von Neumann machine are just numbers, distinguishable from the numbers representing the data only by context, it is possible for a von Neumann machine to alter its own instructions. All von Neumann machines are capable of running instructions that modify themselves, although at first this feature was used only as a way of making possible what are known in computer programming as conditional jumps of the kind "if x = y go to instruction z" (Priestley 2011, 139-42). But the potential for self-referentiality (and hence self-modification) in computer instructions made possible the definition of functions that use self-reference, recursion, in the beneficial way we saw with our Ancestor() function. And more recently, the potential for sets of instructions to modify themselves that was provided by the von Neumann architecture has given us computer programs that teach themselves how to improve their performance, and hence has given us the world of Machine Learning and Artificial Intelligence.

    Turing's response to his own proof of the theoretical limitation to computing power is especially interesting because his insights did not prevent him from playing a crucial role in the development of practical computing machines. After his work on cryptanalysis during the Second World War, Turing turned to two problems of interest to Humanists: 1) can a machine behave in a way that is indistinguishable from human behaviour?, and 2) how do biological processes turn the information of genetic inheritance into the physical shapes of organisms? His interest in the special qualities of human behaviour was the subject of Turing's essay "Computing machinery and intelligence" in which he introduced his famous Imitation Game as a practical test for whether a computer could be said to think (Turing 1950). The test was for the machine to conduct a conversation via teleprinter with a human being who had to decide if the other end of the conversation was being held up by a machine or a person. For Turing it was language generation that provided the hardest challenge for computing machines and gave a practical test for the quality of being human.

    Making sense of human language and generating new sentences have turned out to be among the most interesting problems in computation. Getting a computer to play chess or to drive a car is now essentially a solved problem, but human-like language use remains a significant challenge. When the IBM computer system Watson competed against the world's best human players of the American television quiz show Jeopardy! in 2011, and won, an important threshold towards success at Turing's Imitation Game was passed. Another was passed earlier this year with the public release of the text-generating model called GPT-2 by the OpenAI Corporation, which is able to produce remarkably human-sounding continuations of any prose sentences you give it. GPT-2 relies on a probabilistic model of language use, using a large corpus of existing writing (specifically threads from the Reddit social news aggregator) to decide how likely or unlikely it is that each word in the language will be followed by each of the other words. This is not at all how we would expect a computer system to generate language, but it has turned out to be a remarkably accurate model of what speakers and writers actually do. The same approach is used by the Google Translate service. That leads me to my second main point this evening, which is a restatement of the first: the research concerns of computer scientists and the research interests of Humanists are closely aligned, especially in relation to language problems. Language, we now realize, is the unconquered Everest of computation. The achievements of even our weakest students in making sense of a sonnet or explaining a metaphor or constructing a parody are better than anything computers can do. Unlike students in many other disciplines, students in the Humanities acquire skills that will not be matched by computers for many years to come.
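
    GPT-2 itself is a very large neural model, but the word-following idea just described can be sketched in miniature in Python (my own illustration, with a toy corpus standing in for the vast quantity of web text a real model is trained on):

import random
from collections import defaultdict

# A tiny training corpus, purely illustrative.
corpus = ("to be or not to be that is the question "
          "whether tis nobler in the mind to suffer").split()

# Count how often each word is followed by each other word.
follows = defaultdict(lambda: defaultdict(int))
for w1, w2 in zip(corpus, corpus[1:]):
    follows[w1][w2] += 1

# Generate a continuation by repeatedly choosing a probable next word.
word, output = "to", ["to"]
for _ in range(8):
    candidates = follows.get(word)
    if not candidates:
        break
    word = random.choices(list(candidates), weights=list(candidates.values()))[0]
    output.append(word)
print(" ".join(output))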

* * *

    In this last third of the talk I want to try to offer anyone who does not use digital methods some reasons why they might do so and to give a sense of what those methods could consist of in particular fields. These digital methods depend on the extra opportunities that digital texts give us that printed texts do not, arising from the ways that the computer revolution in textuality is unlike the preceding movable-type revolution that Gutenberg gave us. They arise, in essence, from the fact that digital texts can be searched and can be sorted without a reader first having to read them.

    In the late 1990s my university paid to subscribe to the Literature Online (LION) database from Chadwyck-Healey. Wanting to exploit this new acquisition and convince the university that its expenditure was wise, I devised an undergraduate English Literature course focussing on searching LION in increasingly sophisticated ways. Chadwyck-Healey generously sent an expert on LION's search engine, Dan Burnstone, to join me in teaching the classes. In the first class, Dan demonstrated searching within all the plays in LION for a character with a particular name, and he chose the name 'Hamlet'. Expecting to find only Shakespeare's Hamlet, Dan was astonished to receive this hit [SLIDE]:

Enter Hamlet a footeman in haste.

Ham. What Coachman? my Ladyes Coach for shame; her ladiships ready to come downe;

Enter Potkinn, a Tankerd bearer.

Pot. Sfoote Hamlet; are you madde? whether run you now you should brushe vp my olde Mistresse?

(George Chapman, Ben Jonson, and John Marston Eastward Hoe, STC 4970, 1605)

Only a specialist would know that shortly after Shakespeare's play Hamlet was first performed, his fellow dramatists George Chapman, Ben Jonson, and John Marston parodied its mad prince by giving the same name to a mere household servant who dashes madly about the stage in their play. In a digital corpus such as Literature Online, all writings are on an equal footing: a digital search finds strings of letters no matter where they occur and is blind to the difference between well-known canonical writings and more obscure works. Digital methods are immune to the effects of canonicity. That may well be an advantage in some investigators' research.

    [BLANK SLIDE] About 20 years ago I worked at the Shakespeare's Globe replica theatre in South London and often heard actors and directors remark that in Shakespeare's time people would say that they went to the theatre to hear a play rather than, as we say now, to see a play. This claim was also made by theatre historians. John Orrell, Bruce R. Smith, Andrew Gurr, and Mariko Ichikawa all wrote that to the discriminating playwrights hearing was more important than seeing. But did people really call it "hearing a play" rather than "seeing a play"? It occurred to me that the newly available Literature Online could help verify this claim, so I set about counting the occurrences of the expressions "see a play", "sees a play", "saw a play", "seeing a play", "seen a play", "see the play" (or "see the plays"), "sees the play" (or "sees the plays"), "saw the play" (or "saw the plays"), "seeing the play" (or "seeing the plays"), and "seen the play" (or "seen the plays"), and comparing them with the occurrences of "hear a play", "hears a play", "heard a play", and so on, for all the possible verb tenses and numbers (Egan 2001).
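
    Literature Online is searched through its own interface, but the kind of counting involved is easy to sketch in Python for a hypothetical folder of plain-text transcriptions (the folder name and the handling of spelling variants here are illustrative only):

import re
from pathlib import Path

# Patterns for the tense and number variants of the two idioms;
# early modern spelling variation would need fuller treatment.
seeing = re.compile(r"\b(see|sees|saw|seeing|seen)\s+(a|the)\s+playe?s?\b", re.I)
hearing = re.compile(r"\b(hear|hears|heard|hearing)\s+(a|the)\s+playe?s?\b", re.I)

see_count = hear_count = 0
for path in Path("corpus").glob("*.txt"):      # hypothetical corpus folder
    text = path.read_text(encoding="utf-8")
    see_count += len(seeing.findall(text))
    hear_count += len(hearing.findall(text))

print("seeing a play:", see_count, "   hearing a play:", hear_count)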

    The hits were copious, with over 100 occurrences of these phrases across the Shakespearian period, and the results were clear: more than nine times out of ten early modern people called it seeing a play (as we do) rather than hearing a play. There were a few examples of hearing a play, and among them Shakespeare was unusually prominent: he uses versions of this phrase once in Hamlet, and three times in The Taming of the Shrew. That is why theatre historians and theatre practitioners have assumed that hearing a play was the normal idiom of the time: they have taken Shakespeare's unusual choice of words as normal.

    A quantitative study that is blind to canonicity helps us avoid such mistakes. When we have digital texts, not only are all texts placed on an equal footing but also all words within a text are placed on an equal footing and it becomes possible to attend to words that critics usually ignore. As John Burrows put it in a groundbreaking study [SLIDE]:

It is a truth not generally acknowledged that, in most discussions of works of English fiction, we proceed as if a third, two-fifths, a half of our material were not really there. (Burrows 1987, 1)

Burrows is referring to the function words, the short common words such as 'the', 'and', and 'on', that make up about half of all that we say and write but on which literary criticism has made almost no comment. Burrows showed that these words matter for how characters are created and how stories are told, and he was able to do that only because he had digital texts and machines to count the words. [BLANK SLIDE]
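
    The counting that underlies such a study is simple once the texts are digital. Here is a minimal sketch in Python (the word list is a small sample of my own choosing, and the file name is hypothetical; Burrows's own method goes much further than raw counts):

import re
from collections import Counter

# A handful of function words; Burrows's own study used a much longer list.
function_words = {"the", "and", "of", "to", "a", "in", "that", "it", "on"}

text = open("novel.txt", encoding="utf-8").read()   # hypothetical plain-text novel
words = re.findall(r"[a-z']+", text.lower())
counts = Counter(w for w in words if w in function_words)

total = len(words)
for word, n in counts.most_common():
    print(f"{word:>5}: {n}  ({n / total:.2%} of all words)")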

    My study about hearing or seeing a play was undertaken nearly 20 years ago, and I think it was the first to employ a digital-corpus approach to a problem in theatre history. There have been many more digital-corpus studies since then, illustrating that such methods usefully allow us to focus on the typical rather than the exceptional cases in our data. In some areas of Humanities research, the move away from authorial-canonical cases to typical cases was already happening in traditional studies. In theatre history the recent trend has been away from authors altogether so that drama is treated as a collective, socialized form of creativity not an individual, authorial one. There has been a string of books taking the theatre company rather than the author as the focus of attention: Scott McMillin and Sally-Beth MacLean's study of the Queen's Men (McMillin & MacLean 1998); Andrew Gurr's studies of the Chamberlain's/King's men (Gurr 2004) and the Admiral's men (Gurr 2009); Lucy Munro's study of the Children of the Queen's Revels (Munro 2005); Helen Ostovich, Holger Schott Syme, and Andrew Griffin's study of the Queen's men (Ostovich, Syme & Griffin 2009); Eva Griffith's study of the Queen's Servants at the Red Bull (Griffith 2013); and Lawrence Manley and Sally-Beth MacLean's study of Lord Strange's men (Manley & MacLean 2014). To varying degrees, each of these books goes beyond a merely historiographical narrative of company formation, practice, and demise in order to assert that company identity itself was expressed in the repertory.

    The idea of theatre-company style is popular in theatre-historical work, but is it true? The recent book Style, Computers, and Early Modern Drama by my learned colleague Brett Greatley-Hirsch and Hugh Craig used 39 plays from across the theatre companies of the 1580s and 1590s and showed that computational tools successful in distinguishing authorial style, genres, and chronology can find no evidence for the existence of distinctive theatre-company styles (Craig & Greatley-Hirsch 2017, 164-201). Thus a newly born myth was exploded. Other myths are being exploded by digital methods. It has generally been assumed that one of the things that makes Shakespeare special is the size of his vocabulary, that he simply knew and used more words than other writers. This claim has recently been tested by Hugh Craig, who showed that whatever we think of Shakespeare's exceptional writing, it was not the case that he used lots of words that other people were not using; in fact his vocabulary is remarkably unexceptional (Craig 2011; Rosso, Craig & Moscato 2009).

    Recent theatre historiography and textual criticism has followed the wider trend in Anglo-American literary studies since the 1960s, arising from French poststructuralist theory, of diminishing the importance of the author and instead emphasizing the collaborative, socialized labours of the actors in theatre companies, and of the scribes and compositors, whose effects upon the surviving script are treated as though they are nearly as important as the author's labour. Because it is difficult to differentiate these various inputs when studying their collective output in an early printed play, it is sometimes said to be impossible to do so. Jeffrey Masten insisted that attempts to attribute parts of co-written plays to their respective co-writers are bound to fail because ". . . the collaborative project in the theatre was predicated on erasing the perception of any differences that might have existed, for whatever reason, between collaborated parts" (Masten 1997, 17). Since Masten wrote that 22 years ago, extraordinary successes in the field of computational stylistics have illustrated the importance of authorship in the teeth of postmodernism's denial of it. It turns out that authorship is indeed individualistic and discernible, and not at all a post-seventeenth century invention.

    Hugh Craig makes this point pithily:

In the case of authorship, statistical studies might have revealed -- were free to reveal -- that authorship is insignificant in comparison to other factors like genre or period. In that case the theory that authors are only secondary to other forces in textual patterning would have been validated. . . . As it happens, however, authorship emerges as a much stronger force in the affinities between texts than genre or period. Unexpectedly, perhaps uncomfortably, it is a persistent, probably mainly unconscious, factor. Writers, we might say, can't help inscribing an individual style in everything they produce. We need to take account of this in a new theory of authorship. (Craig 2009-10, par. 3)

    This brings me to my final point. It is that digital methods for Humanist research are in some regards aligned with the aims of those who want to refocus our attention away from the English literary canon -- away from the Chaucers and the Shakespeares and the Miltons and the Austens and the Dickens -- in order to think about the marginal figures. Because they treat every author equally, digital methods are especially helpful in overcoming the biases of canonicity. If as an historian you care as much about the typical cases of some phenomenon as you care about the exceptional cases, then you will find the ruthless objectivity of digital methods especially helpful. But equally, such methods might by their ruthless objectivity lead you back to the assumptions you set out to overturn. You may find that Shakespeare and Austen and the others are special, because authorship is a special kind of phenomenon. The objectivity is all [SLIDE]:

Nothing amuses more harmlessly than computation, and nothing is oftener applicable to real business or speculative inquiries. A thousand stories which the ignorant tell, and believe, die away at once, when the computist takes them in his gripe. (Johnson 1836, "174. Computation")

Works Cited

Burrows, John. 1987. Computation Into Criticism: A Study of Jane Austen's Novels and an Experiment in Method. Oxford. Clarendon Press.

Chomsky, Noam. 1956. "Three Models for the Description of Language." IRE [Institute of Radio Engineers] Transactions on Information Theory 2. 113-24.

Chomsky, Noam. 1959. "On Certain Formal Properties of Grammars." Information and Control 2. 137-67.

Craig, Hugh and Brett Greatley-Hirsch. 2017. Style, Computers, and Early Modern Drama: Beyond Authorship. Cambridge. Cambridge University Press.

Craig, Hugh. 2009-10. "Style, Statistics, and New Models of Authorship." Early Modern Literary Studies 15.1. 41 paras.

Craig, Hugh. 2011. "Shakespeare's Vocabulary: Myth and Reality." Shakespeare Quarterly 62. 53-74.

Egan, Gabriel. 2001. "Hearing or Seeing a Play?: Evidence of Early Modern Theatrical Terminology." Ben Jonson Journal 8. 327-47.

Gödel, Kurt. 1931. "Über Formal Unentscheidbare Sätze Der Principia Mathematica und Verwandter Systeme, [Part] I [On Formally Undecidable Propositions of Principia Mathematica and Related Systems]." Monatshefte für Mathematik und Physik 38. 173-98.

Griffith, Eva. 2013. A Jacobean Company and its Playhouse: The Queen's Servants at the Red Bull Theatre (C. 1605-1619). Cambridge. Cambridge University Press.

Gurr, Andrew. 2004. The Shakespeare Company, 1594-1642. Cambridge. Cambridge University Press.

Gurr, Andrew. 2009. Shakespeare's Opposites: The Admiral's Company 1594-1625. Cambridge. Cambridge University Press.

Johnson, Samuel. 1836. Johnsoniana, or Supplement to Boswell: Being Anecdotes and Sayings of Dr Johnson. Ed. J. Wilson Croker. London. John Murray.

Manley, Lawrence and Sally-Beth MacLean. 2014. Lord Strange's Men and Their Plays. New Haven CT. Yale University Press.

Masten, Jeffrey. 1997. Textual Intercourse: Collaboration, Authorship, and Sexualities in Renaissance Drama. Cambridge Studies in Renaissance Literature and Culture. 14. Cambridge. Cambridge University Press.

McMillin, Scott and Sally-Beth MacLean. 1998. The Queen's Men and Their Plays. Cambridge. Cambridge University Press.

Munro, Lucy. 2005. Children of the Queen's Revels: A Jacobean Theatre Repertory. Cambridge. Cambridge University Press.

Ostovich, Helen, Holger Schott Syme and Andrew Griffin, eds. 2009. Locating the Queen's Men, 1583-1603: Material Practices and Conditions of Playing. Studies in Performance and Early Modern Drama. Aldershot. Ashgate.

Priestley, Mark. 2011. A Science of Operations: Machines, Logic and the Invention of Programming. History of Computing. London. Springer.

Rosso, Osvaldo A., Hugh Craig and Pablo Moscato. 2009. "Shakespeare and Other English Renaissance Authors as Characterized By Information Theory Complexity Quantifiers." Physica A 388. 916-26.

Turing, Alan. 1937. "On Computable Numbers, with an Application to the Entscheidungsproblem." Proceedings of the London Mathematical Society 2nd Series 42. 230-65.

Turing, Alan. 1950. "Computing Machinery and Intelligence." Mind 59.236. 433-60.

Von Neumann, John. 1945. 'First Draft of a Report on the EDVAC: Contract Number W-670-ORD-4926 between the United States Army Ordnance Department and the University of Pennsylvania'. A Report from the Moore School of Engineering at the University of Pennsylvania.