Aaron Mendon-Plasek, Yale Law School, "Creativity in an irrational world of inexhaustible meaning: early 1950s origins of machine learning as subjective decision-making, disunified science, and a remedy for what cannot be predicted."
 
Abstract
This paper explores why and how early 1950s machine learning researchers concerned with questions of pattern recognition came to see a range of ‘machine learning’ practices as a superior strategy for employing digital computers to perform creative non-numerical tasks and for identifying what was important in any decision-making process, including political decision-making, scientific inquiry, and self-knowledge. While ‘machine learning’ as a term of art and a set of practices constituted a trading zone well into the 1990s, for those working on pattern recognition the term ‘machine learning’ began to take on a far narrower set of meanings by 1953, in which a learning program’s capacity to perform ‘creative’ work was its ability to redefine the scope of the tasks it was assigned. This paper investigates the local research problems, epistemological commitments, institutional contexts, and transnational exchange of what early 1950s researchers called ‘machine learning’ through three case studies of early-career researchers imagining, building, and programming digital computers to ‘learn’ from 1950 to 1953. These physicists- and engineers-turned-pattern-recognition researchers saw learning programs as potential interlocutors alongside humans to help identify significant differences, resolve contextual ambiguity, and explore epistemic possibility. In doing so, these and other researchers saw machine learning as rooted in a constructivist epistemology in which the possibility of machine ‘originality’ necessarily precluded machine (and even human) objectivity.
While these appeals to nominalist strategies for handling poorly understood, extraordinarily complex, or even contradictory systems were often local, contingent responses aimed at expanding the uses of digital computers in the early 1950s, they quickly came to define both what constituted legitimate problems of knowledge in machine learning and a conception of efficacy rooted in the capacity to make meaning from contradictory information.
This article is a much revised and expanded version of the first chapter of my 2022 history dissertation, Genealogies of Machine Learning, 1950-1995. I am looking for fresh perspectives on this work, including alternative empirical and computational approaches I might use to explore the networks of scientists, engineers, and institutions that I discuss. While I wrote this piece for a history of computing audience, I am actively examining other methods and mediums for sharing these ideas with a larger audience of historians of technology interested in the relationships among quantification, innovation and maintenance, and infrastructure, and in how such technical practices play a role in the imagining of categories like race, gender, social problems, and political possibility. I am also interested in exploring how I might communicate the historical insights my article develops about early machine learning to a general audience interested in what contemporary AI can and cannot do, and in how such debates say as much about us as they do about how AI might be regulated. Finally, I am interested in learning novel strategies media scholars have used to show how the materiality of objects has served as a contingent but crucial component in the creation of abstractions used to imagine what Ian Hacking has called “human kinds.” I also hope that sharing my work with a working group will be an opportunity to begin developing relationships with members affiliated with the Consortium who might alert me to others working on complementary scholarly projects examining the links between quantification, computation, and the creation, stabilization, and remaking of social categories.