In 1950, English mathematician and computer science pioneer Alan Turing posed the question, “Can machines think?” In his paper “Computing Machinery and Intelligence,” Turing laid out what has become known as the Turing Test, or imitation game, to determine whether a machine is capable of thinking. The test adapted a Victorian-style parlor game in which a man and a woman were hidden from an interrogator, who had to guess which was which. In Turing’s version, a computer program replaced one of the participants, and the questioner had to determine which was the computer and which was the human. If the interrogator was unable to tell the difference between the machine and the human, the computer would be considered to be thinking, or to possess “artificial intelligence.”
Turing’s test came mere years after the development of the first digital computers and before the term “artificial intelligence” had even been coined. At the time, there were various names for the field of “thinking machines,” including cybernetics and automata theory. In 1956, two years after the death of Turing, John McCarthy, a professor at Dartmouth College, organized a summer workshop to clarify and develop ideas about thinking machines — choosing the name “artificial intelligence” for the project. The Dartmouth conference, widely considered to be the founding moment of Artificial Intelligence (AI) as a field of research, aimed to find “how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans and improve themselves.”
Over the next few years, the field grew quickly, with researchers investigating techniques for performing tasks considered to require expert levels of knowledge, such as playing games like checkers and chess. By the mid-1960s, artificial intelligence research in the United States was being heavily funded by the Department of Defense, and AI laboratories had been established around the world. Around the same time, the Lawrence Radiation Laboratory at Livermore also began its own Artificial Intelligence Group, within the Mathematics and Computing Division headed by Sidney Fernbach. To run the program, Livermore recruited MIT alumnus James Slagle, a former protégé of AI pioneer Marvin Minsky.
Slagle, who had been blind since childhood, received his doctorate in mathematics from MIT. While pursuing his education, Slagle was invited to the White House, where he received an award from Recording for the Blind Inc., presented by President Dwight Eisenhower, for his exceptional scholarly work. In 1961, for his dissertation, Slagle developed a program called SAINT (Symbolic Automatic INTegrator), which is acknowledged to be one of the first “expert systems” — a computer system that can emulate the decision-making ability of a human expert.
SAINT could solve elementary symbolic integration problems, involving the manipulation of integrals in calculus, at the level of a college freshman. The program was tested on a set of 86 problems, 54 of which were drawn from MIT freshman calculus final examinations. SAINT succeeded in solving all but two of them.
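Expert systems like SAINT reduce problems to cases a stored body of expert knowledge can handle directly. A minimal sketch of that idea in modern Python follows; the rule table and term representation here are illustrative assumptions, not SAINT’s actual 1961 LISP data structures, and SAINT additionally used heuristic transformations (such as substitution) that this sketch omits:

```python
# Toy sketch of a table-driven symbolic integrator in the spirit of SAINT.
# Assumption: this shows only the "table of standard forms + linearity of
# integration" core, not SAINT's heuristic goal-tree search.

STANDARD_FORMS = {
    "x": "x**2/2",
    "x**2": "x**3/3",
    "sin(x)": "-cos(x)",
    "cos(x)": "sin(x)",
    "exp(x)": "exp(x)",
    "1/x": "log(x)",
}

def integrate(terms):
    """Integrate a sum of scaled standard forms.

    `terms` is a list of (coefficient, form) pairs; for example, the
    integrand 3*sin(x) + x is [(3, "sin(x)"), (1, "x")].  Linearity of
    integration lets us handle each term by table lookup.
    """
    result = []
    for coef, form in terms:
        if form not in STANDARD_FORMS:
            raise ValueError(f"no standard form for {form!r}")
        result.append((coef, STANDARD_FORMS[form]))
    return result

# Integrate 3*sin(x) + x: gives 3*(-cos(x)) + x**2/2 (plus a constant)
print(integrate([(3, "sin(x)"), (1, "x")]))
```

A real system needs far more than lookup, of course; SAINT’s contribution was the heuristic machinery for rewriting unfamiliar integrands until they matched known forms.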
At Livermore, Slagle and his group worked on developing several programs aimed at teaching computers to use both deductive and inductive reasoning in their approach to problem-solving situations. One such program, MULTIPLE (MULTIpurpose theorem-proving heuristic Program that LEarns), was designed with the flexibility to learn “what to do next” in a wide variety of tasks, from problems in geometry and calculus to games like checkers.
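One way to read “learning what to do next” is as keeping statistics on which problem-solving operators have paid off in the past and trying the most promising ones first. The sketch below is purely illustrative — the class, operator names, and scoring rule are assumptions for this example, not a reconstruction of MULTIPLE’s actual mechanism:

```python
# Illustrative sketch: rank problem-solving operators by empirical
# success rate, so the program "learns what to do next" from experience.
from collections import defaultdict

class HeuristicSelector:
    def __init__(self, operators):
        self.operators = list(operators)
        self.successes = defaultdict(int)
        self.attempts = defaultdict(int)

    def order(self):
        """Return operators ranked by observed success rate.

        Untried operators score 1.0 (optimistic), so they are explored
        before operators that have already failed.
        """
        def score(op):
            a = self.attempts[op]
            return self.successes[op] / a if a else 1.0
        return sorted(self.operators, key=score, reverse=True)

    def record(self, op, succeeded):
        """Update statistics after trying operator `op`."""
        self.attempts[op] += 1
        if succeeded:
            self.successes[op] += 1

# Hypothetical operators for an integration-like task
sel = HeuristicSelector(["substitution", "by_parts", "table_lookup"])
sel.record("table_lookup", True)   # worked once
sel.record("by_parts", False)      # failed once
print(sel.order())                 # by_parts now ranks last
```

The design choice here — optimistic scores for untried operators — keeps the learner from prematurely ignoring moves it has never attempted.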
According to Slagle, AI researchers were no longer spending their time rehashing the pros and cons of Turing’s question, “Can machines think?” Instead, they adopted the view that “thinking” must be regarded as a continuum rather than an “either-or” situation. That computers thought little, if at all, was obvious — whether they could improve in the future remained the open question. However, AI research and progress slowed after its early boom; by the mid-1970s, government funding for new avenues of exploratory research had all but dried up. Similarly, at the Lab, the Artificial Intelligence Group was dissolved, and Slagle moved on to pursue his work elsewhere.
The prominence of the field ebbed and flowed over the ensuing years; but, by the late 1990s and early 2000s, AI research had come back to the forefront by focusing on finding specific solutions to specific problems rather than on the original goal of creating versatile, fully intelligent machines. Today, faster computers and access to large amounts of data have enabled advances in machine learning and data-driven deep learning methods.
Currently, Lawrence Livermore National Laboratory is focused on several data science fields, including machine learning and deep learning. In 2018, LLNL established the Data Science Institute (DSI) to bring together the Lab’s various data science disciplines — artificial intelligence, machine learning, deep learning, computer vision, big data analytics, and others — under one umbrella. With the DSI, the Lab is helping to build and strengthen the data science workforce, research, and outreach to advance the state of the art in the nation’s data science capabilities.
Pictured: Physicist James Slagle takes notes on his Braille typewriter, as his administrator Cleo Seamans