Symbolic artificial intelligence

In the history of artificial intelligence, symbolic artificial intelligence is the term for the collection of all methods in artificial intelligence research that are based on high-level symbolic (human-readable) representations of problems, logic and search. Symbolic AI used tools such as logic programming, production rules, semantic nets and frames, and it developed applications such as expert systems.

John Haugeland gave the name GOFAI ("Good Old-Fashioned Artificial Intelligence") to symbolic AI in his 1985 book Artificial Intelligence: The Very Idea, which explored the philosophical implications of artificial intelligence research. In robotics the analogous term is GOFR ("Good Old-Fashioned Robotics").[1]

Subsymbolic artificial intelligence is the set of alternative approaches that do not use explicit high-level symbols, such as mathematical optimization, statistical classifiers and neural networks.[2]

Symbolic AI was the dominant paradigm of AI research from the mid-1950s until the mid-1990s.[3][4] However, the symbolic approach was eventually abandoned in favor of subsymbolic approaches, largely because of technical limits.

Researchers in the 1960s and the 1970s were convinced that symbolic approaches would eventually succeed in creating a machine with artificial general intelligence and considered this the goal of their field. Symbolic AI was succeeded by highly mathematical statistical AI, which is largely directed at specific problems with specific goals, rather than at general intelligence. Research into general intelligence is now studied in the exploratory sub-field of artificial general intelligence.

Origins

The first symbolic AI program was the Logic Theorist, written by Allen Newell, Herbert Simon and Cliff Shaw in 1955–56.

The symbolic approach was succinctly expressed in the "physical symbol system hypothesis" proposed by Newell and Simon in 1976:

  • "A physical symbol system has the necessary and sufficient means for general intelligent action."

Dominant paradigm 1955–1990

During the 1960s, symbolic approaches achieved great success at simulating intelligent behavior in small demonstration programs. AI research in that decade was centered at three institutions: Carnegie Mellon University, Stanford and MIT, with the University of Edinburgh joining later. Each developed its own style of research. Earlier approaches based on cybernetics or artificial neural networks were abandoned or pushed into the background.

Cognitive simulation

Herbert Simon and Allen Newell studied human problem-solving skills and attempted to formalize them, and their work laid the foundations of the field of artificial intelligence, as well as cognitive science, operations research and management science. Their research team used the results of psychological experiments to develop programs that simulated the techniques that people used to solve problems.[5][6] This tradition, centered at Carnegie Mellon University, would eventually culminate in the development of the Soar architecture in the mid-1980s.[7][8]

Logic-based

Unlike Simon and Newell, John McCarthy felt that machines did not need to simulate human thought, but should instead try to find the essence of abstract reasoning and problem-solving, regardless of whether people used the same algorithms.[a] His laboratory at Stanford (SAIL) focused on using formal logic to solve a wide variety of problems, including knowledge representation, planning and learning.[12] Logic was also the focus of the work at the University of Edinburgh and elsewhere in Europe which led to the development of the programming language Prolog and the science of logic programming.[13][14]

Anti-logic or "scruffy"

Researchers at MIT (such as Marvin Minsky and Seymour Papert)[15][16][17] found that solving difficult problems in vision and natural language processing required ad hoc solutions—they argued that no simple and general principle (like logic) would capture all the aspects of intelligent behavior. Roger Schank described their "anti-logic" approaches as "scruffy" (as opposed to the "neat" paradigms at CMU and Stanford).[18][19] Commonsense knowledge bases (such as Doug Lenat's Cyc) are examples of "scruffy" AI, since they must be built by hand, one complicated concept at a time.[20][21][22]

Knowledge-based

When computers with large memories became available around 1970, researchers from all three traditions began to build knowledge into AI applications.[23][24] The knowledge revolution was driven by the realization that many AI applications, even simple ones, would require enormous amounts of knowledge.

Techniques

A symbolic AI system can be realized in a microworld, for example blocks world. The microworld represents the real world in computer memory. It is described with lists containing symbols, and the intelligent agent uses operators to bring the system into a new state.[25] The production system is the software that searches the state space for the intelligent agent's next action. The symbols representing the world are grounded in sensory perception. In contrast to neural networks, the overall system works with heuristics, meaning that domain-specific knowledge is used to improve the state-space search.
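
To make this concrete, the following is a minimal sketch in Python of a blocks-world microworld, based only on the description above rather than any particular system: a state is a set of symbolic facts such as ("on", "A", "table"), a move operator generates successor states, and a simple heuristic (the number of unsatisfied goal facts) guides a best-first search of the state space. All names and the fact encoding are illustrative assumptions.

    from heapq import heappush, heappop

    def clear_blocks(state, blocks):
        # A block is clear if no other block rests on top of it.
        covered = {below for (_, _, below) in state if below != "table"}
        return [b for b in blocks if b not in covered]

    def successors(state, blocks):
        # Operator: move a clear block onto the table or onto another clear block.
        for x in clear_blocks(state, blocks):
            current = next(f for f in state if f[1] == x)
            for dest in clear_blocks(state, blocks) + ["table"]:
                if dest != x and ("on", x, dest) not in state:
                    yield (state - {current}) | {("on", x, dest)}

    def heuristic(state, goal):
        # Domain-specific knowledge: count the goal facts not yet satisfied.
        return len(goal - state)

    def plan(start, goal, blocks):
        # Best-first search of the state space, guided by the heuristic.
        frontier = [(heuristic(start, goal), 0, start, [start])]
        seen, tie = {start}, 1
        while frontier:
            _, _, state, path = heappop(frontier)
            if goal <= state:
                return path
            for nxt in successors(state, blocks):
                if nxt not in seen:
                    seen.add(nxt)
                    heappush(frontier, (heuristic(nxt, goal), tie, nxt, path + [nxt]))
                    tie += 1

    blocks = ["A", "B", "C"]
    start = frozenset({("on", "A", "table"), ("on", "B", "table"), ("on", "C", "A")})
    goal = frozenset({("on", "A", "B"), ("on", "B", "C")})
    for state in plan(start, goal, blocks):
        print(sorted(state))

Run as written, this prints the sequence of symbolic states leading from the start configuration to one that satisfies the goal.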

Success with expert systems 1975–1990

This "knowledge revolution" led to the development and deployment of expert systems (introduced by Edward Feigenbaum), the first truly successful form of AI software.[26][27][28] A key component of the system architecture for all expert systems is the knowledge base, which stores facts and rules that illustrate AI.[29] These use a network of production rules. Production rules connect symbols in a relationship similar to an If-Then statement. The expert system processes the rules to make deductions and to determine what additional information it needs, i.e. what questions to ask, using human-readable symbols. Since symbolic AI works based on set rules and has increasing computing power, it can solve more and more complex problems. In 1996, this allowed IBM’s Deep Blue, with the help of symbolic AI, to win in a game of chess against the world champion at that time, Garry Kasparov.[30]

Abandoning the symbolic approach 1990s

Dreyfus' critique

An early critic of symbolic AI was philosopher Hubert Dreyfus. Beginning in the 1960s, Dreyfus' critique targeted the philosophical foundations of the field in a series of papers and books. He predicted that the approach would only be suitable for toy problems, and thought that building more complex systems or scaling up the idea towards useful software would not be possible.[31]

AI Winters

The same argument was given in the Lighthill report, which started the AI Winter in the mid-1970s.[32]

Subsymbolic AI

Robotics

Opponents of the symbolic approach in the 1980s included roboticists such as Rodney Brooks, who aimed to produce autonomous robots without symbolic representation (or with only minimal representation), and computational intelligence researchers, who applied techniques such as neural networks and optimization to solve problems in machine learning and control engineering.[citation needed]

Uncertain reasoning

Symbols can be used when the input is definite and certain. When uncertainty is involved, however, for example in formulating predictions, the representation is done using artificial neural networks.[33]

Synthesizing symbolic and subsymbolic

Recently, there have been structured efforts towards integrating the symbolic and connectionist AI approaches under the umbrella of neural-symbolic computing. As argued by Valiant and many others,[34] the effective construction of rich computational cognitive models demands the combination of sound symbolic reasoning and efficient (machine) learning models.
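
The flavor of such a combination can be suggested by a toy Python sketch in which a stand-in for a learned subsymbolic model emits a symbolic fact that an explicit rule then reasons over; the threshold unit, its weights, and the facts are all invented for illustration.

    def perceive(wingspan_cm, weight_g):
        # Subsymbolic stage: a hand-set linear threshold unit standing in
        # for a trained classifier (the weights are made up for the example).
        score = 0.02 * wingspan_cm - 0.001 * weight_g
        return {"is_bird"} if score > 0.5 else set()

    def reason(facts):
        # Symbolic stage: an explicit if-then rule over the emitted facts.
        if {"is_bird", "flies_at_night"} <= facts:
            facts.add("is_owl")
        return facts

    facts = perceive(wingspan_cm=90, weight_g=300) | {"flies_at_night"}
    print(reason(facts))  # {'is_bird', 'flies_at_night', 'is_owl'} (set order may vary)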

Notes

  1. ^ McCarthy once said: "This is AI, so we don't care if it's psychologically real".[3] He reiterated his position in 2006 at the AI@50 conference, where he said "Artificial intelligence is not, by definition, simulation of human intelligence".[9] Pamela McCorduck writes that there are "two major branches of artificial intelligence: one aimed at producing intelligent behavior regardless of how it was accomplished, and the other aimed at modeling intelligent processes found in nature, particularly human ones."[10] Stuart Russell and Peter Norvig wrote: "Aeronautical engineering texts do not define the goal of their field as making 'machines that fly so exactly like pigeons that they can fool even other pigeons.'"[11]

Citations

  1. ^ Haugeland 1985.
  2. ^ Nilsson 1998, p. 7.
  3. ^ a b Kolata 1982.
  4. ^ Russell & Norvig 2003, p. 5.
  5. ^ & McCorduck 2004, pp. 139–179, 245–250, 322–323 (EPAM).
  6. ^ Crevier 1993, pp. 145–149.
  7. ^ McCorduck 2004, pp. 450–451.
  8. ^ Crevier 1993, pp. 258–263.
  9. ^ Maker 2006.
  10. ^ McCorduck 2004, pp. 100–101.
  11. ^ Russell & Norvig 2003, pp. 2–3.
  12. ^ McCorduck 2004, pp. 251–259.
  13. ^ Crevier 1993, pp. 193–196.
  14. ^ Howe 1994.
  15. ^ McCorduck 2004, pp. 259–305.
  16. ^ Crevier 1993, pp. 83–102, 163–176.
  17. ^ Russell & Norvig 2003, p. 19.
  18. ^ McCorduck 2004, pp. 421–424, 486–489.
  19. ^ Crevier 1993, p. 168.
  20. ^ McCorduck 2004, p. 489.
  21. ^ Crevier 1993, pp. 239–243.
  22. ^ Russell & Norvig 2003, pp. 363–365.
  23. ^ McCorduck 2004, pp. 266–276, 298–300, 314, 421.
  24. ^ Russell & Norvig 2003, pp. 22–23.
  25. ^ Honavar & Uhr 1994, p. 6.
  26. ^ Russell & Norvig 2003, pp. 22–24.
  27. ^ McCorduck 2004, pp. 327–335, 434–435.
  28. ^ Crevier 1993, pp. 145–62, 197–203.
  29. ^ Hayes-Roth, Murray & Adelman.
  30. ^ "The fascination with AI: what is artificial intelligence?". IONOS Digitalguide. Retrieved 2021-12-02.
  31. ^ Dreyfus 1981, pp. 161–204.
  32. ^ Yao et al. 2017.
  33. ^ Honavar 1995.
  34. ^ Garcez et al. 2015.
