
Artificial intelligence


Artificial intelligence (AI) is intelligence demonstrated by machines, as opposed to the natural intelligence displayed by humans or animals. Leading AI textbooks define the field as the study of "intelligent agents": any system that perceives its environment and takes actions that maximize its chance of achieving its goals.[a] Some popular accounts use the term "artificial intelligence" to describe machines that mimic "cognitive" functions that humans associate with the human mind, such as "learning" and "problem solving"; however, this definition is rejected by major AI researchers.[b][c]

AI applications include advanced web search engines (e.g., Google), recommendation systems (used by YouTube, Amazon and Netflix), understanding human speech (such as Siri or Alexa), self-driving cars (e.g., Tesla), and competing at the highest level in strategic game systems (such as chess and Go).[2] As machines become increasingly capable, tasks considered to require "intelligence" are often removed from the definition of AI, a phenomenon known as the AI effect.[3] For instance, optical character recognition is frequently excluded from things considered to be AI,[4] having become a routine technology.[5]

Artificial intelligence was founded as an academic discipline in 1956, and in the years since has experienced several waves of optimism,[6][7] followed by disappointment and the loss of funding (known as an "AI winter"),[8][9] followed by new approaches, success and renewed funding.[7][10] AI research has tried and discarded many different approaches during its lifetime, including simulating the brain, modeling human problem solving, formal logic, large databases of knowledge and imitating animal behavior. In the first decades of the 21st century, highly mathematical statistical machine learning has dominated the field, and this technique has proved highly successful, helping to solve many challenging problems throughout industry and academia.[11][10]

The various sub-fields of AI research are centered around particular goals and the use of particular tools. The traditional goals of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception and the ability to move and manipulate objects.[d] General intelligence (the ability to solve an arbitrary problem) is among the field's long-term goals.[12] To solve these problems, AI researchers use versions of search and mathematical optimization, formal logic, artificial neural networks, and methods based on statistics, probability and economics. AI also draws upon computer science, psychology, linguistics, philosophy, and many other fields.

The field was founded on the assumption that human intelligence "can be so precisely described that a machine can be made to simulate it".[e] This raises philosophical arguments about the mind and the ethics of creating artificial beings endowed with human-like intelligence. These issues have been explored by myth, fiction and philosophy since antiquity.[13] Science fiction and futurology have also suggested that, with its enormous potential and power, AI may become an existential risk to humanity.[14][15]

History

Silver didrachma from Crete depicting Talos, an ancient mythical automaton with artificial intelligence

Precursors

Artificial beings with intelligence appeared as storytelling devices in antiquity,[16] and have been common in fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R.[17] These characters and their fates raised many of the same issues now discussed in the ethics of artificial intelligence.[18]

The study of mechanical or "formal" reasoning began with philosophers and mathematicians in antiquity. The study of mathematical logic led directly to Alan Turing's theory of computation, which suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate any conceivable act of mathematical deduction. This insight, that digital computers can simulate any process of formal reasoning, is known as the Church–Turing thesis.[19]

Cybernetics and brain simulation

The Church–Turing thesis, along with concurrent discoveries in neurobiology, information theory and cybernetics, led researchers to consider the possibility of building an electronic brain.[20] The first work that is now generally recognized as AI was McCulloch and Pitts' 1943 formal design for Turing-complete "artificial neurons".[21] By 1960, this approach was largely abandoned, although elements of it would be revived in the 1980s.

Symbolic AI

When access to digital computers became possible in the mid-1950s, AI research began to explore the possibility that human intelligence could be reduced to symbol manipulation. John Haugeland named these symbolic approaches to AI "good old-fashioned AI" or "GOFAI".[22] Approaches based on cybernetics or artificial neural networks were abandoned or pushed into the background.

The field of AI research was born at a workshop at Dartmouth College in 1956.[f][25] The attendees became the founders and leaders of AI research.[26][g] They and their students produced programs that the press described as "astonishing":[h] computers were learning checkers strategies, solving word problems in algebra, proving logical theorems and speaking English.[i][28] By the middle of the 1960s, research in the U.S. was heavily funded by the Department of Defense[29] and laboratories had been established around the world.[30]

Researchers in the 1960s and the 1970s were convinced that symbolic approaches would eventually succeed in creating a machine with artificial general intelligence and considered this the goal of their field.[31] Herbert Simon predicted, "machines will be capable, within twenty years, of doing any work a man can do".[32] Marvin Minsky agreed, writing, "within a generation ... the problem of creating 'artificial intelligence' will substantially be solved".[33]

They failed to recognize the difficulty of some of the remaining tasks. Progress slowed and in 1974, in response to the criticism of Sir James Lighthill[34] and ongoing pressure from the US Congress to fund more productive projects, both the U.S. and British governments cut off exploratory research in AI.[8] The next few years would later be called an "AI winter", a period when obtaining funding for AI projects was difficult.

In the early 1980s, AI research was revived by the commercial success of expert systems,[35] a form of AI program that simulated the knowledge and analytical skills of human experts. By 1985, the market for AI had reached over a billion dollars. At the same time, Japan's fifth generation computer project inspired the U.S. and British governments to restore funding for academic research.[j][7] However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting winter began.[9]

Early sub-symbolic approaches

Many researchers began to doubt that the symbolic approach would be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition. A number of researchers began to look into "sub-symbolic" approaches to specific AI problems.[36] Robotics researchers, such as Rodney Brooks, rejected symbolic AI and focused on the basic engineering problems that would allow robots to move, survive, and learn their environment.[37][k] Interest in neural networks and "connectionism" was revived by Geoffrey Hinton, David Rumelhart and others in the middle of the 1980s.[40] Soft computing finds solutions to problems that cannot be solved with complete logical certainty, and where an approximate solution is often sufficient. Soft computing approaches to AI include neural networks, fuzzy systems, grey system theory, evolutionary computation and many tools drawn from statistics or mathematical optimization. Modern statistical AI (below) is a form of soft computing, primarily using neural networks.

Statistical AI

AI gradually restored its reputation in the late 1990s and early 21st century by finding specific solutions to specific problems. The narrow focus allowed researchers to produce verifiable results, exploit more mathematical methods, and collaborate with other fields (such as statistics, economics and mathematics).[l][42] By 2000, solutions developed by AI researchers were being widely used, although in the 1990s they were rarely described as "artificial intelligence".[11]

Faster computers, algorithmic improvements, and access to large amounts of data enabled advances in machine learning and perception; data-hungry deep learning methods started to dominate accuracy benchmarks around 2012.[43] According to Bloomberg's Jack Clark, 2015 was a landmark year for artificial intelligence, with the number of software projects that use AI within Google increasing from "sporadic usage" in 2012 to more than 2,700 projects. Clark also cites data showing that error rates in image processing tasks have fallen significantly since 2012.[m] He attributes this to an increase in affordable neural networks, due to a rise in cloud computing infrastructure and to an increase in research tools and datasets.[10] In a 2017 survey, one in five companies reported they had "incorporated AI in some offerings or processes".[44]

Artificial general intelligence research

Ben Goertzel and others became concerned that AI was no longer pursuing the original goal of creating versatile, fully intelligent machines. Statistical AI, including even highly successful techniques such as deep learning, is overwhelmingly used to solve specific problems. They founded the subfield of artificial general intelligence (or "AGI"), which had several well-funded institutions by the 2010s.[12]

Goals

The general problem of simulating (or creating) intelligence has been broken down into sub-problems. These consist of particular traits or capabilities that researchers expect an intelligent system to display. The traits described below have received the most attention.[d]

Reasoning, problem solving

Early researchers developed algorithms that imitated step-by-step reasoning that humans use when they solve puzzles or make logical deductions.[45] By the late 1980s and 1990s, AI research had developed methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.[46]

These algorithms proved to be insufficient for solving large reasoning problems because they experienced a "combinatorial explosion": they became exponentially slower as the problems grew larger.[47] Even humans rarely use the step-by-step deduction that early AI research could model. They solve most of their problems using fast, intuitive judgments.[48]
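
The scale of this combinatorial explosion can be made concrete with a short calculation. The following Python sketch (a generic illustration, not any particular historical system) counts the candidate tours a brute-force solver of the travelling-salesman problem would have to enumerate:

    import math

    # Candidate tours a brute-force travelling-salesman solver must
    # examine: (n-1)!/2 for n cities (fix the start city and ignore
    # tour direction).
    def brute_force_tours(n_cities: int) -> int:
        return math.factorial(n_cities - 1) // 2

    for n in (5, 10, 15, 20):
        print(f"{n:>2} cities -> {brute_force_tours(n):,} tours")
    # 5 cities -> 12 tours; 20 cities -> roughly 6.1e16 tours. Growth
    # of this order quickly outruns any step-by-step search.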

Knowledge representation

An ontology represents knowledge as a set of concepts within a domain and the relationships between those concepts.

Knowledge representation[49] and knowledge engineering[50] are central to classical AI research. Some "expert systems" attempt to gather explicit knowledge possessed by experts in some narrow domain. In addition, some projects attempt to gather the "commonsense knowledge" known to the average person into a database containing extensive knowledge about the world.

Among the things a comprehensive commonsense knowledge base would contain are: objects, properties, categories and relations between objects;[51] situations, events, states and time;[52] causes and effects;[53] knowledge about knowledge (what we know about what other people know);[54] and many other, less well researched domains.

A representation of "what exists" is an ontology: the set of objects, relations, concepts, and properties formally described so that software agents can interpret them. The semantics of these are captured as description logic concepts, roles, and individuals, and typically implemented as classes, properties, and individuals in the Web Ontology Language.[55] The most general ontologies are called upper ontologies, which attempt to provide a foundation for all other knowledge[56] by acting as mediators between domain ontologies that cover specific knowledge about a particular knowledge domain (field of interest or area of concern). Such formal knowledge representations can be used in content-based indexing and retrieval,[57] scene interpretation,[58] clinical decision support,[59] knowledge discovery (mining "interesting" and actionable inferences from large databases),[60] and other areas.[61]
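
As a minimal illustration of the idea (a toy Python sketch, not tied to the Web Ontology Language or to any real upper ontology; all names are hypothetical), an ontology's "is-a" hierarchy and inherited properties can be represented as plain data structures over which a program reasons:

    # Toy ontology: "is-a" links between concepts plus one property.
    IS_A = {"dog": "mammal", "mammal": "animal", "animal": "thing"}
    PROPERTIES = {"mammal": {"has_fur": True}}

    def ancestors(concept: str) -> list[str]:
        """Walk the is-a hierarchy up to the root."""
        chain = []
        while concept in IS_A:
            concept = IS_A[concept]
            chain.append(concept)
        return chain

    def inherits(concept: str, prop: str) -> bool:
        """A concept inherits a property from itself or any ancestor."""
        return any(prop in PROPERTIES.get(c, {})
                   for c in [concept] + ancestors(concept))

    print(ancestors("dog"))            # ['mammal', 'animal', 'thing']
    print(inherits("dog", "has_fur"))  # True, inherited from 'mammal'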

Among the most difficult problems in knowledge representation are:

Default reasoning and the qualification problem
Many of the things people know take the form of "working assumptions". For example, if a bird comes up in conversation, people typically picture a fist-sized animal that sings and flies. None of these things are true about all birds. John McCarthy identified this problem in 1969[62] as the qualification problem: for any commonsense rule that AI researchers care to represent, there tend to be a huge number of exceptions. Almost nothing is simply true or false in the way that abstract logic requires. AI research has explored a number of solutions to this problem;[63] a minimal sketch of default reasoning appears after this list.
Breadth of commonsense knowledge
The number of atomic facts that the average person knows is very large. Research projects that attempt to build a complete knowledge base of commonsense knowledge (e.g., Cyc) require enormous amounts of laborious ontological engineering—they must be built, by hand, one complicated concept at a time.[64]
Subsymbolic form of some commonsense knowledge
Much of what people know is not represented as "facts" or "statements" that they could express verbally. For example, a chess master will avoid a particular chess position because it "feels too exposed"[65] or an art critic can take one look at a statue and realize that it is a fake.[66] These are non-conscious and sub-symbolic intuitions or tendencies in the human brain.[67] Knowledge like this informs, supports and provides a context for symbolic, conscious knowledge.
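
One family of solutions to the qualification problem noted above is default reasoning: assume a commonsense rule holds unless an exception is known, and allow new knowledge to retract old conclusions. A minimal Python sketch (the bird example is illustrative; real non-monotonic reasoners are far more elaborate):

    # Default rule: "birds fly", overridden by known exceptions.
    # Adding a new exception can retract an earlier conclusion,
    # which is what makes the reasoning non-monotonic.
    EXCEPTIONS = {"penguin", "ostrich", "hatchling"}

    def can_fly(bird_kind: str) -> bool:
        """Apply the default unless the bird is a known exception."""
        return bird_kind not in EXCEPTIONS

    print(can_fly("sparrow"))  # True, by default
    print(can_fly("penguin"))  # False, the exception wins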

Planning

A hierarchical control system is a form of control system in which a set of devices and governing software is arranged in a hierarchy.

Intelligent agents must be able to set goals and achieve them.[68] They need a way to visualize the future (a representation of the state of the world, with the ability to predict how their actions will change it) and be able to make choices that maximize the utility (or "value") of the available choices.[69]

In classical planning problems, the agent can assume that it is the only system acting in the world, allowing the agent to be certain of the consequences of its actions.[70] However, if the agent is not the only actor, then it must be able to reason under uncertainty. This calls for an agent that can not only assess its environment and make predictions but also evaluate its predictions and adapt based on its assessment.[71]
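
In the deterministic, single-actor setting, planning reduces to search over predicted states. A minimal sketch (a toy breadth-first planner in Python; the states and actions are invented for illustration):

    from collections import deque

    # Toy classical planning: states are sets of facts, and the agent
    # can predict each action's outcome with certainty.
    ACTIONS = {
        "pick_up":  lambda s: s | {"holding"},
        "walk":     lambda s: s | {"at_goal"},
        "put_down": lambda s: (s - {"holding"}) | {"delivered"}
                    if {"holding", "at_goal"} <= s else s,
    }

    def plan(start: frozenset, goal: str) -> list[str]:
        """Breadth-first search over predicted states."""
        frontier, seen = deque([(start, [])]), {start}
        while frontier:
            state, path = frontier.popleft()
            if goal in state:
                return path
            for name, act in ACTIONS.items():
                nxt = frozenset(act(state))
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [name]))
        return []

    print(plan(frozenset(), "delivered"))  # ['pick_up', 'walk', 'put_down']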

Multi-agent planning uses the cooperation and competition of many agents to achieve a given goal. Emergent behavior such as this is used by evolutionary algorithms and swarm intelligence.[72]
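
Evolutionary computation improves candidate solutions by mutation and selection rather than by explicit reasoning. A minimal "(1+1)" evolutionary algorithm sketch in Python (the bit-counting objective is an arbitrary illustrative choice):

    import random

    def fitness(bits: list[int]) -> int:
        """Toy objective ("OneMax"): count the 1-bits."""
        return sum(bits)

    def one_plus_one_ea(n: int = 20, generations: int = 500) -> list[int]:
        """Keep one parent; flip each bit with probability 1/n and
        keep the child only if it is at least as fit (selection)."""
        parent = [random.randint(0, 1) for _ in range(n)]
        for _ in range(generations):
            child = [b ^ (random.random() < 1 / n) for b in parent]
            if fitness(child) >= fitness(parent):
                parent = child
        return parent

    random.seed(0)
    print(fitness(one_plus_one_ea()), "of 20 bits set")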

Learning

For this project the AI had to find the typical patterns in the colors and brushstrokes of Renaissance painter Raphael. The portrait shows the face of the actress Ornella Muti, "painted" by AI in the style of Raphael.

Machine learning (ML), a fundamental concept of AI research since the field's inception,[n] is the study of computer algorithms that improve automatically through experience.[o][75]

Unsupervised learning is the ability to find patterns in a stream of input, without requiring a human to label the inputs first. Supervised learning includes both classification and numerical regression, which requires a human to label the input data first. Classification is used to determine what category something belongs in, and occurs after a program sees a number of examples of things from several categories. Regression is the attempt to produce a function that describes the relationship between inputs and outputs and predicts how the outputs should change as the inputs change.[75] Both classifiers and regression learners can be viewed as "function approximators" trying to learn an unknown (possibly implicit) function; for example, a spam classifier can be viewed as learning a function that maps from the text of an email to one of two categories, "spam" or "not spam". Computational learning theory can assess learners by computational complexity, by sample complexity (how much data is required), or by other notions of optimization.[76] In reinforcement learning[77] the agent is rewarded for good responses and punished for bad ones. The agent uses this sequence of rewards and punishments to form a strategy for operating in its problem space.
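
The spam-filter example above can be sketched in a few lines of Python. This is a deliberately crude learner (a 1-nearest-neighbour classifier over shared word counts, with an invented four-example training set), but it shows the "function approximator" view: the program induces a mapping from email text to a label purely from labeled examples:

    from collections import Counter

    # Invented training data; real systems use far larger corpora.
    TRAIN = [
        ("win money now claim prize", "spam"),
        ("cheap pills limited offer", "spam"),
        ("meeting agenda for tuesday", "not spam"),
        ("lunch with the project team", "not spam"),
    ]

    def similarity(a: str, b: str) -> int:
        """Count shared words: a crude stand-in for learned features."""
        return sum((Counter(a.split()) & Counter(b.split())).values())

    def classify(email: str) -> str:
        """1-nearest-neighbour: copy the label of the closest example."""
        return max(TRAIN, key=lambda ex: similarity(email, ex[0]))[1]

    print(classify("claim your prize money"))       # spam
    print(classify("agenda for the team meeting"))  # not spam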

Natural language processing

A parse tree represents the syntactic structure of a sentence according to some formal grammar.

Natural language processing[78] (NLP) allows machines to read and understand human language. A sufficiently powerful natural language processing system would enable natural-language user interfaces and the acquisition of knowledge directly from human-written sources, such as newswire texts. Some straightforward applications of natural language processing include information retrieval, text mining, question answering and machine translation.[79] Many current approaches use word co-occurrence frequencies to construct syntactic representations of text. "Keyword spotting" strategies for search are popular and scalable but dumb; a search query for "dog" might only match documents with the literal word "dog" and miss a document with the word "poodle". "Lexical affinity" strategies use the occurrence of words such as "accident" to assess the sentiment of a document. Modern statistical NLP approaches can combine all these strategies as well as others, and often achieve acceptable accuracy at the page or paragraph level. Beyond semantic NLP, the ultimate goal of "narrative" NLP is to embody a full understanding of commonsense reasoning.[80] By 2019, transformer-based deep learning architectures could generate coherent text.[81]
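
The weakness of keyword spotting described above is easy to demonstrate. In the Python sketch below, a literal search for "dog" misses the "poodle" document; a small hand-built expansion table stands in for the word relationships that statistical NLP instead learns from co-occurrence data in large corpora (all documents and table entries are invented):

    DOCS = [
        "my poodle chewed the couch",
        "stock prices rose sharply today",
    ]

    def keyword_search(query: str, docs: list[str]) -> list[str]:
        """Literal keyword spotting: match only the exact word."""
        return [d for d in docs if query in d.split()]

    # Hand-built stand-in for relationships learned from co-occurrence.
    EXPANSIONS = {"dog": {"dog", "poodle", "terrier"}}

    def expanded_search(query: str, docs: list[str]) -> list[str]:
        terms = EXPANSIONS.get(query, {query})
        return [d for d in docs if terms & set(d.split())]

    print(keyword_search("dog", DOCS))   # [] -- misses the poodle document
    print(expanded_search("dog", DOCS))  # ['my poodle chewed the couch']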

Perception

Feature detection (pictured: edge detection) helps AI compose informative abstract structures out of raw data.

Machine perception[82] is the ability to use input from sensors (such as cameras (visible spectrum or infrared), microphones, wireless signals, and active lidar, sonar, radar, and tactile sensors) to deduce aspects of the world. Applications include speech recognition,[83] facial recognition, and object recognition.[84] Computer vision is the ability to analyze visual input. Such input is usually ambiguous; a giant, fifty-meter-tall pedestrian far away may produce the same pixels as a nearby normal-sized pedestrian, requiring the AI to judge the relative likelihood and reasonableness of different interpretations, for example by using its "object model" to assess that fifty-meter pedestrians do not exist.[85]
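
Feature detection of the kind pictured above reduces to small numerical operations on pixel arrays. A minimal edge-detection sketch using NumPy (the 5×5 "image" is invented; real systems use 2-D Sobel kernels, smoothing, and learned features):

    import numpy as np

    # Tiny grayscale "image": dark left half, bright right half.
    image = np.array([[0, 0, 0, 9, 9]] * 5, dtype=float)

    # Horizontal-gradient kernel, the core of Sobel-style detectors:
    # it responds where brightness changes from left to right.
    kernel = np.array([-1.0, 0.0, 1.0])

    def detect_vertical_edges(img: np.ndarray) -> np.ndarray:
        """Slide the gradient kernel along each row (no padding)."""
        h, w = img.shape
        out = np.zeros((h, w - 2))
        for y in range(h):
            for x in range(w - 2):
                out[y, x] = np.sum(img[y, x:x + 3] * kernel)
        return out

    print(detect_vertical_edges(image))
    # Nonzero responses appear only around the dark-to-bright boundary.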

Motion and manipulation

AI is heavily used in robotics.[86] Advanced robotic arms and other industrial robots, widely used in modern factories, can learn from experience how to move efficiently despite the presence of friction and gear slippage.[87] A modern mobile robot, when given a small, static, and visible environment, can easily determine its location and map its environment; however, dynamic environments, such as (in endoscopy) the interior of a patient's breathing body, pose a greater challenge. Motion planning is the process of breaking down a movement task into "primitives" such as individual joint movements. Such movement often involves compliant motion, a process where movement requires maintaining physical contact with an object.[88][89][90] Moravec's paradox generalizes that low-level sensorimotor skills that humans take for granted are, counterintuitively, difficult to program into a robot; the paradox is named after Hans Moravec, who stated in 1988 that "it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility".[91][92] This is attributed to the fact that, unlike checkers, physical dexterity has been a direct target of natural selection for millions of years.[93]
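
The "primitives" mentioned above often bottom out in simple interpolation between joint configurations, which a low-level controller then tracks. A toy Python sketch (joint names, angles, and step count are invented; real motion planners must also handle obstacles, dynamics, and compliant contact):

    def joint_primitive(start: dict, end: dict, steps: int = 5) -> list[dict]:
        """Linearly interpolate each joint angle from start to end,
        producing a sequence of set-points for a controller."""
        return [
            {j: start[j] + (end[j] - start[j]) * t / steps for j in start}
            for t in range(1, steps + 1)
        ]

    # Hypothetical two-joint arm moving to a reach pose (degrees).
    for setpoint in joint_primitive({"shoulder": 0, "elbow": 0},
                                    {"shoulder": 45, "elbow": 90}, steps=3):
        print(setpoint)
    # {'shoulder': 15.0, 'elbow': 30.0} ... {'shoulder': 45.0, 'elbow': 90.0}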

Social intelligence

Kismet, a robot with rudimentary social skills[94]

Affective computing is an interdisciplinary umbrella that comprises systems which recognize, interpret, process, or simulate human affects.[95][96][97] For example, some virtual assistants are programmed to speak conversationally or even to banter humorously; this makes them appear more sensitive to the emotional dynamics of human interaction, or otherwise facilitates human–computer interaction.[98] However, this tends to give naïve users an unrealistic conception of how intelligent existing computer agents actually are.[99] Moderate successes related to affective computing include textual sentiment analysis and, more recently, multimodal affect analysis (see multimodal sentiment analysis), wherein AI classifies the affects displayed by a videotaped subject.[100]

General intelligence

General intelligence is the ability to take on any arbitrary problem. Current AI research has, for the most part, only produced programs that can solve exactly one problem. Many researchers predict that such "narrow AI" work in different individual domains will eventually be incorporated into a machine with general intelligence, combining most of the narrow skills mentioned in this article and at some point even exceeding human ability in most or all these areas. The sub-field of artificial general intelligence (or "AGI") studies general intelligence exclusively.[12]


Applications

AI is relevant to any intellectual task.[101] Modern artificial intelligence techniques are pervasive[102] and are too numerous to list here. Frequently, when a technique reaches mainstream use, it is no longer considered artificial intelligence; this phenomenon is described as the AI effect.[103]

High-profile examples of AI include autonomous vehicles (such as drones and self-driving cars), medical diagnosis, creating art (such as poetry), proving mathematical theorems, playing games (such as chess or Go), search engines (such as Google Search), online assistants (such as Siri), image recognition in photographs, spam filtering, predicting flight delays,[104] prediction of judicial decisions,[105] targeting online advertisements,[101][106][107] and energy storage.[108]

With social media sites overtaking TV as a source of news for young people, and news organizations increasingly reliant on social media platforms for distribution,[109] major publishers now use artificial intelligence (AI) technology to post stories more effectively and generate higher volumes of traffic.[110]

AI can also produce deepfakes, a content-altering technology. ZDNet reports that a deepfake "presents something that did not actually occur". Though 88% of Americans believe deepfakes can cause more harm than good, only 47% of them believe they can be targeted by one. Election years also open public discourse to the threat of falsified videos of politicians.[111]

Philosophy

Defining artificial intelligence

Alan Turing, in a seminal 1950 paper "Computing Machinery and Intelligence", argued that the only thing that matters is the external behavior of the machine (illustrated by the famous Turing test), and showed that all the common objections to the idea that "machines can think" disappear when we look at the problem from this point of view.[112] Turing proposed changing the question from whether a machine was intelligent, to "whether or not it is possible for machinery to show intelligent behaviour".[113] Since we can only see the behavior, it does not matter if the machine is conscious, or has a mind, or whether the intelligence is merely a "simulation" and not "the real thing". He noted that we also don't know these things about other people, but that we extend a "polite convention" that they are actually "thinking".

In the proposal for the Dartmouth Workshop of 1956, John McCarthy wrote "Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it."[114] By using the word "simulation", McCarthy defined AI in a way that avoids any discussion of whether such programs could also have subjective conscious experience as humans do.

Newell and Simon proposed the physical symbol system hypothesis: "A physical symbol system has the necessary and sufficient means of general intelligent action."[115] However, in the 1990s it became obvious that "symbol manipulation" would not be sufficient to simulate human intelligence; programs that produced precise symbolic solutions could not solve simple commonsense problems without using billions of years of computer time (a limit known as "intractability").[116] Philosopher Hubert Dreyfus had argued since the 1960s that human expertise depends on unconscious instinct rather than conscious symbol manipulation, and on having a "feel" for the situation, rather than explicit symbolic knowledge.[117] Although his arguments had been ridiculed and ignored when they were first presented, AI research eventually came to agree:[p] modern statistical AI simulates our ability to "guess" based on our "experience", rather than relying on precise descriptions and symbols.

The questions that have divided AI research historically have remained unanswered and may have to be revisited by future research.[120][121] A few of the most long-standing questions that have remained unanswered are these:

  • Should artificial intelligence simulate natural intelligence by studying psychology or neurobiology? Or is human biology as irrelevant to AI research as bird biology is to aeronautical engineering?[c]
  • Can intelligent behavior be described using simple, elegant principles (such as logic or optimization)? Or does it necessarily require solving a large number of unrelated problems?[124]
  • Can we write programs that find provably correct solutions to a given problem (e.g., using symbolic logic and knowledge)? Or do we use algorithms that can only give us a "reasonable" solution (e.g., probabilistic methods) but may fall prey to the same kind of inscrutable mistakes that human intuition makes?[120]
  • Should AI pursue the goals of artificial general intelligence and superintelligence directly? Or is it better off solving as many specific problems as it can and hoping these solutions will lead indirectly to the field's long term goals?[12]

Machine consciousness, sentience and mind

Can a machine have a mind, consciousness and mental states in the same sense that human beings do? This question considers the internal experiences of the machine, rather than its external behavior.

Mainstream AI research considers this question irrelevant, because it does not affect the goals of the field. Stuart Russell and Peter Norvig observe that most AI researchers "don't care about the strong AI hypothesis—as long as the program works, they don't care whether you call it a simulation of intelligence or real intelligence."[125] However, the question has become central to the philosophy of mind. It is also the central question at issue in artificial intelligence in fiction.

Consciousness

David Chalmers identified two problems in understanding the mind, which he named the "hard" and "easy" problems of consciousness.[126] The easy problem is understanding how the brain processes signals, makes plans and controls behavior. The hard problem is explaining how this feels, or why it should feel like anything at all. Human information processing is easy to explain; human subjective experience, however, is difficult to explain. For example, it is easy to imagine a color-blind person who has learned to identify which objects in their field of view are red, but it is not clear what would be required for the person to know what red looks like.

Computationalism and functionalism

Computationalism is the position in the philosophy of mind that the human mind is an information processing system and that thinking is a form of computing.[127] Computationalism argues that the relationship between mind and body is similar or identical to the relationship between software and hardware and thus may be a solution to the mind-body problem. This philosophical position was inspired by the work of AI researchers and cognitive scientists in the 1960s and was originally proposed by philosophers Jerry Fodor and Hilary Putnam.

Strong AI hypothesis

The philosophical position that John Searle has named "strong AI" states: "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."[q] Searle counters this assertion with his Chinese room argument, which attempts to show that, even if a machine perfectly simulates human behavior, there is still no reason to suppose it also has a mind.[129]

Robot rights

If a machine has a mind and subjective experience, then it may also have sentience (the ability to feel), and if so, then it could also suffer, and thus it would be entitled to certain rights. This issue, now known as "robot rights", is currently being considered by, for example, California's Institute for the Future, although many critics believe that the discussion is premature.[130][131] Some critics of transhumanism argue that any hypothetical robot rights would lie on a spectrum with animal rights and human rights.[132] The subject is discussed in depth in the 2010 documentary film Plug & Pray,[133] and in science fiction media such as Star Trek: The Next Generation, with the character of Commander Data, who fought being disassembled for research and wanted to "become human", and the robotic holograms in Voyager.

Future of AI

Superintelligence

A superintelligence, hyperintelligence, or superhuman intelligence is a hypothetical agent that would possess intelligence far surpassing that of the brightest and most gifted human mind. Superintelligence may also refer to the form or degree of intelligence possessed by such an agent.[134]

Technological singularity

If research into artificial general intelligence produced sufficiently intelligent software, it might be able to reprogram and improve itself. The improved software would be even better at improving itself, leading to recursive self-improvement.[135] The new intelligence could thus increase exponentially and dramatically surpass humans. Science fiction writer Vernor Vinge named this scenario "singularity". Technological singularity is when accelerating progress in technologies will cause a runaway effect wherein artificial intelligence will exceed human intellectual capacity and control, thus radically changing or even ending civilization. Because the capabilities of such an intelligence may be impossible to comprehend, the technological singularity is an occurrence beyond which events are unpredictable or even unfathomable.[136]

Ray Kurzweil has used Moore's law (which describes the relentless exponential improvement in digital technology) to calculate that desktop computers will have the same processing power as human brains by the year 2029 and predicts that the singularity will occur in 2045.[137]
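
The arithmetic behind such forecasts is ordinary compound growth. A Python sketch with deliberately rough, clearly labeled assumptions (a two-year doubling period and an arbitrary baseline; neither figure is Kurzweil's actual model):

    # Compound-growth extrapolation in the style of Moore's-law
    # forecasts. Baseline and doubling time are assumptions only.
    def projected_capability(base_year: int, base_ops: float,
                             target_year: int,
                             doubling_years: float = 2.0) -> float:
        """Capability after repeated doublings:
        base * 2 ** (elapsed / doubling_period)."""
        return base_ops * 2 ** ((target_year - base_year) / doubling_years)

    # Hypothetical: 1e12 operations/second in 2009, projected to 2029.
    print(f"{projected_capability(2009, 1e12, 2029):.2e} ops/s")  # ~1.02e+15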

Transhumanism

Robot designer Hans Moravec, cyberneticist Kevin Warwick, and inventor Ray Kurzweil have predicted that humans and machines will merge in the future into cyborgs that are more capable and powerful than either.[138] This idea, called transhumanism, has roots in Aldous Huxley and Robert Ettinger.

Edward Fredkin argues that "artificial intelligence is the next stage in evolution", an idea first proposed in Samuel Butler's "Darwin among the Machines" as far back as 1863, and expanded upon by George Dyson in his book of the same name in 1998.[139]

Risks of AI

Widespread use of artificial intelligence could have unintended consequences that are dangerous or undesirable. Scientists from the Future of Life Institute, among others, described some short-term research goals to see how AI influences the economy, the laws and ethics that are involved with AI and how to minimize AI security risks. In the long-term, the scientists have proposed to continue optimizing function while minimizing possible security risks that come along with new technologies.[140]

Technological unemployment

The long-term economic effects of AI are uncertain. A survey of economists showed disagreement about whether the increasing use of robots and AI will cause a substantial increase in long-term unemployment, but they generally agree that it could be a net benefit if productivity gains are redistributed.[141] A 2017 study by PricewaterhouseCoopers projects that the People's Republic of China will gain the most economically from AI, with a boost of 26.1% of GDP by 2030.[142] A February 2020 European Union white paper on artificial intelligence advocated for artificial intelligence for economic benefits, including "improving healthcare (e.g. making diagnosis more precise, enabling better prevention of diseases), increasing the efficiency of farming, contributing to climate change mitigation and adaptation, [and] improving the efficiency of production systems through predictive maintenance", while acknowledging potential risks.[102]

The relationship between automation and employment is complicated. While automation eliminates old jobs, it also creates new jobs through micro-economic and macro-economic effects.[143] Unlike previous waves of automation, many middle-class jobs may be eliminated by artificial intelligence; The Economist states that "the worry that AI could do to white-collar jobs what steam power did to blue-collar ones during the Industrial Revolution" is "worth taking seriously".[144] Subjective estimates of the risk vary widely; for example, Michael Osborne and Carl Benedikt Frey estimate 47% of U.S. jobs are at "high risk" of potential automation, while an OECD report classifies only 9% of U.S. jobs as "high risk".[145][146][147] Jobs at extreme risk range from paralegals to fast food cooks, while job demand is likely to increase for care-related professions ranging from personal healthcare to the clergy.[148] Author Martin Ford and others go further and argue that many jobs are routine, repetitive and (to an AI) predictable; Ford warns that these jobs may be automated in the next couple of decades, and that many of the new jobs may not be "accessible to people with average capability", even with retraining. Economists point out that in the past technology has tended to increase rather than reduce total employment, but acknowledge that "we're in uncharted territory" with AI.[149]

Use by terrorists and other bad actors

The potential negative effects of AI and automation were a major issue for Andrew Yang's 2020 presidential campaign in the United States.[150] Irakli Beridze, Head of the Centre for Artificial Intelligence and Robotics at UNICRI, United Nations, has expressed that "I think the dangerous applications for AI, from my point of view, would be criminals or large terrorist organizations using it to disrupt large processes or simply do pure harm. [Terrorists could cause harm] via digital warfare, or it could be a combination of robotics, drones, with AI and other things as well that could be really dangerous. And, of course, other risks come from things like job losses. If we have massive numbers of people losing jobs and don't find a solution, it will be extremely dangerous. Things like lethal autonomous weapons systems should be properly governed—otherwise there's massive potential of misuse."[151]

Algorithmic bias

Some are concerned about algorithmic bias, that AI programs may unintentionally become biased after processing data that exhibits bias.[152] Algorithms already have numerous applications in legal systems. An example of this is COMPAS, a commercial program widely used by U.S. courts to assess the likelihood of a defendant becoming a recidivist. ProPublica claims that the COMPAS-assigned recidivism risk level of black defendants is far more likely to be an overestimate than that of white defendants.[153]
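
Audits like ProPublica's compare a classifier's error rates across groups. A minimal Python sketch of that logic on invented records (the numbers are made up and do not reproduce the actual COMPAS analysis):

    # Invented audit data: (group, flagged_high_risk, reoffended).
    RECORDS = [
        ("A", True, False), ("A", True, False), ("A", False, False),
        ("B", True, False), ("B", False, False), ("B", False, False),
    ]

    def false_positive_rate(group: str) -> float:
        """Share of non-reoffenders in a group flagged high-risk."""
        negatives = [r for r in RECORDS if r[0] == group and not r[2]]
        return sum(r[1] for r in negatives) / len(negatives)

    for g in ("A", "B"):
        print(g, round(false_positive_rate(g), 2))  # A 0.67, B 0.33
    # Unequal false-positive rates across groups are one common
    # operationalization of "bias"; other fairness criteria exist
    # and can conflict with one another.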

Existential risk

Physicist Stephen Hawking, Microsoft founder Bill Gates, history professor Yuval Noah Harari, and SpaceX founder Elon Musk have expressed concerns about the possibility that AI could evolve to the point that humans could not control it, with Hawking theorizing that this could "spell the end of the human race".[154][155][156][157]

The development of full artificial intelligence could spell the end of the human race. Once humans develop artificial intelligence, it will take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn't compete and would be superseded.

— Stephen Hawking[158]

In his book Superintelligence, philosopher Nick Bostrom provides an argument that artificial intelligence will pose a threat to humankind. He argues that sufficiently intelligent AI, if it chooses actions based on achieving some goal, will exhibit convergent behavior such as acquiring resources or protecting itself from being shut down. If this AI's goals do not fully reflect humanity's—one example is an AI told to compute as many digits of pi as possible—it might harm humanity in order to acquire more resources or prevent itself from being shut down, ultimately to better achieve its goal. Bostrom also emphasizes the difficulty of fully conveying humanity's values to an advanced AI. He uses the hypothetical example of giving an AI the goal to make humans smile to illustrate a misguided attempt. If the AI in that scenario were to become superintelligent, Bostrom argues, it may resort to methods that most humans would find horrifying, such as inserting "electrodes into the facial muscles of humans to cause constant, beaming grins" because that would be an efficient way to achieve its goal of making humans smile.[159] In his book Human Compatible, AI researcher Stuart J. Russell echoes some of Bostrom's concerns while also proposing an approach to developing provably beneficial machines focused on uncertainty and deference to humans,[160]:173 possibly involving inverse reinforcement learning.[160]:191–193

Concern over risk from artificial intelligence has led to some high-profile donations and investments. A group of prominent tech titans including Peter Thiel, Amazon Web Services and Musk have committed $1 billion to OpenAI, a nonprofit company aimed at championing responsible AI development.[161] The opinion of experts within the field of artificial intelligence is mixed, with sizable fractions both concerned and unconcerned by risk from eventual superhumanly-capable AI.[162] Other technology industry leaders believe that artificial intelligence is helpful in its current form and will continue to assist humans. Oracle CEO Mark Hurd has stated that AI "will actually create more jobs, not less jobs" as humans will be needed to manage AI systems.[163] Facebook CEO Mark Zuckerberg believes AI will "unlock a huge amount of positive things," such as curing disease and increasing the safety of autonomous cars.[164] In January 2015, Musk donated $10 million to the Future of Life Institute to fund research on understanding AI decision making. The goal of the institute is to "grow wisdom with which we manage" the growing power of technology. Musk also funds companies developing artificial intelligence such as DeepMind and Vicarious to "just keep an eye on what's going on with artificial intelligence.[165] I think there is potentially a dangerous outcome there."[166][167]

For the danger of uncontrolled advanced AI to be realized, the hypothetical AI would have to overpower or out-think all of humanity, which a minority of experts argue is a possibility far enough in the future to not be worth researching.[168][169] Other counterarguments revolve around humans being either intrinsically or convergently valuable from the perspective of an artificial intelligence.[170]

Ethical machines

Machines with intelligence have the potential to use their intelligence to prevent harm and minimize risk; they may have the ability to use ethical reasoning to better choose their actions in the world. As such, there is a need for policy making to devise policies for and regulate artificial intelligence and robotics.[171] Research in this area includes machine ethics, artificial moral agents, friendly AI, and ongoing discussion of building a human rights framework.[172]

Joseph Weizenbaum in Computer Power and Human Reason wrote that AI applications cannot, by definition, successfully simulate genuine human empathy and that the use of AI technology in fields such as customer service or psychotherapy[r] was deeply misguided. Weizenbaum was also bothered that AI researchers (and some philosophers) were willing to view the human mind as nothing more than a computer program (a position now known as computationalism). To Weizenbaum, these points suggest that AI research devalues human life.[174]

Artificial moral agents

Wendell Wallach introduced the concept of artificial moral agents (AMA) in his book Moral Machines.[175] For Wallach, AMAs have become a part of the research landscape of artificial intelligence, guided by two central questions, which he identifies as "Does Humanity Want Computers Making Moral Decisions"[176] and "Can (Ro)bots Really Be Moral".[177] For Wallach, the question is not centered on whether machines can demonstrate the equivalent of moral behavior, but on the constraints which society may place on the development of AMAs.[178]

Machine ethics

The field of machine ethics is concerned with giving machines ethical principles, or a procedure for discovering a way to resolve the ethical dilemmas they might encounter, enabling them to function in an ethically responsible manner through their own ethical decision making.[179] The field was delineated in the AAAI Fall 2005 Symposium on Machine Ethics: "Past research concerning the relationship between technology and ethics has largely focused on responsible and irresponsible use of technology by human beings, with a few people being interested in how human beings ought to treat machines. In all cases, only human beings have engaged in ethical reasoning. The time has come for adding an ethical dimension to at least some machines. Recognition of the ethical ramifications of behavior involving machines, as well as recent and potential developments in machine autonomy, necessitate this. In contrast to computer hacking, software property issues, privacy issues and other topics normally ascribed to computer ethics, machine ethics is concerned with the behavior of machines towards human users and other machines. Research in machine ethics is key to alleviating concerns with autonomous systems—it could be argued that the notion of autonomous machines without such a dimension is at the root of all fear concerning machine intelligence. Further, investigation of machine ethics could enable the discovery of problems with current ethical theories, advancing our thinking about Ethics."[180] Machine ethics is sometimes referred to as machine morality, computational ethics or computational morality. A variety of perspectives on this nascent field can be found in the collected edition "Machine Ethics",[179] which stems from the AAAI Fall 2005 Symposium on Machine Ethics.[180]

Malevolent and friendly AI

Political scientist Charles T. Rubin believes that AI can be neither designed nor guaranteed to be benevolent.[181] He argues that "any sufficiently advanced benevolence may be indistinguishable from malevolence." Humans should not assume machines or robots would treat us favorably because there is no a priori reason to believe that they would be sympathetic to our system of morality, which has evolved along with our particular biology (which AIs would not share). Hyper-intelligent software may not necessarily decide to support the continued existence of humanity and would be extremely difficult to stop. This topic has also recently begun to be discussed in academic publications as a real source of risks to civilization, humans, and planet Earth.

One proposal to deal with this is to ensure that the first generally intelligent AI is 'Friendly AI' and will be able to control subsequently developed AIs. Some question whether this kind of check could actually remain in place.

Leading AI researcher Rodney Brooks writes, "I think it is a mistake to be worrying about us developing malevolent AI anytime in the next few hundred years. I think the worry stems from a fundamental error in not distinguishing the difference between the very real recent advances in a particular aspect of AI and the enormity and complexity of building sentient volitional intelligence."[182]

Lethal autonomous weapons are of concern. As of 2015, over fifty countries were researching battlefield robots, including the United States, China, Russia, and the United Kingdom. Many people concerned about risk from superintelligent AI also want to limit the use of artificial soldiers and drones.[183]

Regulation

The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI);[184][185] it is therefore related to the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally, including in the European Union.[186] Regulation is considered necessary to both encourage AI and manage associated risks.[187][188] Regulation of AI through mechanisms such as review boards can also be seen as social means to approach the AI control problem.[189]

Given the concerns about data exploitation, the European Union also developed an artificial intelligence policy, with a working group studying ways to assure confidence in the use of artificial intelligence. These were issued in two white papers in the midst of the COVID-19 pandemic. One of the policies on artificial intelligence is called A European Approach to Excellence and Trust.[190][191][192]

In fiction

The word "robot" itself was coined by Karel Čapek in his 1921 play R.U.R., the title standing for "Rossum's Universal Robots"

Thought-capable artificial beings have appeared as storytelling devices since antiquity,[16] and have been a persistent theme in science fiction.

A common trope in these works began with Mary Shelley's Frankenstein, where a human creation becomes a threat to its masters. This includes such works as Arthur C. Clarke's and Stanley Kubrick's 2001: A Space Odyssey (both 1968), with HAL 9000, the murderous computer in charge of the Discovery One spaceship, as well as The Terminator (1984) and The Matrix (1999). In contrast, the rare loyal robots such as Gort from The Day the Earth Stood Still (1951) and Bishop from Aliens (1986) are less prominent in popular culture.[193]

Isaac Asimov introduced the Three Laws of Robotics in many books and stories, most notably the "Multivac" series about a super-intelligent computer of the same name. Asimov's laws are often brought up during lay discussions of machine ethics;[194] while almost all artificial intelligence researchers are familiar with Asimov's laws through popular culture, they generally consider the laws useless for many reasons, one of which is their ambiguity.[195]

Transhumanism (the merging of humans and machines) is explored in the manga Ghost in the Shell and the science-fiction series Dune. In the 1980s, artist Hajime Sorayama's Sexy Robots series was painted and published in Japan, depicting the organic human form with lifelike muscular metallic skins; his later book The Gynoids was used by or influenced movie makers, including George Lucas, and other creatives. Sorayama never considered these organic robots to be a real part of nature but always an unnatural product of the human mind, a fantasy existing in the mind even when realized in actual form.

Several works use AI to force us to confront the fundamental question of what makes us human, showing us artificial beings that have the ability to feel, and thus to suffer. This appears in Karel Čapek's R.U.R., the films A.I. Artificial Intelligence and Ex Machina, as well as the novel Do Androids Dream of Electric Sheep?, by Philip K. Dick. Dick considers the idea that our understanding of human subjectivity is altered by technology created with artificial intelligence.[196]


Explanatory notes

  1. ^ Definition of AI as the study of intelligent agents, drawn from the leading AI textbooks.
  2. ^ Stuart Russell and Peter Norvig characterize this definition as "thinking humanly" and reject it in favor of "acting rationally".[1]
  3. ^ Biological intelligence vs. intelligence in general:
    • Russell & Norvig 2003, pp. 2–3, who make the analogy with aeronautical engineering.
    • McCorduck 2004, pp. 100–101, who writes that there are "two major branches of artificial intelligence: one aimed at producing intelligent behavior regardless of how it was accomplished, and the other aimed at modeling intelligent processes found in nature, particularly human ones."
    • AI founder McCarthy's indifference to biological models. Kolata quotes McCarthy as writing: "This is AI, so we don't care if it's psychologically real".[122] McCarthy reiterated his position in 2006 at the AI@50 conference where he said "Artificial intelligence is not, by definition, simulation of human intelligence".[123]
  4. ^ This list of intelligent traits is based on the topics covered by the major AI textbooks, including: Russell & Norvig (2003), Luger & Stubblefield (2004), Poole, Mackworth & Goebel (1998) and Nilsson (1998)
  5. ^ See the Dartmouth proposal under Philosophy, below.
  6. ^ Daniel Crevier wrote "the conference is generally recognized as the official birthdate of the new science."[23] Russell and Norvig call the conference "the birth of artificial intelligence."[24]
  7. ^ Russell and Norvig wrote "for the next 20 years the field would be dominated by these people and their students."[24]
  8. ^ Russell and Norvig wrote "it was astonishing whenever a computer did anything kind of smartish".[27]
  9. ^ The programs described are Arthur Samuel's checkers program for the IBM 701, Daniel Bobrow's STUDENT, Newell and Simon's Logic Theorist and Terry Winograd's SHRDLU.
  10. ^ The funding initiatives include: Fifth Generation Project (Japan), Alvey (UK), MCC (US), and the SCI (US)
  11. ^ Embodied approaches to AI include Nouvelle AI,[38] Developmental robotics,[39] situated AI, behavior-based AI as well as others. A similar movement in cognitive science was the embodied mind thesis.
  12. ^ Russell and Norvig describe the move towards formal methods as the "Victory of the neats"[41]
  13. ^ Clark wrote: "After a half-decade of quiet breakthroughs in artificial intelligence, 2015 has been a landmark year. Computers are smarter and learning faster than ever."[10]
  14. ^ Alan Turing discussed the centrality of learning as early as 1950, in his classic paper "Computing Machinery and Intelligence".[73] In 1956, at the original Dartmouth AI summer conference, Ray Solomonoff wrote a report on unsupervised probabilistic machine learning: "An Inductive Inference Machine".[74]
  15. ^ This is a form of Tom Mitchell's widely quoted definition of machine learning: "A computer program is said to learn from experience E with respect to some task T and some performance measure P if its performance on T, as measured by P, improves with experience E."
  16. ^ Daniel Crevier wrote that "time has proven the accuracy and perceptiveness of some of Dreyfus's comments. Had he formulated them less aggressively, constructive actions they suggested might have been taken much earlier."[118][119]
  17. ^ This version is from Searle (1999), and is also quoted in Dennett 1991, p. 435. Searle's original formulation was "The appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states."[128] Strong AI is defined similarly by Russell & Norvig (2003, p. 947): "The assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the 'weak AI' hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the 'strong AI' hypothesis."
  18. ^ In the early 1970s, Kenneth Colby presented a version of Weizenbaum's ELIZA known as DOCTOR which he promoted as a serious therapeutic tool.[173]

References

  1. ^ Russell & Norvig 2009, p. 2.
  2. ^ Google 2016.
  3. ^ McCorduck 2004, p. 204.
  4. ^ Ashok83 2019.
  5. ^ Schank 1991, p. 38.
  6. ^ Crevier 1993, p. 109.
  7. ^ McCorduck (2004, pp. 426–441); Crevier (1993, pp. 161–162, 197–203, 211, 240); Russell & Norvig (2003, p. 24); NRC (1999, pp. 210–211); Newquist (1994, pp. 235–248)
  8. ^ Crevier (1993, pp. 115–117); Russell & Norvig (2003, p. 22); NRC (1999, pp. 212–213); Howe (1994); Newquist (1994, pp. 189–201)
  9. ^ McCorduck (2004, pp. 430–435); Crevier (1993, pp. 209–210); NRC (1999, pp. 214–216); Newquist (1994, pp. 301–318)
  10. ^ Clark 2015b.
  11. ^ Russell & Norvig (2003, p. 28); Kurzweil (2005, p. 265); NRC (1999, pp. 216–222); Newquist (1994, pp. 189–201)
  12. ^ Pennachin & Goertzel (2007); Roberts (2016)
  13. ^ Newquist 1994, pp. 45–53.
  14. ^ Spadafora 2016.
  15. ^ Lombardo, Boehm & Nairz 2020.
  16. ^ McCorduck (2004, pp. 4–5); Russell & Norvig (2003, p. 939)
  17. ^ McCorduck 2004, pp. 17–25.
  18. ^ McCorduck 2004, pp. 340–400.
  19. ^ Berlinski 2000.
  20. ^ McCorduck (2004, pp. 51–107); Crevier 1993, pp. 27–32; Russell & Norvig 2003, pp. 15, 940; Moravec 1988, p. 3
  21. ^ Russell & Norvig 2009, p. 16.
  22. ^ Haugeland 1985, pp. 112–117.
  23. ^ Crevier 1993, pp. 47–49.
  24. ^ Russell & Norvig 2003, p. 17.
  25. ^ McCorduck (2004, pp. 111–136); NRC (1999, pp. 200–201)
  26. ^ McCorduck 2004, pp. 129–130.
  27. ^ Russell & Norvig 2003, p. 18.
  28. ^ McCorduck (2004, pp. 243–252); Crevier (1993, pp. 52–107); Moravec (1988, p. 9); Russell & Norvig (2003, pp. 18–21)
  29. ^ McCorduck (2004, p. 131); Crevier (1993, pp. 51, 64–65); NRC (1999, pp. 204–205)
  30. ^ Howe 1994.
  31. ^ Newquist 1994, pp. 86–86.
  32. ^ Simon 1965, p. 96 quoted in Crevier 1993, p. 109
  33. ^ Minsky 1967, p. 2 quoted in Crevier 1993, p. 109
  34. ^ Lighthill 1973.
  35. ^ ACM (1998, I.2.1); Russell & Norvig (2003, pp. 22–24); Luger & Stubblefield (2004, pp. 227–331); Nilsson (1998, chpt. 17.4); McCorduck (2004, pp. 327–335, 434–435); Crevier (1993, pp. 145–62, 197–203); Newquist (1994, pp. 155–183)
  36. ^ Nilsson 1998, p. 7, who uses the term "sub-symbolic".
  37. ^ McCorduck (2004, pp. 454–462); Moravec (1988); Brooks (1990)
  38. ^ Brooks 1990.
  39. ^ Weng et al. (2001); Lungarella et al. (2003); Asada et al. (2009); Oudeyer (2010)
  40. ^ Crevier (1993, pp. 214–215); Russell & Norvig (2003, p. 25)
  41. ^ Russell & Norvig 2003, p. 25.
  42. ^ Russell & Norvig 2003, pp. 25–26; McCorduck (2004, pp. 486–487)
  43. ^ McKinsey 2018.
  44. ^ MIT Sloan Management Review (2018); Lorica (2017)
  45. ^ Problem solving, puzzle solving, game playing and deduction: * Russell & Norvig 2003, chpt. 3–9, * Poole, Mackworth & Goebel 1998, chpt. 2,3,7,9, * Luger & Stubblefield 2004, chpt. 3,4,6,8, * Nilsson 1998, chpt. 7–12
  46. ^ Uncertain reasoning: * Russell & Norvig 2003, pp. 452–644, * Poole, Mackworth & Goebel 1998, pp. 345–395, * Luger & Stubblefield 2004, pp. 333–381, * Nilsson 1998, chpt. 19
  47. ^ Intractability and efficiency and the combinatorial explosion: * Russell & Norvig 2003, pp. 9, 21–22
  48. ^ Psychological evidence of sub-symbolic reasoning: * Wason & Shapiro (1966) showed that people do poorly on completely abstract problems, but if the problem is restated to allow the use of intuitive social intelligence, performance dramatically improves. (See Wason selection task) * Kahneman, Slovic & Tversky (1982) have shown that people are terrible at elementary problems that involve uncertain reasoning. (See list of cognitive biases for several examples). * Lakoff & Núñez (2000) have controversially argued that even our skills at mathematics depend on knowledge and skills that come from "the body", i.e. sensorimotor and perceptual skills. (See Where Mathematics Comes From)
  49. ^ Knowledge representation: * ACM 1998, I.2.4, * Russell & Norvig 2003, pp. 320–363, * Poole, Mackworth & Goebel 1998, pp. 23–46, 69–81, 169–196, 235–277, 281–298, 319–345, * Luger & Stubblefield 2004, pp. 227–243, * Nilsson 1998, chpt. 18
  50. ^ Knowledge engineering: * Russell & Norvig 2003, pp. 260–266, * Poole, Mackworth & Goebel 1998, pp. 199–233, * Nilsson 1998, chpt. ~17.1–17.4
  51. ^ Representing categories and relations: Semantic networks, description logics, inheritance (including frames and scripts): * Russell & Norvig 2003, pp. 349–354, * Poole, Mackworth & Goebel 1998, pp. 174–177, * Luger & Stubblefield 2004, pp. 248–258, * Nilsson 1998, chpt. 18.3
  52. ^ Representing events and time: Situation calculus, event calculus, fluent calculus (including solving the frame problem): * Russell & Norvig 2003, pp. 328–341, * Poole, Mackworth & Goebel 1998, pp. 281–298, * Nilsson 1998, chpt. 18.2
  53. ^ Causal calculus: * Poole, Mackworth & Goebel 1998, pp. 335–337
  54. ^ Representing knowledge about knowledge: Belief calculus, modal logics: * Russell & Norvig 2003, pp. 341–344, * Poole, Mackworth & Goebel 1998, pp. 275–277
  55. ^ Sikos, Leslie F. (June 2017). Description Logics in Multimedia Reasoning. Cham: Springer. doi:10.1007/978-3-319-54066-5. ISBN 978-3-319-54066-5. S2CID 3180114. Archived from the original on 29 August 2017.
  56. ^ Ontology: * Russell & Norvig 2003, pp. 320–328
  57. ^ Smoliar, Stephen W.; Zhang, HongJiang (1994). "Content based video indexing and retrieval". IEEE Multimedia. 1 (2): 62–72. doi:10.1109/93.311653. S2CID 32710913.
  58. ^ Neumann, Bernd; Möller, Ralf (January 2008). "On scene interpretation with description logics". Image and Vision Computing. 26 (1): 82–101. doi:10.1016/j.imavis.2007.08.013.
  59. ^ Kuperman, G. J.; Reichley, R. M.; Bailey, T. C. (1 July 2006). "Using Commercial Knowledge Bases for Clinical Decision Support: Opportunities, Hurdles, and Recommendations". Journal of the American Medical Informatics Association. 13 (4): 369–371. doi:10.1197/jamia.M2055. PMC 1513681. PMID 16622160.
  60. ^ McGarry, Ken (1 December 2005). "A survey of interestingness measures for knowledge discovery". The Knowledge Engineering Review. 20 (1): 39–61. doi:10.1017/S0269888905000408. S2CID 14987656.
  61. ^ Bertini, M; Del Bimbo, A; Torniai, C (2006). "Automatic annotation and semantic retrieval of video sequences using multimedia ontologies". MM '06 Proceedings of the 14th ACM international conference on Multimedia. 14th ACM international conference on Multimedia. Santa Barbara: ACM. pp. 679–682.
  62. ^ Qualification problem: * McCarthy & Hayes 1969 * Russell & Norvig 2003[page needed] While McCarthy was primarily concerned with issues in the logical representation of actions, Russell & Norvig 2003 apply the term to the more general issue of default reasoning in the vast network of assumptions underlying all our commonsense knowledge.
  63. ^ Default reasoning and default logic, non-monotonic logics, circumscription, closed world assumption, abduction (Poole et al. places abduction under "default reasoning". Luger et al. places this under "uncertain reasoning"): * Russell & Norvig 2003, pp. 354–360, * Poole, Mackworth & Goebel 1998, pp. 248–256, 323–335, * Luger & Stubblefield 2004, pp. 335–363, * Nilsson 1998, ~18.3.3
  64. ^ Breadth of commonsense knowledge: * Russell & Norvig 2003, p. 21, * Crevier 1993, pp. 113–114, * Moravec 1988, p. 13, * Lenat & Guha 1989 (Introduction)
  65. ^ Dreyfus & Dreyfus 1986.
  66. ^ Gladwell 2005.
  67. ^ Expert knowledge as embodied intuition: * Dreyfus & Dreyfus 1986 (Hubert Dreyfus is a philosopher and critic of AI who was among the first to argue that most useful human knowledge was encoded sub-symbolically. See Dreyfus' critique of AI) * Gladwell 2005 (Gladwell's Blink is a popular introduction to sub-symbolic reasoning and knowledge.) * Hawkins & Blakeslee 2005 (Hawkins argues that sub-symbolic knowledge should be the primary focus of AI research.)
  68. ^ Planning: * ACM 1998, ~I.2.8, * Russell & Norvig 2003, pp. 375–459, * Poole, Mackworth & Goebel 1998, pp. 281–316, * Luger & Stubblefield 2004, pp. 314–329, * Nilsson 1998, chpt. 10.1–2, 22
  69. ^ Information value theory: * Russell & Norvig 2003, pp. 600–604
  70. ^ Classical planning: * Russell & Norvig 2003, pp. 375–430, * Poole, Mackworth & Goebel 1998, pp. 281–315, * Luger & Stubblefield 2004, pp. 314–329, * Nilsson 1998, chpt. 10.1–2, 22
  71. ^ Planning and acting in non-deterministic domains: conditional planning, execution monitoring, replanning and continuous planning: * Russell & Norvig 2003, pp. 430–449
  72. ^ Multi-agent planning and emergent behavior: * Russell & Norvig 2003, pp. 449–455
  73. ^ Turing 1950.
  74. ^ Solomonoff 1956.
  75. ^ a b Learning: * ACM 1998, I.2.6, * Russell & Norvig 2003, pp. 649–788, * Poole, Mackworth & Goebel 1998, pp. 397–438, * Luger & Stubblefield 2004, pp. 385–542, * Nilsson 1998, chpt. 3.3, 10.3, 17.5, 20
  76. ^ Jordan, M. I.; Mitchell, T. M. (16 July 2015). "Machine learning: Trends, perspectives, and prospects". Science. 349 (6245): 255–260. Bibcode:2015Sci...349..255J. doi:10.1126/science.aaa8415. PMID 26185243. S2CID 677218.
  77. ^ Reinforcement learning: * Russell & Norvig 2003, pp. 763–788 * Luger & Stubblefield 2004, pp. 442–449
  78. ^ Natural language processing: * ACM 1998, I.2.7 * Russell & Norvig 2003, pp. 790–831 * Poole, Mackworth & Goebel 1998, pp. 91–104 * Luger & Stubblefield 2004, pp. 591–632
  79. ^ Applications of natural language processing, including information retrieval (i.e. text mining) and machine translation: * Russell & Norvig 2003, pp. 840–857, * Luger & Stubblefield 2004, pp. 623–630
  80. ^ Cambria, Erik; White, Bebo (May 2014). "Jumping NLP Curves: A Review of Natural Language Processing Research [Review Article]". IEEE Computational Intelligence Magazine. 9 (2): 48–57. doi:10.1109/MCI.2014.2307227. S2CID 206451986.
  81. ^ Vincent, James (7 November 2019). "OpenAI has published the text-generating AI it said was too dangerous to share". The Verge. Archived from the original on 11 June 2020. Retrieved 11 June 2020.
  82. ^ Machine perception: * Russell & Norvig 2003, pp. 537–581, 863–898 * Nilsson 1998, ~chpt. 6
  83. ^ Speech recognition: * ACM 1998, ~I.2.7 * Russell & Norvig 2003, pp. 568–578
  84. ^ Object recognition: * Russell & Norvig 2003, pp. 885–892
  85. ^ Computer vision: * ACM 1998, I.2.10 * Russell & Norvig 2003, pp. 863–898 * Nilsson 1998, chpt. 6
  86. ^ Robotics: * ACM 1998, I.2.9, * Russell & Norvig 2003, pp. 901–942, * Poole, Mackworth & Goebel 1998, pp. 443–460
  87. ^ Moving and configuration space: * Russell & Norvig 2003, pp. 916–932
  88. ^ Tecuci 2012.
  89. ^ Robotic mapping (localization, etc): * Russell & Norvig 2003, pp. 908–915
  90. ^ Cadena, Cesar; Carlone, Luca; Carrillo, Henry; Latif, Yasir; Scaramuzza, Davide; Neira, Jose; Reid, Ian; Leonard, John J. (December 2016). "Past, Present, and Future of Simultaneous Localization and Mapping: Toward the Robust-Perception Age". IEEE Transactions on Robotics. 32 (6): 1309–1332. arXiv:1606.05830. Bibcode:2016arXiv160605830C. doi:10.1109/TRO.2016.2624754. S2CID 2596787.
  91. ^ Moravec 1988, p. 15.
  92. ^ Chan, Szu Ping (15 November 2015). "This is what will happen when robots take over the world". Archived from the original on 24 April 2018. Retrieved 23 April 2018.
  93. ^ "IKEA furniture and the limits of AI". The Economist. 2018. Archived from the original on 24 April 2018. Retrieved 24 April 2018.
  94. ^ "Kismet". MIT Artificial Intelligence Laboratory, Humanoid Robotics Group. Archived from the original on 17 October 2014. Retrieved 25 October 2014.
  95. ^ Thro 1993.
  96. ^ Edelson 1991.
  97. ^ Tao & Tan 2005.
  98. ^ Emotion and affective computing: * Minsky 2006
  99. ^ Waddell, Kaveh (2018). "Chatbots Have Entered the Uncanny Valley". The Atlantic. Archived from the original on 24 April 2018. Retrieved 24 April 2018.
  100. ^ Poria, Soujanya; Cambria, Erik; Bajpai, Rajiv; Hussain, Amir (September 2017). "A review of affective computing: From unimodal analysis to multimodal fusion". Information Fusion. 37: 98–125. doi:10.1016/j.inffus.2017.02.003. hdl:1893/25490.
  101. ^ a b Russell & Norvig 2009, p. 1.
  102. ^ a b White Paper: On Artificial Intelligence – A European approach to excellence and trust (PDF). Brussels: European Commission. 2020. p. 1. Archived (PDF) from the original on 20 February 2020. Retrieved 20 February 2020.
  103. ^ "AI set to exceed human brain power". CNN. 9 August 2006. Archived from the original on 19 February 2008.
  104. ^ "Using AI to predict flight delays". Ishti.org. Archived 20 November 2018 at the Wayback Machine.
  105. ^ N. Aletras; D. Tsarapatsanis; D. Preotiuc-Pietro; V. Lampos (2016). "Predicting judicial decisions of the European Court of Human Rights: a Natural Language Processing perspective". PeerJ Computer Science. 2: e93. doi:10.7717/peerj-cs.93.
  106. ^ "The Economist Explains: Why firms are piling into artificial intelligence". The Economist. 31 March 2016. Archived from the original on 8 May 2016. Retrieved 19 May 2016.
  107. ^ Lohr, Steve (28 February 2016). "The Promise of Artificial Intelligence Unfolds in Small Steps". The New York Times. Archived from the original on 29 February 2016. Retrieved 29 February 2016.
  108. ^ Frangoul, Anmar (14 June 2019). "A Californian business is using A.I. to change the way we think about energy storage". CNBC. Archived from the original on 25 July 2020. Retrieved 5 November 2019.
  109. ^ Wakefield, Jane (15 June 2016). "Social media 'outstrips TV' as news source for young people". BBC News. Archived from the original on 24 June 2016.
  110. ^ Smith, Mark (22 July 2016). "So you think you chose to read this article?". BBC News. Archived from the original on 25 July 2016.
  111. ^ Brown, Eileen. "Half of Americans do not believe deepfake news could target them online". ZDNet. Archived from the original on 6 November 2019. Retrieved 3 December 2019.
  112. ^ Turing's original publication of the Turing test: Historical influence and philosophical implications:
  113. ^ Turing, Alan (1948), "Machine Intelligence", in Copeland, B. Jack (ed.), The Essential Turing: The ideas that gave birth to the computer age, Oxford: Oxford University Press, p. 412, ISBN 978-0-19-825080-7
  114. ^ Dartmouth proposal: Historical significance:
  115. ^ Physical symbol system hypothesis: Historical significance:
  116. ^ Russell & Norvig 2003, pp. 9, 21–22.
  117. ^ Dreyfus arguments: Historical significance and philosophical implications:
  118. ^ Crevier 1993, p. 125.
  119. ^ McCorduck 2004, p. 236.
  120. ^ a b Langley 2011.
  121. ^ Katz 2012.
  122. ^ Kolata 1982.
  123. ^ Maker 2006.
  124. ^ Neats vs. scruffies: * McCorduck 2004, pp. 421–424, 486–489 * Crevier 1993, p. 168 * Nilsson 1983, pp. 10–11
  125. ^ Russell & Norvig 2003, p. 947.
  126. ^ Chalmers 1995.
  127. ^ Horst, Steven (2005). "The Computational Theory of Mind" Archived 11 September 2018 at the Wayback Machine in The Stanford Encyclopedia of Philosophy
  128. ^ Searle 1980, p. 1.
  129. ^ Searle's Chinese room argument: Discussion:
  130. ^ Robot rights: * Russell & Norvig 2003, p. 964 Prematurity of: * Henderson 2007 In fiction: * McCorduck (2004, pp. 190–25) discusses Frankenstein and identifies the key ethical issues as scientific hubris and the suffering of the monster, i.e. robot rights.
  131. ^ "Robots could demand legal rights". BBC News. 21 December 2006. Archived from the original on 15 October 2019. Retrieved 3 February 2011.
  132. ^ Evans, Woody (2015). "Posthuman Rights: Dimensions of Transhuman Worlds". Teknokultura. 12 (2). doi:10.5209/rev_TK.2015.v12.n2.49072.
  133. ^ maschafilm. "Content: Plug & Pray Film – Artificial Intelligence – Robots -". plugandpray-film.de. Archived from the original on 12 February 2016.
  134. ^ Roberts 2016.
  135. ^ Omohundro, Steve (2008). The Nature of Self-Improving Artificial Intelligence. Presented and distributed at the 2007 Singularity Summit, San Francisco, CA.
  136. ^ Vinge 1993; Russell & Norvig 2003, p. 963
  137. ^ Kurzweil 2005.
  138. ^ Transhumanism: * Moravec 1988 * Kurzweil 2005 * Russell & Norvig 2003, p. 963
  139. ^ AI as evolution: * Edward Fredkin is quoted in McCorduck (2004, p. 401). * Butler 1863 * Dyson 1998
  140. ^ Russell, Stuart; Dewey, Daniel; Tegmark, Max. "Research Priorities for Robust and Beneficial Artificial Intelligence". AI Magazine 36:4 (2015). 8 December 2016.
  141. ^ "Robots and Artificial Intelligence". www.igmchicago.org. Archived from the original on 1 May 2019. Retrieved 3 July 2019.
  142. ^ "Sizing the prize: PwC's Global AI Study – Exploiting the AI Revolution" (PDF). Archived (PDF) from the original on 18 November 2020. Retrieved 11 November 2020.
  143. ^ E McGaughey, 'Will Robots Automate Your Job Away? Full Employment, Basic Income, and Economic Democracy' (2018) SSRN, part 2(3) Archived 24 May 2018 at the Wayback Machine
  144. ^ "Automation and anxiety". The Economist. 9 May 2015. Archived from the original on 12 January 2018. Retrieved 13 January 2018.
  145. ^ Lohr, Steve (2017). "Robots Will Take Jobs, but Not as Fast as Some Fear, New Report Says". The New York Times. Archived from the original on 14 January 2018. Retrieved 13 January 2018.
  146. ^ Frey, Carl Benedikt; Osborne, Michael A (1 January 2017). "The future of employment: How susceptible are jobs to computerisation?". Technological Forecasting and Social Change. 114: 254–280. CiteSeerX 10.1.1.395.416. doi:10.1016/j.techfore.2016.08.019. ISSN 0040-1625.
  147. ^ Arntz, Melanie, Terry Gregory, and Ulrich Zierahn. "The risk of automation for jobs in OECD countries: A comparative analysis." OECD Social, Employment, and Migration Working Papers 189 (2016). p. 33.
  148. ^ Mahdawi, Arwa (26 June 2017). "What jobs will still be around in 20 years? Read this to prepare your future". The Guardian. Archived from the original on 14 January 2018. Retrieved 13 January 2018.
  149. ^ Ford, Martin; Colvin, Geoff (6 September 2015). "Will robots create more jobs than they destroy?". The Guardian. Archived from the original on 16 June 2018. Retrieved 13 January 2018.
  150. ^ Simon, Matt (1 April 2019). "Andrew Yang's Presidential Bid Is So Very 21st Century". Wired. Archived from the original on 24 June 2019. Retrieved 2 May 2019 – via www.wired.com.
  151. ^ "Five experts share what scares them the most about AI". 5 September 2018. Archived from the original on 8 December 2019. Retrieved 8 December 2019.
  152. ^ "Commentary: Bad news. Artificial intelligence is biased". CNA. 12 January 2019. Archived from the original on 12 January 2019. Retrieved 19 June 2020.
  153. ^ Jeff Larson, Julia Angwin (23 May 2016). "How We Analyzed the COMPAS Recidivism Algorithm". ProPublica. Archived from the original on 29 April 2019. Retrieved 19 June 2020.
  154. ^ Rawlinson, Kevin (29 January 2015). "Microsoft's Bill Gates insists AI is a threat". BBC News. Archived from the original on 29 January 2015. Retrieved 30 January 2015.
  155. ^ Holley, Peter (28 January 2015). "Bill Gates on dangers of artificial intelligence: 'I don't understand why some people are not concerned'". The Washington Post. ISSN 0190-8286. Archived from the original on 30 October 2015. Retrieved 30 October 2015.
  156. ^ Gibbs, Samuel (27 October 2014). "Elon Musk: artificial intelligence is our biggest existential threat". The Guardian. Archived from the original on 30 October 2015. Retrieved 30 October 2015.
  157. ^ Churm, Philip Andrew (14 May 2019). "Yuval Noah Harari talks politics, technology and migration". euronews. Archived from the original on 14 May 2019. Retrieved 15 November 2020.
  158. ^ Cellan-Jones, Rory (2 December 2014). "Stephen Hawking warns artificial intelligence could end mankind". BBC News. Archived from the original on 30 October 2015. Retrieved 30 October 2015.
  159. ^ Bostrom, Nick (2015). "What happens when our computers get smarter than we are?". TED (conference). Archived from the original on 25 July 2020. Retrieved 30 January 2020.
  160. ^ a b Russell, Stuart (8 October 2019). Human Compatible: Artificial Intelligence and the Problem of Control. United States: Viking. ISBN 978-0-525-55861-3. OCLC 1083694322.
  161. ^ "Tech titans like Elon Musk are spending $1 billion to save you from terminators". The Washington Post. Archived from the original on 7 June 2016.
  162. ^ Müller, Vincent C.; Bostrom, Nick (2014). "Future Progress in Artificial Intelligence: A Poll Among Experts" (PDF). AI Matters. 1 (1): 9–11. doi:10.1145/2639475.2639478. S2CID 8510016. Archived (PDF) from the original on 15 January 2016.
  163. ^ "Oracle CEO Mark Hurd sees no reason to fear ERP AI". SearchERP. Archived from the original on 6 May 2019. Retrieved 6 May 2019.
  164. ^ "Mark Zuckerberg responds to Elon Musk's paranoia about AI: 'AI is going to... help keep our communities safe.'". Business Insider. 25 May 2018. Archived from the original on 6 May 2019. Retrieved 6 May 2019.
  165. ^ "The mysterious artificial intelligence company Elon Musk invested in is developing game-changing smart computers". Tech Insider. Archived from the original on 30 October 2015. Retrieved 30 October 2015.
  166. ^ Clark 2015a.
  167. ^ "Elon Musk Is Donating $10M Of His Own Money To Artificial Intelligence Research". Fast Company. 15 January 2015. Archived from the original on 30 October 2015. Retrieved 30 October 2015.
  168. ^ "Is artificial intelligence really an existential threat to humanity?". Bulletin of the Atomic Scientists. 9 August 2015. Archived from the original on 30 October 2015. Retrieved 30 October 2015.
  169. ^ "The case against killer robots, from a guy actually working on artificial intelligence". Fusion.net. Archived from the original on 4 February 2016. Retrieved 31 January 2016.
  170. ^ "Will artificial intelligence destroy humanity? Here are 5 reasons not to worry". Vox. 22 August 2014. Archived from the original on 30 October 2015. Retrieved 30 October 2015.
  171. ^ Iphofen, Ron; Kritikos, Mihalis (3 January 2019). "Regulating artificial intelligence and robotics: ethics by design in a digital society". Contemporary Social Science. 16 (2): 170–184. doi:10.1080/21582041.2018.1563803. ISSN 2158-2041. S2CID 59298502.
  172. ^ "Ethical AI Learns Human Rights Framework". Voice of America. Archived from the original on 11 November 2019. Retrieved 10 November 2019.
  173. ^ Crevier 1993, pp. 132–144.
  174. ^ Joseph Weizenbaum's critique of AI: * Weizenbaum 1976 * Crevier 1993, pp. 132–144 * McCorduck 2004, pp. 356–373 * Russell & Norvig 2003, p. 961 Weizenbaum (the AI researcher who developed the first chatterbot program, ELIZA) argued in 1976 that the misuse of artificial intelligence has the potential to devalue human life.
  175. ^ Wallach, Wendell (2010). Moral Machines. Oxford University Press.
  176. ^ Wallach 2010, pp. 37–54.
  177. ^ Wallach 2010, pp. 55–73.
  178. ^ Wallach 2010, "Introduction".
  179. ^ a b Michael Anderson and Susan Leigh Anderson (2011), Machine Ethics, Cambridge University Press.
  180. ^ a b "Machine Ethics". aaai.org. Archived from the original on 29 November 2014.
  181. ^ Rubin, Charles (Spring 2003). "Artificial Intelligence and Human Nature". The New Atlantis. 1: 88–100. Archived from the original on 11 June 2012.
  182. ^ Brooks, Rodney (10 November 2014). "artificial intelligence is a tool, not a threat". Archived from the original on 12 November 2014.
  183. ^ "Stephen Hawking, Elon Musk, and Bill Gates Warn About Artificial Intelligence". Observer. 19 August 2015. Archived from the original on 30 October 2015. Retrieved 30 October 2015.
  184. ^ Berryhill, Jamie; Heang, Kévin Kok; Clogher, Rob; McBride, Keegan (2019). Hello, World: Artificial Intelligence and its Use in the Public Sector (PDF). Paris: OECD Observatory of Public Sector Innovation. Archived (PDF) from the original on 20 December 2019. Retrieved 9 August 2020.
  185. ^ Barfield, Woodrow; Pagallo, Ugo (2018). Research handbook on the law of artificial intelligence. Cheltenham, UK. ISBN 978-1-78643-904-8. OCLC 1039480085.
  186. ^ Law Library of Congress (U.S.). Global Legal Research Directorate, issuing body. Regulation of artificial intelligence in selected jurisdictions. LCCN 2019668143. OCLC 1110727808.
  187. ^ Wirtz, Bernd W.; Weyerer, Jan C.; Geyer, Carolin (24 July 2018). "Artificial Intelligence and the Public Sector – Applications and Challenges". International Journal of Public Administration. 42 (7): 596–615. doi:10.1080/01900692.2018.1498103. ISSN 0190-0692. S2CID 158829602. Archived from the original on 18 August 2020. Retrieved 22 August 2020.
  188. ^ Buiten, Miriam C (2019). "Towards Intelligent Regulation of Artificial Intelligence". European Journal of Risk Regulation. 10 (1): 41–59. doi:10.1017/err.2019.8. ISSN 1867-299X.
  189. ^ Sotala, Kaj; Yampolskiy, Roman V (19 December 2014). "Responses to catastrophic AGI risk: a survey". Physica Scripta. 90 (1): 018001. doi:10.1088/0031-8949/90/1/018001. ISSN 0031-8949.
  190. ^ "Does This Change Everything? Coronavirus and your private data". European Investment Bank. Archived from the original on 7 June 2021. Retrieved 7 June 2021.
  191. ^ "White Paper on Artificial Intelligence – a European approach to excellence and trust | Shaping Europe's digital future". digital-strategy.ec.europa.eu. Retrieved 7 June 2021.
  192. ^ "What's Ahead for a Cooperative Regulatory Agenda on Artificial Intelligence?". www.csis.org. Archived from the original on 7 June 2021. Retrieved 7 June 2021.
  193. ^ Buttazzo, G. (July 2001). "Artificial consciousness: Utopia or real possibility?". Computer. 34 (7): 24–30. doi:10.1109/2.933500.
  194. ^ Anderson, Susan Leigh. "Asimov's 'three laws of robotics' and machine metaethics." AI & Society 22.4 (2008): 477–493.
  195. ^ McCauley, Lee (2007). "AI armageddon and the three laws of robotics". Ethics and Information Technology. 9 (2): 153–164. CiteSeerX 10.1.1.85.8904. doi:10.1007/s10676-007-9138-2. S2CID 37272949.
  196. ^ Galvan, Jill (1 January 1997). "Entering the Posthuman Collective in Philip K. Dick's "Do Androids Dream of Electric Sheep?"". Science Fiction Studies. 24 (3): 413–429. JSTOR 4240644.

AI textbooks

  • Luger, George; Stubblefield, William (2004). Artificial Intelligence: Structures and Strategies for Complex Problem Solving (5th ed.). Benjamin/Cummings. ISBN 978-0-8053-4780-7.
  • Nilsson, Nils (1998). Artificial Intelligence: A New Synthesis. Morgan Kaufmann. ISBN 978-1-55860-467-4.
  • Poole, David; Mackworth, Alan; Goebel, Randy (1998). Computational Intelligence: A Logical Approach. New York: Oxford University Press. ISBN 978-0-19-510270-3.
  • Russell, Stuart J.; Norvig, Peter (2003). Artificial Intelligence: A Modern Approach (2nd ed.). Upper Saddle River, New Jersey: Prentice Hall. ISBN 0-13-790395-2.
  • Russell, Stuart J.; Norvig, Peter (2009). Artificial Intelligence: A Modern Approach (3rd ed.). Upper Saddle River, New Jersey: Prentice Hall. ISBN 978-0-13-604259-4.

History of AI

  • Crevier, Daniel (1993), AI: The Tumultuous Search for Artificial Intelligence, New York, NY: BasicBooks, ISBN 0-465-02997-3.
  • McCorduck, Pamela (2004), Machines Who Think (2nd ed.), Natick, MA: A. K. Peters, Ltd., ISBN 1-56881-205-1.
  • Newquist, HP (1994). The Brain Makers: Genius, Ego, And Greed In The Quest For Machines That Think. New York: Macmillan/SAMS. ISBN 978-0-672-30412-5.
  • Nilsson, Nils (2009). The Quest for Artificial Intelligence: A History of Ideas and Achievements. New York: Cambridge University Press. ISBN 978-0-521-12293-1.

Other sources

Further reading

  • Autor, David H., "Why Are There Still So Many Jobs? The History and Future of Workplace Automation" (2015) 29(3) Journal of Economic Perspectives 3.
  • Boden, Margaret, Mind As Machine, Oxford University Press, 2006.
  • Cukier, Kenneth, "Ready for Robots? How to Think about the Future of AI", Foreign Affairs, vol. 98, no. 4 (July/August 2019), pp. 192–98. George Dyson, historian of computing, writes (in what might be called "Dyson's Law") that "Any system simple enough to be understandable will not be complicated enough to behave intelligently, while any system complicated enough to behave intelligently will be too complicated to understand." (p. 197.) Computer scientist Alex Pentland writes: "Current AI machine-learning algorithms are, at their core, dead simple stupid. They work, but they work by brute force." (p. 198.)
  • Domingos, Pedro, "Our Digital Doubles: AI will serve our species, not control it", Scientific American, vol. 319, no. 3 (September 2018), pp. 88–93.
  • Gopnik, Alison, "Making AI More Human: Artificial intelligence has staged a revival by starting to incorporate what we know about how children learn", Scientific American, vol. 316, no. 6 (June 2017), pp. 60–65.
  • Johnston, John (2008) The Allure of Machinic Life: Cybernetics, Artificial Life, and the New AI, MIT Press.
  • Koch, Christof, "Proust among the Machines", Scientific American, vol. 321, no. 6 (December 2019), pp. 46–49. Christof Koch doubts the possibility of "intelligent" machines attaining consciousness, because "[e]ven the most sophisticated brain simulations are unlikely to produce conscious feelings." (p. 48.) According to Koch, "Whether machines can become sentient [is important] for ethical reasons. If computers experience life through their own senses, they cease to be purely a means to an end determined by their usefulness to... humans. Per GNW [the Global Neuronal Workspace theory], they turn from mere objects into subjects... with a point of view.... Once computers' cognitive abilities rival those of humanity, their impulse to push for legal and political rights will become irresistible—the right not to be deleted, not to have their memories wiped clean, not to suffer pain and degradation. The alternative, embodied by IIT [Integrated Information Theory], is that computers will remain only supersophisticated machinery, ghostlike empty shells, devoid of what we value most: the feeling of life itself." (p. 49.)
  • Marcus, Gary, "Am I Human?: Researchers need new ways to distinguish artificial intelligence from the natural kind", Scientific American, vol. 316, no. 3 (March 2017), pp. 58–63. A stumbling block to AI has been an incapacity for reliable disambiguation. An example is the "pronoun disambiguation problem": a machine has no way of determining to whom or what a pronoun in a sentence refers. (p. 61.)
  • E McGaughey, 'Will Robots Automate Your Job Away? Full Employment, Basic Income, and Economic Democracy' (2018) SSRN, part 2(3) Archived 24 May 2018 at the Wayback Machine.
  • George Musser, "Artificial Imagination: How machines could learn creativity and common sense, among other human qualities", Scientific American, vol. 320, no. 5 (May 2019), pp. 58–63.
  • Myers, Courtney Boyd, ed. (2009). "The AI Report" Archived 29 July 2017 at the Wayback Machine. Forbes, June 2009.
  • Raphael, Bertram (1976). The Thinking Computer. W.H. Freeman and Co. ISBN 978-0716707233. Archived from the original on 26 July 2020. Retrieved 22 August 2020.
  • Scharre, Paul, "Killer Apps: The Real Dangers of an AI Arms Race", Foreign Affairs, vol. 98, no. 3 (May/June 2019), pp. 135–44. "Today's AI technologies are powerful but unreliable. Rules-based systems cannot deal with circumstances their programmers did not anticipate. Learning systems are limited by the data on which they were trained. AI failures have already led to tragedy. Advanced autopilot features in cars, although they perform well in some circumstances, have driven cars without warning into trucks, concrete barriers, and parked cars. In the wrong situation, AI systems go from supersmart to superdumb in an instant. When an enemy is trying to manipulate and hack an AI system, the risks are even greater." (p. 140.)
  • Serenko, Alexander (2010). "The development of an AI journal ranking based on the revealed preference approach" (PDF). Journal of Informetrics. 4 (4): 447–59. doi:10.1016/j.joi.2010.04.001. Archived (PDF) from the original on 4 October 2013. Retrieved 24 August 2013.
  • Serenko, Alexander; Michael Dohan (2011). "Comparing the expert survey and citation impact journal ranking methods: Example from the field of Artificial Intelligence" (PDF). Journal of Informetrics. 5 (4): 629–49. doi:10.1016/j.joi.2011.06.002. Archived (PDF) from the original on 4 October 2013. Retrieved 12 September 2013.
  • Tom Simonite (29 December 2014). "2014 in Computing: Breakthroughs in Artificial Intelligence". MIT Technology Review.
  • Sun, R. & Bookman, L. (eds.), Computational Architectures: Integrating Neural and Symbolic Processes. Kluwer Academic Publishers, Needham, MA. 1994.
  • Taylor, Paul, "Insanely Complicated, Hopelessly Inadequate" (review of Brian Cantwell Smith, The Promise of Artificial Intelligence: Reckoning and Judgment, MIT, 2019, ISBN 978-0262043045, 157 pp.; Gary Marcus and Ernest Davis, Rebooting AI: Building Artificial Intelligence We Can Trust, Ballantine, 2019, ISBN 978-1524748258, 304 pp.; Judea Pearl and Dana Mackenzie, The Book of Why: The New Science of Cause and Effect, Penguin, 2019, ISBN 978-0141982410, 418 pp.), London Review of Books, vol. 43, no. 2 (21 January 2021), pp. 37–39. Paul Taylor writes (p. 39): "Perhaps there is a limit to what a computer can do without knowing that it is manipulating imperfect representations of an external reality."
  • Tooze, Adam, "Democracy and Its Discontents", The New York Review of Books, vol. LXVI, no. 10 (6 June 2019), pp. 52–53, 56–57. "Democracy has no clear answer for the mindless operation of bureaucratic and technological power. We may indeed be witnessing its extension in the form of artificial intelligence and robotics. Likewise, after decades of dire warning, the environmental problem remains fundamentally unaddressed.... Bureaucratic overreach and environmental catastrophe are precisely the kinds of slow-moving existential challenges that democracies deal with very badly.... Finally, there is the threat du jour: corporations and the technologies they promote." (pp. 56–57.)
