Murray Shanahan

From Wikipedia, the free encyclopedia
Murray Shanahan
Born: Murray Patrick Shanahan
Alma mater: Imperial College London (BSc); University of Cambridge (PhD)
Scientific career
Fields: Artificial intelligence; Neurodynamics; Consciousness[1]
Institutions: Imperial College London; DeepMind
Thesis: Exploiting dependencies in search and inference mechanisms
Doctoral advisor: William F. Clocksin[2]
Website: www.doc.ic.ac.uk/~mpsha

Murray Patrick Shanahan is a Professor of Cognitive Robotics at Imperial College London,[3] in the Department of Computing, and a senior scientist at DeepMind.[4] He researches artificial intelligence, robotics, and cognitive science.[1][5]

Education

Shanahan was educated at Imperial College London[6] and completed his PhD at the University of Cambridge in 1987,[7] supervised by William F. Clocksin.[2]

Career and research

In the Department of Computing at Imperial College London, Shanahan was a postdoctoral researcher from 1987 to 1991 and an advanced research fellow from 1991 to 1995. He was then a senior research fellow at Queen Mary & Westfield College from 1995 to 1998. Shanahan returned to Imperial, joining the Department of Electrical Engineering and then, in 2005, the Department of Computing, where he was promoted from Reader to Professor in 2006.[6]

Shanahan was a scientific advisor for Alex Garland's 2014 film Ex Machina.[8] Garland credited Shanahan with correcting an error regarding the Turing test in his initial scripts.[9] As of 2016, Shanahan sits on the six-person ethics board of the Texan startup Lucid.AI,[10] and as of 2017 he is on the external advisory board of the Cambridge Centre for the Study of Existential Risk.[11][12]

In 2016, Shanahan and his colleagues published a proof of concept for "deep symbolic reinforcement learning", a hybrid AI architecture that combines GOFAI with neural networks and exhibits a form of transfer learning.[13][14] In 2017, citing "the potential [brain drain] on academia of the current tech hiring frenzy" as an issue of concern, Shanahan negotiated a joint position at Imperial College London and DeepMind.[4] The Atlantic and Wired UK have characterized Shanahan as an influential researcher.[15][16]

Books

In 2010, Shanahan published Embodiment and the Inner Life: Cognition and Consciousness in the Space of Possible Minds, a book that helped inspire the 2014 film Ex Machina.[17] The book argues that cognition revolves around a process of "inner rehearsal" by an embodied entity working to predict the consequences of its physical actions.[18]

In 2015, Shanahan published The Technological Singularity, which runs through various scenarios following the invention of an artificial intelligence that makes better versions of itself and rapidly outcompetes humans.[19] The book aims to be an evenhanded primer on the issues surrounding superhuman intelligence.[20] Shanahan takes the view that we do not know how superintelligences will behave: whether they will be friendly or hostile, predictable or inscrutable.[21]

Shanahan also authored Solving the Frame Problem (MIT Press, 1997) and co-authored Search, Inference and Dependencies in Artificial Intelligence (Ellis Horwood, 1989).[6]

Views

As of the 2010s, Shanahan characterizes AI as lacking the common sense of a human child. He endorses research into artificial general intelligence (AGI) to remedy this, arguing that AI systems deployed in areas such as medical diagnosis and automated vehicles should have common-sense abilities in order to be safer and more effective. Shanahan states that there is no need to panic about an AI takeover, because multiple conceptual breakthroughs will be needed for AGI and "it is impossible to know when [AGI] might be achievable".[22] He states: "The AI community does not think it's a substantial worry, whereas the public does think it's much more of an issue. The right place to be is probably in-between those two extremes."

In 2014, Shanahan said there would be no AGI within the next ten to twenty years, but added that "on the other hand it's probably a good idea for AI researchers to start thinking [now] about the [existential risk] issues that Stephen Hawking and others have raised."[23] Shanahan is confident that AGI will eventually be achieved.[24] In 2015, he speculated that AGI is "possible but unlikely" between 2025 and 2050, becoming "increasingly likely, but still not certain" in the second half of the 21st century.[8] Shanahan has advocated that such AGI be taught human empathy.[25]

References

  1. Murray Shanahan publications indexed by Google Scholar.
  2. Murray Shanahan at the Mathematics Genealogy Project.
  3. "How to make a digital human brain". Fox News. 13 June 2013. Retrieved 8 March 2016.
  4. Sample, Ian (1 November 2017). "'We can't compete': why universities are losing their best AI scientists". The Guardian. Retrieved 7 June 2020.
  5. Murray Shanahan at the DBLP Bibliography Server.
  6. "Murray Shanahan". www.doc.ic.ac.uk.
  7. Shanahan, Murray Patrick (1987). Exploiting dependencies in search and inference mechanisms. PhD thesis, University of Cambridge. OCLC 53611159. EThOS uk.bl.ethos.252643.
  8. "AI: will the machines ever rise up?". The Guardian. 26 June 2015. Retrieved 7 June 2020.
  9. "Inside 'Devs,' a Dreamy Silicon Valley Quantum Thriller". Wired. March 2020. Retrieved 7 June 2020.
  10. "The biggest mystery in AI right now is the ethics board that Google set up after buying DeepMind". Business Insider. 26 March 2016. Retrieved 22 April 2016.
  11. Shead, Sam (25 May 2020). "How Britain's oldest universities are trying to protect humanity from risky A.I." CNBC. Retrieved 7 June 2020.
  12. "Team". Archived from the original on 7 November 2017.
  13. Vincent, James (10 October 2016). "These are three of the biggest problems facing today's AI". The Verge. Retrieved 7 June 2020.
  14. Adee, Sally (2016). "Basic common sense is key to building more intelligent machines". New Scientist. Retrieved 7 June 2020.
  15. Ball, Philip (25 July 2017). "Why Philosophers Are Obsessed With Brains in Jars". The Atlantic. Retrieved 7 June 2020. "Embodiment is central to thought itself, according to the AI guru Murray Shanahan."
  16. Manthorpe, Rowland (12 October 2016). "The UK has a new AI centre – so when robots kill, we know who to blame". Wired UK. Retrieved 7 June 2020. "The list of researchers on the Centre's nine projects features a roll call of AI luminaries: Nick Bostrom, director of Oxford's Future of Humanity Institute, is leading one, as are Imperial College's Murray Shanahan and Berkeley's Stuart Russell."
  17. O'Sullivan, Michael (1 May 2015). "Why are we obsessed with robots?". The Washington Post. Retrieved 7 June 2020.
  18. Ball, Philip (25 July 2017). "Why Philosophers Are Obsessed With Brains in Jars". The Atlantic. Retrieved 7 June 2020.
  19. "Autumn's science books weigh up humanity's future options". New Scientist. 9 September 2015. Retrieved 8 March 2016.
  20. Review of The Technological Singularity by Murray Shanahan. Library Journal, 2015. "This evenhanded primer on a topic whose significance is becoming increasingly recognized ought, as per its inclusion in this series, to receive wide exposure."
  21. Perkowitz, Sidney (18 February 2016). Review of The Technological Singularity and Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots. Los Angeles Review of Books.
  22. King, Anthony (2018). "Machines won't be taking over yet, says leading robotics expert". The Irish Times. Retrieved 7 June 2020.
  23. Ward, Mark (2 December 2014). "Does rampant AI threaten humanity?". BBC News. Retrieved 7 June 2020.
  24. "Are Computers Already Smarter Than Humans?". Time. 2017. Retrieved 7 June 2020.
  25. "Robots Read Books to Learn Right and Wrong". Newsweek. 2016. Retrieved 7 June 2020.
