James H. Moor

James H. Moor is the Daniel P. Stone Professor of Intellectual and Moral Philosophy at Dartmouth College. He earned his Ph.D. in 1972 from Indiana University.[1] Moor's 1985 paper "What is Computer Ethics?" established him as one of the pioneering theoreticians in the field of computer ethics.[2] He has also written extensively on the Turing Test. His research includes the philosophy of artificial intelligence, philosophy of mind, philosophy of science, and logic.

Moor was the editor-in-chief of Minds and Machines (2001-2010), a peer-reviewed academic journal covering artificial intelligence, philosophy, and cognitive science.[3]

Work

Moor identifies four kinds of robots in relation to ethics; a single machine can be more than one type of agent.[4]

  • Ethical impact agents: machine systems that have an ethical impact, whether intended or not. Moor gives the example of a watch causing a worker to be at work on time. Alongside ethical impact agents there are unethical impact agents, and some agents can be unethical impact agents at certain times and ethical impact agents at others. He gives the example of what he calls a "Goodman agent", named after philosopher Nelson Goodman, which compares dates: "this was generated by programming yearly dates using only the last two digits of the year, which resulted in dates beyond 2000 being misleadingly treated as earlier than those in the late twentieth century. Thus the Goodman agent was an ethical impact agent before 2000, and an unethical impact agent thereafter." (A sketch of the underlying bug follows this list.)
  • Implicit ethical agents: machines constrained to avoid unethical outcomes.
  • Explicit ethical agents: machines that have algorithms to act ethically.
  • Full ethical agents: machines that are ethical in the same way humans are (i.e., they have free will, consciousness, and intentionality).
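The Goodman agent's failure is, in effect, a two-digit-year comparison bug. The following Python fragment is a minimal sketch of that bug, not code from Moor's paper; the function name goodman_earlier and the sample dates are illustrative assumptions.

    # Minimal sketch of the Goodman agent's date bug (illustrative, not Moor's code).
    # Dates are (year, month, day) tuples; only the last two digits of the year are kept.
    def goodman_earlier(date_a, date_b):
        """Return True if date_a is treated as earlier than date_b."""
        (ya, ma, da), (yb, mb, db) = date_a, date_b
        return (ya % 100, ma, da) < (yb % 100, mb, db)

    # Before 2000 the truncation does no harm (ethical impact agent):
    print(goodman_earlier((1998, 5, 1), (1999, 5, 1)))  # True - correct ordering

    # After 2000, the year 2001 truncates to 01 and sorts before 1999's 99
    # (unethical impact agent):
    print(goodman_earlier((1999, 5, 1), (2001, 5, 1)))  # False - but 1999 really is earlier

A comparator that kept the full four-digit year would order both pairs correctly.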

He has criticised Asimov's Three Laws of Robotics, arguing that if applied thoroughly they would produce unexpected results. He gives the example of a robot roaming the world trying to prevent harm from befalling any human.

Selected publications

Source:[6]

  • The Digital Phoenix: How Computers Are Changing Philosophy, Revised Edition, (with Terrell Ward Bynum), Oxford: Basil Blackwell Publishers, 2000.
  • Cyberphilosophy: The Intersection of Philosophy and Computing, (with Terrell Ward Bynum) Oxford: Basil Blackwell Publishers, 2002.
  • The Turing Test: The Elusive Standard of Artificial Intelligence, Dordrecht: Kluwer Academic Publishers, 2003.
  • Nanoethics: The Ethical and Social Implications of Nanotechnology (with Fritz Allhoff, Patrick Lin, and John Weckert), John Wiley & Sons, Inc., 2007.
  • The Logic Book, 5th Edition (with Merrie Bergmann and Jack Nelson), New York: McGraw-Hill Publishing Company, 2009.
  • Some Implications of a Sample of Practical Turing Tests (with Kevin Warwick and Huma Shah), Minds and Machines, Springer, 2013.[7]

References

  1. ^ http://www.dartmouth.edu/~jmoor/
  2. ^ "Archived copy". Archived from the original on 2016-08-08. Retrieved 2010-11-18.CS1 maint: archived copy as title (link)
  3. ^ https://www.springer.com/computer/ai/journal/11023?detailsPage=editorialBoard
  4. ^ Four Kinds of Ethical Robots
  5. ^ "Archived copy". Archived from the original on 2016-08-08. Retrieved 2010-11-18.CS1 maint: archived copy as title (link)
  6. ^ http://www.dartmouth.edu/~jmoor/
  7. ^ Warwick, Kevin; Shah, Huma; Moor, James (2013). "Some Implications of a Sample of Practical Turing Tests". Minds and Machines. 23 (2): 163–177. doi:10.1007/s11023-013-9301-y. S2CID 13933358.

