Moral agency
Moral agency is an individual's ability to make moral judgments based on some notion of right and wrong and to be held accountable for their actions.[1] A moral agent is "a being who is capable of acting with reference to right and wrong."[2]
Development and analysis
Most philosophers suggest that only rational beings, who can reason and form self-interested judgments, are capable of being moral agents. Some suggest that those with limited rationality (for example, people who are mildly mentally disabled or infants[1]) also have some basic moral capabilities.[3]
Determinists argue that all of our actions are the product of antecedent causes, and some believe this is incompatible with free will, claiming that we therefore have no real control over our actions. Immanuel Kant argued that whether or not our real self, the noumenal self, can choose, we have no choice but to believe that we choose freely when we make a choice. This does not mean that we can control the effects of our actions. Some indeterminists would argue that we have no free will either: if, with respect to human behaviour, a so-called 'cause' results in an indeterminate number of possible 'effects', that does not mean the person had the free, independent will to choose one of those 'effects'. More likely, the outcome was the indeterminate consequence of chance genetics, chance experiences, and the chance circumstances relevant at the time of the 'cause'.
In Kant's philosophy, this calls for an act of faith: the free agent is grounded in something a priori, as yet unknown, or immaterial. Otherwise, without this a priori foundation for the free agent, socially essential concepts created by the human mind, such as justice, would be undermined (responsibility implies freedom of choice) and, in short, civilization and human values would crumble.
It is useful to compare the idea of moral agency with the legal doctrine of mens rea ('guilty mind'), which holds that a person is legally responsible for what he does so long as he knows what he is doing and his choices are deliberate. Some theorists discard any attempt to evaluate mental states and instead adopt the doctrine of strict liability, whereby one is liable under the law without regard to capacity, and the only question is to determine the degree of punishment, if any. Moral determinists would most likely adopt a similar point of view.
Psychologist Albert Bandura has observed that moral agents engage in selective moral disengagement with regard to their own inhumane conduct.[4]
Distinction between moral agency and moral patienthood
Philosophers distinguish between moral agents, entities whose actions are eligible for moral consideration, and moral patients, entities that are themselves eligible for moral consideration. Many philosophers, such as Kant, view morality as a transaction among rational parties, i.e., among moral agents; for this reason, they would exclude other animals from moral consideration.[citation needed] Others, such as the utilitarian philosophers Jeremy Bentham and Peter Singer, have argued that the key to inclusion in the moral community is not rationality (for if it were, we might have to exclude some disabled people and infants, and might also have to distinguish between the degrees of rationality of healthy adults) but the capacity to suffer, the real object of moral action being the avoidance of suffering. This is the argument from marginal cases.
Artificial moral agents
Discussions of artificial moral agency center on a few main ideas. The first is whether it is possible for an artificial system to be a moral agent (see artificial systems and moral responsibility). The second concerns efforts to construct machines with ethically significant behaviors (see machine ethics). Finally, there is debate about whether robots should be constructed as moral agents. The proper distinction between these ideas has itself been a key point of debate.
Research has shown that humans perceive robots as having varying degrees of moral agency, and these perceptions can manifest in two forms: (1) ideas about a robot's moral capacity (the ability to be or do good or bad) and (2) ideas about its (in)dependence on programming (where high dependency equates to low agency).[5] Robots are held more responsible for their (im)moral behaviors than humans are.[6]
See also
- Agency (LDS Church)
- Ethics
- Fiduciary
- Free will
- Medical ethics
- Morality
- Moral responsibility
- Argument from marginal cases
- Tree of the knowledge of good and evil
Notes
- ^ Taylor, Angus (2003). Animals & Ethics: An Overview of the Philosophical Debate. Peterborough, Ontario: Broadview Press. p. 20.
- ^ "Moral," Websters Revised Unabridged Dictionary, 1913, p. 943.
- ^ Hargrove, Eugene C., ed. (1992). The Animal Rights, Environmental Ethics Debate: The Environmental Perspective. Albany: State Univ. of New York Press. pp. 3–4. ISBN 978-0-7914-0933-6.
- ^ Bandura, Albert (June 2002). "Selective Moral Disengagement in the Exercise of Moral Agency". Journal of Moral Education. 31 (2): 101–119. CiteSeerX 10.1.1.473.2026. doi:10.1080/0305724022014322. S2CID 146449693.
- ^ Banks, Jaime (2019-01-01). "A perceived moral agency scale: Development and validation of a metric for humans and social machines". Computers in Human Behavior. 90: 363–371. doi:10.1016/j.chb.2018.08.028. ISSN 0747-5632.
- ^ Banks, Jaime (2020-09-10). "Good Robots, Bad Robots: Morally Valenced Behavior Effects on Perceived Mind, Morality, and Trust". International Journal of Social Robotics. doi:10.1007/s12369-020-00692-3. ISSN 1875-4805.
References
- Singer, Peter (1975). Animal Liberation.[full citation needed]
External links
Wikiquote has quotations related to: Moral agency
- Intention
- Morality
- Personhood