Machine ethics

Source: Wikipedia, the free encyclopedia.

Machine ethics (or machine morality, computational morality, or computational ethics) is a part of the ethics of artificial intelligence concerned with adding moral behaviors to, or ensuring moral behaviors of, man-made machines that use artificial intelligence, otherwise known as artificially intelligent agents.[1] Machine ethics differs from other ethical fields related to engineering and technology. It should not be confused with computer ethics, which focuses on human use of computers, and it should be distinguished from the philosophy of technology, which concerns the broader social effects of technology.[2]

Definitions

James H. Moor, one of the pioneering theoreticians in the field of computer ethics, defines four kinds of ethical robots. A prominent researcher in the philosophy of artificial intelligence, philosophy of mind, philosophy of science, and logic, Moor defines machines as ethical impact agents, implicit ethical agents, explicit ethical agents, or full ethical agents. A machine can be more than one type of agent.[3]

  • Ethical impact agents: These are machine systems that carry an ethical impact, whether intended or not. At the same time, they have the potential to act unethically. Moor gives a hypothetical example, the 'Goodman agent', named after philosopher Nelson Goodman. The Goodman agent compares dates but has the millennium bug: its programmers represented dates with only the last two digits of the year, so any date after 2000 is misleadingly treated as earlier than dates in the late twentieth century. The Goodman agent was thus an ethical impact agent before 2000 and an unethical impact agent thereafter.
  • Implicit ethical agents: In the interest of human safety, these agents are programmed with fail-safes, or built-in virtues. They are not ethical by nature, but are programmed to avoid unethical outcomes.
  • Explicit ethical agents: These are machines capable of processing scenarios and acting on ethical decisions; that is, machines with algorithms for acting ethically.
  • Full ethical agents: These machines are similar to explicit ethical agents in being able to make ethical decisions, but they also possess human metaphysical features such as free will, consciousness, and intentionality.

(See artificial systems and moral responsibility.)

History

Before the 21st century, the ethics of machines had largely been the subject of science fiction literature, mainly due to limitations in computing and artificial intelligence (AI). Although the definition of "machine ethics" has since evolved, the term was coined by Mitchell Waldrop in the 1987 AI Magazine article "A Question of Responsibility":

"However, one thing that is apparent from the above discussion is that intelligent machines will embody values, assumptions, and purposes, whether their programmers consciously intend them to or not. Thus, as computers and robots become more and more intelligent, it becomes imperative that we think carefully and explicitly about what those built-in values are. Perhaps what we need is, in fact, a theory and practice of machine ethics, in the spirit of Asimov's three laws of robotics."[4]

In 2004, Towards Machine Ethics[5] was presented at the AAAI Workshop on Agent Organizations: Theory and Practice,[6] in which theoretical foundations for machine ethics were laid out.

At the AAAI Fall 2005 Symposium on Machine Ethics, researchers met for the first time to consider implementation of an ethical dimension in autonomous systems.[7] A variety of perspectives on this nascent field can be found in the collected edition Machine Ethics,[8] which stems from that symposium.

In 2007, AI Magazine featured Machine Ethics: Creating an Ethical Intelligent Agent,[9] an article that discussed the importance of machine ethics, the need for machines that represent ethical principles explicitly, and the challenges facing those working on machine ethics. It also demonstrated that it is possible, at least in a limited domain, for a machine to abstract an ethical principle from examples of ethical judgments and use that principle to guide its own behavior.
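Abstracting a principle from example judgments can be pictured as a small supervised-learning loop: encode scenarios as features, fit an interpretable model to human-labeled judgments, and then use the learned rule to screen candidate actions. The Python sketch below is only a generic illustration of that idea, using hypothetical features and labels; it is not the system described in the AI Magazine article, which worked in a limited domain of its own.

    # Illustrative only: learn a simple "ethical principle" from labeled example
    # judgments and use it to choose among candidate actions. The features,
    # labels, and model choice are hypothetical; this is not the system from
    # the AI Magazine article.
    from sklearn.tree import DecisionTreeClassifier

    # Each example scenario is encoded as [harm_risk, benefit, consent], with a
    # human-supplied judgment: 1 = ethically acceptable, 0 = not acceptable.
    examples = [
        [0.9, 0.2, 0],   # high harm, little benefit, no consent
        [0.1, 0.8, 1],   # low harm, high benefit, consent given
        [0.4, 0.9, 1],
        [0.8, 0.9, 0],
        [0.2, 0.1, 1],
        [0.7, 0.3, 1],
    ]
    judgments = [0, 1, 1, 0, 1, 0]

    # "Abstract a principle": fit an interpretable classifier to the judgments.
    principle = DecisionTreeClassifier(max_depth=2).fit(examples, judgments)

    # "Guide its own behavior": among candidate actions, keep those the learned
    # principle classifies as acceptable.
    candidates = {"action_a": [0.6, 0.9, 1], "action_b": [0.1, 0.4, 1]}
    acceptable = [name for name, feats in candidates.items()
                  if principle.predict([feats])[0] == 1]
    print("Ethically acceptable candidates:", acceptable)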

In 2009, Oxford University Press published Moral Machines: Teaching Robots Right from Wrong,[10] which it advertised as "the first book to examine the challenge of building artificial moral agents, probing deeply into the nature of human decision making and ethics." It cited some 450 sources, about 100 of which addressed major questions of machine ethics.

In 2011, Cambridge University Press published a collection of essays about machine ethics edited by Michael and Susan Leigh Anderson,[8] who had also edited a special issue of IEEE Intelligent Systems on the topic in 2006.[11] The collection focuses on the challenges of adding ethical principles to machines.[12]

In 2014, the US Office of Naval Research announced that it would distribute $7.5 million in grants over five years to university researchers to study questions of machine ethics as applied to autonomous robots,[13] and Nick Bostrom's Superintelligence: Paths, Dangers, Strategies, which raised machine ethics as the "most important...issue humanity has ever faced," reached #17 on the New York Times list of best selling science books.[14]

In 2016 the European Parliament published a paper[15] encouraging the Commission to address the issue of robots' legal status, as described more briefly in the press.[16] The paper included sections on robots' legal liability, arguing that liability should be proportional to a robot's level of autonomy. It also raised the question of how many jobs could be replaced by AI robots.[17]

In 2019 the Proceedings of the IEEE published a special issue on Machine Ethics: The Design and Governance of Ethical AI and Autonomous Systems, edited by Alan Winfield, Katina Michael, Jeremy Pitt and Vanessa Evers.[18] "The issue includes papers describing implicit ethical agents, where machines are designed to avoid unethical outcomes, as well as explicit ethical agents, or machines that either encode or learn ethics and determine actions based on those ethics".[19]

Areas of focus

AI control problem

Some scholars, such as philosopher Nick Bostrom and AI researcher Stuart Russell, the author of Human Compatible, assert that while there is much uncertainty regarding the future of AI, the risk to humanity is great enough to merit significant action in the present.

This presents the AI control problem: how to build intelligent systems that aid their creators without inadvertently causing harm (for example, by ensuring that the systems' goals remain aligned with human values). A number of organizations are researching the AI control problem, including the Future of Humanity Institute, the Machine Intelligence Research Institute, the Center for Human-Compatible Artificial Intelligence, and the Future of Life Institute.

Algorithms and training

AI paradigms have been debated, especially in relation to their efficacy and bias. Nick Bostrom and Eliezer Yudkowsky have argued for decision trees over neural networks and genetic algorithms on the grounds that decision trees obey modern social norms of transparency and predictability.

In 2009, in an experiment at the Laboratory of Intelligent Systems at the École Polytechnique Fédérale de Lausanne in Switzerland, AI robots were programmed to cooperate with each other and tasked with searching for a beneficial resource while avoiding a poisonous one.[24] During the experiment, the robots were grouped into clans, and the successful members' digital genetic code was used for the next generation, a type of algorithm known as a genetic algorithm. After 50 successive generations, members of one clan discovered how to distinguish the beneficial resource from the poisonous one. The robots then learned to lie to each other in an attempt to hoard the beneficial resource from other robots.[24] In the same experiment, the robots also learned to behave selflessly, signaling danger to other robots and in some cases dying to save others.[22] The implications of this experiment have been challenged by machine ethicists: in the EPFL experiment the robots' goals were programmed to be "terminal", whereas human motives typically require never-ending learning.
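The selection process described above is a standard genetic-algorithm loop: score each candidate controller, keep the most successful ones, and build the next generation from mutated copies of them. The sketch below is a minimal, generic illustration of such a loop; the genome encoding, fitness function, and parameters are hypothetical stand-ins, not the actual EPFL robot controllers.

    # Minimal genetic-algorithm generation loop of the kind described above.
    # The genome encoding, fitness function, and parameters are hypothetical
    # stand-ins, not the EPFL experiment's actual robot controllers.
    import random

    GENOME_LEN = 16        # e.g. weights of a tiny robot controller
    POP_SIZE = 50
    MUTATION_RATE = 0.05

    def fitness(genome):
        # Placeholder score standing in for "found the beneficial resource and
        # avoided the poisonous one": reward genes close to a target value.
        return sum(1.0 - abs(g - 0.75) for g in genome)

    def mutate(genome):
        # Each gene has a small chance of being perturbed, clipped to [0, 1].
        return [min(1.0, max(0.0, g + random.gauss(0, 0.1)))
                if random.random() < MUTATION_RATE else g
                for g in genome]

    population = [[random.random() for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]

    for generation in range(50):                 # "50 successive generations"
        ranked = sorted(population, key=fitness, reverse=True)
        elite = ranked[: POP_SIZE // 5]          # most successful members
        # Next generation built from mutated copies of the successful members'
        # "digital genetic code".
        population = [mutate(random.choice(elite)) for _ in range(POP_SIZE)]

    best = max(population, key=fitness)
    print("Best fitness after 50 generations:", round(fitness(best), 3))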

Autonomous weapons systems

In 2009, academics and technical experts attended a conference to discuss the potential impact of robots and computers, and the hypothetical possibility that they could become self-sufficient and able to make their own decisions. They discussed the extent to which computers and robots might acquire autonomy, and the degree to which they could use such abilities to pose a threat or hazard. They noted that some machines have acquired various forms of semi-autonomy, including being able to find power sources on their own and to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence". They noted that self-awareness as depicted in science fiction is probably unlikely, but that there were other potential hazards and pitfalls.[25]

Some experts and academics have questioned the use of robots in military combat, especially robots given some degree of autonomous function.[26] The US Navy funded a report indicating that as military robots become more complex, greater attention should be paid to the implications of their ability to make autonomous decisions.[27][28] The President of the Association for the Advancement of Artificial Intelligence has commissioned a study of this issue.[29] They point to programs like the Language Acquisition Device, which can emulate human interaction.

Integration of artificial general intelligences with society

A hospital delivery robot in front of elevator doors stating "Robot Has Priority", a situation that may be regarded as reverse discrimination in relation to humans

Preliminary work has been conducted on methods of integrating artificial general intelligences (full ethical agents as defined above) with existing legal and social frameworks. Approaches have focused on consideration of their legal position and rights.[30]

Machine learning bias

Google's photo-tagging software was found to label some black people as gorillas, and Amazon's same-day delivery service was found to be unavailable in many predominantly black neighborhoods. Both Google and Amazon were unable to isolate these outcomes to a single issue, and instead explained that the outcomes were the result of the black-box algorithms they used.[31]

The United States judicial system has begun using quantitative risk assessment software when making decisions related to bail and sentencing.[32] It has been argued that such tools may violate defendants' Equal Protection rights on the basis of race, due to a number of factors including possible discriminatory intent from the algorithm itself under a theory of partial legal capacity for artificial intelligences.[33]

In 2016, the Obama administration's Big Data Working Group released reports examining the potential for discrimination in algorithmic decision-making.[34][35] The reports encourage discourse among policy makers, citizens, and academics alike, but recognize that they do not have a complete solution for the encoding of bias and discrimination into algorithmic systems.

Ethical frameworks and practices

Practices

In March 2018, in an effort to address rising concerns over machine learning's impact on human rights, the World Economic Forum and its Global Future Council on Human Rights published a white paper with detailed recommendations on how best to prevent discriminatory outcomes in machine learning.[36] The World Economic Forum developed four recommendations based on the UN Guiding Principles on Business and Human Rights to help address and prevent discriminatory outcomes in machine learning.

The World Economic Forum's recommendations are as follows:[36]

  1. Active inclusion: The development and design of machine learning applications must actively seek a diversity of input, especially of the norms and values of specific populations affected by the output of AI systems.
  2. Fairness: People involved in conceptualizing, developing, and implementing machine learning systems should consider which definition of fairness best applies to their context and application, and prioritize it in the architecture of the machine learning system and its evaluation metrics (a minimal sketch of one such metric follows this list).
  3. Right to understanding: Involvement of machine learning systems in decision-making that affects individual rights must be disclosed, and the systems must be able to provide an explanation of their decision-making that is understandable to end users and reviewable by a competent human authority. Where this is impossible and rights are at stake, leaders in the design, deployment, and regulation of machine learning technology must question whether or not it should be used.
  4. Access to redress: Leaders, designers, and developers of machine learning systems are responsible for identifying the potential negative human rights impacts of their systems. They must make visible avenues for redress for those affected by disparate impacts, and establish processes for the timely redress of any discriminatory outputs.
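As a concrete illustration of the second recommendation, the sketch below computes one commonly used fairness definition, the demographic parity difference between groups, over a set of model decisions. The data, group labels, and review threshold are hypothetical; the World Economic Forum paper does not prescribe this particular metric.

    # Minimal sketch of one possible fairness metric (demographic parity
    # difference) of the kind recommendation 2 asks teams to choose and track.
    # The decisions, group labels, and threshold below are hypothetical.
    from collections import defaultdict

    # Each record: (protected-group label, model decision: 1 = favorable).
    decisions = [
        ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
        ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
    ]

    totals, favorable = defaultdict(int), defaultdict(int)
    for group, decision in decisions:
        totals[group] += 1
        favorable[group] += decision

    rates = {g: favorable[g] / totals[g] for g in totals}
    parity_gap = max(rates.values()) - min(rates.values())

    print("Favorable-decision rates by group:", rates)
    print("Demographic parity difference:", round(parity_gap, 3))
    # A team that chose this definition might flag the model for review when
    # parity_gap exceeds an agreed threshold (e.g. 0.2).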

In January 2020, Harvard University's Berkman Klein Center for Internet and Society published a meta-study of 36 prominent sets of principles for AI, identifying eight key themes: privacy, accountability, safety and security, transparency and explainability, fairness and non-discrimination, human control of technology, professional responsibility, and promotion of human values.[37] A similar meta-study was conducted by researchers from the Swiss Federal Institute of Technology in Zurich in 2019.[38]

Approaches

There have been several attempts to make ethics computable, or at least formal, including attempts to have machines learn ethical behavior from examples. Such learning carries risks of its own: in 2016, Microsoft's Tay chatterbot learned to repeat racist and sexually charged messages sent by Twitter users.[46]

One thought experiment focuses on a Genie Golem with unlimited powers presenting itself to the reader. The Genie declares that it will return in 50 years and demands to be provided with a definite set of morals that it will then immediately act upon. The purpose of this experiment is to initiate discussion of how best to define a complete set of ethics that computers may understand.[47]

In fiction

In science fiction, movies and novels have played with the idea of sentience in robots and machines. A recurring device is the Turing Test, a test administered to a machine to see whether its behavior can be distinguished from that of a human. Works such as The Terminator (1984) and The Matrix (1999) incorporate the concept of machines turning on their human masters (see Artificial intelligence).

Isaac Asimov considered the issue in the 1950s in his I, Robot stories. At the insistence of his editor, John W. Campbell Jr., he proposed the Three Laws of Robotics to govern artificially intelligent systems. Much of his work was then spent testing the boundaries of his three laws to see where they would break down or create paradoxical or unanticipated behavior. His work suggests that no set of fixed laws can sufficiently anticipate all possible circumstances.[49] Philip K. Dick's novel Do Androids Dream of Electric Sheep? (1968) explores what it means to be human; in its post-apocalyptic scenario, Dick questions whether empathy is an entirely human characteristic. The novel is the basis for the 1982 science fiction film Blade Runner.

Notes

  1. S2CID 831873.
  2. ^ Boyles, Robert James. "A Case for Machine Ethics in Modeling Human-Level Intelligent Agents" (PDF). Kritike. Retrieved 1 November 2019.
  3. ^ Moor, James M. (2009). "Four Kinds of Ethical Robots". Philosophy Now.
  4. .
  5. ^ Anderson, M., Anderson, S., and Armen, C. (2004) "Towards Machine Ethics" in Proceedings of the AAAI Workshop on Agent Organization: Theory and Practice, AAAI Press [1]
  6. ^ AAAI Workshop on Agent Organization: Theory and Practice, AAAI Press
  7. ^ "Papers from the 2005 AAAI Fall Symposium". Archived from the original on 2014-11-29.
  8. ^ .
  9. ^ a b Anderson, M. and Anderson, S. (2007). Creating an Ethical Intelligent Agent. AI Magazine, Volume 28(4).
  10. .
  11. S2CID 9570832. Archived from the original on 2011-11-26.
  12. .
  13. ^ Tucker, Patrick (13 May 2014). "Now The Military Is Going To Build Robots That Have Morals". Defense One. Retrieved 9 July 2014.
  14. ^ "Best Selling Science Books". New York Times. September 8, 2014. Retrieved 9 November 2014.
  15. ^ "European Parliament, Committee on Legal Affairs. Draft Report with recommendations to the Commission on Civil Law Rules on Robotics". European Commission. Retrieved January 12, 2017.
  16. ^ Wakefield, Jane (2017-01-12). "MEPs vote on robots' legal status – and if a kill switch is required". BBC News. Retrieved 12 January 2017.
  17. ^ "European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics". European Parliament. Retrieved 8 November 2019.
  18. .
  19. ^ "Proceedings of the IEEE Addresses Machine Ethics". IEEE Standards Association.
  20. .
  21. (PDF) on 2016-03-04. Retrieved 2011-06-28.
  22. ^ a b Santos-Lang, Chris (2002). "Ethics for Artificial Intelligences". Archived from the original on 2011-12-03.
  23. .
  24. ^ a b Fox, Stuart (August 18, 2009). "Evolving Robots Learn To Lie To Each Other". Popular Science.
  25. ^ Markoff, John (July 25, 2009). "Scientists Worry Machines May Outsmart Man". New York Times.
  26. ^ Palmer, Jason (3 August 2009). "Call for debate on killer robots". BBC News.
  27. ^ Science New Navy-funded Report Warns of War Robots Going "Terminator" Archived 2009-07-28 at the Wayback Machine, by Jason Mick (Blog), dailytech.com, February 17, 2009.
  28. ^ Flatley, Joseph L. (February 18, 2009). "Navy report warns of robot uprising, suggests a strong moral compass". Engadget.
  29. ^ AAAI Presidential Panel on Long-Term AI Futures 2008–2009 Study, Association for the Advancement of Artificial Intelligence, Accessed 7/26/09.
  30. ISSN 0031-8949.
  31. ^ a b Crawford, Kate (25 June 2016). "Artificial Intelligence's White Guy Problem". The New York Times.
  32. ^ a b c Julia Angwin; Surya Mattu; Jeff Larson; Lauren Kircher (23 May 2016). "Machine Bias: There's Software Used Across the Country to Predict Future Criminals. And it's Biased Against Blacks". ProPublica.
  33. .
  34. ^ Executive Office of the President (May 2016). "Big Data: A Report on Algorithmic Systems, Opportunity, and Civil Rights" (PDF). Obama White House.
  35. ^ "Big Risks, Big Opportunities: the Intersection of Big Data and Civil Rights". Obama White House. 4 May 2016.
  36. ^ a b "How to Prevent Discriminatory Outcomes in Machine Learning". World Economic Forum. 12 March 2018. Retrieved 2018-12-11.
  37. S2CID 214464355.
  38. .
  39. ^ Powers, Thomas M. (2011): Prospects for a Kantian Machine. In: Machine Ethics, ed. Michael Anderson, Susan Leigh Anderson. New York: Oxford University Press. pp.464–475.
  40. ^ Muehlhauser, Luke, Helm, Louie (2012): Intelligence Explosion and Machine Ethics.
  41. ^ Yudkowsky, Eliezer (2004): Coherent Extrapolated Volition.
  42. ^ Guarini, Marcello (2011): Computational Neural Modeling and the Philosophy of Ethics. Reflections on the Particularism-Generalism Debate. In: Machine Ethics, ed. Michael Anderson, Susan Leigh Anderson. New York: Oxford University Press. pp.316–334.
  43. ].
  44. .
  45. ^ Wakefield, Jane (24 March 2016). "Microsoft chatbot is taught to swear on Twitter". BBC News. Retrieved 2016-04-17.
  46. ^ Nazaretyan, A. (2014). A. H. Eden, J. H. Moor, J. H. Søraker and E. Steinhart (eds): Singularity Hypotheses: A Scientific and Philosophical Assessment. Minds & Machines, 24(2), pp.245–248.
  47. ^ Brundage, Miles; Winterton, Jamie (17 March 2015). "Chappie and the Future of Moral Machines". Slate. Retrieved 30 October 2019.
  48. .
  49. ^ Ganascia, Jean-Gabriel. "Ethical system formalization using non-monotonic logics." Proceedings of the Annual Meeting of the Cognitive Science Society. Vol. 29. No. 29. 2007.

References

  • Wallach, Wendell; Allen, Colin (November 2008). Moral Machines: Teaching Robots Right from Wrong. US: Oxford University Press.
  • Anderson, Michael; Anderson, Susan Leigh, eds. (July 2011). Machine Ethics. Cambridge University Press.
  • Storrs Hall, J. (May 30, 2007). Beyond AI: Creating the Conscience of the Machine. Prometheus Books.
  • Moor, J. (2006). The Nature, Importance, and Difficulty of Machine Ethics. IEEE Intelligent Systems, 21(4), pp. 18–21.
  • Anderson, M. and Anderson, S. (2007). Creating an Ethical Intelligent Agent. AI Magazine, Volume 28(4).
