ARTIFICIAL INTELLIGENCE AND THE COMPLEXITY OF ETHICS

Authors

  • Peter G. Kirchschlaeger, University of Lucerne, Switzerland

Keywords:

Artificial Intelligence, Complexity of Ethics, Conscience, Databased Systems, Epikeia, Ethical Principles and Norms, Freedom, Moral Technologies

Abstract

In reflections on artificial intelligence, one of its characteristics is often highlighted: its complexity. Sometimes the complexity of artificial intelligence is even used as an argument against holding humans responsible for it. At the same time, surprisingly, ethics is usually approached with a reductionist understanding that overlooks its own complexity. In this article, the concept “artificial intelligence” itself is critically reviewed, resulting in the introduction of a more adequate term: “databased systems.” Beyond that, I argue against the possibility of “ethical” databased systems and in favour of databased systems with ethics. Finally, the focus turns to the complexity of ethics and its consequences for the ethical dimension of technology-based innovation.

Author Biography

Peter G. Kirchschlaeger, University of Lucerne, Switzerland

Peter G. Kirchschlaeger is Full Professor of Theological Ethics and Director of the Institute of Social Ethics (ISE) at the Faculty of Theology of the University of Lucerne. He is a Research Fellow at the University of the Free State, Bloemfontein. He studied Theology and Judaism in Lucerne, Rome (Gregoriana), and Jerusalem (2001: MA, University of Lucerne), and Philosophy, Religious Studies, and Political Science at the University of Zurich (2003: MA, University of Zurich). 2005-2006: research stay at the University of Chicago Divinity School with a scholarship of the Swiss National Science Foundation. 2008: PhD, University of Zurich. 2012: Habilitation, University of Fribourg. 2011-2015: member of the Board of the Swiss Centre of Expertise in Human Rights; 2013: Visiting Scholar at the University of Technology Sydney; 2013-2014: Guest Professor at the Faculty of Theology and Religious Studies at the Katholieke Universiteit Leuven; 2013-2017: Fellow at the Raoul Wallenberg Institute of Human Rights and Humanitarian Law; 2015-2019: Guest Lecturer at the Leuphana University Lueneburg; and 2015-2017: Visiting Fellow at Yale University. One of his research focuses is the Ethics of Artificial Intelligence. Email: Peter.Kirchschlaeger@unilu.ch

Published

2020-11-09

How to Cite

Kirchschlaeger, P. G. (2020). ARTIFICIAL INTELLIGENCE AND THE COMPLEXITY OF ETHICS. Asian Horizons, 14(3), 587–600. Retrieved from https://dvkjournals.in/index.php/ah/article/view/3203