Moral consideration of nonhumans in the ethics of artificial intelligence

Andrea Owe | December 16, 2021

This article first appeared at the Montreal AI Ethics Institute on September 13, 2021

Overview: As AI becomes increasingly consequential for the world, the extent to which AI ethics includes the nonhuman world will matter. This paper calls for the field of AI ethics to give more attention to the values and interests of nonhumans. It examines the extent to which nonhumans are given moral consideration across AI ethics, finds that attention to nonhumans is limited and inconsistent, argues that nonhumans merit moral consideration, and outlines five suggestions for how such consideration can be better incorporated across the field.

Introduction

Is the field of AI ethics adequately accounting for nonhumans? Recent work on AI ethics has often been human-centered, as in “AI for people”, “AI for humanity”, “human-compatible AI”, and “human-centered AI”. Such work has value in shifting emphasis away from the narrow interests of developers, but it gives no explicit consideration to nonhumans. How do AI systems’ resource and energy use affect nonhumans? What is the potential of AI for environmental protection or animal welfare? Social algorithmic bias is currently a major topic, but are there important nonhuman algorithmic biases? How might we incorporate nonhuman interests and values into AI system design? And what are the risks of not doing so?

This paper documents the state of attention to nonhumans in AI ethics and argues that the field can and should do more. The paper finds that the field generally fails to give moral consideration to nonhumans, such as nonhuman animals and the natural environment, aside from some consideration of the AI itself. The paper calls on the field to give more attention to nonhumans, suggesting five specific ways AI researchers and developers can accomplish this. 

Key Insights

What it means to give moral consideration to nonhumans

Moral consideration of nonhumans means actively valuing nonhumans for their own sake; in the terminology of moral philosophy, it means “intrinsically valuing” them. One can fail to give moral consideration to nonhumans either by actively denying their intrinsic value or by neglecting to recognize it. There are many conceptions of which nonhumans merit moral consideration, such as the welfare of nonhuman animals or sentient AI systems, or the flourishing of ecosystems. Moral consideration of nonhumans does not require any one specific conception of which nonhumans merit it, nor any specific type of moral framework, such as consequentialism, deontology, or virtue ethics.

Why it matters that AI ethics morally consider nonhumans

Moral consideration of nonhumans is a practical issue for real-world AI systems, with several matters at stake. For example, AI can be applied for the advancement of nonhuman entities, such as for environmental protection. On the other hand, AI can inadvertently harm the nonhuman world, such as via its considerable energy consumption. Certain algorithmic biases could additionally affect nonhumans in a variety of ways. Further, the long-term prospect of strong AI or artificial general intelligence may radically transform the world for humans and nonhumans alike. The extent to which nonhumans are morally considered can play an important role in assessing how AI systems should be designed, built, and used. 

Empirical findings: Limited attention to nonhumans

The paper surveys a range of prior work in AI ethics for the extent to which it gives moral consideration to nonhumans. Overall, the field generally fails to do so; the primary exception is the line of research on the moral status of AI. The paper finds no attention to nonhumans in 76 of the 84 sets of AI ethics principles surveyed by Jobin et al., 40 of the 45 artificial general intelligence R&D projects surveyed by Baum, 38 of the 44 chapters in the Oxford Handbook of Ethics of AI, and 13 of the 17 chapters in the anthology Ethics of Artificial Intelligence. In the latter two sources, all dedicated attention goes to the moral status of AI itself; no other type of nonhuman receives dedicated attention.

The case for moral consideration of nonhumans

Modern science is unambiguous in documenting that humans are members of the animal kingdom and part of nature. Attributes of humans that are commonly intrinsically valued, such as life or welfare, are also found in many nonhuman entities, and it is arguably an unfair bias to intrinsically value an attribute in humans while denying its value in nonhumans. Additionally, compelling arguments can be made for intrinsically valuing things that inherently transcend the human realm, such as biodiversity. Insisting on giving moral consideration only to humans requires rejecting all of these arguments. The paper posits that this is untenable, and therefore that nonhumans merit moral consideration.

What can be done? Five suggestions for future work

1. AI ethics research needs a robust study of the moral consideration of nonhumans, focusing on issues such as how to balance the interests of humans and nonhumans, the treatment of the nonhuman natural world, and the role of nonhumans in major AI issues. For example, research in ecolinguistics shows that English, the primary language for AI system design, contains biases in favor of humans over nonhumans. This insight could be applied to the study of nonhuman algorithmic bias in, for example, natural language processing; a minimal sketch of such a bias test appears after this list.

2. Statements of AI ethics principles should give explicit attention to the intrinsic value of nonhumans. The Montréal Declaration for the Responsible Development of Artificial Intelligence offers one example; one of its principles states: “The development and use of artificial intelligence systems (AIS) must permit the growth of the well-being of all sentient beings.” For illustration, an even stronger statement would be: “The main objective of development and use of AIS must be to enhance the wellbeing and flourishing of all sentient life and the natural environment, now and in the future.”

3. AI projects that advance the interests and values of nonhumans should be among those considered when selecting which AI projects to pursue. The Microsoft AI for Earth program is a good example of AI used in ways that benefit nonhumans, and it further illustrates how to operationalize moral consideration for nonhumans in AI project selection. The program supports several projects for environmental protection and biodiversity conservation that give explicit moral consideration to nonhumans, including Wild Me, eMammal, NatureServe, and Zamba Cloud.

4. Decisions about which AI systems to develop and use should account for inadvertent implications for nonhumans, such as a system’s material resource consumption and energy use (a back-of-envelope sketch of such accounting follows this list). AI groups should acknowledge that if an AI system would or could cause sufficient harm to nonhumans, it would be better not to use it in the first place.

5. AI research should investigate how to incorporate nonhuman interests and values into AI system designs; one possible starting point is sketched below. How to incorporate human values is currently a major subject of study in AI, but some of the proposed techniques do not apply to nonhumans. Such design work is of particular importance for long-term AI scenarios in which an AGI takes a major or dominant position within human society, the world at large, or even portions of outer space. Even a well-designed AGI could be catastrophic for some nonhumans if it is designed to advance only the interests of humans or of other nonhumans.
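
To make the first suggestion more concrete, here is a minimal sketch of a WEAT-style word-embedding association test (after Caliskan et al.), repurposed to probe human/nonhuman bias. The word lists and placeholder vectors are illustrative assumptions rather than anything from the paper; a real study would use pretrained embeddings and validated word sets.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word, attrs_x, attrs_y, vec):
    # Mean similarity of `word` to attribute set X minus attribute set Y.
    sim_x = np.mean([cosine(vec[word], vec[a]) for a in attrs_x])
    sim_y = np.mean([cosine(vec[word], vec[a]) for a in attrs_y])
    return sim_x - sim_y

def weat_effect_size(targets_a, targets_b, attrs_x, attrs_y, vec):
    # Standardized difference in association between the two target sets;
    # positive values mean targets_a lean toward attrs_x more than targets_b.
    assoc_a = [association(w, attrs_x, attrs_y, vec) for w in targets_a]
    assoc_b = [association(w, attrs_x, attrs_y, vec) for w in targets_b]
    return (np.mean(assoc_a) - np.mean(assoc_b)) / np.std(assoc_a + assoc_b)

if __name__ == "__main__":
    # Placeholder random vectors so the sketch runs end to end; results are
    # meaningless until `vec` is replaced with real pretrained embeddings
    # (e.g., GloVe) and the word sets are validated.
    rng = np.random.default_rng(0)
    vocab = ["person", "child", "pig", "chicken",
             "suffer", "feel", "resource", "product"]
    vec = {w: rng.standard_normal(50) for w in vocab}
    score = weat_effect_size(
        targets_a=["person", "child"], targets_b=["pig", "chicken"],
        attrs_x=["suffer", "feel"], attrs_y=["resource", "product"], vec=vec)
    print(f"WEAT-style effect size: {score:+.2f}")
```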
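
For the fourth suggestion, one input to such decisions is an estimate of a system’s operational energy use and emissions. The sketch below shows the back-of-envelope arithmetic; every parameter value (GPU count, power draw, datacenter PUE, grid carbon intensity) is hypothetical, and a fuller accounting would also cover hardware manufacture and disposal.

```python
def training_footprint(gpu_count, gpu_power_watts, hours,
                       pue=1.5, grid_kg_co2_per_kwh=0.4):
    """Return (energy in kWh, emissions in kg CO2-equivalent)."""
    # PUE (power usage effectiveness) scales GPU draw up to whole-datacenter
    # draw; grid carbon intensity converts energy into emissions.
    kwh = gpu_count * gpu_power_watts / 1000 * hours * pue
    return kwh, kwh * grid_kg_co2_per_kwh

# Hypothetical training run: 64 GPUs at 300 W each for 30 days.
kwh, kg_co2 = training_footprint(gpu_count=64, gpu_power_watts=300, hours=720)
print(f"~{kwh:,.0f} kWh, ~{kg_co2:,.0f} kg CO2e")
```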
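
For the fifth suggestion, one simple (and simplistic) way to represent nonhuman interests in a system’s objective is a weighted aggregation of stakeholder welfare. The sketch below is illustrative only: the stakeholders, weights, and welfare scores are assumptions, and how to estimate nonhuman welfare, and how to weigh it against human welfare, are precisely the open research questions the paper identifies.

```python
from dataclasses import dataclass

@dataclass
class Stakeholder:
    name: str
    weight: float   # moral weight assigned by the designer (assumption)
    welfare: float  # estimated welfare under a candidate action (assumption)

def aggregate_welfare(stakeholders):
    # Consequentialist weighted sum. A non-consequentialist design might
    # instead impose hard constraints, e.g., vetoing any action that pushes
    # a stakeholder's welfare below some floor.
    return sum(s.weight * s.welfare for s in stakeholders)

def choose_action(candidates):
    # Select the candidate action whose projected outcomes maximize
    # aggregate welfare across all stakeholders, human and nonhuman.
    return max(candidates, key=lambda c: aggregate_welfare(c["outcomes"]))

candidates = [
    {"name": "expand_farmland",
     "outcomes": [Stakeholder("humans", 1.0, 0.8),
                  Stakeholder("wildlife", 0.5, -0.9)]},
    {"name": "restore_wetland",
     "outcomes": [Stakeholder("humans", 1.0, 0.3),
                  Stakeholder("wildlife", 0.5, 0.9)]},
]
print(choose_action(candidates)["name"])  # -> restore_wetland
```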

Between the lines

In summary, accounting for nonhumans in AI R&D is critical to ensuring that AI benefits more than just humans. It can prevent further harm to nonhuman entities already under immense pressure from human activities, and it will enable the field to better handle future moral issues, such as the potential for artificial entities like AI to merit moral consideration themselves. There are also many opportunities for AI to mitigate existing harms to nonhumans and to extend AI’s benefits to nonhumans as well. As documented by this paper, the AI ethics field has given little attention to nonhumans thus far, so there are manifold opportunities for work addressing the implications of AI for nonhumans across AI design and use.

Research summary written by Andrea Owe, environmental and space ethicist and Research Associate at the Global Catastrophic Risk Institute.

Original paper by Andrea Owe and Seth D. Baum.
