Rudimental Decisional Autonomy & Mechanical Autonomy


Classes of Artificial Agents & Ethical Design Concerns

The Umwelt of an artificial agent, as we have just seen, provides the designer with a wealth of solutions to the design problems of an artificial agent. If the designer is able to typify the agent’s Umwelt in a robust way, this can afford him insight into a) how the agent ought to act, or what it ought to achieve (its ‘rationality’, a function of its performance measure), b) the way the agent ought to move from perception to action (its agent program, for instance a utility-based program), and c) the hardware it needs to sense and act in its environment (its sensors and actuators). When this process is successful, it yields a highly specified artificial agent, capable only of efficient action in the Umwelt for which it was designed.
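To make the PEAS framing above concrete, the following is a minimal sketch, in Python, of what such a specification might look like as a simple data structure. The warehouse-robot example and all of its values are illustrative assumptions introduced here, not drawn from the text.

```python
# A minimal sketch of a PEAS-style agent specification, following the
# performance / environment / actuators / sensors breakdown discussed above.
# The warehouse-robot values are purely illustrative assumptions.
from dataclasses import dataclass
from typing import List


@dataclass
class PEASSpecification:
    performance_measure: List[str]  # what the agent ought to achieve (its 'rationality')
    environment: List[str]          # the Umwelt for which it is designed
    actuators: List[str]            # how it acts in that environment
    sensors: List[str]              # how it perceives that environment


warehouse_robot = PEASSpecification(
    performance_measure=["packages delivered per hour", "collisions avoided"],
    environment=["indoor warehouse floor", "human co-workers", "shelving aisles"],
    actuators=["drive wheels", "lifting arm"],
    sensors=["lidar", "bump sensors", "shelf barcode reader"],
)

print(warehouse_robot)
```

A specification of this kind is only a design aid: it records the designer’s typification of the Umwelt, from which the agent program and hardware choices then follow.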
Nevertheless, across the myriad artificial agents to which the modern world is privy, certain general classifications can be made, based on either the type of Umwelt in which an agent acts, or the type of programming style by which it operates. These distinctions are useful for our analysis insofar as they are themselves subject to moral scrutiny: some programming techniques are more sensitive to ethical design concerns than others, while different types of artificial agents can cause different types of moral damage. For clarity’s sake, it is important to specify that there is a salient difference between what we have been calling artificial morality and these ethical design concerns. The former describes how a particular artificial agent responds to the moral value of its environment, while the latter describes how the design choices of a human agent (generally a designer or engineer) may fail to adequately respect human rights and dignity, or may undermine societal welfare. In this way, a given artificial moral agent—which, we will recall, is an artificial agent equipped with artificial morality—can still fail to meet the demands of various ethical design concerns, although this is hardly desirable. This distinction is often lost in more carefree debates surrounding the ‘ethics of AI’.

ART Principles and the Embodied-Virtual Distinction

In the modern world, a designer has two options for the type of environment in which he aims to implement an artificial agent: the real world, or the virtual world. Real-world artificial agents exist in physical space, and are correspondingly often called embodied agents or, more colloquially, robots. Artificial agents that exist in virtual space go by different names, with algorithms, software bots, software agents, and virtual agents being some of the most common options. Importantly, while the choice of the agent’s environment has an inevitable and substantive impact on every aspect of its PEAS classification (what it ought to achieve, how it moves from perception to action, its actuators, and its sensors), neither environment is ‘easier’ for the designer to tackle121. Still, from a design perspective, there are some further challenges associated with embodied forms of artificial agents. First, embodied agents generally require mobility122, or the ability to navigate a physical environment. Efficient mobility in embodied artificial agents has traditionally posed a lofty challenge for engineers, from walking and ascending stairs to the manipulation of everyday objects, such as doorknobs, cups or stools. In other words, the actuators of the embodied artificial agent must enable the full range of mobility necessary to ensure that its performance measure is attainable in its environment, and if this is not possible, it may affect the types of tasks it can accomplish, or the overall quality of its agency123.
Furthermore, and of considerable ethical salience, embodied forms of artificial agents introduce the capacity for physical harm to human agents, and there has been a long history of such unfortunate events occurring when humans fail to interpret and adapt to the behavior of an AA124. Beyond these lethal mishaps, there are also what are called safety-critical systems125: technology whose very purpose implies some potential for human harm, such as autonomous weapons or autonomous vehicles126. Virtual agents, as we have seen in the case of the COMPAS recidivism prediction agent, are not above harm and wrongdoing: through their decisions, actions and predictions, human agents may lose opportunities, experience discrimination, or more generally have their interests and welfare thwarted. It is difficult, however, to maintain that they cause direct lethal harm127. The risk of physical harm, then, is a major burden on the design of embodied artificial agents, a concern which in itself may preclude their implementation in certain spheres of human society128.

ART Principles and the Deterministic-Stochastic Distinction

While the physical distinction between embodied and virtual artificial agents is clearly pertinent to the types of behavior these agents can perform, and to a certain degree, the types of environment that are open to responsible design and implementation, a far more prevalent distinction can be made concerning the types of system design, or programming, on which an artificial agent is based. There are two general categories into which a given artificial agent could fall: deterministic—sometimes called top-down programming or ‘expert systems’—and stochastic—also called probabilistic, bottom-up, machine learning, or ‘learning from data’ approaches. Most of the buzz surrounding the ‘AI boom’ of recent years concerns the latter, machine-learning type of technology, since it is this type that has seen a resurgence in popularity with the advent of the internet, and the massive amounts of accessible data which it provides141. Nevertheless, deterministic expert systems remain quite popular in certain areas of engineering and robotics, all the more so in areas where it is desirable that an artificial agent abide by strict rules and constraints142. We can see why if we explore the definitions of these two approaches in more detail.
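To make the contrast concrete, here is a minimal, purely illustrative Python sketch: a deterministic rule fixed by the designer in the manner of an expert system, next to a stochastic rule whose single threshold is induced from labelled examples. The loan-approval framing, the function names and the numbers are all assumptions introduced for illustration, not part of the discussion above.

```python
# A purely illustrative contrast between the two programming styles:
# a deterministic rule written by the designer, versus a rule whose
# threshold is learned from data and so changes with the training set.
from statistics import mean
from typing import List, Tuple


# Deterministic / top-down: the designer encodes the rule explicitly.
def approve_loan_expert(income: float, debt: float) -> bool:
    return income > 30_000 and debt / income < 0.4


# Stochastic / bottom-up: the rule (here, one threshold) is fitted to
# labelled examples; a crude stand-in for genuine machine learning.
def learn_income_threshold(examples: List[Tuple[float, bool]]) -> float:
    approved = [inc for inc, ok in examples if ok]
    rejected = [inc for inc, ok in examples if not ok]
    return (mean(approved) + mean(rejected)) / 2  # midpoint of class means


data = [(55_000, True), (48_000, True), (21_000, False), (18_000, False)]
threshold = learn_income_threshold(data)


def approve_loan_learned(income: float) -> bool:
    return income > threshold


print(approve_loan_expert(40_000, 10_000))  # rule fixed in advance
print(approve_loan_learned(40_000))         # rule induced from data
```

The design difference matters for the ethical analysis that follows: the expert rule can be inspected and constrained line by line, whereas the learned rule inherits whatever regularities, and biases, the data happens to contain.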


The Engineer’s Concept of Machine Autonomy

As a point of departure, we will consider Russell & Norvig’s definition of autonomy, as described in the widely heralded textbook Artificial Intelligence: A Modern Approach:
To the extent that an agent relies on the prior knowledge of its designer rather than its own percepts, we can say that the agent lacks autonomy… A rational agent should be autonomous—it should learn what it can to compensate for partial or incorrect prior knowledge… After sufficient experience of its environment, the behavior of a rational agent can become effectively independent of its prior knowledge16.
Compared to other definitions of classic computational terms, Russell & Norvig’s treatment of autonomy is decidedly broad17. In effect, we can establish only three meaningful delimitations: first, that an autonomous agent is one whose knowledge of its environment extends past that which was provided by its human programmer; second, that it is perhaps better that rational agents be autonomous rather than not; and third, that given sufficient environmental experience, the autonomous agent will act in a self-sufficient18 way, mobilizing knowledge which may differ from that which was originally imparted by the human programmer. Notice, too, the idea of compensation for incorrect knowledge imparted by the programmer. This seems to suggest a connection between the evolution of an agent’s environment and the need for new (self-gained) knowledge on the part of the agent. Taken together, we might reformulate this concept of autonomy in the following way: an artificial agent is autonomous to the degree to which its knowledge a) departs from the a priori knowledge provided by the human programmer, and b) aids in the robustness of its behavior in its environment. A maximally autonomous artificial agent is then one whose self-gained knowledge allows a high degree of robustness to environmental change, and a minimally autonomous AA is one whose knowledge fails to yield efficient behavior across environmental change. We can call this vision ‘machine autonomy as independence’, since it is the (epistemological) independence of an artificial agent that allows it to better perform its purpose in an Umwelt.
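The reformulation above lends itself to a simple illustration: an agent whose estimate of some environmental quantity starts from the designer’s prior and, as percepts accumulate, depends less and less on it. The following Python sketch assumes a naive weighting scheme chosen only for clarity; it is not meant as a definitive model of machine autonomy, nor as Russell & Norvig’s own formalism.

```python
# A minimal sketch of 'autonomy as independence': the designer's prior
# knowledge is progressively outweighed by the agent's own percepts.
from typing import List


def estimate(prior: float, percepts: List[float]) -> float:
    """Blend the designer's prior with the running mean of the agent's percepts.

    With no percepts the agent depends entirely on prior knowledge; as the
    number of percepts n grows, the prior's weight 1/(1+n) shrinks toward
    zero and the estimate becomes effectively independent of the designer.
    """
    n = len(percepts)
    if n == 0:
        return prior
    own_mean = sum(percepts) / n
    weight_prior = 1.0 / (1.0 + n)
    return weight_prior * prior + (1.0 - weight_prior) * own_mean


designer_prior = 10.0  # possibly incorrect knowledge imparted at design time
observations = [4.2, 3.9, 4.1, 4.0, 3.8]

for k in range(len(observations) + 1):
    print(k, round(estimate(designer_prior, observations[:k]), 3))
```

Run on these illustrative numbers, the estimate drifts from the (incorrect) prior of 10.0 toward the value suggested by the agent’s own experience, which is the sense of ‘compensation’ at issue in the quotation above.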

The Emergence of Artificial Moral Agents: The Machine Autonomy Continuum

In the modern world, there are many types of technological artefacts which could conceivably entertain a relationship with moral value. In effect, if we view technological artefacts as the product, or perhaps the instantiation, of various human intentions57, it should come as no surprise that those intentions aim at specific goals and purposes, which may privilege certain values over others, or benefit some over others. Undoubtedly, this happens at a global level: smartphones and communication technology allow unprecedented levels of coordinated action and awareness, but this very same technology threatens to carve a ‘digital divide’ across individuals of different generations, socioeconomic statuses, perhaps even across nation states. It appears, then, that all the world’s a stage for morally salient environments, and all the men and women merely players. Clearly, however, the ‘world’ is too broad a context for any meaningful analysis to take place.

Table of contents:

RESUME SUBSTANTIEL EN FRANÇAIS – SUBSTANTIAL FRENCH SYNOPSIS
INTRODUCTION
ARTIFICIAL AGENT & ENVIRONMENT
1. DEFINING ARTIFICIAL AGENTS
2. DESIGNING ARTIFICIAL AGENTS IN AN UMWELT
3. CLASSES OF ARTIFICIAL AGENTS & ETHICAL DESIGN CONCERNS
3.1 ART Principles as Ethical Design Concerns
3.2 ART Principles and the Embodied-Virtual Distinction
3.3 ART Principles and the Deterministic-Stochastic Distinction
4. CONCLUSION
AUTONOMY & ARTIFICIAL MORAL AGENTS
1. UNPACKING THE ARGUMENT FROM INCREASING AUTOMATION
1.1 The Engineer’s Concept of Machine Autonomy
1.2 The Philosopher’s Concept of Machine Autonomy
2. THE EMERGENCE OF ARTIFICIAL MORAL AGENTS: THE MACHINE AUTONOMY CONTINUUM
2.1 Levels 1 & 2: Rudimental Decisional Autonomy & Mechanical Autonomy
2.2 Level 3 & 4: Human-in-the-loop Technology & Decisional Autonomy
2.3 Moor’s Bright Line, Moral Agents and Superintelligent Agents
3. CONCLUSION
HETERONOMY, MODULARITY & ARTIFICIAL MORAL AGENTS
1. MODULAR ARTIFICIAL MORAL AGENTS
2. SURROGATE AGENTS AND DISTRIBUTIVE AGENTS
2.1 Surrogate Agents
2.2 Distributive Agents
3. CONCLUSION
ARTIFICIAL MORALITY & THE HARD PROBLEM OF MACHINE ETHICS
TECHNICAL CONSTRAINTS & THE STRUCTURE OF ARTIFICIAL MORALITY
1. THE ENGINEER’S CONCEPT OF TOP-DOWN, BOTTOM-UP AND HYBRID APPROACHES
2. THE PHILOSOPHER’S CONCEPT OF TOP-DOWN, BOTTOM-UP, AND HYBRID APPROACHES
2.1 The Philosophical Concept of Top-Down Approaches
2.2 The Philosophical Concept of Bottom-Up Approaches
2.3 The Philosophical Concept of Hybrid Approaches
3. TECHNICAL CONSTRAINTS & ARTIFICIAL MORALITY
3.1 The Structure of a Moral Theory
3.2 There Is No ‘I’ In Robot
3.3 The Place of Artificial Morality
4. CONCLUSION
ACCEPTABILITY & ARTIFICIAL MORALITY
1. ACCEPTABILITY AS MORAL PREFERENCE & THE SCOPE OF ARTIFICIAL MORALITY
2. GIVE THE PEOPLE WHAT THEY WANT: ACCEPTABILITY AS ADOPTABILITY
3. ACCEPTABILITY AS INSTITUTIONAL VIABILITY: THE PROBLEM OF ARTIFICIAL MORAL UPTAKE
3.1 An Illustration of the Problem of Artificial Moral Uptake: SoupSaint2020™
4. CONCLUSION
ARTIFICIAL MORALITY & THE ETHICAL VALENCE THEORY
1. EXPANDING ON THE ARGUMENT FROM INCREASING AUTOMATION
1.1 Exploring the Principle of Total Irreproachability in the Design of Artificial Morality
2. THE ETHICAL VALENCE THEORY
2.1 Foundations, Affordances and Moral Perception
2.2 Claims & Valences
2.3 Moral Profiles
3. CONCLUSION
THE ETHICAL VALENCE THEORY & AUTONOMOUS VEHICLES
1. DILEMMA SCENARIOS & MARKOVIAN DECISION PROCEDURES
1.1 DILEMMA SITUATIONS & THE LAW
2. ETHICAL DELIBERATION
3. ETHICAL VALENCES
4. ETHICAL DELIBERATION & MORAL PROFILES
5. APPLICATION OF THE ETHICAL VALENCE THEORY IN A HYPOTHETICAL SITUATION
6. CONCLUSION
CONCLUSION
REFERENCES
