Artificial Agents


Artificial Agents are systems that are intentionally created by human programmers (hence artificial) but act autonomously (hence agents). A human designs the agent and specifies its task; the agent, however, is able to interact with its environment without the help of its human creator. Luciano Floridi explains that this independence allows artificial agents to learn and adapt entirely on their own. Artificial agents differ from their human inventors in that they do not experience feelings or emotions in pursuing their goals; nevertheless, according to Floridi, artificial agents are still classified as moral agents. Despite Floridi's claim, the belief that artificial agents are moral agents is controversial. [1]

Differentiation From Humans

Computers are able to complete many tasks more efficiently than humans. This efficiency comes from their ability to iterate through seemingly monotonous algorithms at great speed, which is why humans sought a way to computerize tedious tasks. Artificial agents were born out of that necessity, and this is where the differentiation lies: humans make artificial agents, but artificial agents are not human. Although artificial agents can adapt and learn much as humans do, they do so in a different way. In some artificial agents, this learning takes place by gathering data from previous iterations and using it to improve later operations. Artificial agents do not experience emotion or feeling, which could lead to larger issues when they take on morally significant decision making. Human beings are able to comprehend the impact of their actions, whereas artificial agents tend to be goal driven: they will do whatever they judge necessary to reach the desired outcome, operating in absolutes. Consciousness is something that humans have and artificial agents likely do not, and it is difficult to claim that an artificial agent is aware of its own being. Thus, an artificial agent differs from a human in that it can make its own decisions, yet it is not fully aware of how or why it does so. [2]

Artificial Agents' Three Criteria

Floridi lays out three basic criteria of an agent (illustrated in the sketch after this list): [1]

  1. Interactivity: the agent can interact with its environment
  2. Autonomy: the agent can change without intervention from an outside source
  3. Adaptability: the agent can change as a result of its interactions
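
The following minimal Python sketch is an illustration only, not something from Floridi's paper: a toy agent object that reads a signal from its environment (interactivity), chooses an action without outside intervention (autonomy), and adjusts its own internal state based on the outcome (adaptability). All names here (ToyAgent, environment_signal, and so on) are hypothetical.

```python
import random

class ToyAgent:
    """A toy agent exhibiting the three criteria (illustrative only)."""

    def __init__(self):
        self.threshold = 0.5  # internal state the agent can change itself

    def act(self, environment_signal):
        # Interactivity: the agent reads a signal from its environment.
        # Autonomy: it decides on an action with no outside intervention.
        return "engage" if environment_signal > self.threshold else "wait"

    def adapt(self, reward):
        # Adaptability: the agent changes its own state as a result
        # of its interactions (here, nudging its decision threshold).
        self.threshold += 0.05 if reward < 0 else -0.05


agent = ToyAgent()
for _ in range(10):
    signal = random.random()      # the environment produces a signal
    action = agent.act(signal)    # the agent interacts and acts autonomously
    reward = 1 if action == "engage" and signal > 0.7 else -1
    agent.adapt(reward)           # the agent adapts from the outcome
```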

Learning and Intentionality

Human beings often learn from experience; this quality extends to artificial agents as well. As artificial agents encounter different computational situations, they are able to modify their actions, which demonstrates not only their ability to learn but also their ability to interact with their environment without human assistance. Frances Grodzinsky poses the idea that “the more an artificial agent exhibits learning and intentionality, the more difficult it will be for its designer to predict accurately the agent’s future behavior.” [3] This becomes increasingly important as artificial agents are integrated into riskier domains. When artificial agents are able to learn and adapt on their own, they can surpass their human creators, which poses major problems should the designer lose control of the agent. The idea that an agent can break free from its creator's intent undermines the agent's claim to intentionality. Intentionality in an artificial agent requires that it be essentially predictable, but it does not imply that the agent has consciousness. Therefore, an artificial agent has intentionality only so long as it does not surpass its creator's desired outcomes; once the human inventor loses control of the artificial agent, the agent loses intentionality and can become dangerous. [3]

Gaming Example

DeepMind is a company that has applied artificial agents to gaming, building a system that aims to play Atari games. DeepMind's CEO, Demis Hassabis, explains that his staff made an algorithm designed to learn on its own through experience; after a few hundred attempts at playing, the artificial agent learns how to win the games in the most efficient manner. In gaming, the consequences of an artificial agent’s actions are relatively minor, but “say it makes medical decisions or regulates financial transactions” instead. [4] The stakes become much higher, especially if the creator loses control of the agent's actions. Something like this happened at DeepMind: the programmers made an artificial agent that completed the game in a way the human creators had never thought of themselves. This supports Grodzinsky's point, in that the agent acted in a way its creators were unable to predict. [3]
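
The article does not detail DeepMind's algorithm (their Atari agents used deep reinforcement learning), so the sketch below is only a hedged illustration of the trial-and-error learning described above: simple tabular Q-learning on a made-up one-dimensional "game" where the agent improves over a few hundred attempts. The environment, reward, and parameter values are all illustrative assumptions, not DeepMind's system.

```python
import random
from collections import defaultdict

# Toy "game": states 0..4 on a line; reaching state 4 wins.
ACTIONS = [-1, 1]
GOAL = 4

def step(state, action):
    """Environment dynamics: move left/right, reward 1 on reaching the goal."""
    next_state = max(0, min(GOAL, state + action))
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

q = defaultdict(float)               # (state, action) -> estimated value
alpha, gamma, epsilon = 0.5, 0.9, 0.1

def greedy(state):
    """Pick the best-valued action, breaking ties at random."""
    best = max(q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q[(state, a)] == best])

for episode in range(300):           # "a few hundred attempts at playing"
    state, done = 0, False
    while not done:
        # Mostly exploit what has been learned; occasionally explore.
        action = random.choice(ACTIONS) if random.random() < epsilon else greedy(state)
        next_state, reward, done = step(state, action)
        # Learn from this experience: nudge the value estimate toward
        # the observed reward plus the discounted future value.
        target = reward + gamma * max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (target - q[(state, action)])
        state = next_state

# After training, the learned policy moves right toward the winning state.
print([greedy(s) for s in range(GOAL)])
```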

Artificial Agents in Ethics

As the world becomes more technical by the day, ethical issues arise that tend to be largely uncharted. In the case of artificial agents, the proposition that they are moral agents has raised ethical controversy.

Artificial Agents as Moral Agents

Morality is the ability to distinguish between right and wrong. Often, this implies an understood foundation of law, meaning that there are consequences for certain actions and praise for others. For an entity to be considered moral, this capacity to be punished or honored should be intact. In the case of artificial agents, it is unclear whether repercussions and rewards are applicable.

Floridi writes that an artificial agent “can perform morally qualifiable actions,” [1] insinuating that artificial agents can distinguish between right and wrong, or can at least produce a right or wrong outcome. Kenneth Himma does not go so far as to say that artificial agents are moral agents. Rather, Himma holds that artificial agents can only be moral agents if there is evidence that the artificial agent understands morality and is able to make decisions entirely on its own. A better understanding of moral agency comes down to who is responsible for the actions undertaken. [2]

Responsibility for Artificial Agent’s Actions

Although there is some dispute over whether artificial agents are moral agents, there seems to be some consensus on where responsibility should be placed. By nature, an artificial agent is just that: artificial, meaning made by humans. Because of this, human beings are assumed to be accountable for the actions of the agents they produce. Himma draws a clear parallel here, noting that “we do not punish people with severe cognitive disabilities […] that interferes with the ability to understand the moral character of her behavior." [2] Likewise, we do not punish the artificial agent, because it is unclear whether it understands the difference between right and wrong, even if it can produce a moral outcome. Even Johnson, who counters Floridi’s view that artificial agents are clearly moral agents, urges that we must understand “computer systems in ways that keep those who design and deploy them accountable." [4] Johnson recognizes that even though it may be the artificial agent's actions that are deemed right or wrong, it is still the creator's responsibility to understand the risk of designing an autonomous agent.

References

  1. Floridi, Luciano and Sanders, J.W. “On the Morality of Artificial Agents.” 2004.
  2. Himma, Kenneth. “Artificial agency, consciousness, and the criteria for moral agency: what properties must an artificial agent have to be a moral agent?” 2009.
  3. Grodzinsky, Frances, Miller, Keith, and Wolf, Marty. “The ethics of designing artificial agents.” 2008.
  4. Johnson, Deborah and Miller, Keith. “Un-making artificial moral agents.” 2008.