Artificial Agents

Logic Architecture for Artificial Agents

Artificial agents are bots or programs that autonomously collect information or perform a service based on user input or their environment.[1] Typical characteristics of agents include adapting based on experience, solving problems, analyzing success and failure rates, and using memory-based storage and retrieval.[1] Humans are responsible for the design and the behavior of an agent; however, the agent itself can interact with its environment freely within the scope of its granted domain. Luciano Floridi, best known for his work in the philosophy of information and information ethics, explains that this autonomy allows artificial agents to learn and adapt entirely on their own. Artificial agents differ from their human inventors in that they do not acquire feelings or emotions in pursuing their goals, yet, according to Floridi, they can still be classified as moral agents. An artificial agent's morality, actions, embedded values, and biases are ethically controversial.[2]

Differences From Humans

Compared to humans, computers are highly efficient at processing complex calculations and completing repetitive tasks with minimal margins of error. These strengths, combined with the need to increase productivity, led to the birth of artificial agents. Humans make artificial agents, but artificial agents are not human. Though artificial agents can adapt and learn as humans can, they appear to do so in a different way. Artificial agents currently do not experience emotions or feelings, which suggests they might struggle with moral tasks that depend on having emotional experiences. On the other hand, emotional experience may be orthogonal to moral action, in which case its absence would not prevent an artificial agent from acting in a moral way.

Human beings appear able to comprehend the impact of their actions and make deliberate choices, whereas artificial agents tend to be goal-driven in the sense that they will do whatever is necessary to reach the desired outcome. However, in a determinist worldview, the human ability to make choices may actually be illusory: though it feels like we are making a choice, we may in fact be following the same kind of goal-driven decision making that an artificial agent does.[3] Indeed, artificial agents may internally "feel" the same way humans do about their ability to make choices that maximize their utility function.

In 1950, computer scientist Alan Turing proposed a theoretical procedure for determining whether a machine is able to think, originally known as the Imitation Game but later popularized as the Turing Test. In his seminal paper, Turing argues that objections to crediting a machine with thought because it can hold a conversation indistinguishable from a human's apply just as well to human beings, who pass the Turing Test by the same standard. For example, the objection that a conversational artificial agent does not actually have internal experience is symmetric to the solipsistic argument that other humans who can hold a conversation do not necessarily have internal experience either.[4] This suggests that there is no fundamental difference between human agents and artificial agents, and that once artificial agents pass some threshold in either software or hardware, they too can be considered to have moral agency.

In contrast, Kenneth Himma argues that current artificial agents cannot pass the Turing Test, and as such, may not have internal experience. This lack of internal experience in turn suggests a lack of ability to deliberate about decisions, which Himma considers key to agenthood.[5]

Three Criteria for Artificial Agents

Floridi lays out three basic criteria of an agent: [2]

  1. Interactivity
  2. Autonomy
  3. Adaptability

Interactivity

Interactivity refers to the ability of an agent and its environment to act upon each other. The input or output of a value is a common example of interactivity. Interactivity can also refer to actions by agents and participants that occur at the same time.[2] An artificial agent's ability to interact often hinges on the algorithm with which it was programmed. The algorithm causes the agent to respond to input in varying forms, apply it to the agent's current state, and produce output or adjust state accordingly. Interactivity is essential in allowing artificial agents to enact the other two criteria.
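The sketch below is a minimal, illustrative rendering of this input-state-output cycle; the class name, method, and string values are hypothetical and are not drawn from Floridi's account.

  # Minimal sketch of interactivity: the environment acts on the agent through
  # an observation, and the agent acts back through an output, adjusting its
  # internal state along the way. All names here are illustrative.
  class InteractiveAgent:
      def __init__(self):
          self.state = "idle"
  
      def step(self, observation):
          """Receive input, apply it to the current state, and produce output."""
          if observation == "request":
              self.state = "working"
              return "acknowledged"
          self.state = "idle"
          return "waiting"
  
  agent = InteractiveAgent()
  print(agent.step("request"))   # environment -> agent -> environment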

Autonomy

An autonomous agent is one with a certain degree of intelligence such that it can act on the user's behalf. This means that an agent can change its state without direct intervention from an outside source. However, having autonomy does not mean the agent can do whatever it pleases; it can only act and make decisions within the degree of intelligence with which it has been built.[6] Autonomy also carries a sense of complexity: if an agent can perform internal transitions that change its state, it is autonomous.[2][7] As an artificial agent continues to learn and adapt with more data, its level of autonomy increases, as it becomes capable of making more decisions with greater accuracy.
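A minimal sketch of such an internal transition follows; the 60-second timer, state names, and class are illustrative assumptions rather than anything specified by Floridi and Sanders.

  # Minimal sketch of autonomy: the agent changes its own state through an
  # internal transition, without a direct external stimulus.
  import time
  
  class AutonomousAgent:
      def __init__(self):
          self.state = "monitoring"
          self.last_report = time.time()
  
      def tick(self):
          """Called periodically; the agent may change state on its own."""
          if time.time() - self.last_report > 60:
              self.state = "reporting"     # internal transition, no user input
              self.last_report = time.time()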

Adaptability

Adaptability refers to the agent's ability to change its state without that change being a direct response to interaction. Adaptability builds on interactivity and autonomy: it requires that the agent store its own transition rules in its internal state and be able to change them.[2] As autonomy and interactivity increase, so does adaptability, furthering the autonomous agent's ability to emulate intelligence.
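The sketch below illustrates an agent revising one of its own transition rules in light of experience; the threshold rule and update step are hypothetical simplifications, not part of the formal definition.

  # Minimal sketch of adaptability: the agent rewrites one of its own
  # transition rules (a threshold) based on experience, rather than merely
  # changing state. The names and the update rule are illustrative.
  class AdaptiveAgent:
      def __init__(self):
          self.threshold = 0.5         # transition rule kept in internal state
  
      def act(self, signal):
          return "on" if signal > self.threshold else "off"
  
      def learn(self, signal, was_correct):
          """Adjust the rule itself when an action turned out to be wrong."""
          if not was_correct:
              self.threshold += 0.1 * (signal - self.threshold)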

Examples of Artificial Agents

A smart thermostat controlling home temperature

Web bots:
Web bots are widely used as filters for users' email accounts. These bots satisfy the criteria for artificial agents in that they interact with their environment, in this case the user's mailbox, by blocking unwanted messages. The process is fully automated, so users do not have to delete unwanted emails manually. The bots also continually learn and adapt to users' preferences in order to improve the accuracy of their filters.
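As a rough illustration, a toy mail-filtering bot might look like the sketch below; the word-count scoring rule and threshold are assumptions made for the example and do not describe how any real provider's filter works.

  # Toy sketch of an adaptive mail filter: block messages that score above a
  # spam threshold and update word counts from the user's spam/not-spam actions.
  from collections import Counter
  
  class MailFilterBot:
      def __init__(self):
          self.spam_words = Counter()
  
      def is_spam(self, message):
          score = sum(self.spam_words[w] for w in message.lower().split())
          return score >= 3
  
      def learn(self, message, user_marked_spam):
          """Adapt to the user's preferences from their feedback."""
          if user_marked_spam:
              self.spam_words.update(message.lower().split())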

Smart Thermostats:
Smart thermostats maintain a comfortable residential temperature. The device interacts with the home environment by engaging in heating or cooling activities. It is designed to operate with minimal human interaction, reducing human error, and it adapts to its environment through sensors that let it determine whether to suspend or engage heating or cooling.
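A minimal control-loop sketch of that decision is shown below, assuming an illustrative setpoint and deadband; real devices add schedules, occupancy sensing, and learned preferences on top of this.

  # Sketch of a thermostat agent deciding whether to heat, cool, or idle from
  # a temperature reading; the setpoint and deadband values are illustrative.
  def thermostat_step(sensor_temp_c, setpoint_c=21.0, deadband_c=0.5):
      if sensor_temp_c < setpoint_c - deadband_c:
          return "heat"
      if sensor_temp_c > setpoint_c + deadband_c:
          return "cool"
      return "idle"
  
  print(thermostat_step(19.8))   # -> "heat"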

Autonomous Vehicles:
There are many levels of self-driving cars with varying degrees of required human input, but fully autonomous vehicles need no human interference. Autonomous vehicles remove the burden of driving from their human controllers and take on the various tasks involved in driving themselves. In order to function, they use a variety of computer vision techniques to adapt to changes in their environment. In conjunction with complex algorithms that handle the incoming computer vision data, these artificial agents may in some cases outperform their human counterparts.

Learning and Intentionality

Human beings often learn from experience, a quality that extends to artificial agents as well. As artificial agents encounter different computational situations, they are able to modify their actions. This exhibits not only their ability to learn but also their ability to interact with their environment without human assistance. Frances Grodzinsky postulates that as an artificial agent learns and becomes more complex, its future behavior becomes harder to predict.[8] This becomes increasingly important as artificial agents are deployed in riskier domains. When artificial agents are able to learn and adapt on their own, they can surpass their human creators, which poses major problems should the designer lose control of the agent. The idea that an agent can break free from its creator's intent undermines the agent's claim to intentionality. Intentionality of an artificial agent requires that it remain essentially predictable, but does not imply that it has consciousness. An artificial agent therefore has intentionality so long as it does not surpass its creator's desired outcomes. Once the human inventor loses control of the artificial agent, the agent loses intentionality and can become dangerous.[8]

Artificial Agents in Gaming

DeepMind AI masters the classic Atari video games[9]

DeepMind is a company that has applied artificial agents to gaming. Its researchers created a system that learns to play Atari games. DeepMind CEO Demis Hassabis explains that his staff built an algorithm that learns on its own through experience: after a few hundred attempts, the artificial agent learns how to win the games in an efficient manner. Though in gaming the consequences of an artificial agent's actions are relatively small, the growing use of artificial agents poses bigger issues. One day, artificial agents may be making medical, financial, and even governmental decisions.[10] This raises the stakes, especially because creators can easily lose control of the actions of the agent.
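DeepMind's Atari agent used a deep Q-network trained on raw screen pixels; the tabular Q-learning toy below is only a simplified sketch of the underlying idea of improving an action policy from reward over many attempts, with all constants and action names chosen for illustration.

  # Simplified sketch of learning from repeated attempts (tabular Q-learning).
  import random
  from collections import defaultdict
  
  q = defaultdict(float)                 # value of each (state, action) pair
  actions = ["left", "right", "fire"]
  alpha, gamma, epsilon = 0.1, 0.99, 0.1
  
  def choose(state):
      if random.random() < epsilon:      # explore occasionally
          return random.choice(actions)
      return max(actions, key=lambda a: q[(state, a)])   # otherwise exploit
  
  def update(state, action, reward, next_state):
      best_next = max(q[(next_state, a)] for a in actions)
      q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])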

For example, at DeepMind the programmers made an artificial agent that completed a game in a way its human creators had never thought of themselves. The agent acted in a way the creators were unable to predict.[8]

Ethics

The rise of revolutionary technologies demands a symmetric rise in information ethics, at least according to leading ICT ethicist Philip Brey[11] and Dartmouth philosophy professor James Moor.[12] The value that technology provides makes its integration into our lives both persistent and ever-expanding. James Hogan, author of The Two Faces of Tomorrow, discusses how we must control the amount of power we give artificial agents. It is important to remember that artificial agents are autonomous: the more power they are given, the less certain we can be about how they will act.[13] Another major ethical controversy is the proposition that artificial agents are moral agents.

Artificial Agents as Moral Agents

Morality is the ability to distinguish between right and wrong. Often this presupposes an understood foundation of rules, so that some actions bring consequences and others bring praise. To be considered a moral agent, an entity should be subject to such punishment or honor. Moral agents can perform actions for good or for evil.[2] In the case of artificial agents, it is unclear whether repercussions and rewards are applicable.

An artificial agent can do things that have moral consequences.[2] This suggests that artificial agents can distinguish between right and wrong, or can at least produce a right or wrong outcome. Kenneth Himma does not go so far as to say that artificial agents are moral agents. Rather, he holds that artificial agents can only be moral agents if there is evidence that they understand morality and are able to make decisions on their own. A better understanding of a moral agent comes down to who is responsible for the actions undertaken.[5]

Responsibility for Artificial Agent’s Actions

By nature, an artificial agent is just that: artificial, made by humans. Because of this, it is often assumed that human beings are to blame for the actions of the agents they produce. To evaluate this assumption, it is important to know what responsibility entails: a responsible entity has intention and awareness of its actions.

Himma draws a clear parallel here: society does not hold people with cognitive disabilities morally accountable in the same way it holds other people accountable, because their disability interferes with their ability to comprehend moral consequences.[5] Likewise, we do not punish the artificial agent, because it is unclear whether it understands the difference between right and wrong, even if it can produce a moral outcome. We must hold the people who design artificial agents accountable for those agents' actions; otherwise, no one can be held accountable.[10] Johnson recognizes that even though it may be the actions of the artificial agent that are deemed right or wrong, it is still the responsibility of the creator to understand the risk of designing an autonomous agent. Frances Grodzinsky echoes this sentiment, placing the responsibility for an artificial agent's behavior on the designer.[8] When designing and planning the actions of artificial agents, designers should establish what their intentions are and try to anticipate any behaviors that could result in immoral interactions. For this reason, designers must be careful when developing and designing these interactions.[8]

Artificial Agents and Bias

Amazon's recruiting AI was found biased against women[14]

Because artificial agents are human-made yet autonomous, major ethical issues arise when values come into play. Human beings innately have opinions, and as the creators of artificial agents, they can let those opinions slip into the technologies. As artificial agents are increasingly applied in the real world, the biases they carry inevitably affect certain groups of people.

Examples

COMPAS is an AI algorithm used in the criminal justice system to predict whether criminals are likely to commit another crime in the future. It was designed to take personal human bias out of computing this sensitive metric, drawing on information such as a criminal's age, gender, and previously committed crimes. However, tests of this system found that it overestimates the likelihood of recidivism for black defendants at a much higher rate than for white defendants,[15] because it was designed by people who have their own biases.

The recruiting AI adopted by Amazon to review resumes was found to be biased against female applicants.[16]

While the intention of these algorithms is to remove human bias from the equation, humans have moral consciousness, whereas it is unclear whether artificial agents do. When artificial agents have their creators' biases built in and then operate on their own, those biases can be exploited and amplified.[17]

Steps to Remove Bias

One main source of bias in artificial agents is the data sample used to develop their decision making. Data can carry undesirable human traits even when data sets are collected from many different sources, and through the machine learning process an artificial agent can become biased without the developer intending it. To ensure a less biased algorithm, ethicists should analyze the sample used for the data, confirm that sources of different backgrounds and opinions are properly represented, and identify and remove data that may introduce a form of bias. Lastly, the decision-making process of an artificial agent should be transparent: decisions should be traceable so that it is possible to understand how conclusions were reached, pinpoint where bias occurs, and adjust the data accordingly.[18] Active research at MIT is currently underway to use machine learning techniques to analyze, in a more human-understandable way, how black-box neural networks make decisions. This will help reduce bias as well as improve machine learning algorithms.[19]
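As a small illustration of the auditing step described above, the sketch below compares an agent's positive-decision rates across groups in a data sample; the record format and the 80% rule-of-thumb threshold are assumptions made for the example.

  # Sketch of a simple bias audit: compare positive-decision rates per group.
  def selection_rates(records):
      """records: list of dicts with 'group' and 'decision' (True/False)."""
      totals, positives = {}, {}
      for r in records:
          totals[r["group"]] = totals.get(r["group"], 0) + 1
          positives[r["group"]] = positives.get(r["group"], 0) + int(r["decision"])
      return {g: positives[g] / totals[g] for g in totals}
  
  def disparate_impact(rates, protected, reference):
      """A ratio below roughly 0.8 is a common flag for possible bias."""
      return rates[protected] / rates[reference]
  
  sample = [{"group": "A", "decision": True}, {"group": "A", "decision": True},
            {"group": "B", "decision": True}, {"group": "B", "decision": False}]
  rates = selection_rates(sample)
  print(disparate_impact(rates, protected="B", reference="A"))   # 0.5 -> flagged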

References

  1. 1.0 1.1 Rouse, Margaret (2019). "Intelligent Agent". TechTarget SearchEnterpriseAI. Retrieved April 22, 2019.
  2. 2.0 2.1 2.2 2.3 2.4 2.5 2.6 Floridi, Luciano; Sanders, J.W. (2004). "On the Morality of Artificial Agents". Minds and Machines. 14(3): 349-379. Retrieved April 22, 2019.
  3. Hoefer, Carl (2016). "Causal Determinism". Stanford Encyclopedia of Philosophy. Retrieved April 22, 2016.
  4. Turing, Alan (1950). "Computing Machinery and Intelligence". Mind. 59 (236): 433-460. Retrieved April 22, 2019.
  5. 5.0 5.1 5.2 Himma, Kenneth (2009). "Artificial agency, consciousness, and the criteria for moral agency: what properties must an artificial agent have to be a moral agent?". Ethics and Information Technology. 11(1): 19-29.
  6. Chang, Chia-hao; Chen, Yubao (October 1996). "Autonomous Intelligent Agent and Its Potential Applications". Computers & Industrial Engineering. 31(1-2): 409-412.
  7. Bringsjord, Selmer; Govindarajulu, Naveen Sundar (2018). "Artificial Intelligence". Stanford Encyclopedia of Philosophy. Retrieved April 22, 2019.
  8. 8.0 8.1 8.2 8.3 8.4 Grodzinsky, Frances; Miller, Keith; Wolf, Marty (September 2008). "The ethics of designing artificial agents". Ethics and Information Technology. 10(2-3): 115-121.
  9. Hornyak, Tim (February 26, 2015). "Google's Powerful DeepMind AI Masters Classic Atari Video Games". PC World. Retrieved April 22, 2019.
  10. 10.0 10.1 Johnson, Deborah; Miller, Keith (September 2008). "Un-making artificial moral agents". Ethics and Information Technology. 10(2-3): 123-133.
  11. Brey, Philip (December 2012). "Anticipating ethical issues in emerging IT". Ethics and Information Technology. 14(4): 305-317.
  12. Moor, James H. (2005). "Why We Need Better Ethics for Emerging Technologies". Ethics and Information Technology. 7: 111-119.
  13. Hogan, James (1979). The Two Faces of Tomorrow. Baen Books. ISBN 978-0-671-87848-1.
  14. Torres, Monica (October 10, 2018). "Amazon Reportedly Scraps AI Recruiting Tool That Was Biased against Women". The Ladders. Retrieved April 22, 2019.
  15. Larson, Jeff; Mattu, Surya; Kirchner, Lauren; Angwin, Julia (May 23, 2016). "How We Analyzed the COMPAS Recidivism Algorithm". Propublica. Retrieved April 22, 2019.
  16. Marr, Bernard (January 29, 2019). "Artificial Intelligence Has A Problem With Bias, Here's How To Tackle It". Forbes. Retrieved April 22, 2019.
  17. Brey, Philip (2009). "Values in technology and disclosive computer ethics". Cambridge Handbook of Information and Computer Ethics. Cambridge University Press. ISBN 9780511845239.
  18. Eder, Sascha (June 28, 2018). "How Can We Eliminate Bias In Our Algorithms?". Forbes. www.forbes.com/sites/theyec/2018/06/27/how-can-we-eliminate-bias-in-our-algorithms/#15f02f93337e.
  19. McGovern, Anne (September 5, 2018). "Taking machine thinking out of the black box". MIT News. http://news.mit.edu/2018/mit-lincoln-laboratory-adaptable-interpretable-machine-learning-0905.