{{Nav-Bar|Topics##}}<br>
 
[[File:Agents.jpg|thumb|500px|Logic Architecture for Artificial Agents]]
 
'''Artificial Agents''' are bots or programs that autonomously collect information or perform a service based on user input or their environment.<ref name = "intelligentagents">Rouse, Margaret (2019). [https://searchenterpriseai.techtarget.com/definition/agent-intelligent-agent "Intelligent Agent"]. ''TechTarget SearchEnterpriseAI''. Retrieved April 22, 2019.</ref> Typical characteristics of agents include adapting based on experience, problem solving, analyzing success and failure rates, and using memory-based storage and retrieval.<ref name = "intelligentagents"/> Humans are responsible for the design and behavior of an agent; however, the agent itself can interact with its environment freely within the scope of its granted domain. Luciano Floridi, best known for his work in the philosophy of information and information ethics, explains that this autonomy allows artificial agents to learn and adapt entirely on their own. Artificial agents differ from their human inventors in that they do not acquire feelings or emotions in achieving their goals, but, according to Floridi, they are still classified as moral agents. An artificial agent's morality, actions, embedded values, and biases are ethically controversial.<ref name="Floridi">Floridi, Luciano; Sanders, J.W. (2004). [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.16.722&rep=rep1&type=pdf "On the Morality of Artificial Agents"]. ''Minds and Machines''. '''14'''(3): 349-379. Retrieved April 22, 2019.</ref>
  
==Differences From Humans==
Compared to humans, computers are highly efficient at processing complex calculations and completing repetitive tasks with minimal margins of error. These capabilities, combined with the need to increase productivity, led to the birth of artificial agents. Humans make artificial agents, but artificial agents are not human. Though artificial agents can adapt and learn as humans can, they do so in a seemingly different way. Artificial agents currently do not experience emotions or feelings, which suggests they might struggle with moral tasks that depend on having emotional experiences. On the other hand, emotional experience may be orthogonal to moral action and therefore would not prevent an artificial agent from acting in a moral way.
  
Human beings appear able to comprehend the impact of their actions and make deliberate choices, but artificial agents tend to be goal-driven in the sense that they will do whatever is necessary to reach the desired outcome. However, in a determinist worldview, the human ability to make choices may actually be illusory: though it feels like we are making a choice, we are in fact simply following the same kind of goal-driven decision making that an artificial agent does.<ref>Hoefer, Carl (2016). [https://plato.stanford.edu/entries/determinism-causal/ "Causal Determinism"]. ''Stanford Encyclopedia of Philosophy''. Retrieved April 22, 2016.</ref> Indeed, artificial agents may internally "feel" the same way humans do about their ability to make choices that maximize their utility function.
 
In 1950, computer scientist Alan Turing proposed a theoretical procedure for determining whether a machine can think, originally known as the Imitation Game but later popularized as the [https://en.wikipedia.org/wiki/Turing_test Turing Test]. In his seminal paper, Turing argues that objections to attributing thought to a machine that passes the Turing Test, that is, one whose conversation a human interrogator cannot distinguish from that of another human, apply just as well to human beings, who can also pass the test. For example, the objection that an artificial agent that can hold a conversation does not actually have internal experience is symmetric to the solipsistic argument that other humans who can hold a conversation do not necessarily have internal experience either.<ref>Turing, Alan (1950). [https://www.csee.umbc.edu/courses/471/papers/turing.pdf "Computing Machinery and Intelligence"]. ''Mind''. '''59'''(236): 433-460. Retrieved April 22, 2019.</ref> This suggests that there is no fundamental difference between human agents and artificial agents, and that once artificial agents pass some threshold in either software or hardware, they too can be considered to have moral agency.
 
In contrast, Kenneth Himma argues that current artificial agents cannot pass the Turing Test, and as such, may not have internal experience. This lack of internal experience in turn suggests a lack of ability to deliberate about decisions, which Himma considers key to agenthood.<ref name="Himma">Himma, Kenneth (2009). "Artificial agency, consciousness, and the criteria for moral agency: what properties must an artificial agent have to be a moral agent?". ''Ethics and Information Technology''. '''11'''(1): 19-29.</ref>
  
 
==Artificial Agents' Three Criteria==
Floridi lays out three basic criteria of an agent: <ref name="Floridi"/>  
 
# Interactivity
# Autonomy
# Adaptability
  
 
===Interactivity===
 
Interactivity refers to the ability of an agent and its environment to act upon each other. Input or output of a value is a common example of interactivity. Interactivity can also refer to actions by agents and participants that occur at the same time.<ref name="Floridi"/> An artificial agent's ability to interact often hinges on the algorithm with which it was programmed. These algorithms cause the agent to respond to input in varying forms, apply it to the current state it is in, and produce output or adjust its state accordingly. Interactivity is essential in allowing artificial agents to enact the other two criteria.
  
 
===Autonomy===
 
An autonomous agent is one with a certain degree of intelligence so that it can act on the user's behalf. This means that an agent can change its state without direct intervention from an outside source. However, having autonomy does not mean the agent can do whatever it pleases: it can only act and make decisions within the degree of intelligence that has been built into it.<ref>Chang, Chia-hao; Yubao Chen (October 1996). "Autonomous Intelligent Agent and Its Potential Applications". ''Computers & Industrial Engineering''. '''31'''(1-2): 409-412.</ref> Autonomy comes in degrees of complexity: if an agent can perform internal transitions that change its state, it is autonomous.<ref name="Floridi"/><ref>Bringsjord, Selmer; Govindarajulu, Naveen Sundar (2018). [https://plato.stanford.edu/entries/artificial-intelligence/ "Artificial Intelligence"]. ''Stanford Encyclopedia of Philosophy''. Retrieved April 22, 2019.</ref> As an artificial agent continues to learn and adapt with more data, its level of autonomy increases: it becomes capable of making more decisions, with increasing accuracy.
  
 
===Adaptability===
 
Adaptability refers to the idea that the agent can change to a different state in a way that is not a direct response to interaction. Adaptability builds on interactivity and autonomy when the agent stores its transition rules as part of its internal state, so that experience can change the rules themselves.<ref name="Floridi" /> As autonomy and interactivity increase, so does adaptability, furthering the autonomous agent's ability to emulate intelligence.
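These three criteria can be made concrete with a short sketch. The following Python example is a minimal illustration written for this article, not an implementation from Floridi's paper, and all class and method names are hypothetical. It shows an agent that responds to input from its environment (interactivity), changes state through its own internal transitions (autonomy), and rewrites its own transition rules based on experience (adaptability).

<syntaxhighlight lang="python">
# Minimal sketch of the three criteria: interactivity, autonomy, adaptability.
# All names are illustrative; this is not code from the cited paper.

class SimpleAgent:
    def __init__(self):
        self.state = "idle"
        # Transition rules map (state, observation) -> next state.
        self.rules = {("idle", "request"): "working",
                      ("working", "done"): "idle"}

    def perceive(self, observation):
        """Interactivity: the environment acts on the agent, and the agent answers."""
        self.state = self.rules.get((self.state, observation), self.state)
        return f"now in state '{self.state}'"      # output back to the environment

    def tick(self):
        """Autonomy: an internal transition with no outside intervention."""
        if self.state == "working":
            self.state = "idle"                    # e.g. an internal timeout

    def adapt(self, observation, preferred_state):
        """Adaptability: experience changes the transition rules themselves."""
        self.rules[(self.state, observation)] = preferred_state


agent = SimpleAgent()
print(agent.perceive("request"))   # interactivity: 'now in state working'
agent.tick()                       # autonomy: returns to 'idle' on its own
agent.adapt("request", "waiting")  # adaptability: future requests behave differently
</syntaxhighlight>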
  
 
==Examples of Artificial Agents==
 
[[File:smart-thermo.jpg|thumb|right|200px|A smart thermostat controlling home temperature]]
'''''Web bots:''''' <br />
Web bots are widely used as filters for users' email accounts. Web bots satisfy the criteria to be considered artificial agents in that they interact with their environment - in this case, the users' email - by blocking unwanted messages. This process is fully automated, without users having to delete unwanted emails manually. Web bots are also constantly learning to adapt to users' preferences in order to improve the accuracy of their filters. <br />
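A rough sketch of how such a filter can adapt is shown below. It is an illustrative toy example, not the code of any real email service; the word weights and threshold are invented assumptions.

<syntaxhighlight lang="python">
# Illustrative sketch of an adaptive e-mail filter; not based on any specific product.

class SpamFilter:
    def __init__(self):
        self.spam_words = {"lottery": 2.0, "winner": 1.5}   # learned word weights
        self.threshold = 2.0

    def is_spam(self, message):
        words = message.lower().split()
        score = sum(self.spam_words.get(w, 0.0) for w in words)
        return score >= self.threshold

    def learn(self, message, user_says_spam):
        """Adapt: strengthen or weaken word weights based on user feedback."""
        delta = 0.5 if user_says_spam else -0.5
        for w in set(message.lower().split()):
            self.spam_words[w] = self.spam_words.get(w, 0.0) + delta


f = SpamFilter()
print(f.is_spam("You are a lottery winner"))              # True: blocked automatically
f.learn("Meeting moved to Monday", user_says_spam=False)  # user correction adjusts weights
</syntaxhighlight>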
 
<br />
 
 
'''''Smart Thermostats:'''''<br />
Smart thermostats provide solutions for maintaining an optimal residential temperature. The device interacts with the residential environment by engaging in heating or cooling activities. It is designed to operate with minimal human interaction, reducing the opportunity for human error, and it adapts to its environment through sensors that let it determine whether to suspend or resume heating or cooling. <br />
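At its core, this control logic can be sketched as a simple loop with hysteresis. The function, setpoint, and sensor readings below are illustrative assumptions, not values from any particular product.

<syntaxhighlight lang="python">
# Illustrative thermostat loop with simple hysteresis; all values are made up.

def thermostat_step(current_temp, setpoint=21.0, band=0.5):
    """Decide whether to heat, cool, or stay idle for one sensor reading."""
    if current_temp < setpoint - band:
        return "heat"
    if current_temp > setpoint + band:
        return "cool"
    return "idle"

for reading in [19.8, 20.9, 21.2, 22.1]:   # simulated sensor readings in °C
    print(reading, "->", thermostat_step(reading))
</syntaxhighlight>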
 
<br />
 
'''''Autonomous Vehicles:'''''<br />
There are many levels of self-driving cars with varying degrees of required human input, but fully autonomous vehicles need no human interference. Autonomous vehicles remove the burden of driving from their human controllers and take on the various tasks involved in driving themselves. In order to function, autonomous vehicles use a variety of computer vision techniques to adapt to changes in their environment. In conjunction with complex algorithms that handle the incoming computer vision data, these artificial agents may in some cases outperform their human counterparts.
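At a very high level, this pipeline can be sketched as a sense-decide-act loop. The example below is a deliberately simplified illustration: the perception function is a stand-in for a full computer-vision stack, and the braking model and numbers are invented.

<syntaxhighlight lang="python">
# Highly simplified sense-decide-act loop; detect_obstacle_distance stands in
# for a full computer-vision stack and simply reads from a simulated sensor frame.

def detect_obstacle_distance(frame):
    return frame["nearest_obstacle_m"]               # placeholder for a vision model

def choose_action(distance_m, speed_mps):
    stopping_distance = speed_mps ** 2 / (2 * 6.0)   # crude braking model, 6 m/s^2
    if distance_m < stopping_distance * 1.5:
        return "brake"
    return "maintain_speed"

frame = {"nearest_obstacle_m": 18.0}                 # one simulated camera/lidar frame
print(choose_action(detect_obstacle_distance(frame), speed_mps=14.0))
</syntaxhighlight>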
  
 
==Learning and Intentionality==
 
Human beings often learn from experiences, a quality that extends to artificial agents as well. As artificial agents encounter different computational situations, they are able to modify their actions. This not only exhibits their ability to learn but also to interact with their environment without human assistance. Frances Grodzinsky postulates that as an artificial agent learns and becomes more complex, its future behavior becomes harder to predict.<ref name="Grodzinsky">Grodzinsky, Frances; Miller, Keith; Wolf, Marty (September 2008). "The ethics of designing artificial agents". ''Ethics and Information Technology''. '''10'''(2-3): 115-121.</ref> This becomes increasingly important as artificial agents become integrated into riskier domains. When artificial agents are able to learn and adapt on their own, they can surpass their human creators, which poses major problems should the designer lose control of the agent. This idea that an agent can break free from its creator's intent hinders the agent's ability to be considered intentional. Intentionality of an artificial agent requires that it is essentially predictable, but does not imply that it has consciousness. Therefore, an artificial agent has intentionality so long as it does not surpass its creator's desired outcomes. Once the human inventor loses control of the artificial agent, the agent loses intentionality and can become dangerous.<ref name="Grodzinsky" />
  
 
===Artificial Agents in Gaming===
 
[[File:atari.jpg|thumb|right|200px|DeepMind AI masters the classic Atari video games<ref>Hornyak, Tim (February 26, 2015). [https://www.pcworld.com/article/2889432/google-ai-program-masters-classic-atari-video-games.html "Google's Powerful DeepMind AI Masters Classic Atari Video Games"]. ''PC World''. Retrieved April 22, 2019.</ref>]]
DeepMind is a company that has taken artificial agents and applied them to gaming. Its employees have created a gaming system that aims to play Atari games. Demis Hassabis, CEO of DeepMind, explains how his staff built an algorithm that learns on its own through experience. After a few hundred attempts at playing, the artificial agent learns how to win the games in the most efficient manner. Though in gaming the consequences of an artificial agent's actions are relatively minor, the growing use of artificial agents poses bigger issues. For example, one day artificial agents may be making medical, financial, and even governmental decisions.<ref name="Johnson"> Johnson, Deborah; Miller, Keith (September 2008). "Un-making artificial moral agents". ''Ethics and Information Technology''. '''10'''(2-3): 123-133.</ref> This makes the stakes higher, especially because the creators can easily lose control of the actions of the agent.
  
 
For example, the programmers at [https://deepmind.com/ DeepMind] made an artificial agent that was able to complete a game in a way that its human creators had never thought of themselves. The agent acted in a way that the creator was unable to predict.<ref name="Grodzinsky" />
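DeepMind's Atari agents were trained with deep reinforcement learning (deep Q-networks). The sketch below uses much simpler tabular Q-learning on an invented three-step game, but it illustrates the same principle of an agent improving through repeated attempts rather than through explicit programming.

<syntaxhighlight lang="python">
import random
from collections import defaultdict

# Toy Q-learning sketch: the agent improves a state-action value table by
# repeated play. The three-step "game" and its rewards are invented for illustration.

ACTIONS = ["left", "right"]
Q = defaultdict(float)                   # (state, action) -> estimated value

def play_episode(epsilon=0.1, alpha=0.5, gamma=0.9):
    state, total = 0, 0.0
    while state < 3:                     # the game ends after three steps
        if random.random() < epsilon:    # explore occasionally
            action = random.choice(ACTIONS)
        else:                            # otherwise exploit what was learned so far
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        reward = 1.0 if action == "right" else 0.0
        next_state = state + 1
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state, total = next_state, total + reward
    return total

for episode in range(200):               # "a few hundred attempts at playing"
    play_episode()
print(play_episode(epsilon=0.0))         # after training, usually the maximum score of 3.0
</syntaxhighlight>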
  
 
==Ethics==
 
The rise of revolutionary technologies demands a symmetric rise in information ethics, at least according to leading ICT ethicist Philip Brey<ref name="Brey">Brey, Philip (December 2012). "Anticipating ethical issues in emerging IT". ''Ethics and Information Technology''. '''14'''(4): 305-317.</ref> and Dartmouth philosophy professor James Moor.<ref>Moor, James H. (2005). [https://crown.ucsc.edu/academics/pdf-docs/moor-article.pdf "Why We Need Better Ethics for Emerging Technologies"]. ''Ethics and Information Technology''. '''7''': 111-119.</ref> The value that technology can provide to humans makes its integration into our lives both consistent and ever-expanding. James Hogan (author of ''Two Faces of Tomorrow'') discusses how we must control the amount of power we give artificial agents. It is important to understand and prioritize the fact that artificial agents are autonomous: the more power they are given, the less certain we can be about how they will act.<ref name="Hogan">Hogan, James (1979). ''Two Faces of Tomorrow''. Baen Books. ISBN 978-0-671-87848-1.</ref> Another major ethical concern is the proposition that artificial agents are moral agents, which has raised considerable controversy.
  
 
===Artificial Agents as Moral Agents===
 
Morality is the ability to distinguish between right and wrong. Often, this implies an understood foundation of law, meaning that there are consequences for certain actions and praise for others. In order to be considered moral, this ability to punish or honor should be intact. Moral agents can perform actions for good or for evil.<ref name="Floridi" /> In the case of artificial agents, it is unclear if repercussions and rewards are applicable.
  
 
An artificial agent can do things that have moral consequences.<ref name="Floridi" /> This insinuates that artificial agents can distinguish between right and wrong, or can at least produce a right or wrong outcome. Kenneth Himma does not go so far as to say that artificial agents are moral agents. Rather, he argues that artificial agents can only be moral agents if there is evidence that they understand morality and are able to make decisions on their own. A better understanding of a moral agent comes down to who is responsible for the actions taken.<ref name="Himma" />
 
===Responsibility for Artificial Agent’s Actions===
 
By nature, an artificial agent is just that—artificial, made by humans. Due to this inherent fact, it is assumed that human beings are to blame for the actions of their produced agents. To better understand this, it is important to know what responsibility entails. A responsible entity has intention and awareness of their actions.
 
  
Himma makes a clear correlation here in discussing that society doesn't hold people with cognitive disabilities morally accountable in the same way that we hold other people accountable, because their disability interferes with their ability to comprehend moral consequences.<ref name="Himma" /> Likewise, we do not punish the artificial agent because it is unclear whether it understands the difference between right and wrong, even if it can produce a moral outcome. We must hold the people who design artificial agents accountable for those agents' actions; otherwise, no one can be held accountable.<ref name="Johnson" /> Johnson recognizes that even though it may be the actions of the artificial agent that are deemed right or wrong, it is still the responsibility of the creator to understand the risk in designing an autonomous agent. Frances Grodzinsky echoes this sentiment, placing the responsibility for artificial agents' behavior on the designer.<ref name="Grodzinsky" /> When designing and planning the actions of artificial agents, it is important for the designer to establish what their intentions are and to try to anticipate any behaviors that could result in immoral interactions. For this reason, when creating artificial agents, designers must be careful when developing and designing the interactions.<ref name="Grodzinsky" />
  
 
===Artificial Agents and Bias===
 
[[File:Amazon.jpg|thumb|right|Amazon's recruiting AI was found biased against women<ref>Torres, Monica (October 10, 2018). [https://www.theladders.com/career-advice/amazon-reportedly-scraps-ai-recruiting-tool-biased-against-women "Amazon Reportedly Scraps AI Recruiting Tool That Was Biased against Women"]. ''The Ladders''. Retrieved April 22, 2019.</ref>]]
 
Because artificial agents are human-made but still autonomous, major ethical issues arise when values are brought into play. Innately, human beings have opinions, and thus, as the creators of artificial agents, these opinions can slip into the technologies. As it has become a trend to apply artificial agents in the real world, the biases they carry inevitably influence certain groups of people.
  
====Examples====
[https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing COMPAS] is an AI algorithm used by law enforcement departments to predict whether criminals are likely to commit another crime in the future. It was designed to take personal human bias out of the process of computing this sensitive metric, based on information including a criminal's age, gender, and previously committed crimes. However, tests of this system have found it to overestimate the likelihood of recidivism for black people at a much higher rate than it does for white people<ref>Larson, Jeff; Mattu, Surya; Kirchner, Lauren; Angwin, Julia (May 23, 2016). [https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm "How We Analyzed the COMPAS Recidivism Algorithm"]. ''Propublica''. Retrieved April 22, 2019.</ref> because it was designed by people who have their own biases.  
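ProPublica's analysis compared error rates across racial groups. The sketch below shows the general idea of such an audit on invented records: it computes, for each group, the rate at which people who did not reoffend were nevertheless flagged as high risk. The field names and data are hypothetical, not from the real COMPAS data set.

<syntaxhighlight lang="python">
# Illustrative disparate-error-rate check on invented records.

records = [
    {"group": "A", "predicted_high_risk": True,  "reoffended": False},
    {"group": "A", "predicted_high_risk": True,  "reoffended": True},
    {"group": "A", "predicted_high_risk": False, "reoffended": False},
    {"group": "B", "predicted_high_risk": True,  "reoffended": False},
    {"group": "B", "predicted_high_risk": False, "reoffended": False},
    {"group": "B", "predicted_high_risk": False, "reoffended": True},
]

def false_positive_rate(rows):
    """Share of people who did not reoffend but were still flagged high risk."""
    negatives = [r for r in rows if not r["reoffended"]]
    flagged = [r for r in negatives if r["predicted_high_risk"]]
    return len(flagged) / len(negatives) if negatives else 0.0

for group in sorted({r["group"] for r in records}):
    rows = [r for r in records if r["group"] == group]
    print(group, false_positive_rate(rows))   # a large gap between groups signals bias
</syntaxhighlight>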
 
The recruiting AI adopted by Amazon to review resumes was found to be biased against female applicants.<ref>Marr, Bernard (January 29, 2019). [https://www.forbes.com/sites/bernardmarr/2019/01/29/3-steps-to-tackle-the-problem-of-bias-in-artificial-intelligence/#3e45110f7a12 "Artificial Intelligence Has A Problem With Bias, Here's How To Tackle It"]. ''Forbes''. Retrieved April 22, 2019.</ref>
  
While the intention of these algorithms is to remove human bias from the equation, humans have moral consciousness, whereas it is unclear if artificial agents do. When artificial agents have their creator's biases built in and then go off on their own, these biases can be exploited and amplified.<ref name="Brey Values">Brey, Philip (2009). "Values in technology and disclosive computer ethics". ''Cambridge Handbook of Information and Computer Ethics''. Cambridge University Press. ISBN 9780511845239</ref>
  
====Steps to Remove Bias====
One main factor behind bias in artificial agents is the data sample used to develop their decision making. Data can carry the most undesirable human traits even when data sets are collected from a variety of different sources. Through the machine learning process, an artificial agent can become biased without that being the intention of the developer. To ensure a less biased algorithm, ethicists should analyze the sample used for the data, confirm that sources of different backgrounds and opinions are properly represented, and identify and remove data that may introduce a form of bias. Lastly, the decision-making process of an artificial agent should be transparent: decisions should be traceable so that it is possible to understand how conclusions were reached, determine where bias occurs, and adjust the data accordingly.<ref>Eder, Sascha (June 28, 2018). [https://www.forbes.com/sites/theyec/2018/06/27/how-can-we-eliminate-bias-in-our-algorithms/#15f02f93337e "How Can We Eliminate Bias In Our Algorithms?"]. ''Forbes''.</ref> Active research at MIT is currently underway to use machine learning techniques to analyze, in a more human-understandable way, how black-box neural networks make decisions. This will greatly help reduce bias, as well as improve machine learning algorithms.<ref>McGovern, Anne (September 5, 2018). [http://news.mit.edu/2018/mit-lincoln-laboratory-adaptable-interpretable-machine-learning-0905 "Taking machine thinking out of the black box"]. ''MIT News''.</ref>
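The representation check described above can be sketched as a small audit script. The sample data, group labels, expected shares, and tolerance below are invented for illustration.

<syntaxhighlight lang="python">
from collections import Counter

# Illustrative representation audit of a training sample; the records and the
# expected population shares are invented, not taken from any real system.

sample = ["group_a", "group_a", "group_a", "group_b", "group_a", "group_b"]
expected_share = {"group_a": 0.5, "group_b": 0.5}

counts = Counter(sample)
total = len(sample)
for group, share in expected_share.items():
    observed = counts[group] / total
    if abs(observed - share) > 0.1:          # tolerance chosen arbitrarily
        print(f"{group} under- or over-represented: {observed:.0%} vs expected {share:.0%}")
</syntaxhighlight>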
  
 
==See Also==
 
* [[Internet of things]]
* [[Artificial Intelligence and Technology]]
* [[Algorithms]]
* [[Bias in Information]]
  
 
==References==
 
<references />
