Artificial agents are bots or programs that autonomously collect information or perform a service based on user input or their environment. Typical characteristics of agents include adapting based on experience, solving problems, analyzing success and failure rates, and using memory-based storage and retrieval. Humans are responsible for the design and behavior of an agent; however, the agent itself can interact with its environment freely within the scope of its granted domain. Luciano Floridi, best known for his work in the philosophy of information and information ethics, explains that this autonomy allows artificial agents to learn and adapt entirely on their own. Artificial agents differ from their human inventors in that they do not experience feelings or emotions in pursuing their goals, but, according to Floridi, they are still classified as moral agents. An artificial agent's morality, actions, embedded values, and biases are ethically controversial.
- 1 Differences From Humans
- 2 Artificial Agents' Three Criteria
- 3 Examples of Artificial Agents
- 4 Learning and Intentionality
- 5 Ethics
- 6 See Also
- 7 References
Differences From Humans
Compared to humans, computers are highly efficient at processing complex calculations and completing repetitive tasks with minimal margins of error. These capabilities, combined with the need to increase productivity, led to the birth of artificial agents. Humans make artificial agents, but artificial agents are not human. Though artificial agents can adapt and learn as humans can, they appear to do so in a different way. Artificial agents currently do not experience emotions or feelings, which suggests they might struggle with moral tasks that depend on having emotional experiences. On the other hand, emotional experience may be orthogonal to moral action, in which case its absence would not prevent an artificial agent from acting in a moral way.
Human beings appear able to comprehend the impact of their actions and make deliberate choices, but artificial agents tend to be goal-driven in the sense that they will do whatever is necessary to reach the desired outcome. However, in a determinist worldview, the human ability to make choices may actually be illusory: though it feels like we are making a choice, we are in fact simply following the same kind of goal-driven decision making that an artificial agent does. Indeed, artificial agents may internally "feel" the same way humans do about their ability to make choices that maximize their utility function.
In 1950, computer scientist Alan Turing proposed a theoretical procedure for determining whether a machine can think, originally called the Imitation Game but later popularized as the Turing Test. In his seminal paper, Turing argues that any objection to attributing thought to a machine that passes the test (holding a conversation a human interrogator cannot distinguish from another human's) applies equally to human beings, who can also pass the test. For example, the objection that an artificial agent that can hold a conversation does not actually have internal experience is symmetric to the solipsistic argument that other humans who can hold a conversation do not necessarily have internal experience either. This suggests that there is no fundamental difference between human agents and artificial agents, and that once artificial agents pass some threshold in either software or hardware, they too can be considered to have moral agency.
In contrast, Kenneth Himma argues that current artificial agents cannot pass the Turing Test, and as such, may not have internal experience. This lack of internal experience in turn suggests a lack of ability to deliberate about decisions, which Himma considers key to agenthood.
Artificial Agents' Three Criteria
Floridi lays out three basic criteria of an agent: 
Interactivity refers to the ability of an agent and its environment to act upon each other. Input or output of a value is a common example of interactivity. Interactivity can also refer to actions by agents and participants that occur at the same time. An artificial agent's ability to interact often hinges on the algorithm with which it was programmed. These algorithms cause the agent to respond to input in varying forms, apply it to the agent's current state, and produce output or adjust that state accordingly. Interactivity is essential in allowing artificial agents to enact the other two criteria.
An autonomous agent is one with a certain degree of intelligence, so that it can act on the user's behalf. This means that, without direct intervention from an outside source, an agent can change its state. However, having autonomy does not mean the agent can do whatever it pleases. It can only act and make decisions within the degree of intelligence with which it has been programmed. There is a sense of complexity when it comes to autonomy: if an agent can perform internal transitions that change its state, it is autonomous. As an artificial agent continues to learn and adapt with more data, its level of autonomy increases as it becomes capable of making more decisions, with increasing accuracy.
Adaptability refers to the idea that the agent can change to a different state without a direct interactive trigger. Adaptability builds on interactivity and autonomy: the agent stores its transition rules in its internal state and can revise them. As autonomy and interactivity increase, so does adaptability, furthering the autonomous agent's ability to emulate intelligence.
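Taken together, the three criteria can be illustrated with a toy agent sketch. All names and update rules here are hypothetical, chosen only to make each criterion visible in code, not drawn from any real system:

```python
class MinimalAgent:
    """Toy agent illustrating the three criteria (hypothetical example)."""

    def __init__(self):
        self.state = 0
        self.step_size = 1  # a transition rule the agent itself can revise

    def perceive(self, signal):
        # Interactivity: input from the environment changes the agent's state.
        self.state += signal
        return self.state

    def tick(self):
        # Autonomy: the agent changes state with no external input.
        self.state += self.step_size

    def adapt(self, feedback):
        # Adaptability: the agent revises its own transition rule,
        # changing how future autonomous transitions behave.
        if feedback < 0:
            self.step_size = max(1, self.step_size - 1)
        else:
            self.step_size += 1

agent = MinimalAgent()
agent.perceive(5)   # interactivity: state becomes 5
agent.tick()        # autonomy: state becomes 6
agent.adapt(1)      # adaptability: step_size becomes 2
agent.tick()        # the revised rule now adds 2
print(agent.state)  # 8
```

The point of the sketch is the layering: `tick` depends on a stored rule (`step_size`), and `adapt` changes that rule, which is why adaptability presupposes the other two criteria.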
Examples of Artificial Agents
Web bots are widely used as filters for users' email accounts. Web bots satisfy the criteria for artificial agents in that they interact with their environment, in this case the user's email, by blocking unwanted messages. This process is fully automated: users do not have to delete unwanted emails manually. Web bots also constantly learn to adapt to users' preferences in order to improve the accuracy of their filters.
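A mail-filtering bot of this kind can be sketched as follows. The keyword-weight scheme and seed words are simplifying assumptions for illustration, not how any particular product works:

```python
class SpamFilterBot:
    """Hypothetical adaptive mail filter: it acts on the inbox without user
    intervention and adjusts its weights when the user corrects a mistake."""

    def __init__(self):
        self.spam_scores = {"prize": 2, "winner": 2}  # assumed seed weights

    def is_spam(self, message, threshold=2):
        # Interactivity: score incoming mail against the current weights.
        words = message.lower().split()
        return sum(self.spam_scores.get(w, 0) for w in words) >= threshold

    def learn(self, message, is_spam):
        # Adaptability: raise or lower word weights from the user's correction.
        delta = 1 if is_spam else -1
        for w in set(message.lower().split()):
            self.spam_scores[w] = self.spam_scores.get(w, 0) + delta

bot = SpamFilterBot()
print(bot.is_spam("you are a winner of a prize"))  # True
bot.learn("quarterly report attached", is_spam=False)
print(bot.is_spam("quarterly report attached"))    # False
```

Real filters use statistical models rather than hand-set weights, but the loop is the same: classify automatically, then adapt from feedback.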
Smart thermostats provide solutions for maintaining optimal residential temperature. The device interacts with the residential environment by engaging in heating or cooling activities. The device is designed to operate with minimal human interaction to reduce the number of human errors and is able to adapt to its environment through sensors that allow the device to determine whether to suspend or engage in heating or cooling activities.
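The suspend-or-engage decision described above can be sketched as a simple hysteresis rule; the setpoint and dead-band values are illustrative assumptions:

```python
def thermostat_action(temperature_c, setpoint_c=21.0, band_c=0.5):
    """Decide whether to heat, cool, or idle from a sensor reading.
    The dead band keeps the device from oscillating around the setpoint."""
    if temperature_c < setpoint_c - band_c:
        return "heat"
    if temperature_c > setpoint_c + band_c:
        return "cool"
    return "idle"

print(thermostat_action(19.0))  # heat
print(thermostat_action(23.0))  # cool
print(thermostat_action(21.2))  # idle
```

An actual smart thermostat layers learned schedules and occupancy sensing on top of a control rule like this, which is where its adaptability comes in.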
There are many levels of self-driving cars with varying degrees of required human input, but fully autonomous vehicles need no human interference. Autonomous vehicles remove the burden of driving from their human controllers and take on responsibility for the various tasks involved in driving. To function, autonomous vehicles use a variety of computer vision techniques to adapt to changes in their environment. In conjunction with complex algorithms that handle incoming computer vision data, these artificial agents may in some cases outperform their human counterparts.
Learning and Intentionality
Human beings often learn from experience, a quality that extends to artificial agents as well. As artificial agents encounter different computational situations, they are able to modify their actions. This exhibits not only their ability to learn but also their ability to interact with their environment without human assistance. Frances Grodzinsky postulates that as an artificial agent learns and becomes more complex, its future behavior becomes harder to predict. This becomes increasingly important as artificial agents are deployed in riskier domains. When artificial agents are able to learn and adapt on their own, they can surpass their human creators, which poses major problems should the designer lose control of the agent. This idea that an agent can break free from its creator's intent undermines the agent's claim to intentionality. Intentionality of an artificial agent requires that it be essentially predictable, but does not imply that it has consciousness. Therefore, an artificial agent has intentionality so long as it does not surpass its creator's desired outcomes. Once the human inventor loses control of the artificial agent, the agent loses intentionality and can become dangerous.
Artificial Agents in Gaming
DeepMind is a company that has applied artificial agents to gaming. Its employees created a system that learns to play Atari games. DeepMind CEO Demis Hassabis explains that his staff built an algorithm that learns on its own through experience. After a few hundred attempts at playing, the artificial agent learns how to win the games in the most efficient manner. Though in gaming the severity of an artificial agent's actions is relatively low, the growing use of artificial agents poses bigger issues. For example, one day artificial agents may be making medical, financial, and even governmental decisions. This raises the stakes, especially because the creators can easily lose control of the agent's actions.
For example, DeepMind's programmers made an artificial agent that was able to complete a game in a way the human creators had never thought of themselves. The agent acted in a way its creators were unable to predict.
Ethics
The rise of revolutionary technologies demands a symmetric rise in information ethics, at least according to leading ICT ethicist Philip Brey and Dartmouth philosophy professor James Moor. The value that technology provides to humans makes its integration into our lives both consistent and ever-expanding. James Hogan, author of Two Faces of Tomorrow, discusses how we must control the amount of power we give artificial agents. It is important to understand and prioritize the fact that artificial agents are autonomous: the more power they are given, the less certain we can be about how they will act. Another major ethical concern is the proposition that artificial agents are moral agents, which has raised controversy.
Artificial Agents as Moral Agents
Morality is the ability to distinguish between right and wrong. Often this implies an understood foundation of law, meaning that there are consequences for certain actions and praise for others. In order for an entity to be considered moral, this capacity to be punished or honored should apply to it. Moral agents can perform actions for good or for evil. In the case of artificial agents, it is unclear whether repercussions and rewards are applicable.
An artificial agent can do things that have moral consequences. This suggests that artificial agents can distinguish between right and wrong, or can at least produce a right or wrong outcome. Kenneth Himma does not go so far as to say that artificial agents are moral agents. Rather, he holds that artificial agents can only be moral agents if there is evidence that they understand morality and can make decisions on their own. A better understanding of moral agency comes down to who is responsible for the actions undertaken.
Responsibility for Artificial Agent’s Actions
By nature, an artificial agent is just that: artificial, made by humans. Because of this, it is assumed that human beings are to blame for the actions of the agents they produce. To better understand this, it is important to know what responsibility entails. A responsible entity has intention and awareness of its actions.
Himma draws a clear parallel here: society does not hold people with cognitive disabilities morally accountable in the same way it holds other people accountable, because their disability interferes with their ability to comprehend moral consequences. Likewise, we do not punish an artificial agent because it is unclear whether it understands the difference between right and wrong, even if it can produce a moral outcome. We must hold the people who design artificial agents accountable for those agents' actions; otherwise, no one can be held accountable. Johnson recognizes that even though it may be the actions of the artificial agent that are deemed right or wrong, it is still the responsibility of the creator to understand the risk in designing an autonomous agent. Frances Grodzinsky echoes this sentiment, placing the responsibility for artificial agents' behavior on the designer. When designing and planning the actions of artificial agents, designers should establish what their intentions are and try to anticipate any behaviors that could result in immoral interactions. For this reason, designers must be careful when developing artificial agents and designing their interactions.
Artificial Agents and Bias
Because artificial agents are human-made but still autonomous, major ethical issues arise when values are brought into play. Innately, human beings have opinions, and thus, as the creators of artificial agents, these opinions can slip into the technologies. As it has become a trend to apply artificial agents in the real world, the biases they pose inevitably influence certain groups of people.
COMPAS is an AI algorithm used by law enforcement departments to predict whether criminals are likely to commit another crime in the future. It was designed to take personal human bias out of the process of computing this sensitive metric, based on information including a criminal's age, gender, and previously committed crimes. However, analyses of the system have found that it overestimates the likelihood of recidivism for Black defendants at a much higher rate than for white defendants, a disparity attributed to the human biases reflected in its design and data.
The recruiting AI adopted by Amazon to review resumes was found to be biased against female applicants.
While the intention of these algorithms is to remove human bias from the equation, humans have moral consciousness, whereas it is unclear whether artificial agents do. When artificial agents have their creators' biases built in and then go off on their own, these biases can be exploited and amplified.
Steps to Remove Bias
One main factor in artificial agent bias is the data sample used to develop the agent's decision making. Data can carry undesirable human traits even when data sets are collected from many different sources. Through the machine learning process, an artificial agent can become biased without that being the developer's intention. To ensure a less biased algorithm, ethicists should analyze the sample used for the data, confirm that sources of different backgrounds and opinions are properly represented, and identify and remove data that may introduce a form of bias. Lastly, the decision-making process of an artificial agent should be transparent: its decisions should be traceable, so that we can understand how conclusions were reached, identify where bias occurs, and adjust the data accordingly. Active research at MIT is currently underway to use machine learning techniques to analyze, in a more human-understandable way, how black-box neural networks make decisions. This will not only help reduce bias but also improve machine learning algorithms.
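The auditing steps above (checking that groups are represented in the sample and tracing outcomes per group) can be sketched on a toy data set. The field names, groups, and sample are hypothetical; real audits use larger samples and formal fairness metrics:

```python
from collections import Counter

def representation(records, group_key="group"):
    # Share of the sample contributed by each group.
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items()}

def positive_rate_by_group(records, group_key="group", label_key="label"):
    # Outcome rate per group; large gaps flag data worth inspecting.
    rates = {}
    for g in sorted({r[group_key] for r in records}):
        rows = [r for r in records if r[group_key] == g]
        rates[g] = sum(r[label_key] for r in rows) / len(rows)
    return rates

sample = [
    {"group": "A", "label": 1}, {"group": "A", "label": 0},
    {"group": "B", "label": 1}, {"group": "B", "label": 1},
]
print(representation(sample))          # {'A': 0.5, 'B': 0.5}
print(positive_rate_by_group(sample))  # {'A': 0.5, 'B': 1.0}
```

Here the groups are equally represented, but group B receives the positive label twice as often, exactly the kind of traceable disparity the transparency step is meant to surface before the model is trained.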
- Rouse, Margaret (2019). "Intelligent Agent". TechTarget SearchEnterpriseAI. Retrieved April 22, 2019.
- Floridi, Luciano; Sanders, J.W. (2004). "On the Morality of Artificial Agents". Minds and Machines. 14(3): 349-379. Retrieved April 22, 2019.
- Hoefer, Carl (2016). "Causal Determinism". Stanford Encyclopedia of Philosophy. Retrieved April 22, 2019.
- Turing, Alan (1950). "Computing Machinery and Intelligence". Mind. 59: 433-460. Retrieved April 22, 2019.
- Himma, Kenneth (2009). "Artificial agency, consciousness, and the criteria for moral agency: what properties must an artificial agent have to be a moral agent?". Ethics and Information Technology. 11(1): 19-29.
- Chang, Chia-hao; Yubao Chen (October 1996). "Autonomous Intelligent Agent and Its Potential Applications." Computers & Industrial Engineering. 31(1-2): 409-412.
- Bringsjord, Selmer; Govindarajulu, Naveen Sundar (2018). "Artificial Intelligence". Stanford Encyclopedia of Philosophy. Retrieved April 22, 2019.
- Grodzinsky, Frances; Miller, Keith; Wolf, Marty (September 2008). "The ethics of designing artificial agents". Ethics and Information Technology. 10(2-3): 115-121.
- Hornyak, Tim (February 26, 2015). "Google's Powerful DeepMind AI Masters Classic Atari Video Games". PC World. Retrieved April 22, 2019.
- Johnson, Deborah; Miller, Keith (September 2008). "Un-making artificial moral agents". Ethics and Information Technology. 10(2-3): 123-133.
- Brey, Phillip (December 2012). "Anticipating ethical issues in emerging IT". Ethics and Information Technology. 14(4): 305-317.
- Moor, James H. (2005). "Why We Need Better Ethics for Emerging Technologies". Ethics and Information Technology. 7: 111-119.
- Hogan, James (1979). Two Faces of Tomorrow. Baen Books. ISBN 978-0-671-87848-1.
- Torres, Monica (October 10, 2018). "Amazon Reportedly Scraps AI Recruiting Tool That Was Biased against Women". The Ladders. Retrieved April 22, 2019.
- Larson, Jeff; Mattu, Surya; Kirchner, Lauren; Angwin, Julia (May 23, 2016). "How We Analyzed the COMPAS Recidivism Algorithm". Propublica. Retrieved April 22, 2019.
- Marr, Bernard (January 29, 2019). "Artificial Intelligence Has A Problem With Bias, Here's How To Tackle It". Forbes. Retrieved April 22, 2019.
- Brey, Philip (2009). "Values in technology and disclosure computer ethics". Cambridge Handbook of Information and Computer Ethics. Cambridge University Press. ISBN 9780511845239
- Eder, Sascha (June 28, 2018). "How Can We Eliminate Bias In Our Algorithms?". Forbes. www.forbes.com/sites/theyec/2018/06/27/how-can-we-eliminate-bias-in-our-algorithms/#15f02f93337e.
- McGovern, Anne (September 5, 2018). "Taking machine thinking out of the black box". MIT News. http://news.mit.edu/2018/mit-lincoln-laboratory-adaptable-interpretable-machine-learning-0905.