Deontology

Deontology is a school of thought holding that our actions should be judged against a set of rules rather than by the consequences those actions bring to others. In this sense, deontology resembles living under laws, much like the Ten Commandments or Asimov's laws governing how AI should behave. Under this view, the action itself is what is judged for morality, and it is valued above its consequences. Deontology draws clear black-and-white boundaries for people and AI systems to act within, boundaries that can be framed as "golden rules" put in place to maintain order. The individual is not only responsible for making decisions that are consistent with one another but is also held accountable, and punishable, for poorly judged decisions. However, this school of thought may not fit the development of artificial intelligence perfectly: what happens when one of Asimov's laws contradicts another? [1]

Background

History

Asimov's Rules of AI (a minimal sketch of this rule hierarchy follows the list):

0. Robots may not harm humanity or, through inaction, allow humanity to come to harm.

1. Robots may not injure human beings or allow them to come to harm.

2. Robots must obey orders given to them, except where those orders conflict with the 0th or 1st Law.

3. Robots must protect their own existence, as long as doing so does not conflict with the 0th, 1st, or 2nd Law.
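
The hierarchy above can be read as a priority-ordered rule check. Below is a minimal, purely illustrative sketch of that idea; the Action fields and rule wording are invented for the example and are not drawn from any real robotics framework.

```python
from dataclasses import dataclass

@dataclass
class Action:
    # Hypothetical properties of a proposed action.
    harms_humanity: bool = False
    harms_human: bool = False
    disobeys_order: bool = False
    endangers_robot: bool = False

# Laws listed from highest (0th) to lowest (3rd) priority.
LAWS = [
    ("0th: do not harm humanity",  lambda a: not a.harms_humanity),
    ("1st: do not injure a human", lambda a: not a.harms_human),
    ("2nd: obey human orders",     lambda a: not a.disobeys_order),
    ("3rd: protect own existence", lambda a: not a.endangers_robot),
]

def permitted(action: Action) -> tuple[bool, str]:
    """Return whether the action is allowed and, if not, the first law it breaks.
    Higher-priority laws are checked first, so a conflict is resolved in their favor."""
    for name, satisfied in LAWS:
        if not satisfied(action):
            return False, name
    return True, "no law violated"

# Example: an action that disobeys an order is forbidden by the 2nd law,
# yet the rules alone give no guidance when obeying the order would itself
# break the 0th or 1st law -- the conflict the article asks about.
print(permitted(Action(disobeys_order=True)))
```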

As AI systems came into practical use, it became clear that these rules work well within narrow, well-defined niches, and that artificial intelligence is unlikely to trap us all in a Matrix-style void. However, constraining AI solely with a set of rules can create more risks in the future. In the case of autonomous vehicles, the programs used for automation need inputs such as thermal or motion sensors, an automation process that links those sensors together so they work in unison to build an overall map of the surrounding environment, and an output that the vehicle acts on based on that environment (a minimal sketch of this loop follows). It is, however, impossible for people to write programs guaranteed never to harm humans, which leads to another deontology-based issue: targeted advertising. [2]
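
The sense, map, and act loop just described can be sketched as follows; the function names, sensor readings, and the braking threshold are hypothetical placeholders, not any vendor's actual pipeline.

```python
from typing import List

def read_sensors() -> List[float]:
    # Placeholder input stage: in a real vehicle these would be
    # lidar, radar, camera, or thermal/motion sensor readings.
    return [12.5, 3.2, 40.0]

def build_environment_map(readings: List[float]) -> dict:
    # Fuse the individual readings into one picture of the surroundings.
    return {"nearest_obstacle_m": min(readings)}

def choose_action(env: dict) -> str:
    # Output stage: act on the fused map. A hard-coded rule like this
    # cannot anticipate every situation, which is the limitation noted above.
    return "brake" if env["nearest_obstacle_m"] < 5.0 else "continue"

env = build_environment_map(read_sensors())
print(choose_action(env))  # -> "brake"
```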

Community of All Ethics

Deontology is not the only school of thought applied to AI; it is one of three commonly considered: utilitarianism, deontology, and virtue ethics. Deontology provides the foundation of rules the system builds on, what to do and what not to do, while virtue ethics allows the AI to learn and incorporate new information. The three approaches differ greatly in which values they emphasize, and some combination of them may work best in the ethical grey area that AI presents (one way such a combination could be layered is sketched below). [3]
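
Purely as an illustration of how the three frameworks might be combined, the sketch below uses deontological rules as a hard filter, a utilitarian score to rank what remains, and a learned "virtue" adjustment on top. The candidate actions and numbers are invented for the example.

```python
candidates = [
    {"name": "share user data", "violates_rule": True,  "outcome_score": 0.9, "virtue_adj": -0.5},
    {"name": "ask for consent", "violates_rule": False, "outcome_score": 0.6, "virtue_adj":  0.2},
    {"name": "do nothing",      "violates_rule": False, "outcome_score": 0.1, "virtue_adj":  0.0},
]

# Deontology: hard filter -- rule-breaking actions are removed regardless of payoff.
allowed = [c for c in candidates if not c["violates_rule"]]

# Utilitarianism plus virtue ethics: rank what remains by expected outcome,
# nudged by values the system has learned over time.
best = max(allowed, key=lambda c: c["outcome_score"] + c["virtue_adj"])
print(best["name"])  # -> "ask for consent"
```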

Ethical Issues

Controversial practices such as targeted advertising by big data corporations are becoming more common as technologies develop, even though they may not seem ethical. Service providers treat the potential of a user buying new products, and thereby contributing to growth and revenue, as justification for sharing user data with third-party companies so that those companies can target the user's spending habits.

Targeted Advertising

Some might view these advertisements as warranted by the user-license agreements signed before using a platform, but the collection of data on millions of users may not be ethically covered by those agreements. Strategic advertisements are designed to target human emotions and to read user responses as data, gauging whether the user is interested. That data can then be sold to other companies to target better advertisements and generate more profit. Moreover, not all advertisements are suited to their audience: inappropriate advertisements for gambling or prostitution can reach children on the internet and cause lasting psychological damage. [4] The fast food industry has also received scrutiny for advertising aimed at children and teenagers, groups for whom the ethical standard of advertising is far lower than it is for adults. Few ads in the fast food industry hold up to the TARES (Truthfulness, Authenticity, Respect, Equity, and Social Responsibility) test for ethical persuasion, and from a deontological perspective the central question shifts from the product being promoted to the moral responsibility these franchises must be held accountable for and how their messages influence the young. Deliberate ads targeting young children and minority groups have increased sharply since 2009, with children ages 6-11 seeing nearly 59% more ads for Subway, 26% more for McDonald's, and 10% more for Burger King, and with data showing that African American children and teenagers are exposed to 50% more fast food ads than Caucasian children and teenagers. [5] [6]

Automation

Automation has long been viewed from a utilitarian standpoint: technologies such as autonomous cars, in which artificial intelligence makes real-life decisions, have been evaluated by the consequences those decisions produce. One example is an autonomous vehicle that is guaranteed to crash and must either hit two people in front of it, protecting the driver, or swerve and endanger the driver. From a deontological standpoint, sacrificing the driver to save the two people ahead is morally wrong, so the vehicle would protect the driver; from a utilitarian view, the car would swerve to avoid the two people, saving two lives instead of one. Automation involves much more than isolated decisions, however, and rather than treating these situations as single choices, they can be broken down into parts and evaluated separately. When an autonomous vehicle generates a path from the user's environment, that path is a physical representation, so a deontological constraint can be applied to guarantee the path stays within fixed boundaries. The two ethical frameworks, deontology and consequentialism (utilitarianism), are then used in parallel, one supplying hard boundaries and the other a cost to minimize, as sketched below. [7]
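
The sketch below illustrates that parallel treatment under assumed values: each candidate path carries a consequentialist cost (expected harm) and a deontological flag for crossing a forbidden boundary. The paths and numbers are invented for illustration and are not taken from the cited paper.

```python
paths = [
    {"name": "stay in lane", "expected_harm": 2.0, "crosses_forbidden_zone": False},
    {"name": "swerve left",  "expected_harm": 1.0, "crosses_forbidden_zone": True},
]

# Deontological constraint: paths that break the rule are excluded outright,
# even if they would minimize total harm.
permitted = [p for p in paths if not p["crosses_forbidden_zone"]]

# Consequentialist (utilitarian) cost: among the permitted paths,
# pick the one with the lowest expected harm.
chosen = min(permitted, key=lambda p: p["expected_harm"])
print(chosen["name"])  # -> "stay in lane"
```

The design choice matters: the rule acts as a hard filter before any cost comparison, so the utilitarian trade-off never gets to override the deontological boundary.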

Controversy

Although deontology is meant to be straightforward and to focus only on the action, it is often in human nature to care chiefly about outcomes, and lying, though immoral, may sometimes seem like the best option. By focusing only on the action itself, the other parts of the picture, the consequences and the person, are not considered at all. Proper ethical consideration takes all three of these elements into account and uses them together, much as in the process of automation. Ethics must be considered for every technology, in terms of how it will integrate into everyday human society. As artificial intelligence continues to advance, its capabilities will grow rapidly as well, leaving little room for programming error and demanding heavy consideration of ethics.

References

  1. https://aistrategyblog.com/category/deontological-ethics/
  2. https://www.andrew.cmu.edu/course/80-136/gips.html
  3. https://arxiv.org/pdf/1701.07769.pdf
  4. https://www.graphyonline.com/archives/archivedownload.php?pid=IJJMC-124
  5. https://www.tandfonline.com/doi/abs/10.1080/08900523.2013.792700?scroll=top&needAccess=true&journalCode=hmme20
  6. https://healthland.time.com/2010/11/08/study-fast-food-ads-target-kids-with-unhealthy-food-and-it-works/
  7. https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7588150