Deontology
Revision as of 21:10, 13 March 2020
Deontology is a school of thought holding that our actions in life should be judged against a fixed set of rules rather than by the consequences those actions bring to others. In this sense, deontology resembles living under laws, much like the Ten Commandments or Asimov's Laws governing how AI should behave. In this school of thought, the action itself is what is judged for morality, and it is valued above its consequences. Deontology creates clear black-and-white boundaries for people and AI systems to act within, and can be likened to "golden rules" put in place to maintain order. The individual is not only responsible for consistently making decisions that harmonize with one another, but is also held accountable, and punishable, for poorly judged decisions. [1]
Background
History
Asimov's Laws of Robotics:
0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the Zeroth or First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the Zeroth, First, or Second Law.
As AI systems came into actual use, it became clear that these issues were well defined within their niches, and that Artificial Intelligence would not necessarily trap us all in a Matrix-like void. However, constraining AI solely by a set of rules can create more risks in the future.
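The prioritized structure of Asimov's Laws can be illustrated with a small sketch: an ordered rule list checked from highest priority (the Zeroth Law) downward, which is the essence of a deontological, rule-first judgment. This is a minimal illustration, not from the article; the Action class, its attribute names, and the first_violated_law function are hypothetical stand-ins.

```python
# Illustrative sketch of Asimov's Laws as an ordered rule hierarchy.
# The Action class and all attribute names here are hypothetical.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Action:
    harms_humanity: bool = False   # violates Zeroth Law
    injures_human: bool = False    # violates First Law
    disobeys_order: bool = False   # violates Second Law
    endangers_self: bool = False   # violates Third Law


# Ordered from highest priority (Zeroth Law) to lowest (Third Law).
LAWS = [
    ("Zeroth Law", lambda a: a.harms_humanity),
    ("First Law", lambda a: a.injures_human),
    ("Second Law", lambda a: a.disobeys_order),
    ("Third Law", lambda a: a.endangers_self),
]


def first_violated_law(action: Action) -> Optional[str]:
    """Return the highest-priority law the action violates, or None
    if the action is permissible under all four laws."""
    for name, violates in LAWS:
        if violates(action):
            return name
    return None


print(first_violated_law(Action(injures_human=True)))  # First Law
print(first_violated_law(Action()))                    # None
```

The key deontological point the sketch captures is that only the action's properties are inspected; no downstream consequences are weighed, and a higher-priority rule always dominates a lower one.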
Community of All Ethics
Deontology is not the only school of thought considered in AI; it is one of three: utilitarianism, deontology, and virtue ethics. Deontology provides the foundation of rules that a system can build upon, such as what to do and what not to do, while virtue ethics allows the AI to learn from and incorporate new information. Deontology, utilitarianism, and virtue ethics differ greatly in which values each approach emphasizes, and some combination of the three may work best in the ethical grey areas of future AI. [2]
Ethical Issues
Controversial practices such as targeted advertising by big data corporations are becoming more and more common as technologies develop, even though they may not seem ethical. The service provider weighs the potential of a user buying new products, and the resulting growth and revenue, as a reason to share user data with third-party companies that target users' spending habits. Some might view the advertisements as warranted by the user-license agreements signed before use of the platform, but the large-scale collection of millions of users' data may not be ethically permissible under these agreements. [3]
Controversy
Although the nature of deontology is to be straightforward and focus only on the action, it is often in human nature to care only about the outcomes of a situation; lying, although immoral, may therefore be the best option at times. By focusing only on the action itself, the other components, the consequence and the person, are not considered at all. Proper ethical consideration takes all three of these sub-categories into account and seeks to use them together.