Utilitarian Philosophy

Utilitarian philosophy or utilitarianism is a form of consequentialism that derives moral behavior from the outcome of one's actions. It declares as ethical an action which maximizes the total utility for all individuals directly affected by the actual consequences of said action. An early proponent of utilitarianism, Jeremy Bentham, referred to this belief as the "greatest happiness principle" or "the greatest happiness for the greatest number." Utilitarian philosophy raises ethical issues regarding the concept of "the greater good", moral agency, and virtue ethics.

Principles of Utilitarianism

Values

What constitutes utility has long been, and remains, an area of contention. However, most classical utilitarians would argue that well-being, happiness, and pleasure are the core values utilitarianism attempts to maximize. Because of this, utilitarianism is often compared with hedonism.[1]

Principle of Utility

As the prototypical consequentialist moral theory, classical utilitarianism determines whether an act is morally correct by looking at its direct and actual consequences, rather than the intent of the actor, the possible or intended consequences of the act, the circumstances of the act, or the situation before the act. Furthermore, it uses the principle of utility as the measure by which the morality of an act is judged. This principle states that the morality of the act depends on its ability to maximize total net utility, rather than merely improve utility or maximize per capita utility.[2][3]
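
The distinction between total net utility and per capita utility can be made concrete with a small calculation. The sketch below is purely illustrative: the two actions and every per-person utility number are invented assumptions, not drawn from the cited sources.

    # Illustrative sketch: total net utility vs. per capita utility.
    # Both actions and all per-person utility changes are hypothetical numbers.

    action_a = [5, 5, 5]           # helps three people by 5 units each
    action_b = [2] * 10            # helps ten people by 2 units each

    def total_utility(changes):
        """Total net utility: sum of every affected person's utility change."""
        return sum(changes)

    def per_capita_utility(changes):
        """Average utility change per affected person."""
        return sum(changes) / len(changes)

    print(total_utility(action_a), per_capita_utility(action_a))   # 15, 5.0
    print(total_utility(action_b), per_capita_utility(action_b))   # 20, 2.0

    # Action A wins on per capita utility, but the classical principle of
    # utility judges by the total, so it favors action B.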

Impartiality Principle

Another key principle relied upon by classical utilitarianism is the impartiality principle, which states that no single person's utility holds higher precedence than another's, nor does their utility depend on whether it is evaluated from the perspective of the agent or of an observer.[4] The well-being, happiness, or pleasure of two individuals ought to be treated as equal in the utilitarian calculus. Furthermore, classical utilitarianism suggests that this impartiality be applied universally, i.e. to all people, rather than only to the agent, the agent's immediate social network, or any other subgroup of individuals.[3] There is, however, ongoing debate, spearheaded by Princeton University philosopher Peter Singer, as to whether the impartiality principle should also apply to other species of animals.

Twentieth Century Utilitarian Philosophy

Utilitarianism can also be considered a form of consequentialism, an ethical theory that judges an act by its results: on this view, an act is morally correct if it leads to good outcomes.

In the past two centuries, numerous variants of Bentham's original form of utilitarianism have emerged. Each aims to answer the practical question of how one should live one's life in order to achieve the highest ethical good.

Act Utilitarianism

Act utilitarianism, or classical utilitarianism, suggests that in any given situation you should choose the action that produces the greatest good for the greatest number. An example of this would be spending one's time on activities that are more likely to maximize well-being, such as volunteering at a soup kitchen, as opposed to an activity that would not maximize utility or that would only benefit oneself. Because volunteering at a soup kitchen would bring about greater well-being in the world than an activity benefiting only oneself, volunteering is the more morally right activity to pursue. Reasoning of this kind is characteristic of act utilitarianism.
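
As a minimal sketch of this decision rule, the code below scores a few hypothetical activities by an invented "total well-being produced" number and selects the highest-scoring one; the activities and their scores are assumptions made up for illustration.

    # Minimal act-utilitarian decision rule: choose the single action whose
    # direct consequences produce the most total well-being.
    # The candidate activities and their scores are invented for illustration.

    candidate_actions = {
        "volunteer at a soup kitchen": 30,  # well-being for many hungry people
        "watch television alone": 2,        # benefits only oneself
        "help a friend move": 12,           # benefits a few people
    }

    def act_utilitarian_choice(actions):
        """Return the action that produces the greatest total well-being."""
        return max(actions, key=actions.get)

    print(act_utilitarian_choice(candidate_actions))
    # -> volunteer at a soup kitchen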

Rule Utilitarianism

Rule utilitarianism suggests that we must live by rules that produce the greatest good for the greatest number. Rule utilitarianism has its roots in Kantian universalizability, derived from the categorical imperative, which states that one should act according to a maxim that one would desire everyone else to act on as well. In the context of rule utilitarianism, this means that one should follow rules that, if followed by everyone, would maximize utility. For example, "I should volunteer when I have free time" is such a rule: if everyone followed it, the resulting widespread volunteering would increase overall well-being.

In contrast, consider a hypothetical situation in which a doctor decides to kill one patient, harvest their organs, and use them to save five other dying people. Under classical utilitarianism, the direct consequence of this action is an increase in utility, so the doctor should perform this act. Under rule utilitarianism, however, what is assessed is not the single act but the indirect consequences of universally following the rule "doctors should kill their patients to save others." The rule utilitarian can now consider the loss of utility from living in a society whose members fear going to the doctor because they might be killed, and the rule reveals itself as not maximizing utility after all. Therefore, under rule utilitarianism, a doctor should not follow such a rule.[3]
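
The difference between evaluating the single act and evaluating the rule can be sketched numerically. The utility values, the number of transplant cases, and the size of the fearful population below are all invented assumptions chosen only to show the structure of the two calculations.

    # Sketch contrasting act- and rule-utilitarian evaluations of the
    # transplant case. Every number here is an invented assumption.

    LIFE_UTILITY = 100

    # Direct consequences of the single act: one patient dies, five are saved.
    act_utility = 5 * LIFE_UTILITY - 1 * LIFE_UTILITY
    # +400: the act utilitarian concludes the doctor should operate.

    # Indirect consequences of everyone following the rule
    # "doctors should kill one patient to save several others".
    transplant_cases = 1_000                 # times the rule is actually used
    fearful_people = 10_000_000              # people who now avoid doctors
    disutility_per_fearful_person = -1       # skipped checkups, anxiety, distrust

    rule_utility = (transplant_cases * act_utility
                    + fearful_people * disutility_per_fearful_person)

    print(act_utility, rule_utility)         # 400 vs. -9,600,000
    # The rule fails to maximize utility, so the rule utilitarian rejects it.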

Preference Utilitarianism

Preference utilitarianism recognizes that classical utilitarianism uses pleasure as a proxy for the things that humans actually desire, in an attempt to simplify the concept of utility. By contrast, preference utilitarianism recognizes the vast set of desires humans have that may have little or nothing to do with pleasure or pain, and quantifies utility as the satisfaction or fulfillment of preferences and disutility as the frustration of desires or preferences. For example, consider the case of a devout monk who wishes to undergo mortification of the flesh via self-flagellation in order to feel connected to a higher purpose. Although the pain outweighs the pleasure of such an act, the monk displays a preference for doing so. A classical utilitarian attempting to maximize utility in the form of pleasure has a moral obligation to prevent the monk from undertaking this act, in spite of the monk's desires. However, a preference utilitarian recognizes that the pleasure/pain dichotomy is insufficient to map the entirety of human desires, and maximizes utility by allowing the monk to fulfill their preference.[3]
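
One way to see the contrast is to score the monk's act twice, once as pleasure minus pain and once as preference satisfaction. The numbers in the sketch below are invented for illustration and are not part of the cited discussion.

    # Sketch: the monk's act scored under hedonic (pleasure/pain) utility
    # and under preference utility. All numbers are invented.

    monk_act = {
        "pleasure": 1,                 # little sensory pleasure
        "pain": 8,                     # considerable physical pain
        "preference_satisfied": True,  # the monk strongly prefers to do it
        "preference_strength": 10,
    }

    def hedonic_utility(act):
        """Classical (hedonistic) utility: pleasure minus pain."""
        return act["pleasure"] - act["pain"]

    def preference_utility(act):
        """Preference utility: satisfied preferences count positively,
        frustrated ones negatively, weighted by their strength."""
        sign = 1 if act["preference_satisfied"] else -1
        return sign * act["preference_strength"]

    print(hedonic_utility(monk_act))     # -7: the classical utilitarian intervenes
    print(preference_utility(monk_act))  # +10: the preference utilitarian does not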

Classical Utilitarians

Classical utilitarians, like Jeremy Bentham and John Stuart Mill, identified the good with pleasure; like Epicurus, they were hedonists about value. They also believed that we must maximize the good and bring about the greatest amount of good for society as a whole.[5] Bentham and Mill were contemporaries, and Mill was eventually taught by Bentham after Bentham was approached by Mill's father.

Jeremy Bentham

Jeremy Bentham was an English philosopher and political radical. He is primarily known for his moral philosophy, in particular utilitarianism, which evaluates actions by the overall happiness they create for the society affected by them. Bentham was influenced by Enlightenment philosophers like John Locke and David Hume, which led him to develop an ethical theory grounded in an empiricist account of human nature. His account of motivation and value was hedonistic in the sense that he believed pleasure and pain are fundamentally valuable and ultimately motivate humans. According to Bentham, happiness is a matter of gaining pleasure and avoiding pain.[6]

Bentham and James Mill, the father of Bentham's future star pupil John Stuart Mill, led a circle of political reformers that included David Ricardo, George Grote, Sir William Molesworth, John Austin, and Francis Place.

Both Bentham and James Mill led this group in the name of political reform, believing that the system under which the country was governed was antiquated. Their aims included universal male suffrage and economic policies that did not favor the aristocracy.

John Stuart Mill

John Stuart Mill was born in 1806 and died in 1873. He deeply influenced the British thought and political discourse of his time. His work spanned logic, epistemology, economics, social and political philosophy, ethics, metaphysics, religion, and current affairs. His best-known publications are A System of Logic, Principles of Political Economy, On Liberty, Utilitarianism, The Subjection of Women, Three Essays on Religion, and his autobiography. His educational development was fostered mainly by his father, who had him learn several languages from a young age. Mill felt the influence of historicism, French social thought, and Romanticism, in the form of thinkers like Coleridge, the Saint-Simonians, Thomas Carlyle, Goethe, and Wordsworth. This inspired his search for a new philosophical radicalism that would be more sensitive to the limits placed on reform by culture and history. He came to emphasize the growth of humanity, including the cultivation of dispositions of feeling and imagination, something he felt was missing from his own education.[7]

Ethical Issues

The Trolley Problem

The Trolley Problem was put forth in 1967 by philosopher Philippa Foot and describes a theoretical quandary in which one is in a position to prevent a tragedy by choosing whether or not to pull a lever. A runaway trolley is headed for five incapacitated people stuck on the track and cannot be stopped. If you pull the lever, the trolley will follow an alternate route, hitting only one incapacitated person. Is it ethical to pull the lever? Do you have a moral duty to pull the lever?[8]

In order to make the decision, utilitarian philosophy can be applied. Since utilitarianism describes as moral those actions whose consequences maximize total utility, switching the lever to kill one instead of five appears to be what you ought to do. But how can that utility be measured? Should we accept utilitarianism's impartiality principle and conclude that the happiness of five people continuing to live is better than that of one? If, for example, the five were all severely depressed and preferred to die more strongly than the one preferred to live, should this knowledge affect our utilitarian calculus when deciding how utility should be maximized? If the one was a child and the five were all elderly, would this impact the calculation? Consider also the impact on the actor's utility: would total utility decrease if they pulled the lever and forever considered themselves a murderer for causing the death of the one?
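
These questions can be made concrete by writing the utilitarian calculus out explicitly. In the sketch below, each person on the tracks is assigned a weight for the utility of their continued life, and the actor's possible regret is given a cost; all of these numbers are invented assumptions, and changing them is precisely the kind of judgment the questions above describe.

    # Sketch of the trolley calculus. All weights are invented assumptions;
    # adjusting them is exactly the judgment the questions above point to.

    people_on_main_track = [1.0, 1.0, 1.0, 1.0, 1.0]  # utility of each life continuing
    person_on_side_track = [1.0]
    regret_cost_to_actor = 0.2     # disutility of feeling responsible for a death

    # Total utility of the world after each choice: the survivors' lives,
    # minus any psychological cost to the person at the lever.
    utility_if_pull = sum(people_on_main_track) - regret_cost_to_actor  # the one dies
    utility_if_do_nothing = sum(person_on_side_track)                   # the five die

    print(utility_if_pull, utility_if_do_nothing)   # 4.8 vs. 1.0 -> pull the lever
    # Raising the one person's weight (a child?), lowering the five's (they
    # prefer to die?), or raising the regret cost can flip the verdict.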

The trolley problem is further complicated by a slight variation in which the switch is replaced by a fat bystander. Now, if you push the bystander onto the tracks, the trolley will stop before it can hit the other five. From a utilitarian perspective, the problem is unchanged: the utility of one is weighed against the utility of five. But now, our moral intuitions do not clearly line up with the utilitarian solution. Furthermore, the utilitarian solution does not line up with either the virtue ethics or deontological solutions, which consider pushing the fat man to be an act of murder.[9]

The moral calculus of classical utilitarianism seems simple at first, but contains many hidden assumptions and subjective qualifiers that are difficult to quantify. The trolley problem and its solutions will soon have real-world implications. Autonomous vehicles will need to make similar ethical judgments regarding the lives of bystanders and passengers when unpreventable accidents occur.[10]

Virtue Ethics

In his work "Utilitarianism", John Stuart Mill addresses his white, Christian audience by arguing against the notion that utilitarianism is immoral. Many Christians perceived utilitarianism as a godless doctrine; Mill, however, contended that utilitarian ethics embodies the preachings and practices of Christianity in its principal concern for the well-being and happiness of others.[11] Utilitarianism strives to take an objective approach to ethical dilemmas; however, much of its ethical philosophy can be attributed to the values of the philosophers themselves. Many utilitarian philosophers believed in deism and in evaluating decisions against a moral standard of happiness.[11] This view aligns more closely with European and American ideals and has been criticized for its limitations. With the advent of the computer revolution, Shannon Vallor has urged that virtue ethics should replace utilitarian approaches, which she argues put too much emphasis on user experience and well-being.[12] In contrast with utilitarian philosophy, virtue ethics places emphasis on improving one's character and maximizing virtues such as honesty and empathy. For example, someone examining the ethical effects of Facebook from a utilitarian perspective might consider how Facebook usage correlates with self-reported psychological well-being, while a virtue ethics perspective would consider whether Facebook users act honestly on the platform and whether they are able to connect with other users deeply enough to empathize with them.

Utilitarianism in AI

Utilitarian-style reasoning underlies decision making in many modern artificial intelligence systems. AI agents are given a utility function, which maps preferences over world-states onto a set of cardinal or ordinal numbers. The agent then makes decisions by taking actions that maximize its utility function, just as a classical utilitarian would when deciding which actions are ethical and which are not.[13]
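
A minimal sketch of this setup is shown below, assuming a toy deterministic agent, a hand-written utility function over three world-states, and a fixed set of candidate actions; none of these names come from any particular AI library, and real systems typically maximize expected utility over uncertain outcomes rather than this simplified mapping.

    # Toy utility-maximizing agent in the style described above. The actions,
    # world-states, and utility numbers are invented for illustration.

    # Each action deterministically leads to a world-state; real agents would
    # instead maximize expected utility over a distribution of outcomes.
    action_outcomes = {
        "recommend_article": "user_informed",
        "show_advertisement": "user_annoyed",
        "do_nothing": "status_quo",
    }

    # The utility function maps world-states to (here, cardinal) numbers.
    utility = {
        "user_informed": 10,
        "user_annoyed": -5,
        "status_quo": 0,
    }

    def choose_action(outcomes, utility_fn):
        """Pick the action whose resulting world-state has the highest utility."""
        return max(outcomes, key=lambda action: utility_fn[outcomes[action]])

    print(choose_action(action_outcomes, utility))   # -> recommend_article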

As philosopher Nick Bostrom explains, however, an AI's intelligence and its utility function are orthogonal. That is to say, an agent of any level of intelligence can pursue any possible goal, though with greater or lesser success depending on its intelligence. Therefore, as artificially intelligent systems become more powerful and exert significantly more influence over our lives and the world, it is important that an agent's utility function be specifically designed to align with human values, as it is unlikely that human-value-oriented goals will emerge naturally from arbitrary utility functions. For example, consider a utilitarian agent whose utility function is maximized only by the production of paperclips. As such an agent becomes able to exert more influence on the world, its production becomes an existential risk to humans, because the agent does not care about humans, the planet, the economy, or anything else besides bringing about a world-state in which the maximum possible number of paperclips is produced. Similarly, Bostrom warns that other superficially human-friendly utility functions, such as "make humans smile," could result in perverse instantiation, such as an artificial agent paralyzing human facial muscles into rictus grins, thereby maximizing its utility function in the simplest way.[14]

Wireheading

Wireheading refers to the use of intracranial self-stimulation to directly activate the pleasure center of the brain and maximize individual happiness indefinitely. In 1954, Dr. James Olds and Dr. Peter Milner discovered that rats provided with a lever to electrically stimulate their own brains would do nothing else but pull the lever constantly, thousands of times an hour, choosing stimulation over food, sleep, and even sex.[15] Later, Dr. Robert G. Heath applied the same technique to humans and found similar results from deep brain stimulation. In multiple cases, patients self-stimulated over 1,000 times in 3 hours and protested when the unit was disconnected.[16] Unlike other forms of pleasure, brain stimulation reward has not been found to saturate over time.[15] Wireheading is the hypothetical extension of this technology to personal devices that would allow the user to experience continuous pleasure and happiness while numb to the pain of the outside world.

As classical utilitarianism demands that agents take actions that maximize total utility, or in other words, maximize total pleasure, it could be argued that classical utilitarians have a moral duty to wirehead, in order to maximize their own pleasure. However, this argument can also be extended to others; the classical utilitarian has a moral obligation to maximize utility universally, not just for oneself. Therefore, they must act to ensure a world where everyone becomes wireheaded, regardless of their consent, permanently maximizing pleasure for all people.

However, such a conclusion is deeply at odds with our moral intuitions and can be considered a failure mode of classical utilitarianism. First, wireheading itself seems repugnant: the wirehead exists as a lifeless stimulation junkie on life support, effectively dead to the world while the brain generates endless, pointless pleasure. This suggests a flaw in the classical utilitarian conception of utility as mere pleasure, which is fundamentally insufficient to capture the true depth of human preferences. Second, forcing others to experience pleasure at the cost of their agency suggests an error in the utilitarian concept of universal utility and in the prerogative to maximize utility at all costs.

Note that this moral outcome is largely avoided by preference utilitarianism, which treats preference as the root of utility rather than simple hedonism. Under preference utilitarianism, if you don't prefer a world in which you and everyone else is wireheaded, and everyone else has a similar preference, you can increase universal utility by ensuring that such a world does not come into existence.

See Also

  Jeremy Bentham
  Algorithms

References

  1. Moore, Andrew (2018). "Hedonism". The Stanford Encyclopedia of Philosophy. Retrieved April 21, 2019.
  2. Cavalier, Robert (2002). "Online Guide to Ethics and Morality". Carnegie Mellon Philosophy Department. Retrieved April 21, 2019.
  3. 3.0 3.1 3.2 3.3 Sinnott-Armstrong, Walter (2015). "Consequentialism". The Stanford Encyclopedia of Philosophy. Retrieved April 21, 2019.
  4. Jollimore, Troy (2018). "Impartiality". The Stanford Encyclopedia of Philosophy. Retrieved April 21, 2019.
  5. Driver, Julia (2014). "The History of Utilitarianism". The Stanford Encyclopedia of Philosophy. Retrieved April 21, 2019.
  6. Sweet, William. "Jeremy Bentham". Internet Encyclopedia of Philosophy. Retrieved April 21, 2019.
  7. Heydt, Colin. "John Stuart Mill". Internet Encyclopedia of Philosophy. Retrieved April 21, 2019.
  8. Thomson, Judith Jarvis (May 1985). "The Trolley Problem". The Yale Law Journal. 94 (6): 1395-1415. Retrieved from https://www.jstor.org/stable/796133 on April 21, 2019.
  9. Singer, Peter (2005). "Ethics and Intuitions". The Journal of Ethics. Retrieved from http://www.utilitarian.net/singer/by/200510--.pdf on April 21, 2019.
  10. Awad, Edmond, et al. (October 24, 2018). "The Moral Machine experiment". Nature. 563: 59-64. Retrieved April 21, 2019.
  11. 11.0 11.1 Mill, John Stuart (1863). "Utilitarianism". Cambridge University Press. Retrieved April 21, 2019.
  12. Vallor, Shannon (August 2009). "Social networking technology and the virtues". Ethics and Information Technology. pp. 157-170.
  13. Russell, Stuart; Norvig, Peter (2016). "Chapter 16: Making Simple Decisions". Artificial Intelligence: A Modern Approach (3rd ed.). Pearson. pp. 610-636. ISBN 978-1-292-15396-4.
  14. Bostrom, Nick (2014). "8.3: Malignant Failure Modes". Superintelligence: Paths, Dangers, Strategies. Oxford University Press. pp. 119-126. ISBN 978-0199678112
  15. 15.0 15.1 Olds, James; Milner, Peter (December 1954). "Positive reinforcement produced by electrical stimulation of septal area and other regions of rat brain". Journal of Comparative and Physiological Psychology. 47: 419-427. Retrieved from https://www.ncbi.nlm.nih.gov/pubmed/13233369 on April 21, 2019.
  16. Heath, Robert G. (January 1972). "Pleasure and brain activity in man". Journal of Nervous and Mental Disease. 154(1): 3-18. Retrieved April 21, 2019.