Bias in Everyday Appliances

Bias is generally defined as a natural inclination towards or against something or someone. Regardless of whether or not one is conscious of the biases one holds, it is crucial to be wary of them because of the rash decisions and harm they can bring [1]. Bias in everyday appliances occurs when the builders and engineers of these appliances fail either to use diverse sets of training data or to think thoroughly about the social consequences of their designs [2] [3]. Some frequently used appliance domains include facial recognition, speech recognition, automatic soap dispensers, and medical devices. Although these tools were made with the intention of making life easier, many types of bias have arisen across these domains, with racial bias and gender bias among the most prevalent. To combat these biases, engineers and researchers are seeking more ways to incorporate greater representation in training data, ensuring that the data being used is accurate and informative, and to modify existing systems to be more inclusive of underrepresented ethnic groups and genders [2][4].

Types of Biases Referenced Below

Racial Bias

Racial bias is a common form of implicit bias, defined as a personal and potentially illogical judgment based solely on an individual's race [5]. This type of bias can be found in almost all of the appliance domains discussed in the sections below.

Gender Bias

Gender bias is another form of implicit bias, defined as the tendency to prefer one gender over another [6]. Gender bias may be present in everyday appliances if developers test an appliance only within their own team, in which case the diversity of the results is limited by the genders of the developers.

Physical Bias

Physical bias refers to the inherent bias present in the mechanics of a medical device. It is the first of three ways in which, according to Achuta Kadambi, an assistant professor at the University of California, Los Angeles' Samueli School of Engineering, racial and gender bias can negatively affect medical devices.

Computational Bias

The second way racial and gender bias can enter the realm of medical devices is through computational bias, which refers to bias present in the software or the data sets used to develop the devices.

Interpretation Bias

According to Kadambi, the third and final way racial and gender bias can affect medical devices is by interpretation bias, which refers to the bias present in the user of the device—specifically when clinicians apply race-based standards to the outputs of medical devices and tests [2].

Selection or Sample Bias

This bias occurs when a dataset does not reflect the realities of the environment the system will be used in [4].

Measurement Bias

Measurement bias refers to the bias that occurs when the data collected for training does not represent the real world accurately, or when faulty measurements result in ill-fitted data [4].
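
One simple way to surface selection or sample bias before a system is trained is to compare the demographic make-up of the training sample with that of the population the appliance will actually serve. The sketch below is a minimal illustration of that check in Python; the group labels, counts, and population shares are hypothetical, not drawn from any real device's data.

  # Sketch: comparing a training sample's demographics against the target
  # population to flag selection/sample bias. Group names and numbers are
  # hypothetical, for illustration only.

  from collections import Counter

  def group_shares(labels):
      """Return each group's share of the total sample."""
      counts = Counter(labels)
      total = sum(counts.values())
      return {group: count / total for group, count in counts.items()}

  # Hypothetical calibration cohort versus the intended user population.
  training_labels = ["light"] * 800 + ["medium"] * 150 + ["dark"] * 50
  population_shares = {"light": 0.55, "medium": 0.25, "dark": 0.20}

  sample_shares = group_shares(training_labels)
  for group, target in population_shares.items():
      gap = sample_shares.get(group, 0.0) - target
      print(f"{group}: sample {sample_shares.get(group, 0.0):.0%}, "
            f"population {target:.0%}, gap {gap:+.0%}")

A large gap for any group is a warning sign that the resulting system may be calibrated, and later measured, against an unrepresentative sample.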

Facial Recognition

Facial recognition is a widely deployed technology used to match faces in images, from both photos and video stills, against an existing database of facial identities [7]. The technology is commonly used in phones (e.g., to unlock lock screens or to verify app purchases) and, since the early 2000s, in law enforcement (e.g., to identify criminals) [8]. Despite much improvement since it was first introduced, facial recognition software is still far from perfect. The iPhone's facial recognition feature was unable to correctly distinguish between two colleagues in China, and Google Photos mistakenly tagged two Black friends in a selfie as gorillas [3]. Moreover, in law enforcement, facial recognition systems remain more prone to misidentifying African Americans than members of other races, errors that could lead to innocent people being named as suspects or incarcerated. Authors Claire Garvie and Jonathan Frankle propose that the racial makeup of the development team and the types of photos available in the databases used can heavily influence the accuracy of facial recognition systems. This claim is supported by the finding that algorithms developed in East Asian countries recognized East Asian faces far more readily than Caucasian faces, and, similarly, algorithms developed in Western and European countries recognized Caucasian facial characteristics with greater accuracy than those of people of East Asian descent [8].
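
To make the "matching against a database" step concrete, the sketch below shows, in Python, how a typical pipeline compares a numeric face representation (an embedding) against enrolled identities and accepts the closest match above a threshold. The embeddings, names, and threshold here are hypothetical stand-ins; real systems obtain embeddings from a trained model, and it is that model and its training data where the disparities described above originate.

  # Sketch of the matching step in a facial recognition pipeline: a probe
  # face embedding is compared against a database of enrolled embeddings
  # and the closest match above a threshold is returned. The embeddings
  # are hypothetical; real systems compute them with a trained model.

  import numpy as np

  def cosine_similarity(a, b):
      return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

  def identify(probe, database, threshold=0.8):
      """Return the best-matching identity, or None if below the threshold."""
      best_name, best_score = None, -1.0
      for name, embedding in database.items():
          score = cosine_similarity(probe, embedding)
          if score > best_score:
              best_name, best_score = name, score
      return (best_name, best_score) if best_score >= threshold else (None, best_score)

  # Hypothetical 4-dimensional embeddings; real ones have hundreds of dimensions.
  rng = np.random.default_rng(0)
  database = {"person_a": rng.normal(size=4), "person_b": rng.normal(size=4)}
  probe = database["person_a"] + rng.normal(scale=0.05, size=4)
  print(identify(probe, database))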

Speech Recognition

Alexa (left) and Siri (right)

Speech recognition is software that enables programs to process human speech into a written format [9]. Common examples of speech recognition software include Apple's Siri and Amazon's Alexa. Such technologies and programs occasionally have difficulty interpreting the vernacular and speech patterns of people whose first language is not English or who are from underrepresented groups. Author Claudia Lopez Lloreda highlights one study which found that programs from leading technology companies displayed significant racial disparities: audio from Black speakers was roughly twice as likely to be incorrectly transcribed as audio from White speakers [10].
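
Disparities like the one in the study above are typically quantified with word error rate (WER), compared across speaker groups. The sketch below is a minimal Python illustration of that comparison; the reference sentences, automatic transcripts, and group labels are hypothetical and stand in for real annotated audio.

  # Sketch: measuring word error rate (WER) per speaker group to surface a
  # transcription disparity like the one described above. Transcripts and
  # group labels are hypothetical.

  def word_error_rate(reference, hypothesis):
      """WER = word-level edit distance / number of reference words."""
      ref, hyp = reference.split(), hypothesis.split()
      d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
      for i in range(len(ref) + 1):
          d[i][0] = i
      for j in range(len(hyp) + 1):
          d[0][j] = j
      for i in range(1, len(ref) + 1):
          for j in range(1, len(hyp) + 1):
              cost = 0 if ref[i - 1] == hyp[j - 1] else 1
              d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
      return d[len(ref)][len(hyp)] / max(len(ref), 1)

  # Hypothetical (reference, automatic transcript, speaker group) triples.
  samples = [
      ("turn the lights off in the kitchen", "turn the lights off in the kitchen", "group_a"),
      ("set a timer for ten minutes", "set a time for ten minute", "group_b"),
  ]

  by_group = {}
  for reference, hypothesis, group in samples:
      by_group.setdefault(group, []).append(word_error_rate(reference, hypothesis))
  for group, rates in by_group.items():
      print(group, sum(rates) / len(rates))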

Automatic Soap Dispensers

Automatic soap dispenser failing to work for a darker-skinned user

Automatic soap dispensers use near-infrared technology to detect hand motions: invisible light reflected back from the skin is what triggers the sensor to pump out soap. If the user's skin is of a darker tone, however, not enough light is reflected back to activate the sensor, leaving darker-skinned users soapless and, sometimes, waterless. Examples of automatic soap dispensers displaying implicit racial bias include an automatic soap dispenser at a Marriott hotel that was unable to detect a Black customer's hand, and an instance in which Kadambi had to ask a lighter-skinned individual to activate the soap dispenser (and faucet) for him. Results like these can arise when companies fail to test their products on a wide range of skin tones [2][3].
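
The failure mode described above comes down to a fixed trigger threshold on reflected light. The Python sketch below illustrates that logic; the reflectance values and the threshold are hypothetical numbers chosen only to show how a single absolute cutoff can work for higher-reflectance skin and fail for lower-reflectance skin.

  # Sketch of the trigger logic described above: the dispenser pumps soap
  # only when the reflected near-infrared signal exceeds a fixed threshold.
  # Reflectance values and the threshold are hypothetical illustrations.

  FIXED_THRESHOLD = 0.45  # fraction of emitted light that must return

  def sensor_triggers(skin_reflectance, hand_present=True):
      """Return True if the reflected signal would activate the pump."""
      reflected = skin_reflectance if hand_present else 0.0
      return reflected >= FIXED_THRESHOLD

  # Hypothetical reflectance values for lighter and darker skin tones.
  for tone, reflectance in [("lighter", 0.65), ("darker", 0.35)]:
      print(tone, "hand detected:", sensor_triggers(reflectance))

A design tested on a wider range of skin tones would either lower the threshold, calibrate it dynamically, or detect a change in signal when a hand appears rather than an absolute level.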

Medical Devices

Pulse Oximeters

Pulse oximeters are devices that use technology similar to that of automatic soap dispensers: light is transmitted through the skin and tissue to measure the oxygen level in a person's blood. The racial bias built into these devices is evident in a study in which low oxygen levels were three times more likely to be missed in Black patients than in White patients [2][4]. This shows how non-White patients may be prevented from receiving fair and necessary treatment.
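
How the reading is produced helps explain where physical bias can enter. The Python sketch below follows the general pulse oximetry recipe: form a ratio of the pulsatile red and infrared light absorption and map it through an empirically calibrated curve. The linear curve used here is a common textbook approximation, not any manufacturer's actual calibration, and the signal amplitudes are hypothetical. If such a curve is fitted on a cohort made up mostly of lighter-skinned participants, it can overestimate saturation for darker-skinned patients, which is consistent with the missed low readings described above.

  # Sketch of the pulse oximetry estimation step: the device forms a
  # "ratio of ratios" R from pulsatile red and infrared light absorption
  # and maps it to an oxygen saturation estimate through an empirically
  # calibrated curve. The linear curve below (SpO2 ~ 110 - 25*R) is a
  # commonly cited textbook approximation used only for illustration;
  # real devices use manufacturer-specific calibration data.

  def ratio_of_ratios(red_ac, red_dc, ir_ac, ir_dc):
      """R = (AC_red / DC_red) / (AC_ir / DC_ir)."""
      return (red_ac / red_dc) / (ir_ac / ir_dc)

  def estimate_spo2(r):
      """Map R to SpO2 with an illustrative linear calibration curve."""
      return 110.0 - 25.0 * r

  # Hypothetical signal amplitudes.
  r = ratio_of_ratios(red_ac=0.02, red_dc=1.0, ir_ac=0.04, ir_dc=1.0)
  print(f"R = {r:.2f}, estimated SpO2 = {estimate_spo2(r):.0f}%")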

Wearables/Smart Watches

Wearable measuring an individual's heart rate.

Wearables, or smart watches, are non-invasive products that measure blood oxygen levels and can also provide sleep, exercise, and health measurements. Well-known examples include Fitbit and the Apple Watch. The data received from these devices are often factored into medical decisions and are even used as measurements in some clinical trials. Because green light is cheaper to use and more resistant to motion error, many smart watches rely on it alone to monitor users' measurements. Green light has a shorter wavelength, however, which makes it difficult to take accurate readings of people with darker skin tones. The skin tone scale used in these devices is also outdated: most devices use the Fitzpatrick Scale, which was developed in the 1970s. Not only is this skin tone scale insufficiently objective, but the devices are also often calibrated on whatever sample population the developers have access to, which tends to consist of a high proportion of White and Asian males. The resulting racial biases lead to the collection of erroneous data, disadvantaging individuals who seek proper access to healthcare. For instance, fitness trackers were found to detect a heart murmur and hypoxia more successfully for White people than for Black people [4].
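
A straightforward check that the calibration issues above suggest is often missing would be to break a wearable's measurement error down by Fitzpatrick skin-type group. The Python sketch below shows such an evaluation on hypothetical heart-rate readings paired with reference values; the numbers do not come from any real device study.

  # Sketch of a fairness check the text suggests is often skipped:
  # comparing a wearable's measurement error across Fitzpatrick skin-type
  # groups. Readings and reference values are hypothetical.

  from statistics import mean

  # (Fitzpatrick type, device reading, reference reading) - hypothetical.
  readings = [
      ("I-II", 72, 71), ("I-II", 65, 66), ("III-IV", 80, 78),
      ("III-IV", 90, 87), ("V-VI", 70, 76), ("V-VI", 95, 88),
  ]

  errors = {}
  for group, device, reference in readings:
      errors.setdefault(group, []).append(abs(device - reference))

  for group, errs in errors.items():
      print(f"Fitzpatrick {group}: mean absolute error {mean(errs):.1f} bpm "
            f"(n={len(errs)})")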

Remote Plethysmography

Remote plethysmography is a recently developed technology that measures heart rate by analyzing live or recorded video; however, it works less well for people of color when it is programmed to pick up blush-like changes in the skin. To make the technology more inclusive, a group of researchers at the Massachusetts Institute of Technology devised a remote plethysmograph that instead reads the tiny changes in head motion that occur when the heart beats. Kadambi's research team is also aiming to combat the physical, computational, and interpretation biases referenced earlier by analyzing video images with thermal wavelengths rather than visible light [2].
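
The colour-based approach that struggles with darker skin works roughly as follows: average a skin region's green-channel intensity frame by frame and take the dominant frequency of that trace as the pulse rate. The Python sketch below illustrates the frequency-analysis step on a synthetic signal, since real input would require a camera and face detection; the frame rate, noise level, and heart rate are hypothetical.

  # Sketch of the colour-based remote plethysmography idea described
  # above: average a skin region's green-channel intensity over time and
  # take the dominant frequency as the pulse rate. A synthetic signal
  # stands in for real video frames.

  import numpy as np

  fps = 30.0                      # video frame rate
  t = np.arange(0, 20, 1 / fps)   # 20 seconds of "frames"
  true_bpm = 72.0

  # Synthetic mean green-channel trace: a tiny pulse riding on noise.
  signal = 0.5 * np.sin(2 * np.pi * (true_bpm / 60.0) * t) \
           + np.random.normal(0, 0.2, t.size)

  # Estimate heart rate from the dominant frequency in a plausible band.
  freqs = np.fft.rfftfreq(t.size, d=1 / fps)
  spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
  band = (freqs >= 0.7) & (freqs <= 3.0)       # roughly 42-180 bpm
  estimated_bpm = 60.0 * freqs[band][np.argmax(spectrum[band])]
  print(f"estimated heart rate: {estimated_bpm:.0f} bpm")

Because the pulse signal in the green channel is weaker when less light is reflected from darker skin, the same analysis becomes noisier, which is why the head-motion and thermal approaches mentioned above were pursued.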

Chest X-rays

An artificial intelligence system built to identify and analyze chest X-rays for 14 different lung and chest diseases worked less well for women because it was trained mainly on male chest scans, making it inevitable that gender bias would infiltrate the system. When the system was later trained on a more gender-balanced sample, however, the model produced its best overall results. As the modifications made to this system show, Kadambi suggests that adding more variety to training data can help improve the performance of such medical devices [2].
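
One simple way to build the more gender-balanced sample described above is to downsample the over-represented group before retraining. The Python sketch below illustrates that rebalancing step on a hypothetical list of scan records; a real pipeline would load image paths and labels from the chest X-ray dataset itself, and other strategies (such as oversampling or reweighting) are equally common.

  # Sketch of the rebalancing idea described above: equalise the number
  # of male and female scans in a training set before retraining.
  # The record list is hypothetical.

  import random

  def balance_by_group(records, group_key, seed=0):
      """Downsample every group to the size of the smallest group."""
      groups = {}
      for record in records:
          groups.setdefault(record[group_key], []).append(record)
      smallest = min(len(items) for items in groups.values())
      rng = random.Random(seed)
      balanced = []
      for items in groups.values():
          balanced.extend(rng.sample(items, smallest))
      rng.shuffle(balanced)
      return balanced

  # Hypothetical records: 75 male scans, 25 female scans.
  records = [{"scan": f"xray_{i}.png", "sex": "M" if i % 4 else "F"} for i in range(100)]
  balanced = balance_by_group(records, "sex")
  print(len(balanced), "scans after balancing")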

References

  1. Sussex Publishers. (n.d.). Bias. Psychology Today. Retrieved January 27, 2023, from https://www.psychologytoday.com/us/basics/bias
  2. Wallis, C. (2021, June 1). Fixing Medical Devices That Are Biased against Race or Gender. Scientific American. Retrieved January 27, 2023, from https://www.scientificamerican.com/article/fixing-medical-devices-that-are-biased-against-race-or-gender/
  3. Goethe, T. S. (2019, March 2). Bigotry Encoded: Racial Bias in Technology. Reporter. Retrieved January 27, 2023, from https://reporter.rit.edu/tech/bigotry-encoded-racial-bias-technology
  4. Turner, A., & Stainwright, S. (2022, May 3). Racist soap dispensers and other tales of Health Measurement Bias. Actuaries Digital. Retrieved January 27, 2023, from https://www.actuaries.digital/2022/04/28/racist-soap-dispensers-and-other-tales-of-health-measurement-bias/
  5. Williams, S. A. S. (1970, January 1). Bias, Race. SpringerLink. Retrieved January 27, 2023, from https://link.springer.com/referenceworkentry/10.1007/978-0-387-79061-9_329
  6. Reiners, B. (2022, September 29). What Is Gender Bias in the Workplace? Built In. Retrieved January 27, 2023, from https://builtin.com/diversity-inclusion/gender-bias-in-the-workplace
  7. Mohanakrishnan, R. (2021, September 2). Top 11 Facial Recognition Software in 2021. Spiceworks. Retrieved January 27, 2023, from https://www.spiceworks.com/it-security/identity-access-management/articles/facial-recognition-software/
  8. Frankle, J., & Garvie, C. (2016, April 7). Facial-Recognition Software Might Have a Racial Bias Problem. The Atlantic. Retrieved January 27, 2023, from https://www.theatlantic.com/technology/archive/2016/04/the-underlying-bias-of-facial-recognition-systems/476991/
  9. What is speech recognition? IBM. (n.d.). Retrieved January 27, 2023, from https://www.ibm.com/topics/speech-recognition
  10. Lloreda, C. L. (2020, July 5). Speech Recognition Tech Is Yet Another Example of Bias. Scientific American. Retrieved January 27, 2023, from https://www.scientificamerican.com/article/speech-recognition-tech-is-yet-another-example-of-bias/