Gender Bias in the Online Job Search

Figure: There are many stages to the hiring process, and algorithms are involved in almost all of them [1].

The advancement of technology has led companies to recruit and hire potential employees using big data and artificial intelligence, which can amplify gender bias in the online job search [2]. Gender bias refers to the “unfair difference” in the way men and women are treated [3]; in the context of the online job search, it is defined as the advantage male job seekers have over female job seekers. The use of algorithms in the online job search can perpetuate existing biases, raising ethical concerns: qualified women may be blocked from seeing job opportunities, and female job seekers may be disadvantaged by systemic gender roles.

Evidence of Bias

Figure: Process of Facebook's lookalike audience tool [4].

Job Recruitment Algorithms

Job recruitment algorithms have been found to reinforce and perpetuate unconscious human gender bias [5]. Because these algorithms are trained on real-world data, and the real world is biased, they reproduce that bias at scale. Decision-making algorithms are “designed to mimic how a human would…choose a potential employee,” and without careful consideration they can intensify bias in recruiting [6].

Training algorithms on real-world data leads them to produce biased outcomes, because algorithms make predictions by analyzing past data [2]. If the past data encodes biased judgments, the algorithm’s predictions will be biased as well. Facebook’s “lookalike audience” tool allows advertisers, in this case employers, to supply a “source audience” that dictates whom Facebook advertises jobs to, based on each user’s similarity to that source audience [2]. The tool is meant to help employers reach the users most likely to apply for their jobs. But if an employer supplies a source audience that contains few women, Facebook will show the job ad to few women. An employer might use the tool to deliberately exclude certain groups, or might simply be unaware that its source audience is skewed [2].
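
To make the mechanism concrete, here is a minimal sketch of similarity-based audience expansion in Python. It illustrates the general technique only; Facebook's real implementation is not public, and the `lookalike_audience` function and all data below are hypothetical. If the source audience skews heavily male, the users ranked as most similar to it will tend to skew male too.

```python
# Minimal sketch of "lookalike" audience expansion: rank candidate users by
# cosine similarity to the centroid of an advertiser-supplied source audience.
# Illustration of the general technique only -- not Facebook's actual
# implementation, and all data here is made up.
import numpy as np

def lookalike_audience(source_features, candidate_features, k):
    """Return indices of the k candidates most similar to the source audience."""
    centroid = source_features.mean(axis=0)
    centroid /= np.linalg.norm(centroid)
    norms = np.linalg.norm(candidate_features, axis=1)
    scores = candidate_features @ centroid / norms  # cosine similarity per user
    return np.argsort(scores)[::-1][:k]

rng = np.random.default_rng(0)
source = rng.normal(loc=1.0, size=(100, 8))       # skewed "source audience"
candidates = rng.normal(loc=0.0, size=(1000, 8))  # full user pool
print(lookalike_audience(source, candidates, k=50))
```

Whatever demographic pattern exists in the source rows is inherited by the top-k selection, since similarity to the centroid is the only criterion.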

Job Recommendation Algorithms

Job recommendation algorithms within online platforms are built to find and reproduce patterns in user behavior, updating their predictions as job seekers and employers interact [7]. If the platform observes that an employer interacts mostly with men, the algorithm will look for those characteristics in potential applicants and replicate the pattern [7]. Because this happens without any explicit instruction from the employer, the bias can go unnoticed.
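
A rough simulation shows how this happens with no explicit instruction. In the Python sketch below, built on an invented data-generating process, a click-prediction model is fit to employer interactions that favored men at equal skill; the model assigns the gender feature a large positive weight and would rank men higher going forward.

```python
# Simulated illustration of a recommender absorbing employer bias from clicks.
# The data-generating process is invented: employers click male candidates
# more often at equal skill, and the model learns that pattern.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
gender = rng.integers(0, 2, n)        # 0 = female, 1 = male (hypothetical)
skill = rng.normal(size=n)            # a genuinely job-relevant signal
clicked = (skill + 1.5 * gender + rng.normal(size=n) > 1.0).astype(int)

model = LogisticRegression().fit(np.column_stack([skill, gender]), clicked)
print(model.coef_)  # large positive weight on the gender column: the system
                    # would now rank men higher, replicating the clicks it saw
```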

Algorithms Extending Human Bias

Personal, human bias extends into algorithmic bias. A study conducted at the University of California, Santa Barbara found that people’s own underlying biases were bigger determinants of their likelihood to apply to jobs than any gendered wording in the postings themselves [8]; these underlying biases must be reduced in order to move toward gender neutrality in the job market [8]. Humans choose the data used to train algorithms, and the "choice to use certain data inputs over others can lead to discriminatory outcomes" [9]. Hiring algorithms can be an extension of "our opinions embedded in code," and further research highlights that algorithms reproduce existing societal bias [9]. The people constructing hiring algorithms work in the tech industry, which is not very diverse; the algorithms are therefore trained on non-diverse data, extending human gender bias into the online job market [9]. While algorithms are being created, "biases creep in because human bias [influences] the algorithm" [10]. Since humans build the biased algorithms, it is up to humans to notice the biases and fix them [10].

Ethical Implications

Algorithms Blocking Opportunities

Figure: Amazon's hiring algorithm was modeled on real-world data from the tech field, where men outnumber women among employees [11].

Algorithms in the online job search do not outright reject job seekers. Instead, they block certain groups of job seekers from seeing opportunities they are qualified for; as legal scholar Pauline Kim stated, “not informing people of a job opportunity is a highly effective barrier” to job seekers [7]. Qualified candidates cannot apply for a job if they have not been shown the opportunity.

Amazon’s algorithmic recruiting tool was trained on ten years’ worth of resumes submitted to Amazon; because technology is a male-dominated field, most of those resumes came from male applicants, and the algorithm learned to downgrade women [11]. This training taught the algorithm that men were preferred, so it penalized candidates whose resumes included the word “women’s”. For example, a candidate who listed an activity as “women’s team captain” would have their resume downgraded in the system [11]. Amazon has since scrapped the recruiting algorithm [12].
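
Amazon's actual model was never published, but the failure mode can be sketched schematically: train a text classifier on historical hire/reject labels from a male-dominated pool, and tokens that co-occur with rejections, such as "women's", acquire negative weight. The toy resumes and labels below are invented.

```python
# Schematic of a resume screener trained on historically skewed decisions.
# The four toy resumes and labels are invented; Amazon's real model and data
# were never published.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "software engineer python java",           # hired in the past
    "software engineer distributed systems",   # hired
    "software engineer women's coding club",   # rejected in the past
    "women's team captain software engineer",  # rejected
]
hired = [1, 1, 0, 0]

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

weights = dict(zip(vec.get_feature_names_out(), model.coef_[0]))
print(weights["women"])  # negative: the token alone drags a resume's score down
```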

Traditional Gender Roles Affect Outcomes

Research has found that a woman can have a better keyword match on her resume and still not be selected if a male candidate has more experience [6]. Hiring algorithms built and trained by humans do not take into account the time women must take off work to have or care for children [6]. The author of a study by the University of Melbourne recounts that “women have less experience because they take time [off work] for caregiving, and that algorithm is going to bump men up and women down based on experience” [5]. Because women are more likely to experience a career disruption due to children, the algorithm views them as weaker candidates, even when they have more relevant experience than a male candidate [6]. By failing to account for gender roles, including time taken off for childbirth, hiring algorithms replicate and reinforce gender bias in the online hiring process [6].
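
A minimal sketch with hypothetical candidates and weights shows the effect: any rule that scores raw years of experience pushes down a candidate whose tenure was interrupted by caregiving, even when she matches the job description better.

```python
# Hypothetical ranking rule that scores raw tenure. Candidate A has the better
# keyword match but took two years off for caregiving; the rule ranks B first.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    keyword_matches: int     # how well the resume matches the job description
    years_experience: float  # raw tenure; caregiving gaps simply subtract

def score(c: Candidate) -> float:
    return 0.3 * c.keyword_matches + 0.7 * c.years_experience

a = Candidate("A", keyword_matches=9, years_experience=6.0)
b = Candidate("B", keyword_matches=6, years_experience=8.0)
print(max([a, b], key=score).name)  # prints "B"
```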

Reducing Bias

Rethinking How Algorithms are Built

Vendors that build recruitment algorithms to target specific job seekers need to think beyond minimum compliance requirements and consider whether the algorithms they build actually lead to fairer hiring outcomes [7]. Likewise, those who claim their algorithms will reduce bias in the hiring process must build and test them with that goal in mind, or the technology will continue to undermine the online job search [7]. Rethinking algorithms and how they are built is a first step toward reducing bias in the job search, as many factors need to be considered.

One specific source of gender bias is word embeddings. Word embeddings are pre-trained models that map words or phrases to numerical vector representations of their meaning, and these representations have been found to reflect societal bias [13]. For example, words like "receptionist" and "nurse" sit close to "women" and "she," whereas words like "doctor" and "computer scientist" sit close to "men" and "he." Researchers have explored ways to de-bias word embeddings, such as building a genderless framework or training the model with gender-neutral word embeddings [14]. These methods aim to minimize the difference between gendered words (i.e., male versus female) while maximizing "the difference between the gender direction and other neutral dimensions” [14][15]. Isolating gender in a known direction lets downstream algorithms use or ignore that dimension as appropriate.
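
The quoted idea corresponds to the "neutralize" step described by Bolukbasi et al. [15]: estimate a gender direction from definitional pairs such as "he"/"she", then remove each gender-neutral word's component along that direction. The sketch below uses toy three-dimensional vectors rather than real embeddings.

```python
# The "neutralize" step of hard de-biasing (after Bolukbasi et al. [15]):
# estimate a gender direction from a definitional pair, then remove each
# neutral word's component along it. Vectors are 3-D toy values, not real
# embeddings.
import numpy as np

def neutralize(w, g):
    """Remove the component of word vector w along gender direction g."""
    return w - np.dot(w, g) * g

v_he  = np.array([0.9, 0.1, 0.3])
v_she = np.array([0.1, 0.9, 0.3])
g = v_he - v_she
g /= np.linalg.norm(g)               # unit gender direction

v_nurse = np.array([0.2, 0.8, 0.5])  # leans toward "she" in this toy space
print(np.dot(neutralize(v_nurse, g), g))  # ~0: no gender component remains
```

After neutralization, an occupation word carries no signal along the gender direction, while its other dimensions, the ones encoding actual meaning, are left untouched.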

Balancing Humans and Algorithms

Implementing a balance between predictive algorithms and human insight is a promising approach for employers who want to use algorithms in their hiring process while reducing bias [9]. Artificial intelligence works well for the first step: parsing large pools of applicants. Pairing that processing power with the "human ability to recognize more intangible realities of what that data might mean" is the second step in limiting algorithmic bias [9]. For a partnership between humans and algorithms to succeed, companies need to implement new practices consciously and deliberately [9]. Both algorithms and humans must still be held accountable for reducing bias, and working together offers a reasonable short-term response to gender bias in the online job search [9].
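
One way to picture such a partnership, purely as a hypothetical sketch and not any company's actual process: the algorithm performs the high-volume ranking, and every shortlisted candidate still passes through human review before a decision is made.

```python
# Hypothetical human-in-the-loop screening pipeline: the algorithm handles the
# high-volume ranking, and a person reviews every shortlisted candidate before
# any decision. Names, fields, and thresholds are all invented.

def screen(pool, score, shortlist_size=5):
    """Algorithmic step: rank the full applicant pool, keep a shortlist."""
    return sorted(pool, key=score, reverse=True)[:shortlist_size]

def human_review(shortlist):
    """Human step: stand-in for recruiter judgment on context the model
    cannot see (career gaps, unusual paths, transferable skills)."""
    return [c for c in shortlist if c.get("recruiter_approved", True)]

pool = [{"name": f"cand{i}", "keyword_matches": i % 7} for i in range(100)]
final = human_review(screen(pool, score=lambda c: c["keyword_matches"]))
print([c["name"] for c in final])
```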

Unintended Consequences of Bias Reduction

It has been suggested that algorithms de-biased with respect to gender can still produce the same biased outcomes. An algorithm may use online proxies in its scoring process, variables that serve as stand-ins for protected attributes like gender, to produce discriminatory results [16][17]. For example, even if gender itself is removed, the algorithm may rely on factors such as height or weight that effectively reveal a candidate's gender [16]. This leaves room for the algorithm to produce biased results based on a candidate's gender.

Although there is a statistical process known to eliminate proxy discrimination, it requires the algorithmic model to include "training data information on legally prohibited characteristics” [18]. Even if such legally prohibited information is obtained, each characteristic would then be measured by its predictive power for the target variable, which can unintentionally amplify the initial proxy discrimination [18]. For example, if height is highly predictive of hiring outcomes and height acts as a proxy for gender, then the model will intentionally weight height, thereby unintentionally discriminating by gender.
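
The height example can be simulated directly. In the sketch below, fit on fabricated data, gender is excluded from the model's features, yet because height correlates with gender and the historical outcomes were biased, the model's predicted scores still differ by gender.

```python
# Simulated proxy discrimination: gender is excluded from the features, but
# height correlates with it, so a model fit to biased historical outcomes
# still scores men higher. All data is fabricated for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 5000
gender = rng.integers(0, 2, n)                # 0 = female, 1 = male
height = rng.normal(165 + 12 * gender, 6, n)  # height acts as a gender proxy
skill = rng.normal(size=n)
hired = (skill + 1.0 * gender + rng.normal(size=n) > 1.0).astype(int)  # biased history

X = np.column_stack([skill, height])          # note: no gender column
model = LogisticRegression(max_iter=1000).fit(X, hired)
proba = model.predict_proba(X)[:, 1]
print(proba[gender == 1].mean(), proba[gender == 0].mean())  # men still score higher
```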

References

  1. Bogen, M. & Rieke, A. (2018). Help Wanted: An Examination of Hiring Algorithms, Equity, and Bias. Upturn. https://www.upturn.org/static/reports/2018/hiring-algorithms/files/Upturn%20--%20Help%20Wanted%20-%20An%20Exploration%20of%20Hiring%20Algorithms,%20Equity%20and%20Bias.pdf
  2. Kim, P. T. (2019). Big data and artificial intelligence: New challenges for workplace equality. University of Louisville Law Review, 57(2), 313-328.
  3. Cambridge Dictionary. (n.d.). Gender bias. In Cambridge English Dictionary. https://dictionary.cambridge.org/us/dictionary/english/gender-bias
  4. facebook.com
  5. Hanrahan, C. (2020, December 2). Job recruitment algorithms can amplify unconscious bias favouring men, new research finds. ABC News. https://www.abc.net.au/news/2020-12-02/job-recruitment-algorithms-can-have-bias-against-women/12938870
  6. Cheong, M., et al. (n.d.). Ethical Implications of AI Bias as a Result of Workforce Gender Imbalance. The University of Melbourne. https://about.unimelb.edu.au/__data/assets/pdf_file/0024/186252/NEW-RESEARCH-REPORT-Ethical-Implications-of-AI-Bias-as-a-Result-of-Workforce-Gender-Imbalance-UniMelb,-UniBank.pdf
  7. Bogen, M. (2019, May 6). All the Ways Hiring Algorithms Can Introduce Bias. Harvard Business Review. https://hbr.org/2019/05/all-the-ways-hiring-algorithms-can-introduce-bias
  8. Tang, S., et al. (2017). Gender Bias in the Job Market: A Longitudinal Analysis. Proceedings of the ACM on Human-Computer Interaction. https://dl.acm.org/doi/epdf/10.1145/3134734
  9. Raub, M. (2018). Bots, Bias and Big Data: Artificial Intelligence, Algorithmic Bias and Disparate Impact Liability in Hiring Practices. Arkansas Law Review, 71(2), 529-570.
  10. Rosenbaum, E. (2018, May 30). Silicon Valley is stumped: A.I. cannot always remove bias from hiring. CNBC. https://www.cnbc.com/2018/05/30/silicon-valley-is-stumped-even-a-i-cannot-remove-bias-from-hiring.html
  11. Dastin, J. (2018, October 10). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G
  12. Amazon scrapped 'sexist AI' tool. (2018, October 10). BBC News. https://www.bbc.com/news/technology-45809919
  13. GeeksforGeeks. (2020, October 14). Word Embeddings in NLP. https://www.geeksforgeeks.org/word-embeddings-in-nlp/
  14. Sun, T., et al. (2019, June 21). Mitigating Gender Bias in Natural Language Processing: Literature Review. arXiv. https://arxiv.org/pdf/1906.08976.pdf
  15. Bolukbasi, T., et al. (2016). Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings. Advances in Neural Information Processing Systems 29 (NIPS 2016).
  16. Larson, J., Mattu, S., & Angwin, J. (2015, August 31). Unintended Consequences of Geographic Targeting. Technology Science. https://techscience.org/a/2015090103/
  17. Zarsky, T. Z. (2014). Understanding Discrimination in the Scored Society. Washington Law Review, 89(4), 1375-1412. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2550248
  18. Prince, A. E. R. & Schwarcz, D. (2020). Proxy Discrimination in the Age of Artificial Intelligence and Big Data. Iowa Law Review, 105(3). https://ilr.law.uiowa.edu/print/volume-105-issue-3/proxy-discrimination-in-the-age-of-artificial-intelligence-and-big-data/