Gender Bias in the Online Job Search

Figure: There are many stages to the hiring process, and algorithms are involved in almost all of them. [1]

The advancement of technology and algorithms has led companies to recruit and hire potential employees using big data and artificial intelligence, which can amplify gender bias in the online job search [2]. Gender bias refers to the “unfair difference” in the way that men and women are treated [3]. In the context of the online job search, gender bias is defined as the advantage male job seekers have over female job seekers. The use of algorithms in the online job search can perpetuate existing biases, raising ethical concerns about job opportunities being blocked from women and about female job seekers being disadvantaged by systemic gender roles.

Artificial Intelligence in the Recruitment Process

Artificial intelligence is being utilized in recruiting algorithms: the AI learns how to recruit most effectively for the company using it [4]. The introduction of AI into the recruiting process makes candidates 14% more likely to pass interviews and receive offers, 18% more likely to accept the job, and 12% less likely to inform recruiters of a competing job offer during the negotiation process [5]. This technology is mainly used in four general types of recruitment activities: outreach, screening, assessment, and coordination [6].

Outreach

Artificial intelligence helps companies find the right applicants. These potential applicants consist of active candidates (those who are deliberately searching for a new job) and passive candidates (those who are not actively searching but would show interest in the right opportunity). The AI uses data from websites like LinkedIn, Facebook, and Twitter to match candidates to jobs. After enough training, the AI will have learned the most efficient way to word and present jobs to potential candidates. [6]
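
A rough sense of how this matching step can work is given by the sketch below, which ranks candidates by keyword overlap (Jaccard similarity) with a posting. Production sourcing tools use far richer signals (profiles, engagement history, learned models); all names and keyword sets here are hypothetical.

```python
# Minimal sketch of candidate-job matching: rank candidates by how well
# their profile keywords overlap with a job posting. All names and data
# below are hypothetical.

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two keyword sets."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

job_keywords = {"python", "sql", "data", "analytics"}

candidates = {
    "active_candidate":  {"python", "sql", "excel"},
    "passive_candidate": {"python", "sql", "data", "analytics", "ml"},
}

# Rank active and passive candidates alike by similarity to the posting.
ranked = sorted(candidates, key=lambda c: jaccard(job_keywords, candidates[c]),
                reverse=True)
for name in ranked:
    print(name, round(jaccard(job_keywords, candidates[name]), 2))
```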

Screening

AI is also used for the resume screening step of recruitment, where it can help companies reduce their time-to-hire. Hilton Hotels & Resorts implemented an AI screening tool and saw an 88% decline in time-to-hire, from 42 days to 5 days. These AI screening tools can be more effective than humans because they can infer key terms from natural language. For example, instead of searching for the keyword ‘persistence’, the AI can infer this trait from other phrases or wording. [6]
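
The keyword-inference idea can be sketched as follows, assuming a hand-written table mapping indicator phrases to traits; real screening tools would use trained language models rather than a lookup like this, and the phrases below are invented.

```python
# Sketch of keyword inference in resume screening: instead of exact matching
# on "persistence", the screener maps related phrases to the same trait.
# The phrase-to-trait table is hypothetical.

TRAIT_PHRASES = {
    "persistence": ["persevered", "did not give up", "overcame setbacks",
                    "kept iterating until it worked"],
}

def infer_traits(resume_text: str) -> set:
    """Return traits whose indicator phrases appear in the resume."""
    text = resume_text.lower()
    found = set()
    for trait, phrases in TRAIT_PHRASES.items():
        if trait in text or any(p in text for p in phrases):
            found.add(trait)
    return found

resume = "Overcame setbacks on a failing migration and kept iterating until it worked."
print(infer_traits(resume))  # {'persistence'} despite the keyword never appearing
```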

Assessment

AI assessments are used to narrow a candidate pool after the resume screening step. AI can be used in varying types of assessments, ranging from realistic chatbot conversations in situational judgement tests to decisions based on an applicant’s responses to test questions [7]. These assessments can serve as an initial interview or informally test candidates for desired traits like an individual’s risk propensity. [6]
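
A minimal sketch of how such responses might be scored for a trait like risk propensity, assuming a hand-made rubric; real assessments are psychometrically designed and validated, and the questions, options, and weights below are invented.

```python
# Sketch of automated assessment scoring: an applicant's answers to
# situational-judgement questions are scored against a rubric for a target
# trait (here, risk propensity). Questions, options, and weights are
# hypothetical.

RISK_RUBRIC = {
    "q1": {"launch now": 1.0, "run one more test": 0.5, "delay a quarter": 0.0},
    "q2": {"bet on the new vendor": 1.0, "split the contract": 0.5,
           "keep the incumbent": 0.0},
}

def risk_propensity(answers: dict) -> float:
    """Average rubric score across answered questions (0 = risk-averse, 1 = risk-seeking)."""
    scores = [RISK_RUBRIC[q][a] for q, a in answers.items() if q in RISK_RUBRIC]
    return sum(scores) / len(scores) if scores else 0.0

print(risk_propensity({"q1": "launch now", "q2": "split the contract"}))  # 0.75
```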

Coordination

It is beneficial for companies to make their hiring process as positive an experience as possible: a candidate might not be the right fit today but could be perfect in a year. AI can facilitate this by creating a more seamless, digital experience. Companies can use chatbots to update candidates on where they are in the process, fill in information gaps like a candidate’s potential start date, and answer candidate questions. [6] Additionally, a company’s openness about its use of AI in the recruitment process has been found to increase both the likelihood of a job application and the chance of a positive experience with the process [8].

Evidence of Bias

Figure: Process of Facebook's lookalike audience tool. [9]

Job Recruitment Algorithms

Job recruitment algorithms have been found to reinforce and perpetuate unconscious human gender bias [10]. Because job recruitment algorithms are trained on real-world data, and the real world is biased, the algorithms amplify this bias on a larger scale. Decision-making algorithms are “designed to mimic how a human would…choose a potential employee”, and without careful consideration they can intensify bias in recruiting [11].

Utilizing real-world data to shape algorithms leads to biased outcomes. Algorithms make predictions by analyzing past data [2]; if the past data includes biased judgments, then the algorithm’s predictions will also be biased. Facebook’s “lookalike audience” tool allows advertisers (in this case, employers) to input a “source audience” that dictates whom Facebook advertises jobs to, based on a person’s similarities with this “source audience” [2]. The tool is meant to help employers predict which users are most likely to apply for jobs. If an employer provides the lookalike tool with a dataset that includes few women, then Facebook will not advertise the job to women. Employers could use this tool to deliberately exclude certain groups, but they could also be unaware of the bias in their “source audience” [2].
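
Facebook has not published how the lookalike tool works, but a minimal nearest-to-centroid sketch shows how the skew can propagate: gender never enters the model, yet a male-skewed source audience produces a male-skewed target audience. All users, features, and numbers below are synthetic.

```python
import math

# (user id, gender, feature vector) -- features might stand for interests or
# pages followed; gender is listed only to reveal the outcome, the model
# never sees it.
users = [
    ("u1", "M", [0.90, 0.10]),
    ("u2", "M", [0.80, 0.20]),
    ("u3", "F", [0.20, 0.90]),
    ("u4", "F", [0.30, 0.80]),
    ("u5", "F", [0.70, 0.30]),  # her features happen to resemble the source
]

# The advertiser's (male-skewed) "source audience".
source = [[0.90, 0.10], [0.85, 0.15], [0.80, 0.25]]
center = [sum(dim) / len(source) for dim in zip(*source)]

# Target the three users closest to the source-audience centroid.
lookalike = sorted(users, key=lambda u: math.dist(u[2], center))[:3]
print([(uid, gender) for uid, gender, _ in lookalike])
# [('u2', 'M'), ('u1', 'M'), ('u5', 'F')] -- the source skew propagates
```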

Job Recommendation Algorithms

Job recommendation algorithms within online platforms are built to find and reproduce patterns in user behavior, updating predictions or decisions as job seekers and employers interact [12]. If the system recognizes that the employer interacts mostly with men, the algorithm will look for those characteristics in potential job applicants and replicate the pattern [12]. The algorithm can pick up this pattern without any specific instruction from the employer, which lets the bias go unnoticed.
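
This feedback loop can be made concrete with a toy online-learning sketch: the recommender reinforces features of candidates the employer clicked and suppresses features of candidates the employer skipped, so a historical preference hardens into the ranking. The features, weights, and update rule are all invented for illustration.

```python
# Toy recommender feedback loop: weights drift toward the employer's past
# clicks, so the learned ranking replicates the click pattern with no
# explicit instruction. All features and numbers are synthetic.

weights = {"years_exp": 0.5, "career_gap": 0.0}

def score(candidate: dict) -> float:
    return sum(weights[f] * candidate[f] for f in weights)

def update(clicked: dict, skipped: dict, lr: float = 0.1) -> None:
    # Naive online update: reinforce clicked features, suppress skipped ones.
    for f in weights:
        weights[f] += lr * (clicked[f] - skipped[f])

# Historical interactions: the employer clicked candidates without career
# gaps and skipped comparable candidates with gaps.
update(clicked={"years_exp": 8, "career_gap": 0},
       skipped={"years_exp": 6, "career_gap": 1})
update(clicked={"years_exp": 7, "career_gap": 0},
       skipped={"years_exp": 7, "career_gap": 1})

a = {"years_exp": 6, "career_gap": 1}   # took time off for caregiving
b = {"years_exp": 6, "career_gap": 0}
print(score(a), score(b))  # b now outranks a purely from the learned pattern
```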

Another problem with job recommendation algorithms is the “lack of publicly available information.” [13] While companies may say that they are working to account for biases in their algorithms, the general public does not actually know how these problems are being tackled. This information is kept private because of the sensitivity of the employee data used to train the algorithms, so what can be learned about industry practices is limited. [13]

Algorithms Extending Human Bias

Personal, human bias extends into algorithmic bias. A study conducted at the University of California, Santa Barbara found that people’s own underlying biases were bigger determinants of their likelihood to apply to jobs than any gendered wording in a posting [14]. Underlying human biases need to be reduced to work toward gender neutrality in the job market [14]. Humans choose the data that algorithms are trained on, and the "choice to use certain data inputs over others can lead to discriminatory outcomes" [15]. Hiring algorithms can be an extension of "our opinions embedded in code", and further research highlights that algorithms reproduce existing societal, human bias [15]. The people constructing hiring algorithms work in the tech industry, which is not very diverse; this leads to algorithms trained on non-diverse data, extending human gender bias into the online job market [15]. While creating algorithms, "biases creep in because human bias [influences] the algorithm" [16]. Humans build biased algorithms, so it is up to humans to notice the biases and fix them [16].

Ethical Implications

Algorithms Blocking Opportunities

Figure: Amazon's hiring algorithm was modeled after real-world data, where there are more men in the tech field; the accompanying figure showed this difference in the gender of employees. [17]

Algorithms in the online job search do not outright reject job seekers. Instead, they block certain groups of job seekers from seeing opportunities they are qualified for; as Pauline Kim, a legal scholar, stated, “not informing people of a job opportunity is a highly effective barrier” to job seekers [12]. Qualified candidates cannot apply for a job if they have not been shown the opportunity.

Amazon’s algorithmic recruiting tool was trained on 10 years’ worth of resumes submitted to Amazon; because technology is a male-dominated field, most of those resumes came from male applicants, leading the algorithm to downgrade women’s resumes [17]. This training taught the algorithm that men were preferred, so it penalized candidates who included the word “women’s” in their resumes. For example, if a candidate listed an activity as “women’s team captain,” their resume would be downgraded in the system [17]. Amazon has since scrapped this recruiting algorithm [18].
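
Amazon’s actual model was never published, but the mechanism can be sketched with a tiny term-scoring example: when historical hires are overwhelmingly male, a term that appears mostly on unsuccessful resumes, such as “women’s”, ends up with a low score. The corpus below is synthetic.

```python
# Sketch of how imbalanced training data can penalize a token like
# "women's": score each resume term by how often it appeared on resumes of
# historical hires. The tiny corpus is synthetic; Amazon's real model is
# not public.

from collections import Counter

# (resume tokens, hired?) -- history dominated by male applicants.
history = [
    ({"java", "chess club captain"}, True),
    ({"java", "robotics"}, True),
    ({"python", "chess club captain"}, True),
    ({"python", "women's chess club captain"}, False),  # far fewer women hired
]

seen, hired = Counter(), Counter()
for tokens, was_hired in history:
    for t in tokens:
        seen[t] += 1
        hired[t] += was_hired

def term_score(t: str) -> float:
    """Fraction of resumes containing t that led to a hire."""
    return hired[t] / seen[t] if seen[t] else 0.5

def resume_score(tokens: set) -> float:
    return sum(term_score(t) for t in tokens) / len(tokens)

print(resume_score({"python", "chess club captain"}))          # 0.75
print(resume_score({"python", "women's chess club captain"}))  # 0.25
```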

A test was also conducted on an ad-serving algorithm that displayed ads for STEM-related jobs. The algorithm was supposedly designed to display these ads to men and women equally and was tested in 191 countries. The results showed that the ads were displayed to around 20% more men than women. One explanation for the algorithm showing the ads to more men comes from economics: online advertisers constantly compete for users’ attention, and a study has shown that, on average, it is more expensive to get the attention of women than men when advertising online. [19]
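
That economic explanation can be illustrated with a toy cost-minimizing ad buyer: with a fixed budget and women’s impressions priced higher on average, maximizing total impressions buys more male views, even though the objective never mentions gender. The prices and budget below are invented.

```python
# Sketch of gender-neutral, cost-minimizing ad delivery skewing male:
# a greedy buyer spends a fixed budget on the cheapest impressions first.
# Prices and budget are hypothetical.

budget = 10.0
cost_per_impression = {"M": 0.05, "F": 0.07}  # illustrative prices

inventory = [("M", cost_per_impression["M"])] * 120 + \
            [("F", cost_per_impression["F"])] * 120

impressions = {"M": 0, "F": 0}
for gender, price in sorted(inventory, key=lambda x: x[1]):
    if budget >= price:
        budget -= price
        impressions[gender] += 1

print(impressions)  # {'M': 120, 'F': 57} -- men see far more of the STEM ads
```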

Traditional Gender Roles Affect Outcomes

Research has found that a woman can have a better keyword match on her resume yet not be selected for a job if a man has more years of experience [11]. Hiring algorithms built and trained by humans do not take into account the time women must take off work to have or care for children [11]. The author of a study done by the University of Melbourne recounts that “women have less experience because they take time [off work] for caregiving, and that algorithm is going to bump men up and women down based on experience” [10]. Because women are more likely to experience a career disruption due to children, the algorithm will view them as weaker candidates, even when they have more relevant experience than a male candidate [11]. By failing to account for gender roles, including women taking time off to give birth, hiring algorithms replicate and reinforce gender bias in the online hiring process [11].
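
A small sketch of this experience-ranking problem, using invented candidates: a scorer that counts raw employed years bumps down the candidate who took a caregiving break, even though her role-relevant experience is stronger.

```python
# Sketch of experience-based ranking penalizing caregiving breaks.
# Candidates and numbers are hypothetical.

candidates = [
    # (name, total employed years, role-relevant years, career-break years)
    ("candidate_a", 10, 5, 0),
    ("candidate_b", 8, 7, 2),   # stronger relevant experience, 2 years off
]

def naive_score(c):
    _, employed, _, _ = c
    return employed                 # what the biased ranker measures

def adjusted_score(c):
    _, _, relevant, _ = c
    return relevant                 # what arguably should be measured

print(sorted(candidates, key=naive_score, reverse=True)[0][0])     # candidate_a
print(sorted(candidates, key=adjusted_score, reverse=True)[0][0])  # candidate_b
```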

Reducing Bias

Figure: Basic graphic of how researchers observed and evaluated gender bias in word embeddings. [20]

Rethinking How Algorithms are Built

Vendors that build recruitment algorithms to target specific job seekers need to think beyond minimum compliance requirements; they have to consider whether the algorithm they are building leads to fairer hiring outcomes [12]. Additionally, those claiming that their algorithms will reduce bias in the hiring process have to build and test them with that goal in mind, or the technology will continue to undermine the online job search [12]. Rethinking algorithms and how they are built will begin to reduce bias in the job search, as many factors need to be considered.

One specific source of gender bias can be found in word embeddings. Word embeddings are pre-trained models that map words or phrases to numerical vector representations of their meaning; these models have been found to reflect societal bias [21]. For example, words like "receptionist" and "nurse" are linked to "women" and "she", whereas words like "doctor" and "computer scientist" are linked to "men" and "he". In addition, there has been a “prevailing idealized concept of femininity” in the past. [22] The term “female” appears more than twice as often as the term “male”, which suggests that males are the default assumption in most contexts: gender tends to be marked explicitly only when it departs from that default. [22] Researchers have explored potential solutions to de-bias word embeddings, such as building a genderless framework and teaching the algorithm gender-neutral word embeddings [20]. Such methods aim to minimize the difference between gendered words (i.e. male versus female) and maximize "the difference between the gender direction and other neutral dimensions” [20][23]. This allows algorithms to use or ignore the gender dimension.
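
The “gender direction” idea from Bolukbasi et al. [23] can be sketched in a few lines: estimate the direction from a definitional pair (he/she) and remove its component from words meant to be gender-neutral. The two-dimensional toy vectors below are made up; real embeddings have hundreds of dimensions and use many definitional pairs.

```python
# Sketch of hard de-biasing for word embeddings: compute a gender direction
# and project it out of words that should be gender-neutral. Toy vectors.

import numpy as np

emb = {
    "he":           np.array([ 1.0, 0.1]),
    "she":          np.array([-1.0, 0.1]),
    "receptionist": np.array([-0.6, 0.7]),  # leans toward "she"
    "doctor":       np.array([ 0.5, 0.8]),  # leans toward "he"
}

# Gender direction: the normalized difference of a definitional pair.
g = emb["he"] - emb["she"]
g = g / np.linalg.norm(g)

def neutralize(v: np.ndarray) -> np.ndarray:
    """Remove the component of v along the gender direction."""
    return v - np.dot(v, g) * g

for w in ("receptionist", "doctor"):
    before = np.dot(emb[w], g)
    after = np.dot(neutralize(emb[w]), g)
    print(w, round(before, 2), "->", round(after, 2))  # gender component -> 0
```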

Balancing Humans and Algorithms

Implementing a balance between predictive algorithms and human insight is a promising approach for employers who want to use algorithms in their hiring process while reducing bias [15]. Artificial intelligence and algorithms work well for parsing large volumes of data and applicants. Balancing that processing with the "human ability to recognize more intangible realities of what that data might mean" is the second step in limiting algorithmic bias [15]. For a partnership between humans and algorithms to succeed within companies, new practices need to be implemented consciously and deliberately [15]. Both algorithms and humans still need to be held accountable for reducing bias, and working together offers a promising short-term remedy for gender bias in the online job search [15].

Unintended Consequences of Bias Reduction

It has been suggested that algorithms de-biased with respect to gender can still produce the same biased outcomes. Algorithms may use online proxies in their scoring process to produce discriminatory results, as these proxies serve as stand-ins for protected attributes like gender [24][25]. For example, even if an algorithm is de-biased with respect to gender specifically, it may use stand-in factors such as height or weight as a proxy for a candidate's gender [24]. This leaves room for the algorithm to produce biased results based on a candidate's gender.

Although there is a statistical process known to eliminate proxy discrimination, it requires the algorithmic model's training data to include information on "legally prohibited characteristics” [26]. Even if such legally prohibited information is obtained, characteristics would then be measured on their predictive power for the target variable, which could unintentionally amplify the initial proxy discrimination [26]. For example, if height is measured as a highly predictive characteristic and height serves as a proxy for gender, then the algorithm would deliberately discriminate on the basis of height, thereby unintentionally discriminating by gender.
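
A minimal sketch of proxy discrimination, using invented data: gender is never given to the model, but a correlated feature (height) remains, so a simple threshold fit to biased historical hiring decisions still sorts candidates along gender lines.

```python
# Sketch of proxy discrimination: the model never sees gender, yet a
# correlated proxy (height) carries the signal. Data is synthetic.

# (height_cm, hired?) -- historical outcomes that favored taller (mostly
# male) candidates.
history = [(185, 1), (180, 1), (178, 1), (165, 0), (160, 0), (158, 0)]

# Fit a one-feature threshold "model": hire if height exceeds the midpoint
# between the average hired and average rejected heights.
hired = [h for h, y in history if y]
rejected = [h for h, y in history if not y]
threshold = (sum(hired) / len(hired) + sum(rejected) / len(rejected)) / 2

def predict(height: float) -> bool:
    return height > threshold

print(threshold)     # 171.0 cm
print(predict(183))  # True  -- a typical male height passes
print(predict(162))  # False -- a typical female height fails
```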

References

  1. Bogen, M., & Rieke, A. (2018). Help Wanted: An Examination of Hiring Algorithms, Equity, and Bias. Upturn. https://www.upturn.org/static/reports/2018/hiring-algorithms/files/Upturn%20--%20Help%20Wanted%20-%20An%20Exploration%20of%20Hiring%20Algorithms,%20Equity%20and%20Bias.pdf
  2. Kim, P. T. (2019). Big data and artificial intelligence: New challenges for workplace equality. University of Louisville Law Review, 57(2), 313-328.
  3. Cambridge Dictionary. (n.d.). Gender bias. In Cambridge English Dictionary. https://dictionary.cambridge.org/us/dictionary/english/gender-bias
  4. Ideal. (2021, February 26). AI for recruiting: A definitive guide for HR professionals. https://ideal.com/ai-recruiting/#:~:text=AI%20for%20recruiting%20is%20the,repetitive%2C%20high%2Dvolume%20tasks.
  5. Cowgill, B. (2018). Bias and productivity in humans and algorithms: Theory and evidence from resume screening. Columbia Business School, Columbia University. http://conference.iza.org/conference_files/MacroEcon_2017/cowgill_b8981.pdf
  6. Black, J., & Esch, P. (2019, December 31). AI-enabled recruiting: What is it and how should a manager use it? https://www.sciencedirect.com/science/article/pii/S0007681319301612
  7. Aon. (n.d.). Artificial intelligence (AI) in assessment. https://assessment.aon.com/en-us/online-assessment/ai-in-assessment
  8. Esch, P., Black, J., & Ferolie, J. (2018, September 17). Marketing AI recruitment: The next phase in job application and selection. https://www.sciencedirect.com/science/article/pii/S0747563218304497#sec5
  9. Facebook. (n.d.). https://www.facebook.com/
  10. Hanrahan, C. (2020, December 2). Job recruitment algorithms can amplify unconscious bias favouring men, new research finds. ABC News. https://www.abc.net.au/news/2020-12-02/job-recruitment-algorithms-can-have-bias-against-women/12938870
  11. Cheong, M., et al. (n.d.). Ethical Implications of AI Bias as a Result of Workforce Gender Imbalance. The University of Melbourne. https://about.unimelb.edu.au/__data/assets/pdf_file/0024/186252/NEW-RESEARCH-REPORT-Ethical-Implications-of-AI-Bias-as-a-Result-of-Workforce-Gender-Imbalance-UniMelb,-UniBank.pdf
  12. Bogen, M. (2019, May 6). All the Ways Hiring Algorithms Can Introduce Bias. Harvard Business Review. https://hbr.org/2019/05/all-the-ways-hiring-algorithms-can-introduce-bias
  13. Raghavan, M., Barocas, S., Kleinberg, J., & Levy, K. (2019, December 6). Mitigating Bias in Algorithmic Hiring: Evaluating Claims and Practices. Retrieved 9 April 2021.
  14. Tang, S., et al. (2017). Gender Bias in the Job Market: A Longitudinal Analysis. Proceedings of the ACM on Human-Computer Interaction. https://dl.acm.org/doi/epdf/10.1145/3134734
  15. Raub, M. (2018). Bots, Bias and Big Data: Artificial Intelligence, Algorithmic Bias and Disparate Impact Liability in Hiring Practices. Arkansas Law Review, 71(2), 529-570.
  16. Rosenbaum, E. (2018, May 30). Silicon Valley is stumped: A.I. cannot always remove bias from hiring. CNBC. https://www.cnbc.com/2018/05/30/silicon-valley-is-stumped-even-a-i-cannot-remove-bias-from-hiring.html
  17. Dastin, J. (2018, October 10). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G
  18. Amazon scrapped 'sexist AI' tool. (2018, October 10). BBC News. https://www.bbc.com/news/technology-45809919
  19. Lambrecht, A., & Tucker, C. (2019). Algorithmic Bias? An Empirical Study of Apparent Gender-Based Discrimination in the Display of STEM Career Ads. Retrieved 9 April 2021.
  20. Sun, T., et al. (2019, June 21). Mitigating Gender Bias in Natural Language Processing: Literature Review. arXiv. https://arxiv.org/pdf/1906.08976.pdf
  21. GeeksforGeeks. (2020, October 14). Word Embeddings in NLP. https://www.geeksforgeeks.org/word-embeddings-in-nlp/
  22. Leavy, S., Meaney, G., Wade, K., & Greene, D. (2020, July 12). Mitigating Gender Bias in Machine Learning Data Sets. Retrieved 9 April 2021.
  23. Bolukbasi, T., et al. (2016). Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings. Advances in Neural Information Processing Systems 29 (NIPS 2016).
  24. Larson, J., Mattu, S., & Angwin, J. (2015, August 31). Unintended Consequences of Geographic Targeting. Technology Science. https://techscience.org/a/2015090103/
  25. Zarsky, T. Z. (2014). Understanding Discrimination in the Scored Society. Washington Law Review, 89(4), 1375-1412. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2550248
  26. Prince, A. E. R., & Schwarcz, D. (2020). Proxy Discrimination in the Age of Artificial Intelligence and Big Data. Iowa Law Review, 105(3). https://ilr.law.uiowa.edu/print/volume-105-issue-3/proxy-discrimination-in-the-age-of-artificial-intelligence-and-big-data/