Gender Bias in the Online Job Search
The advancement of technology and algorithms has led companies to recruit and hire potential employees using big data and artificial intelligence, which can amplify gender bias in the online job search. Gender bias refers to the “unfair difference” in the way men and women are treated. In the context of the online job search, gender bias is defined as the advantage male job seekers have over female job seekers. The use of algorithms in the online job search can perpetuate existing biases, raising ethical concerns: qualified women can be blocked from seeing job opportunities, and systemic gender roles put female job seekers at a disadvantage.
- 1 Artificial Intelligence in the Recruitment Process
- 2 Evidence of Bias
- 3 Ethical Implications
- 4 Algorithmic Discrimination Towards Minorities
- 5 Reducing Bias
- 6 References
Artificial Intelligence in the Recruitment Process
Companies utilize AI technology in their recruiting algorithms to recruit new employees more effectively. The introduction of AI into the recruiting process makes candidates 14% more likely to pass interviews and receive offers and 18% more likely to accept the job, but 12% less likely to inform recruiters of a competing job offer during the negotiation process. This technology is mainly used within four general types of recruitment activities: outreach, screening, assessment, and coordination.
Artificial intelligence helps companies find applicants that are the best fit. These potential applicants consist of active candidates (those who are deliberately searching for a new job) and passive candidates (those who are not actively searching but would show interest in the right opportunity). The AI uses data from websites like LinkedIn, Facebook, Twitter, etc. to match candidates to jobs. After enough training, the AI will have learned the most efficient way to word and present jobs to potential candidates. 
AI is also used in the resume screening process, where it can help companies reduce time-to-hire. Hilton Hotels & Resorts implemented an AI screening tool and saw an 88% decline in time-to-hire, from 42 days to 5 days. These AI screening tools can be more effective than humans because they can infer key traits from natural language. For example, instead of searching for the keyword ‘persistence’, the AI can infer this trait from other phrases or wording.
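The inference step described above can be illustrated with a minimal sketch. The trait name and the phrase list below are invented for illustration; real screening tools use trained language models rather than a hand-written lookup table.

```python
# Toy sketch of AI-style resume screening: instead of exact keyword matching,
# the screener maps related phrases to the trait it is actually looking for.
# The phrases and trait names are illustrative assumptions, not any vendor's
# real model.

TRAIT_PHRASES = {
    "persistence": [
        "persistence",
        "did not give up",
        "kept iterating",
        "followed up repeatedly",
        "saw the project through",
    ],
}

def infer_traits(resume_text: str) -> set[str]:
    """Return traits whose indicative phrases appear in the resume."""
    text = resume_text.lower()
    return {
        trait
        for trait, phrases in TRAIT_PHRASES.items()
        if any(phrase in text for phrase in phrases)
    }

resume = "Led a stalled migration and saw the project through to launch."
print(infer_traits(resume))  # {'persistence'}
```

The point of the sketch is that a resume never containing the literal word "persistence" can still be credited with the trait, which is what makes such tools more flexible than plain keyword search.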
AI assessments are used to narrow the candidate pool after the resume screening step. AI can be used in varying types of assessments, ranging from realistic chatbot conversations in situational judgement tests to decisions based on an applicant’s responses to test questions. These assessments can serve as an initial interview or informally test candidates for desired traits, such as an individual’s risk propensity.
It is beneficial for companies to make their hiring process as positive an experience as possible, because a candidate who is not the right fit today could be perfect in a year. AI can facilitate this by creating a more seamless, digital experience. Companies can use chatbots to update candidates on where they are in the process, fill in information gaps like the candidate’s potential start date, and answer candidate questions. Also, a company’s openness about its use of AI in the recruitment process increases the likelihood that candidates apply and have a positive experience with the process.
Evidence of Bias
Job Recruitment Algorithms
Job recruitment algorithms have been found to reinforce and perpetuate unconscious human gender bias. Because job recruitment algorithms are trained on real-world data – and the real world is biased – the algorithms amplify this bias on a larger scale. Decision-making algorithms are “designed to mimic how a human would…choose a potential employee” and, without careful consideration, algorithms can intensify bias in recruiting.
Utilizing real-world data to shape algorithms leads to algorithms producing biased outcomes. Algorithms make predictions by analyzing past data. If the past data includes biased judgments, then the algorithm’s predictions will also be biased. Facebook’s “lookalike audience” tool allows advertisers – in this case, employers – to input a “source audience” that dictates who Facebook advertises jobs to, based on a person’s similarity to this “source audience”. This tool is meant to help employers predict which users are most likely to apply for jobs. If an employer provides the lookalike tool with a dataset that includes few women, then Facebook will not advertise the job to women. Employers could use this tool to deliberately exclude certain groups, but employers could also be unaware of the bias of their “source audience”.
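The lookalike mechanism can be sketched in a few lines. The feature vectors below are invented (for example, interest signals plus a gender flag); the sketch only shows how a male-skewed source audience ranks men first, even when interests are identical.

```python
# Minimal sketch of a "lookalike audience": score each user by similarity to
# the average of the advertiser's source audience. All vectors are toy values
# invented for illustration, not real Facebook features.

def mean(vectors):
    # Column-wise average of the source audience's feature vectors.
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Source audience the employer uploads: mostly men (last feature = 1 for male).
source_audience = [[1, 1, 1], [1, 0, 1], [0, 1, 1]]
centroid = mean(source_audience)

candidates = {
    "man_a":   [1, 1, 1],
    "woman_a": [1, 1, 0],  # identical interests, different gender flag
    "woman_b": [1, 0, 0],
}

# Show the ad to the users most similar to the source audience.
ranked = sorted(candidates, key=lambda u: dot(candidates[u], centroid), reverse=True)
print(ranked)  # ['man_a', 'woman_a', 'woman_b']
```

Note that no one told the system to prefer men; the skew comes entirely from the composition of the uploaded source audience.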
Job Recommendation Algorithms
Job recommendation algorithms within online platforms are built to find and reproduce patterns in user behavior, updating predictions or decisions as job seekers and employers interact. If the system recognizes that an employer interacts mostly with men, the algorithm will look for those characteristics in potential job applicants and replicate the pattern. The algorithm can pick up this pattern without specific instruction from the employer, which lets biases go unnoticed.
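A minimal feedback-loop sketch shows how this happens without instruction. The features, click data, and learning rate are all invented; the point is only that a gender feature acquires weight when early employer clicks skew male.

```python
# Toy feedback loop: the recommender nudges feature weights toward candidates
# the employer clicks on. If early clicks skew male, that preference is
# learned without any explicit instruction. Data and rates are invented.

weights = [0.0, 0.0]   # [experience, is_male] -- gender should be irrelevant
LEARNING_RATE = 0.5

interactions = [  # (candidate features, did the employer click?)
    ([1.0, 1.0], True),
    ([1.0, 1.0], True),
    ([1.0, 0.0], False),   # equally experienced woman, skipped
]

for features, clicked in interactions:
    target = 1.0 if clicked else 0.0
    score = sum(w * f for w, f in zip(weights, features))
    error = target - score
    # Gradient-style update: features present in clicked profiles gain weight.
    weights = [w + LEARNING_RATE * error * f for w, f in zip(weights, features)]

print(weights)  # the gender feature has acquired positive weight
```

After these three interactions the model scores otherwise-identical male candidates higher, which is exactly the replicated pattern described above.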
Another problem with job recommendation algorithms is the “lack of publicly available information.” While companies may say they are working to account for biases in their algorithms, the general public does not actually know how these problems are being tackled. This information is kept private because of the sensitivity of the employee data used to train the algorithms, so what can be learned about industry practices is limited.
Algorithms Extending Human Bias
Personal, human bias extends into algorithmic bias. A study conducted at the University of California, Santa Barbara found that people’s own underlying biases were bigger determinants of their likelihood to apply to jobs than any gendered job posting. Underlying human biases need to be reduced to work towards gender neutrality in the job market. Humans choose the data to train algorithms with, and the "choice to use certain data inputs over others can lead to discriminatory outcomes". Hiring algorithms can be an extension of "our opinions embedded in code", and further research highlights that algorithms reproduce existing societal, human bias. The people constructing hiring algorithms work in the tech industry, which is not very diverse. This leads to algorithms trained on non-diverse data, which extends human gender bias into the online job market. While creating algorithms, "biases creep in because human bias [influences] the algorithm". Humans build biased algorithms, so it is up to humans to notice the biases and fix them.
Ethical Implications
Algorithms Blocking Opportunities
Algorithms in the online job search do not outright reject job seekers. Instead, they block certain groups of job seekers from seeing opportunities they are qualified for; as Pauline Kim, a legal scholar, stated, “not informing people of a job opportunity is a highly effective barrier” to job seekers. Qualified candidates cannot apply for a job if they have not been shown the opportunity.
Amazon’s algorithmic recruiting tool was trained with 10 years’ worth of resumes that were sent to Amazon; however, because technology is a male-dominated field, most of the resumes were from male applicants, leading the algorithm to downvote women. This method of training taught the algorithm that men were preferred, penalizing candidates who included the word “women’s” in their resumes. For example, if a candidate listed an activity as “women’s team captain,” their resume would be downgraded in the system. Amazon has since scrapped this recruiting algorithm.
A test was also conducted on an ad-serving algorithm that displayed ads for STEM-related jobs. The algorithm was supposedly designed to display these ads to men and women equally and was tested in 191 countries. The results showed that the ads were shown to around 20% more men than women. One explanation for this skew is economic: online advertisers constantly compete for users’ attention, and a study has shown that, on average, it is more expensive to get the attention of women than of men when advertising online.
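The economic explanation can be made concrete with a short sketch. The budget and per-impression prices below are invented; the sketch only shows that a gender-blind optimizer maximizing impressions per dollar still ends up skewing male when women's attention costs more.

```python
# Sketch of the economic explanation: an ad optimizer spends a fixed budget
# to maximize impressions. If women's attention costs more per impression,
# a gender-blind optimizer still skews male. Prices are invented.

BUDGET = 100.0
COST_PER_IMPRESSION = {"men": 0.04, "women": 0.05}  # women cost more to reach

def impressions_if_spent_on(group: str) -> float:
    return BUDGET / COST_PER_IMPRESSION[group]

# The optimizer simply picks the audience that yields the most impressions.
best_group = max(COST_PER_IMPRESSION, key=impressions_if_spent_on)
print(best_group, impressions_if_spent_on(best_group))  # men 2500.0
```

Nothing in the objective mentions gender; the disparity emerges from price differences alone, which is why this kind of skew can pass unnoticed in a system that looks neutral on paper.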
Traditional Gender Roles Affect Outcomes
Studies have found that a woman can have a better keyword match on her resume yet still not be selected for a job if a man has more experience. Hiring algorithms built and trained by humans do not take into account the time women must take off work to have or care for children. The author of a study from the University of Melbourne recounts that “women have less experience because they take time [off work] for caregiving, and that algorithm is going to bump men up and women down based on experience”. Because women are more likely to experience a career disruption due to children, an algorithm will view them as lesser candidates, even when they have more relevant experience than a male candidate. By ignoring gender roles, such as women taking time off to give birth, hiring algorithms replicate and reinforce gender bias in the online hiring process.
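The experience-versus-keywords trade-off described above can be sketched with a toy scoring function. The formula, weights, and candidate numbers are invented; the sketch only demonstrates how heavily weighting uninterrupted tenure bumps a man up over a woman with a better keyword match.

```python
# Sketch of an experience-weighted ranker penalizing career breaks. The
# scoring formula and candidate data are invented for illustration; the
# point is that a model rewarding raw years of experience downranks women
# who took caregiving leave, even with a stronger keyword match.

def score(keyword_matches: int, years_experience: float) -> float:
    # A model that weights raw years of experience heavily.
    return 1.0 * keyword_matches + 3.0 * years_experience

# Equally capable candidates; she took two years off for caregiving.
him = score(keyword_matches=6, years_experience=10.0)
her = score(keyword_matches=8, years_experience=8.0)   # better keyword match

print(him, her)  # 36.0 32.0 -- he outranks her despite fewer matches
```

The bias here is not in any single line of code but in the modeling choice to treat years of experience as a neutral signal when it is correlated with caregiving roles.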
Algorithmic Discrimination Towards Minorities
The financial success of digital advertising platforms is due in part to the precise targeting features they offer. Researchers and journalists have found that advertisers can target particular groups of users, and exclude others, from seeing their ads for a product, service, or job position. By comparison, little attention has been paid to the implications of the platform's ad delivery process: the platform's own choices about which users see which ads.
Developers intended targeted advertising to match advertisers with their customers efficiently. However, the targeted advertising infrastructure can be leveraged and abused by malicious advertisers to efficiently reach people susceptible to false stories, stoke grievances, and incite social conflict. Since non-targeted, non-vulnerable people never see the targeted ads, malicious ads are likely to go unreported and their effects undetected.
A team led by Muhammad Ali and Piotr Sapiezynski at Northeastern University ran a series of otherwise identical ads with slight variations in the available budget, headline, text, or image. They found that those subtle tweaks had significant impacts on the audience reached by each ad, most notably when the ads were for jobs or real estate. Postings for preschool teachers and secretaries, for example, were shown to a higher fraction of women. In contrast, algorithms showed postings for janitors and taxi drivers to a higher proportion of minorities. Ads about homes for sale were also shown to more white users, while advertisers showed ads for rentals to more minorities.
Bias is about more than just advertising job positions. In 2016, the rental platform Airbnb faced accusations that hosts on their site were discriminating by refusing reservations for black users. To address this, the company has said it will put new anti-discrimination clauses in place, change booking policies, and punish hosts who improperly reject potential guests. Ride-hailing companies have faced similar accusations of discrimination by those using their platforms. 
Algorithms carry human bias forward into technology, which disproportionately affects minorities and benefits those already in power. While the United States has laws that prevent employers from explicitly advertising job positions to a specific demographic, making such discrimination illegal, computers can discriminate via targeting without detection because of their "black box"-like infrastructure. As people become more educated about algorithms, awareness and regulation will follow.
Reducing Bias
Rethinking How Algorithms are Built
Vendors that build recruitment algorithms to target specific job seekers need to think beyond minimum compliance requirements; they have to consider whether the algorithms they build lead to fairer hiring outcomes. Additionally, those claiming their algorithms will reduce bias in the hiring process have to build and test them with that goal in mind, or the technology will continue to undermine the online job search. Rethinking algorithms and how they are built is a first step toward reducing bias in the job search, as many factors need to be considered.
One specific aspect of gender bias can be found in word embeddings. Word embeddings are pre-trained models that map words or phrases to numerical vector representations; these models have been found to reflect societal bias. For example, words like "receptionist" and "nurse" are linked to "women" and "she", whereas words like "doctor" and "computer scientist" are linked to "men" and "he". In addition, a “prevailing idealized concept of femininity” has persisted: the term “female” appears more than twice as often as the term “male”, suggesting that gender is explicitly marked for women and that, in most contexts, male is the default assumption. Researchers have explored solutions to de-bias word embeddings, such as building a genderless framework and teaching the algorithm gender-neutral word embeddings. Such methods aim to minimize the difference between gendered words (i.e., male versus female) and maximize "the difference between the gender direction and other neutral dimensions". This allows algorithms to use or ignore the gender dimensions.
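The "gender direction" idea can be illustrated with a short sketch in the spirit of the Bolukbasi et al. approach cited above: project a word vector onto the he-minus-she direction, then subtract that component to neutralize it. The three-dimensional vectors below are toy values, not real embeddings.

```python
# Toy illustration of neutralizing a gender direction in word embeddings.
# Real embeddings have hundreds of dimensions; these vectors are invented.

def sub(a, b): return [x - y for x, y in zip(a, b)]
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def scale(v, s): return [x * s for x in v]

he, she = [1.0, 0.0, 0.2], [-1.0, 0.0, 0.2]
gender_dir = sub(he, she)                      # the "gender direction"

nurse = [-0.6, 0.8, 0.1]                       # leans toward "she"

def neutralize(v, direction):
    # Remove the component of v that lies along the gender direction.
    coeff = dot(v, direction) / dot(direction, direction)
    return sub(v, scale(direction, coeff))

print(dot(nurse, gender_dir))                  # negative: biased toward "she"
print(dot(neutralize(nurse, gender_dir), gender_dir))  # 0.0 after de-biasing
```

After neutralization, "nurse" is equidistant from "he" and "she" along the gender axis while its other dimensions, which may encode job-relevant meaning, are untouched.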
Balancing Humans and Algorithms
Implementing a balance between predictive algorithms and human insight is a promising solution for employers looking to use algorithms in their hiring process while reducing bias. Using artificial intelligence and algorithms to parse large pools of data or applicants works well for processing. Balancing that processing with the "human ability to recognize more intangible realities of what that data might mean" is the second step in limiting algorithmic bias. For a partnership between humans and algorithms to succeed within companies, they need to consciously and deliberately implement new practices. Both algorithms and humans must still be held accountable for reducing bias, and working together offers a good short-term response to gender bias in the online job search.
While striking a balance between humans and algorithms is a promising solution, an important question remains: what is considered “fair” when trying to find qualified candidates for a job? Most conversations have been geared towards “treating similar individuals similarly.” However, a study has shown that a “model cannot conform to more than a few group fairness metrics at the same time.”
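The tension between group fairness metrics can be seen with a toy computation. The labels and predictions below are invented: one group has a higher qualification base rate, and even a perfectly accurate model then satisfies one common metric (equal true positive rates, "equal opportunity") while violating another (equal selection rates, "demographic parity").

```python
# Toy illustration of conflicting group fairness metrics. All data invented.
# labels: 1 = actually qualified; the model here predicts labels perfectly.

def selection_rate(preds):
    return sum(preds) / len(preds)

def true_positive_rate(preds, labels):
    positives = [p for p, y in zip(preds, labels) if y == 1]
    return sum(positives) / len(positives)

group_a = {"labels": [1, 1, 0, 0], "preds": [1, 1, 0, 0]}  # base rate 0.5
group_b = {"labels": [1, 0, 0, 0], "preds": [1, 0, 0, 0]}  # base rate 0.25

print(selection_rate(group_a["preds"]), selection_rate(group_b["preds"]))  # 0.5 0.25
print(true_positive_rate(group_a["preds"], group_a["labels"]),
      true_positive_rate(group_b["preds"], group_b["labels"]))             # 1.0 1.0
```

Forcing equal selection rates here would require either rejecting qualified members of group A or selecting unqualified members of group B, which is the practical face of the impossibility result the quoted study describes.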
Unintended Consequences of Bias Reduction
It has been suggested that algorithms de-biased with respect to gender can still produce the same biased outcomes. Algorithms may use online proxies in their scoring process to produce discriminatory results, as these proxies serve as stand-ins for protected attributes like gender. For example, even when an algorithm is de-biased with respect to gender specifically, it may use stand-in factors such as height or weight as a proxy for a candidate's gender. This leaves room for the algorithm to produce biased results based on a candidate's gender.
Although there is a statistical process known to eliminate proxy discrimination, it requires the algorithmic model to include "training data information on legally prohibited characteristics". Even if such legally prohibited information is obtained, characteristics would then be measured by their predictive power on the target variable, which can unintentionally amplify the initial proxy discrimination. For example, if height were measured as a highly predictive characteristic while also serving as a proxy for gender, the algorithm would intentionally discriminate on the predictive basis of height, thereby unintentionally discriminating on the basis of gender.
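Proxy discrimination can be made concrete with a short sketch. The heights, cutoff, and candidate pool are invented; the sketch only shows that a rule which never sees gender can still produce a fully gendered outcome through a correlated feature.

```python
# Sketch of proxy discrimination: gender is removed from the features, but a
# correlated stand-in (height) remains. A simple threshold rule on height
# reproduces a gendered outcome. All values are invented toy data.

candidates = [
    # (height_cm, gender) -- gender is NOT available to the rule below
    (183, "man"), (178, "man"), (180, "man"),
    (163, "woman"), (168, "woman"), (165, "woman"),
]

HEIGHT_CUTOFF = 175  # a "gender-blind" rule a model might learn

selected = [gender for height, gender in candidates if height >= HEIGHT_CUTOFF]
print(selected)  # ['man', 'man', 'man'] -- all-male shortlist without using gender
```

This is why simply deleting the gender column from training data does not guarantee a gender-neutral model: any remaining feature correlated with gender can carry the bias forward.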
- Bogen, M. & Rieke, A. (2018). Help Wanted: An Examination of Hiring Algorithms, Equity, and Bias. Upturn. https://www.upturn.org/static/reports/2018/hiring-algorithms/files/Upturn%20--%20Help%20Wanted%20-%20An%20Exploration%20of%20Hiring%20Algorithms,%20Equity%20and%20Bias.pdf
- Kim, P. T. (2019). Big data and artificial intelligence: New challenges for workplace equality. University of Louisville Law Review, 57(2), 313-328.
- Cambridge Dictionary.(n.d.). Gender Bias. In Cambridge English Dictionary. https://dictionary.cambridge.org/us/dictionary/english/gender-bias
- Ideal. (2021, February 26). AI for recruiting: A definitive guide for HR professionals. https://ideal.com/ai-recruiting/#:~:text=AI%20for%20recruiting%20is%20the,repetitive%2C%20high%2Dvolume%20tasks.
- Cowgill, B. (2018). Bias and productivity in humans and algorithms: Theory and evidence from resume screening. Columbia Business School, Columbia University, 29. http://conference.iza.org/conference_files/MacroEcon_2017/cowgill_b8981.pdf
- Black, J., & Esch, P. (2019, December 31). AI-enabled recruiting: What is it and how should a manager use it? https://www.sciencedirect.com/science/article/pii/S0007681319301612
- Aon. (n.d.). Artificial intelligence (AI) in assessment. https://assessment.aon.com/en-us/online-assessment/ai-in-assessment
- Esch, P., Black, J., & Ferolie, J. (2018, September 17). Marketing AI recruitment: The next phase in job application and selection. https://www.sciencedirect.com/science/article/pii/S0747563218304497#sec5
- Facebook. (n.d.). https://www.facebook.com/
- Hanrahan, C. (2020, December 2). Job recruitment algorithms can amplify unconscious bias favoring men, new research finds. ABC News. https://www.abc.net.au/news/2020-12-02/job-recruitment-algorithms-can-have-bias-against-women/12938870
- Cheong, M., et al. (n.d.). Ethical Implications of AI Bias as a Result of Workforce Gender Imbalance. The University of Melbourne. https://about.unimelb.edu.au/__data/assets/pdf_file/0024/186252/NEW-RESEARCH-REPORT-Ethical-Implications-of-AI-Bias-as-a-Result-of-Workforce-Gender-Imbalance-UniMelb,-UniBank.pdf
- Bogen, M. (2019, May 6). All the Ways Hiring Algorithms Can Introduce Bias. Harvard Business Review. https://hbr.org/2019/05/all-the-ways-hiring-algorithms-can-introduce-bias
- Raghavan M., Barocas S., Kleinberg J., Levy K. (6 December 2019). "Mitigating Bias in Algorithmic Hiring: Evaluating Claims and Practices" Retrieved on 9 April 2021
- Tang, S., et al. (2017). Gender Bias in the Job Market: A Longitudinal Analysis. Proceedings of the ACM on Human-Computer Interaction. https://dl.acm.org/doi/epdf/10.1145/3134734
- Raub, M. (2018). Bots, Bias and Big Data: Artificial Intelligence, Algorithmic Bias and Disparate Impact Liability in Hiring Practices. Arkansas Law Review, 71(2), 529-570
- Rosenbaum, E. (2018, May 30). Silicon Valley is stumped: A.I. cannot always remove bias from hiring. Consumer News and Business Channel. https://www.cnbc.com/2018/05/30/silicon-valley-is-stumped-even-a-i-cannot-remove-bias-from-hiring.html
- Dastin, J. (2018, October 10). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G
- Amazon scrapped 'sexist AI' tool. (2018, October 10). British Broadcasting Corporation. https://www.bbc.com/news/technology-45809919
- Lambrecht A., Tucker C. (2019). "Algorithmic Bias? An Empirical Study of Apparent Gender-Based Discrimination in the Display of STEM Career Ads". Retrieved on 9 April 2021.
- Ali, Muhammad. "Discrimination through Optimization: How Facebook's Ad Delivery Can Lead to Skewed Outcomes." ArXiv.Org, 3 Apr. 2019, arxiv.org/abs/1904.02095.
- Ribeiro, Filipe N., et al. "On microtargeting socially divisive ads: A case study of russia-linked ad campaigns on facebook." Proceedings of the Conference on Fairness, Accountability, and Transparency. 2019.
- Hao, Karen. "Facebook's Ad-Serving Algorithm Discriminates by Gender and Race." MIT Technology Review, 2 Apr. 2020, www.technologyreview.com/2019/04/05/1175/facebook-algorithm-discriminates-ai-bias.
- White, Gillian. "The Trouble With Discrimination In Online Advertising." The Atlantic, 7 Mar. 2017, www.theatlantic.com/business/archive/2017/03/facebook-ad-discrimination/518718.
- Sun, T., et al. (2019, June 21). Mitigating Gender Bias in Natural Language Processing: Literature Review. Cornell University. https://arxiv.org/pdf/1906.08976.pdf
- GeeksforGeeks. (2020, October 14). Word Embeddings in NLP. https://www.geeksforgeeks.org/word-embeddings-in-nlp/
- Leavy S., Meaney G., Wade K., Greene D. (12 July 2020) "Mitigating Gender Bias in Machine Learning Data Sets" Retrieved on 9 April 2021
- Bolukbasi, T., et al. (2016). Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings. Advances in Neural Information Processing Systems 29 (NIPS 2016)
- Silberg J., Manyika J. (6 June 2019). "Tackling bias in artificial intelligence (and in humans)" Retrieved on 9 April 2021
- Larson, J., Mattu, S., & Angwin, J. (2015, August 31). Unintended Consequences of Geographic Targeting. Technology Science. https://techscience.org/a/2015090103/
- Zarsky, T. Z. (2014). Understanding Discrimination in the Scored Society. Washington Law Review, 89(4), 1375-1412. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2550248
- Prince, A. E.R. & Schwarcz, D. (2020). Proxy Discrimination in the Age of Artificial Intelligence and Big Data. Iowa Law Review, 105(3). https://ilr.law.uiowa.edu/print/volume-105-issue-3/proxy-discrimination-in-the-age-of-artificial-intelligence-and-big-data/