Did you just search ‘How can my resume beat the bots’ on Google? If so, you are asking an increasingly important question in this digital world. With machines competing for our jobs, we are already scrambling to outperform them, and institutions now offer special tech courses to keep human skills from falling behind artificial intelligence. It is no wonder that AI in recruitment is another unsettling subject. Recently, LinkedIn was caught up in controversy over AI bias in its recruitment algorithms, and following the incident, many are questioning the reliability of AI in the hiring process.
‘Machines are making humans lazy’ is a phrase that makes perfect sense when it comes to recruitment. Earlier, the whole HR team would work together on recruitment routines: they would lay out the applications and manually sort them by experience, background, talent, and so on. When computers came along, everything went digital, eliminating the lengthy segregation process. As technology evolved further, artificial intelligence took over much of the recruitment process. Recruiters increasingly use AI to make the first round of cuts, and even to decide whether a job position is advertised to you at all. Trained on data collected from previous or similar applications, the AI tools used in recruitment can cut down the effort recruiters need to expend in order to make a hire. According to a LinkedIn report, around 67% of hiring managers and recruiters said that artificial intelligence saved them time. Although these positives might seem appealing, the tech industry is being pushed to measure the negatives of AI in recruitment and act accordingly.
What happened with LinkedIn?
Although the incident happened quite a while ago, it only came to light recently. LinkedIn discovered that the recommendation algorithms it uses to match job candidates with relevant opportunities were producing biased results. The AI algorithm ranked candidates based on how likely they were to apply for a position or respond to a recruiter. As a result, the system showed severe bias, referring more men than women for open roles simply because men are often more aggressive in seeking out new opportunities.
LinkedIn is one of the major recruitment sites that many of us use on a daily basis. Even if we don’t apply for a job, we follow pages and companies to keep up with emerging trends. But at its core, LinkedIn is a recruitment site that matches qualified candidates with available positions. To organize the vacancies and the candidates, like many platforms, LinkedIn adopted artificial intelligence-driven recommendation algorithms. These algorithms, often called matching engines, process information from job applicants and employers to compile a recommendation list for every individual. LinkedIn didn’t want to introduce any form of AI bias into its recruitment process, so the company designed its algorithm to exclude a person’s name, age, gender, and ethnicity, since including those fields could introduce discrimination. What it did not expect was bias arising from behavioral patterns. The AI algorithm learned that men are more likely to apply for jobs that require work experience beyond their qualifications, while women often choose only jobs whose qualifications match the requirements of the position. Unintentionally, the results on LinkedIn echoed this internal bias, resulting in controversy. When LinkedIn discovered the problem, it didn’t play dumb. Instead, the company built another AI program to counter the bias.
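To see how this kind of proxy bias can creep in, here is a minimal toy sketch (entirely hypothetical data and scoring, not LinkedIn’s actual system): a matching engine ranks candidates purely by a predicted likelihood to apply, with gender never given to the ranker. Because the behavioral feature it does see correlates with gender, the top of the ranking still skews toward men.

```python
# Toy illustration of proxy bias in a recommendation ranker.
# All numbers and feature names here are made up for demonstration.
import random

random.seed(0)

def make_candidates(n=1000):
    """Simulate candidates with one behavioral feature the ranker can see."""
    candidates = []
    for i in range(n):
        gender = random.choice(["M", "F"])
        # Hypothetical behavioral pattern mirroring the article: men more
        # often apply to roles beyond their listed experience. The ranker
        # never sees 'gender', only this correlated behavior.
        stretch_applier = random.random() < (0.6 if gender == "M" else 0.3)
        candidates.append({"id": i, "gender": gender,
                           "stretch_applier": stretch_applier})
    return candidates

def apply_score(candidate):
    """Predicted likelihood to apply, based only on observed behavior."""
    return 0.8 if candidate["stretch_applier"] else 0.4

def top_recommendations(candidates, k):
    """Rank by predicted apply-likelihood and keep the top k."""
    return sorted(candidates, key=apply_score, reverse=True)[:k]

candidates = make_candidates()
top = top_recommendations(candidates, k=200)
share_men = sum(c["gender"] == "M" for c in top) / len(top)
print(f"Share of men in top recommendations: {share_men:.0%}")
```

Even though gender is excluded from the features, the men’s share of the top recommendations ends up well above 50%, because the behavioral proxy carries the gender signal into the ranking.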
Today, many job sites, including CareerBuilder, ZipRecruiter, and Monster, use AI in the recruitment process, and they are taking very different approaches to addressing AI bias on their platforms. But if an internal bias similar to LinkedIn’s goes unnoticed for a long time, it will have serious implications for recruitment and for society. Recruitment sites should therefore take precautions to keep AI bias at bay.