In 2016, Goldman Sachs received approximately 250,000 job applications from students and graduates. That means not only tough competition for the applicants, but a headache for the Goldman Sachs human resources (HR) team.
For example, if a team of five HR staff worked 12 hours every day, including weekends, and spent five minutes on every application, it would take them nearly 12 months to sift through all the applications.
For reasons like this, businesses are now increasingly moving to algorithms, or applicant tracking systems (ATS), to screen applicants.
If an applicant submits a generic CV, they may fall at the first hurdle simply because the CV does not contain certain keywords. For example, if the job description uses the initials PM for project management, make sure those initials appear in the CV. Simply put, the application has to be written around the keywords found in the job description – it seems like a no-brainer, but you would be surprised how many people don't follow this simple advice.
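At its simplest, this kind of keyword screening can be thought of as counting how many of the job description's keywords appear in the CV text. The sketch below is purely illustrative – real ATS products are proprietary and more sophisticated – and the CV snippet and keyword list are invented for the example.

```python
import re

def keyword_score(cv_text, job_keywords):
    """Count how many of the job description's keywords appear in a CV.

    Matching is case-insensitive and on whole words, so 'PM' only
    counts if the initials themselves appear in the text.
    """
    words = set(re.findall(r"[a-z0-9+#]+", cv_text.lower()))
    hits = [kw for kw in job_keywords if kw.lower() in words]
    return len(hits), hits

# Hypothetical CV extract and job-description keywords
cv = "Experienced PM delivering agile projects; budgeting and stakeholder management."
keywords = ["PM", "agile", "stakeholder", "Scrum"]

score, matched = keyword_score(cv, keywords)
print(score, matched)  # 3 ['PM', 'agile', 'stakeholder']
```

A CV that spelled out "project management" but never used "PM" would miss that keyword here – which is exactly why tailoring the wording to the job description matters.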
In the era of LinkedIn, it is also important to ensure the LinkedIn profile and CV match, or at least reinforce each other.
Unfortunately, an ATS may not be the only ‘computer-based’ problem a candidate will face. An increasing number of businesses are now utilising artificial intelligence (AI) video interviews as the next stage of the selection process. During the video interview, candidates are quizzed while an AI programme analyses their facial expressions (maintaining eye contact with the camera is advisable) and language patterns (sounding confident is the trick). People who wave their arms about or slouch in their seat are likely to fail – because, of course, slouching means you can’t do the job!
Only if candidates pass that test will they actually meet a human being.
The algorithms behind these selection stages are also prone to bias, just like humans, but not necessarily for the same reasons. For example, on Facebook, young women are a valuable demographic group because they control a high share of household spending, so ads targeting them are more expensive. An algorithm distributing a job ad to maximise return on investment therefore gets more views per dollar by showing it to men, meaning the ads were less likely to be shown to women than men – even though nobody set out to exclude them.
If a business discovers that the output of an AI system is discriminatory, it needs to work out why, and then adjust the algorithm until the effect disappears (Agrawal et al., 2018). Besides recruitment and selection, worries about AI have also cropped up in other areas such as criminal justice and insurance.
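Before an effect can be adjusted away, it has to be detected. One simple check a business might run is to compare selection rates across groups, for instance using the common ‘four-fifths’ rule of thumb, under which adverse impact is suspected if any group's selection rate falls below 80% of the highest group's rate. The sketch below is illustrative only, and the group labels and numbers are invented.

```python
def selection_rates(outcomes):
    """outcomes: dict mapping group -> (selected, applicants)."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def four_fifths_check(outcomes):
    """Return, per group, whether its selection rate is at least
    80% of the best group's rate (the 'four-fifths' rule of thumb)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r / best >= 0.8 for g, r in rates.items()}

# Hypothetical screening outcomes: (passed the ATS, applied)
outcomes = {"men": (300, 1000), "women": (180, 1000)}
print(four_fifths_check(outcomes))  # {'men': True, 'women': False}
```

Here women pass the screen at 18% against the men's 30% – a ratio of 0.6, well below 0.8 – so the business would need to investigate which features of the model are driving the gap.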
A business will face legal and reputational risk if its recruitment and selection process turns out to be unfair, and it also needs to consider whether these AI programmes do more than just simplify the process.
For instance, do successful candidates have long and productive careers?
After all, staff churn is one of the biggest recruitment costs that businesses face.
Agrawal, A., Gans, J. & Goldfarb, A. (2018) Prediction Machines: The Simple Economics of Artificial Intelligence. Boston: Harvard Business Review Press.