Artificial intelligence (AI) is one of the defining trends of the coming decades. Tech investor Catherine Wood believes it will spur the rise of the next Google, Microsoft, or Amazon. Tech firms, as well as businesses in areas that once seemed distant from tech trends, have invested ever larger sums in AI: research firm IDC projects that global AI investment will double to $110 billion within the next two years.

One of the most fascinating applications of AI is in the hiring process. AI can sift through vast volumes of resumes, analyze them against keywords supplied by recruiters, identify the best matches for what the recruiter wants, and suggest who is best suited for a particular job. It is useful at every phase of recruitment, and it spares human recruiters enormous amounts of time. AI has already become a critical part of the hiring process at IKEA, Intel, Unilever, and Vodafone.

Yet there is a saying, “garbage in, garbage out”, which some have recast as “discriminatory data in, discriminatory inferences out”. The rise of AI has given pause to those concerned about bias and the ethical implications of using AI. The question for recruiters is: how can AI be used ethically in hiring?
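To make the keyword-matching idea concrete, here is a minimal sketch of the kind of screening described above. The function names, scoring rule, and sample data are illustrative assumptions, not any vendor's actual system:

```python
# Sketch of keyword-based resume screening: score each resume by how many
# of the recruiter's keywords it contains, then rank candidates by score.

def score_resume(resume_text, keywords):
    """Count how many recruiter keywords appear in the resume text."""
    text = resume_text.lower()
    return sum(1 for kw in keywords if kw.lower() in text)

def rank_candidates(resumes, keywords):
    """Return candidate names ordered from best to worst keyword match."""
    scored = {name: score_resume(text, keywords) for name, text in resumes.items()}
    return sorted(scored, key=scored.get, reverse=True)

resumes = {
    "Candidate A": "Python developer with machine learning experience",
    "Candidate B": "Sales manager with strong communication skills",
}
keywords = ["Python", "machine learning", "SQL"]
print(rank_candidates(resumes, keywords))  # Candidate A ranks first
```

Real systems are far more sophisticated (parsing, embeddings, learned models), but even this toy version shows where bias can creep in: the outcome depends entirely on which keywords the recruiter supplies.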
At first, AI seemed like a great way to avoid the bias and poor accuracy that plagued the hiring process. Machines, it was said, were free from bias. Yet AI has turned out to be as riddled with racial, gender, and other biases as any human being. Using it is therefore a delicate balancing act: we want the speed and accuracy of AI without the bias and discrimination. These concerns have been extensively discussed in the academic literature.
A growing body of frameworks and guidelines maps a way toward the ethical use of AI across a vast array of applications, including hiring. The literature is vast, but certain basic principles have been established:
- Candidates must benefit: designers must ask themselves whether the proposed method benefits job seekers and candidates. Attention must be paid to whether previously marginalized groups are better represented.
- There must be informed consent: job seekers and candidates must enter into their relationship with recruiters and employers within an informed and transparent framework in which they can demonstrate sufficient understanding of the process.
- Does accuracy improve? A new method or tool must be measurably more accurate than the tools or methods that came before it. This is a question of both predictive accuracy and overall utility.
- Explainability: one of the concerns about AI is that it can produce insights whose logic nobody can follow, making it impossible to explain why a given decision was made. Explainability is immensely important, as we have discussed at Ethics Demystified. Without it, using AI becomes more akin to a faith-based activity than one founded on scientific principles.
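As a simple illustration of the explainability principle, a transparent linear scoring model can report each feature's contribution to a candidate's score, so the decision can be explained rather than taken on faith. The feature names and weights below are illustrative assumptions:

```python
# Sketch of an explainable linear screening score: every feature's
# contribution to the total is visible and can be shown to the candidate.

WEIGHTS = {"years_experience": 2.0, "relevant_degree": 1.5, "keyword_matches": 0.5}

def explain_score(candidate):
    """Return the total score and each feature's contribution to it."""
    contributions = {f: WEIGHTS[f] * candidate.get(f, 0) for f in WEIGHTS}
    return sum(contributions.values()), contributions

total, parts = explain_score(
    {"years_experience": 4, "relevant_degree": 1, "keyword_matches": 6}
)
print(total)  # 12.5
print(parts)  # {'years_experience': 8.0, 'relevant_degree': 1.5, 'keyword_matches': 3.0}
```

A black-box model might be more accurate, but only a model whose reasoning can be inspected like this lets a recruiter answer the candidate's question, "why was I rejected?"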