By Joy Redmond.

AI (artificial intelligence), machine learning and predictive analytics are hot topics in every industry, but the implications of big data for Human Resources are even more contentious. Sceptics' and scaremongers' main concern is whether the bots will take our jobs. If I were you, I'd be more concerned about whether the bots will give you a job. Having spent most of the last decade in HRTech, I can tell you that today the machines are increasingly embedded in the recruitment process and are making far more decisions about your suitability for a role than you might think.
Given the time-consuming nature of the hiring process, it is not surprising that machine learning software is playing its part in recruitment. According to ResumeterPro, up to 72% of applications are rejected by applicant tracking systems before a live person even has a chance to review them, while an article in Personnel Today showed how companies like Alexander Mann Solutions are using AI software to automate manual processes such as interview scheduling and authorising job offers. Chatbots are another example of emerging AI on the market, used for initial candidate interactions to improve careers-site retention, to schedule interviews, or to screen candidates with basic questions.
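To make that concrete, here is a minimal sketch (in Python) of the kind of keyword screen an applicant tracking system might run before a human ever sees a CV. The required terms and threshold are invented for this illustration, not drawn from any real ATS:

```python
# A toy keyword screen of the sort an ATS might run on incoming CVs.
# REQUIRED_TERMS and THRESHOLD are invented for this illustration.
REQUIRED_TERMS = {"python", "sql", "stakeholder management"}
THRESHOLD = 2  # minimum number of matching terms to reach a recruiter

def passes_screen(cv_text: str) -> bool:
    """Return True if the CV mentions enough required terms to be passed on."""
    text = cv_text.lower()
    return sum(term in text for term in REQUIRED_TERMS) >= THRESHOLD

cv = "Experienced analyst: SQL reporting, Python automation, agile delivery."
print(passes_screen(cv))  # True: 'sql' and 'python' both appear
```

Crude as it is, logic of roughly this shape is what stands between a large share of applications and a human reader.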
It is predicted that about 90% of the tasks recruiters do will be automated in the years ahead. Whatever reports and research you read, it seems that AI is here to stay, so let's take a look at some of the key arguments for and against its adoption, namely:
- Bubbles and Biases
- The ‘Human’ in Human Resources
- Big Data and the Law
Bubbles and Biases
“Algorithms may be trained to predict outcomes which are themselves the result of previous discrimination. The high-performing group may be non-diverse and hence the characteristics of that group may more reflect their demographics than the skills or abilities needed to perform the job. The algorithm is matching people characteristics, rather than job requirements.”
Dr. Kathleen Lundquist
Organisational Psychologist, President and CEO APTMetrics
Data-driven decision-making is only as good as the data used, and while the algorithms themselves may be objective, they are built on previous decisions that can be tainted with biases (both conscious and unconscious).
Recent ‘unpredicted’ global events have prompted much discussion of filter bubbles and echo chambers, so within recruitment we must ask the following questions:
- How large and diverse is the training data the algorithm uses to learn?
- How objective and quantifiable are the factors that equate to a ‘good’ hire?
- How sure are you that your historical hiring behaviour was 100% objective?
For an algorithm to select candidates effectively, the traits and interview behaviour of interviewees who went on to become top performers are codified and then perpetuated. My concern is that historical interview data merely mirrors past recruitment personae rather than broadening the scope, and companies that use AI as a filter may be doing themselves and tomorrow's candidates a disservice. Diversity in the workplace is not a public relations angle but a strategic imperative: companies with multiple viewpoints and experiences are more innovative, less risk-averse and more likely to thrive.
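To see how that feedback loop plays out, here is a minimal sketch using synthetic data and invented feature names (this is not any vendor's actual model): a classifier trained on historical hiring decisions learns to reward an in-group attribute that has nothing to do with ability.

```python
# A toy demonstration of bias inheritance: train a model on historical
# hiring decisions that favoured an in-group, then inspect what it learned.
# All data and feature names here are synthetic, invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 1000

skill = rng.normal(0, 1, n)              # genuine job-relevant ability
same_background = rng.integers(0, 2, n)  # proxy trait, e.g. alma mater of past hires

# Historical decisions: past recruiters favoured the in-group, so the
# label reflects demographics as well as skill.
hired = ((skill + 1.5 * same_background + rng.normal(0, 0.5, n)) > 1).astype(int)

X = np.column_stack([skill, same_background])
model = LogisticRegression().fit(X, hired)

# The learned weights show the model has codified the in-group preference:
# it now scores 'suitability' partly from background, not just skill.
print(dict(zip(["skill", "same_background"], model.coef_[0].round(2))))
```

The weight on same_background comes out large and positive even though the trait is irrelevant to the job: the algorithm is matching people characteristics, exactly as Dr. Lundquist warns.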
Focus on the ‘Human’ in Human Resources
“AI software enhances processes, but recruitment is ultimately about people so you can’t take human interaction away entirely.”
Becky Mossman
While common buzzwords and phrases such as ‘human capital’ and ‘your best resource’ are used casually, one should not underestimate the value experienced recruiters bring to the recruitment process and the organisations they represent.
Proponents of predictive analytics within recruitment suggest that removing humans from the selection process eliminates potential human biases, offering instead fact-led, objective hiring. A preference for algorithms to select candidates who have been missed or forced out of the recruitment process by humans suggests very little appreciation of, or trust in, the role and professionalism of recruiters. While there is always potential for human error (such as unconscious bias), a little investment in training and adjustments to the recruitment process will go a long way towards addressing unconscious bias at both an individual and a company level.
Placing one’s trust in machine learning above years of human experience in candidate selection is naïve, and I suspect that algorithms may be too prescriptive. Predictive analytics uses historical data to predict future outcomes, i.e. it ‘matches’ candidate characteristics against previous success indicators for a given role. However, anecdotal evidence from an asynchronous video interviewing system has shown us how recruiters identified candidates who were not a ‘good fit’ for the job at hand but would be ideal for a different role within the organisation. No algorithm could make this qualitative leap, and the recruiters’ instincts proved sound: the candidates were successfully placed elsewhere.
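To illustrate why, here is a minimal sketch of the ‘matching’ logic described above; the feature names and numbers are invented, and real systems are far more elaborate, but the structural point holds: the candidate is scored only against the profile of past top performers in the advertised role.

```python
# A toy version of role 'matching': cosine similarity between a candidate's
# features and a profile built from past top performers in one role.
# Feature names and values are invented for illustration.
import numpy as np

def match_score(candidate: np.ndarray, role_profile: np.ndarray) -> float:
    """Cosine similarity between candidate and role profile (1.0 = identical direction)."""
    return float(candidate @ role_profile /
                 (np.linalg.norm(candidate) * np.linalg.norm(role_profile)))

# Profile from past hires in the advertised sales role:
# dimensions might be persuasion, analysis, empathy.
sales_profile = np.array([0.9, 0.2, 0.4])
candidate = np.array([0.1, 0.9, 0.8])

print(f"Match for the advertised role: {match_score(candidate, sales_profile):.2f}")
# A low score simply rejects the candidate; nothing in this logic can say
# 'poor fit here, but ideal for the research team' the way a recruiter can.
```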
With the current skills shortage and the associated war for talent, companies need to be flexible and design roles around brilliant people rather than adhering to the rigid roles and responsibilities of a specific job. The use of algorithms to select candidates might well be a backward step, reminiscent of division-of-labour ideology and no longer relevant in today's smart economy.
Big Data and the Law
Whether predictive analytics discriminates has emerged as one of the key points of contention, with proponents suggesting it will improve equal employment opportunities. Critics, on the other hand, maintain that “absent careful safeguards, demographic, sensitive health or genetic information is at risk for being incorporated in the Big Data analytics technologies that employers are beginning to use.”
The only certainty is that there is neither legal precedent nor clear legislation to guide either candidates or employers regarding legal challenges and how algorithms make hiring decisions.

Cathy O’Neil, author of the book “Weapons of Math Destruction,” cautioned against the wholesale replacement of human HR teams with algorithms. “A lot of what HR does is repetitive, and companies want to increase quality and decrease costs,” she said. “It sounds like a win-win, but hiring algorithms and processes are highly regulated, and people haven’t spent enough time thinking about the question of whether these algorithms that are now a large part of the process are following the law, which prohibits discrimination based on race, mental health, and other factors.” O’Neil advised companies to audit their algorithms for legality: “Stop using your algorithm unless you have evidence from an outside audit that it’s legal, because your company will be on the hook for violations.” She concludes that “there are a host of other issues to deal with as machine learning emerges, including data privacy and security. One of the first steps for HR this year is separating the grand claims about machine learning from what’s really possible and beneficial for workers and their companies.”
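By way of illustration, here is a minimal sketch of one check such an outside audit might include: the US EEOC ‘four-fifths rule’ for adverse impact, under which a group's selection rate below 80% of the highest group's rate is a red flag. The group labels and counts below are invented:

```python
# A toy adverse-impact audit of an algorithm's selection decisions,
# applying the EEOC four-fifths rule. Group names and counts are invented.
def adverse_impact_ratios(selected: dict, applied: dict) -> dict:
    """Each group's selection rate divided by the highest group's rate."""
    rates = {g: selected[g] / applied[g] for g in applied}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

applied = {"group_a": 400, "group_b": 300}
selected = {"group_a": 120, "group_b": 54}   # 30% vs 18% selection rates

for group, ratio in adverse_impact_ratios(selected, applied).items():
    flag = "FLAG: below 0.8 threshold" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

A failed check like this does not prove illegality on its own, but it is exactly the kind of evidence an outside auditor would put in front of a company before it keeps running the algorithm.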
Furthermore, in terms of employment equality, I would have some concerns about machine learning's accuracy and effectiveness in assessing non-standard applications. Will the software be adapted to assess candidates with disabilities (such as autism spectrum disorder) who may not meet standard visual or verbal criteria but have much to offer an organisation?
Conclusion
“To date human intelligence has no match in the biological and artificial worlds for sheer versatility, with the abilities ‘to reason, achieve goals, understand and generate language, perceive and respond to sensory inputs, prove mathematical theorems, play challenging games, synthesize and summarize information, create art and music, and even write histories’.”
Nils J. Nilsson
On a personal note, in every single job I’ve ever had, I’ve ended up doing something completely different from what I was originally hired to do. I come in under some loose ‘marketing’ title but might end up in product development or user research, i.e. whatever I think needs doing for true marketing to happen. I find my role within an organisation, and often in an interview or pitch I am offered something different based on that two-way interaction. I’m not convinced an algorithm would be able to decode me!
PS. This is a shorter version of one of my college submissions. I have three pages of references, so I decided not to go all Harvard on you. Ping me if you want the full list of sources/citations.