To Reap AI’s Rewards, Employers Must Know Its Dangers

The very real dangers surrounding artificial intelligence in human resources environments should not deter companies that are looking to bring the burgeoning technology into their organizations. They just need to fully vet vendors.

Employers that implement artificial intelligence technology for human resources purposes and expect it to run effectively on its own will be setting themselves up for more problems than they had hoped to solve.

This was a key message that U.S. Equal Employment Opportunity Commission (EEOC) Commissioner Keith Sonderling delivered during a recent webinar, “Artificial Intelligence in the Workforce.” The EEOC is responsible for enforcing federal laws that make it illegal to discriminate against a job applicant or an employee because of the person’s race, color, religion, sex (including pregnancy, transgender status, and sexual orientation), national origin, age (40 or older), disability, or genetic information.

The webinar, hosted by law firm Conn Maciel Carey, delved into the nuances of AI technology that employers need to be aware of as it plays an increasingly prevalent role in such HR practices as recruitment, interviewing, and workforce management.

“Employers can’t adopt a ‘set it and forget it’ approach to HR technologies because inaccurate, incomplete, or unrepresentative data will only amplify, rather than minimize, biased decision-making,” Sonderling said.

While most mainstream conversations about AI focus on bots and machines replacing workers in many industries, from doctors and lawyers to warehouse workers and superstore shelf stockers, Sonderling said he is more focused on another, more immediate role that AI is playing in the employment landscape: HR.

Pandemic Spurs AI Adoption

The COVID-19 pandemic greatly accelerated the rate at which employers are adopting AI to manage their workforces, he said. Employers, especially those that need to hire rapidly and in large numbers, are turning to AI-driven technologies such as resume screening programs, automated interviews, and mobile hiring apps to rebuild their workforces after millions of employees were displaced by the pandemic.

If not used proactively and properly, however, AI applications can easily result in discrimination lawsuits that can prove costly in both a monetary and reputational sense, Sonderling stressed.

Today, AI is making nearly all the types of decisions once made by HR personnel, he noted – a development that is both good and bad.

“Carefully designed and properly used, I believe AI has the potential to advance diversity, inclusion, and accessibility in the workplace by mitigating the risk of unlawful discrimination.”

AI, he said, can help eliminate bias from the earliest stages of the hiring process. An AI-enabled resume screening program, for example, can be taught to disregard variables that have no bearing on job performance, such as an applicant’s name.
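As a purely illustrative sketch of that idea, and not a description of any tool discussed in the webinar, a screener can discard fields with no bearing on job performance before a model ever scores a resume. The field names and function below are hypothetical:

```python
# Illustrative only: strip resume fields that have no bearing on job
# performance (such as the applicant's name) before any model scores it.
PREDICTIVE_FIELDS = {"skills", "years_experience", "certifications"}

def redact_resume(resume: dict) -> dict:
    """Keep only job-relevant fields; drop name and other identifiers."""
    return {k: v for k, v in resume.items() if k in PREDICTIVE_FIELDS}

resume = {
    "name": "Jane Doe",              # discarded before scoring
    "skills": ["python", "sql"],
    "years_experience": 6,
    "certifications": ["PMP"],
}
print(redact_resume(resume))
# {'skills': ['python', 'sql'], 'years_experience': 6, 'certifications': ['PMP']}
```

Blind screening of this kind removes direct identifiers, but, as the Amazon example below shows, it does not by itself remove proxies for protected traits.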

At the same time, poorly designed and carelessly implemented AI can discriminate on a scale and magnitude far greater than any individual HR professional.

“That’s because the predictions AI makes about specific applicants are only as sound as the training data on which the algorithms rely,” Sonderling said.

AI’s Algorithmic Appeal

AI is so appealing to employers, he noted, because it is based on data. It promises to remove one of the biggest obstacles to effective HR management: subjective human judgment.

At the same time, the apparently objective nature of algorithmic decision-making can result in a technological bias on the part of the user: “An over-reliance, if not blind trust, that the robots will always get it right,” as Sonderling put it.

The commissioner illustrated the potential for disaster by describing an AI-driven resume screening tool that Amazon tested between 2015 and 2017. Programmers fed the algorithm a data set consisting of resumes belonging to Amazon’s current employees, along with resumes that had been submitted to the firm in the prior 10 years. Using machine learning, the program was able to identify patterns in the historic data set and then use those patterns to rate new applicants on a scale of one to five, based on their resumes.

However, because the vast majority of resumes in the data set belonged to men, the program taught itself to downgrade resumes containing terms associated with women, such as the names of women’s sports teams, women’s clubs, and women’s colleges.

“This was not proof of a misogynistic intent on the part of the AI,” Sonderling noted. “It was a function of the data fed into the AI in the first place.”

Due to this problem, Amazon did not put the tool into practice.
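To make that mechanism concrete, here is a toy sketch with entirely fabricated data, using scikit-learn as a stand-in for Amazon’s undisclosed system: a model trained on skewed historical outcomes learns a proxy token like “women’s” as a negative signal without being told anything about gender.

```python
# Toy illustration with fabricated data: a classifier trained on skewed
# historical hiring outcomes learns "women" as a negative signal.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# In this fabricated history, resumes mentioning "women's" activities
# co-occur almost entirely with rejections.
resumes = [
    "chess club captain, python, sql",          # hired
    "rowing team, java, leadership",            # hired
    "debate team, python, statistics",          # hired
    "women's chess club captain, python, sql",  # rejected
    "women's rowing team, java, leadership",    # rejected
    "marketing intern",                         # rejected
]
hired = [1, 1, 1, 0, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The learned weight on the token "women" comes out strongly negative,
# not from intent but from the data the model was fed.
idx = vectorizer.vocabulary_["women"]
print("coefficient for 'women':", model.coef_[0][idx])
```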

Micro-Targeting Top Talent

AI also promises to reduce talent acquisition to a science by helping employers identify and target highly qualified candidates who may not even be actively looking for a job. Sonderling compared this tactic to commercial companies serving up online ads to people whose browsing habits suggest they may be interested in purchasing their products.

“But micro-targeting ads to an audience is one thing when you are trying to sell running shoes. It is quite another when you’re advertising employment opportunities.”

Problems will arise if the algorithm’s training data skews heavily toward people of one race, sex, religion, or national origin. These protected characteristics may come to play an improper role in the algorithm’s predictions about the members of its target audience.

“It may downgrade proxies for race and gender, such as the names of historically black colleges or the names of women’s sports teams. Again, not because the computer is intentionally targeting and discriminating against these people, but because they simply were not represented in the training data.”

As a result, people may never know they were denied equal opportunity in the workplace because they were never told that a workplace opportunity existed in the first place.

The Bot Will See You Now

Employers are also starting to use AI bots to conduct job interviews, some of which utilize facial recognition and facial analysis technology.

“Serious concerns about racial discrimination arise if an interview bot cannot even identify the face it’s analyzing because the candidate has dark skin,” Sonderling pointed out.

When researchers at MIT’s Gender Shades project tested the dominant commercial gender classification algorithms, the findings were “startling,” according to Sonderling. The algorithms achieved accuracy rates of over 99 percent in identifying light-skinned males, but their accuracy in identifying dark-skinned females ranged from only 65 to 79 percent.

The researchers attributed this disparity to the underrepresentation of dark-skinned women in the training data on which the algorithms relied.
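Gaps like these only surface when accuracy is audited per demographic group rather than as one aggregate number. A minimal sketch of such an audit, with fabricated labels and predictions, might look like this:

```python
# Illustrative audit with fabricated data: report a classifier's
# accuracy for each demographic group, not just overall.
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Return {group: fraction of correct predictions}."""
    correct, total = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        correct[group] += int(truth == pred)
    return {g: correct[g] / total[g] for g in total}

y_true = [1, 1, 0, 1, 0, 1, 1, 0]
y_pred = [1, 1, 0, 0, 1, 0, 1, 0]
groups = ["light_male"] * 4 + ["dark_female"] * 4
print(accuracy_by_group(y_true, y_pred, groups))
# {'light_male': 0.75, 'dark_female': 0.5}
```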

Sonderling stressed that these very real dangers surrounding AI in human resources environments should not deter businesses looking to bring the burgeoning technology into their organizations. In turning to AI to help achieve their objectives, however, employers should press vendors for details about whether and how they test their products for disparate impact discrimination.
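One concrete question to put to a vendor is whether it applies an adverse impact test such as the four-fifths rule from the EEOC’s Uniform Guidelines on Employee Selection Procedures, under which a group’s selection rate below 80 percent of the highest group’s rate is generally regarded as evidence of adverse impact. A minimal sketch of that check, with fabricated counts, follows:

```python
# Four-fifths rule sketch with fabricated counts: flag any group whose
# selection rate falls below 80% of the highest group's rate.
def four_fifths_check(selected: dict, applicants: dict) -> dict:
    rates = {g: selected[g] / applicants[g] for g in applicants}
    best = max(rates.values())
    return {
        g: {"rate": round(r, 3),
            "impact_ratio": round(r / best, 3),
            "adverse_impact": r / best < 0.8}
        for g, r in rates.items()
    }

print(four_fifths_check(
    selected={"group_a": 48, "group_b": 24},
    applicants={"group_a": 100, "group_b": 80},
))
# group_b's rate (0.30) is 62.5% of group_a's (0.48), so it is flagged.
```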

“…Employers should do their due diligence when it comes to vetting AI,” he warned.
