Avoiding adverse impact with diversity and inclusion recruiting tech

Jeanette Maister March 14, 2019

Let's be honest about potentially the greatest fear people have around the rise of AI in the talent acquisition function. The fear is this: can we really use advanced tech such as AI to combat bias when the tech is likely to be coded by some white dudes in San Francisco?

The short answer is: yes.

The longer answer is: keep reading, because this is something people are discussing a lot on the trade show, event, and conference circuit. They need to be discussing it. It's important.

Understanding adverse impact

The U.S. Equal Employment Opportunity Commission defines adverse impact as "a substantially different rate of selection in hiring which works to the disadvantage of members of a race, sex or ethnic group."

Oftentimes this is framed around the 4/5ths, or 80 percent, rule: "the selection rate for any group is substantially less (i.e., usually less than 4/5ths or 80%) than the selection rate for the highest group."

Avoiding adverse impact is a legal requirement for U.S. employers with 15 or more employees (20 or more for age discrimination cases) who want to remain compliant in their recruiting.
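
To make the 4/5ths rule concrete, here's a minimal sketch of the check in Python. The function names and the applicant/hire numbers are illustrative, not from any real case: compute each group's selection rate, find the highest rate, and flag any group whose rate falls below 80 percent of it.

```python
def selection_rates(applicants, hires):
    """applicants and hires map group name -> count; rate = hires / applicants."""
    return {group: hires[group] / applicants[group] for group in applicants}

def adverse_impact_flags(applicants, hires, threshold=0.8):
    """Flag each group whose selection rate is below `threshold` (4/5ths)
    of the highest group's selection rate."""
    rates = selection_rates(applicants, hires)
    highest = max(rates.values())
    return {group: rate / highest < threshold for group, rate in rates.items()}

# Illustrative numbers: 48 of 80 men hired (60%), 12 of 40 women hired (30%).
flags = adverse_impact_flags({"men": 80, "women": 40}, {"men": 48, "women": 12})
# Women's rate (0.30) is only 50% of the highest rate (0.60), below 4/5ths,
# so the check flags adverse impact for that group.
```

In this hypothetical, `flags` comes back as `{"men": False, "women": True}` — the kind of signal a compliance review would then investigate.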

Obviously it's massively important for organizations to stay compliant in their recruiting practices.

We all also know there's a tendency to throw tech at these issues hoping for a quick fix. That's trickier with AI. As a rising technology, it still has potential misuse cases in talent acquisition that aren't fully understood. We've long believed that AI, data science, and algorithms can almost wholly eliminate bias in recruiting, but now we're at the point where we need to ask: is that really true?

What exactly happened with Amazon?

As you may have heard, Amazon had to scrap its secret AI recruiting project. 

The company’s experimental hiring tool used artificial intelligence to give job candidates scores ranging from one to five stars - much like shoppers rate products on Amazon.

“Everyone wanted this holy grail,” said someone working internally on the project. “They literally wanted it to be an engine where I’m going to give you 100 resumes, it will spit out the top five, and we’ll hire those.”

Easily finding quality candidates would be nice, right?

But Amazon’s computer models were trained to screen candidates based on patterns in resumes submitted to the company over 10 years. Most came from men, a reflection of male dominance across the tech industry. In essence, then, Amazon’s system taught itself that male candidates were preferable.

It penalized resumes that included the word “women’s,” as in “women’s chess club captain.” And it downgraded graduates of two all-women’s colleges, according to people familiar with the matter.

Clearly, this is a major adverse impact issue from one of the biggest, best-known brands in the world.

It gets a little scary when you consider this stat: a 2017 CareerBuilder survey found that 55 percent of U.S. hiring managers expected AI to be a regular part of their work within the next five years.

Are we heading towards a future we're not in control of, with adverse impact lawsuits left and right?

Thankfully not.

Using recruitment software to minimize adverse impact

One of the great promises of AI is the ability to interact with large data sets better than a human being could.

What AI can do is analyze an organization's historical hiring data and diversity recruitment metrics, and then identify instances of bias related to age, gender, race, education, or previous employer. Now HR professionals can more easily see their blind spots and course-correct in the future - ensuring they are focusing on the quality of hire based purely on core competencies.

AI programs can also analyze sourcing and screening rates for protected classes, and they can ignore specific demographics to stay compliant. After the demographics are essentially redacted, the system can test for adverse impact using the 4/5ths rule discussed above.
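
The redaction step described above can be sketched very simply. This is a hypothetical illustration, not any vendor's actual implementation: protected attributes are stripped from each candidate record before a scoring model ever sees it, while the original records remain available separately for the compliance test.

```python
# Hypothetical set of protected fields to withhold from the scoring model.
PROTECTED_FIELDS = {"age", "gender", "race", "ethnicity", "date_of_birth"}

def redact(candidate: dict) -> dict:
    """Return a copy of the candidate record with protected fields removed."""
    return {k: v for k, v in candidate.items() if k not in PROTECTED_FIELDS}

candidate = {
    "name": "A. Smith",
    "gender": "F",
    "age": 42,
    "skills": ["SQL", "Python"],
}
print(redact(candidate))
# {'name': 'A. Smith', 'skills': ['SQL', 'Python']}
```

In practice, real systems have to go further than dropping explicit fields — as the Amazon example showed, proxies like "women's chess club captain" can leak the same information — but the basic separation of scoring data from demographic data is the starting point.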

The power of AI is too great to ignore

As Pew Internet live events have been discussing for years, the power of AI for advanced decision-making is too great to pass up. We need it in our organizations. The biggest decision many organizations ultimately make is around hiring. After all, it typically represents your largest outflow of money, in the form of eventual salaries.

It would stand to reason that being scientific about hiring -- which organizations don't always have a great track record of -- would make a lot of sense to hiring leaders. That logic is fueling the rise of AI (and likely avoiding another "AI Winter").

But yes, there are concerns about bias and adverse impact and we need to keep discussing those possibilities to make sure we avoid them. The tech is getting there, however.

Just last week I spoke at the TATech Leadership Summit on AI & machine learning in talent acquisition, and based on the discussion, there's no question it's going to be a force for good in recruiting.

At Oleeo, we're already pretty far along at reducing adverse impact using advanced machine learning tech.

With our Intelligent Selection technology, diversity compliance is built into the product. The playing field is evened out for diverse candidates.

Our customers can proactively limit disparate impact and stay in compliance with established EEOC selection rates. They monitor anonymized data insights on gaps in talent pools and talent pipelines. They focus attraction efforts on candidate relationship management to obtain better representation and diversity hiring, without sacrificing quality of hire.

This is the future: better decisions, more compliant decisions, and less bias in our decisions. It's what all of us in talent acquisition tech should be continually navigating towards.