

AI Recruiting · Best Practices · Talent Leadership

Ethics and AI in Recruiting: How to Stay Compliant and Efficient


Melissa Suzuno

HR Insights Writer

Posted on

October 17, 2023

There’s clearly a lot of hype — and a lot of confusion — when it comes to artificial intelligence (AI), especially in relation to talent acquisition and hiring decisions. Recent legislation in places like New York City is also getting a lot of press and causing HR professionals to worry about whether they’re at risk of breaking the law. 

The promise of this technology is speed, efficiency, and a seamless hiring process. The downside is that algorithmic bias can lead to unintended consequences. You might have heard, for example, how Amazon unintentionally trained its AI to reject female candidates. As Reuters put it, “In effect, Amazon’s system taught itself that male candidates were preferable.” This is just one well-publicized case, but you can imagine how similarly biased data sets could have implications for candidates from other typically underrepresented groups.

Other AI, like facial recognition technology, has received similar scrutiny. FastCompany reported that HireVue, a video interview and assessment company, removed its facial analysis component after finding that it had less correlation to job performance than other parts of the assessment. And several of the new laws that regulate AI are specific to the use of facial recognition technology.

When evaluating potential AI use cases for your own company, it can be helpful to consider which broad category each one fits into: automating manual tasks to save time, or influencing hiring decisions. For routine, time-consuming tasks like scheduling interviews, AI can help recruiters get the work done in a fraction of the time, freeing you to focus on more strategic work. But when it comes to influencing actual hiring decisions, algorithmic bias can come into play, and you may be putting your company at risk.

If you’re feeling overwhelmed by all the information and wondering about the best way for your organization to move forward, you’re not alone. To help you consider your own approach to adopting AI, we hosted a discussion, “Embracing the AI Shift: Why AI is the Key to Unlocking Recruiting Efficiency,” at the Gem Talent Summit. We invited Chad Sowash and Joel Cheesman, co-hosts of The Chad & Cheese Podcast, to moderate a panel with Dr. Mona Sloane, Research Professor at the University of Virginia, and Keith Sonderling, Commissioner of the United States Equal Employment Opportunity Commission.


We’ll be sharing the highlights and hot takes from their conversation here. You can also view the full conversation at the end of this article.

What exactly do we mean by AI?

To kick off the conversation, Dr. Sloane shared that there’s no universal standard definition of the term “artificial intelligence” or “AI,” but she suggested the following: computer systems that use predictive analytics to support decision-making, or to put it even more simply, “an automated decision tool.” 

Commissioner Sonderling added that it’s worth distinguishing between the types of AI that have been around for many years, like machine learning, and newer generative AI. We’ve all likely come across recommendations based on machine learning (think of any time you stream a movie or music, or an online store helpfully recommends other items you might want to add to your cart). But newer generative AI (like ChatGPT) has broader implications for the workforce. Because of the possibility of workforce displacement, generative AI is prompting greater discussion of the ethical implications of this technology.

What are some of the laws being passed?

Commissioner Sonderling admits that it takes time to make changes at the federal level (where his organization, the Equal Employment Opportunity Commission, operates). This is why we’re seeing states and even cities rush in to regulate AI. He pointed out a few of the recent laws that have been passed, including:

  • Maryland is prohibiting employers from using facial recognition services during interviews without the candidate’s consent.

  • Illinois is also regulating employers’ use of AI analysis of candidates’ video interviews.

  • New York City is regulating the use of automated employment decision tools by requiring a bias audit and requiring that candidates be notified when this type of tool is used. (A simplified sketch of what such an audit might measure follows this list.)
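
To make the bias-audit idea a little more concrete, here is a minimal sketch of one metric such an audit might report: each group’s selection rate compared against the highest group’s rate (an “impact ratio”). The group names, counts, and the four-fifths threshold below are illustrative assumptions for this sketch, not requirements quoted from any of the laws above.

    # Hypothetical, simplified bias-audit metric: selection-rate impact ratios.
    # Group labels and counts are made up for illustration only.

    def selection_rate(selected, applicants):
        """Fraction of applicants in a group who were selected."""
        return selected / applicants if applicants else 0.0

    def impact_ratios(groups):
        """Each group's selection rate divided by the highest group's rate."""
        rates = {name: selection_rate(sel, total) for name, (sel, total) in groups.items()}
        top = max(rates.values(), default=0.0)
        return {name: (rate / top if top else 0.0) for name, rate in rates.items()}

    if __name__ == "__main__":
        # (selected, total applicants) per hypothetical group
        data = {"group_a": (48, 120), "group_b": (30, 100)}
        for name, ratio in impact_ratios(data).items():
            # The four-fifths (80%) rule is a common screening heuristic,
            # used here only as an illustrative threshold.
            flag = "flag for review" if ratio < 0.8 else "ok"
            print(f"{name}: impact ratio {ratio:.2f} ({flag})")

A real audit is typically more involved than this, but the impact ratio above captures the basic comparison these audits center on: whether an automated tool selects candidates from different groups at meaningfully different rates.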

As HR practitioners and vendors, what can we expect?

Whether or not you’re in one of the states or cities that has passed laws regarding AI in recruiting, Commissioner Sonderling reminds us that federal law still applies no matter where you’re based in the US. Following only the local laws may give you a false sense of security. For example, local laws may cover only some protected categories, like race or sex, while federal law also covers categories like disability and religion. So the big takeaway from Commissioner Sonderling is to ensure that your company remains compliant with both local and federal regulations.

Dr. Sloane observes that HR practitioners are sometimes using these tools in ways that are different from what the vendors intended. Her big piece of advice? Know exactly how AI is coming into decision-making at your organization. She adds, “AI doesn’t fully replace decision-making, especially in low-volume HR. It just reconfigures it.” 

Final thoughts and hot takes

While HR is generally seen as slow to adapt, Dr. Sloane says this is an instance where HR is moving quickly. With HR professionals’ responsibilities expanding and fewer team members to take them on, AI promises to save time and give HR professionals more agency in their roles.

HR analyst Kyle Lagunas piped in from the audience to say it was the first time he’d ever heard someone use the word “fast” to describe HR’s approach to adopting technology.

Commissioner Sonderling reminded us that ultimately, employers are the ones who are liable in discrimination cases — not the software vendors. This may change in the future, but currently the responsibility falls solely on employers to ensure they’re not discriminating against candidates based on protected categories. To help you evaluate the risks and benefits of AI, the commissioner recommends looking at it on a case-by-case basis. With any tool you’re using, ask how it’s designed and how it’s implemented: “If it’s properly designed and carefully implemented, it can actually do what it’s supposed to do. If it’s not properly designed or implemented, for each use of AI, there’s potential risk and harm.”

To close out the discussion, Chad recommended that anyone with decision-making power — CHROs, Heads of People, etc. — bring in experts to help you make decisions about which software to use and how to stay compliant.

If you registered for Gem Talent Summit already, click this link. If you haven't, you can register for the on-demand session here.
