Companies are increasingly feeling a kind of FOMO (that is, fear of missing out) when it comes to artificial intelligence in hiring, says Adam Vassar, head of talent science and learning design at CodeSignal, the AI-driven skills assessment platform. Just six months ago, many employers were still taking a wait-and-see approach to AI adoption, and some were playing defense against job seekers who were using AI to complete assessments and interviews.
Not anymore. Vassar says his clients are asking to pilot new programs. “They’re less afraid about being the first one to make a mistake, and more concerned about the fear of missing out and being left behind,” he said. “It’s been exciting to implement these products, see what problems we can solve, and get real data behind it. The possibilities are limitless.”
Vassar joined California-based employment attorney Heather Bussing for a From Day One webinar about fostering trust and ensuring compliance in the age of AI. Together, they outlined how the technology is being deployed, and the risks employers should be aware of.
Artificial Intelligence Enters the Recruiting Workflow
Companies are deploying AI at three key points in the hiring process, says Vassar.
First, top-of-funnel screening. Recruiters are overwhelmed by hundreds, and sometimes thousands, of applications per role, so they are automating early phone-screen interviews to manage the widest end of the funnel. Second, skills simulations. Companies are building high-fidelity simulations to test on-the-job abilities, giving hiring managers stronger signals about candidate fit.

And finally, interview training. AI is being used to train human interviewers by enforcing consistent, structured practices. “We’ve been teaching humans for years,” Vassar said. “Now we’re prompting AI to follow those rules–and they’re much better at it.” Working together, humans and AI are getting good at it.
Vassar is adamant that none of this is designed to replace recruiters, nor could it. Rather, it gives recruiters a team of AIs they can delegate to, freeing them for the higher-order work of decision-making, judgment, and relationship-building. Recruiters are overworked and under-resourced; new tools help them move faster and potentially improve hiring quality.
The Legal Questions, and What Matters
Many employers still hesitate to adopt AI because they worry about legal exposure. But the risks aren’t new, Bussing says. They’re the same ones that apply to humans: bias and discrimination.
“All the data used to train these systems is based on what humans have done, and it is going to be biased too,” she said. “We just need to keep holding AI to the same high standard.”
That means employers must regularly audit hiring outcomes–job offers, promotions, and retention rates–through a human and an AI lens. Do your outcomes reflect your applicant pool? Are certain groups over- or under-represented? Some jurisdictions, like New York City, require regular audits; Illinois requires notifying candidates when AI is used; Maryland requires notice and consent before using video analysis.
But disclosure requirements have limitations, and employers should be aware of them, lest disclosure become a box-checking exercise with no real impact on the candidate. “If you look at the power dynamics in hiring, it’s not a real choice,” Bussing said. A candidate can refuse AI screening, but that may mean giving up the chance at the job.
Ensuring Fairness With AI in Recruitment
Employers can take steps to create more equitable processes. Asking for diverse candidate slates is one step, and assembling diverse interview panels is another. “We are naturally designed to prefer people who look like us and feel like us,” Bussing said. If employers want better diversity, or simply a more diverse skill set, they need recruiters and hiring managers who know how to look for it. Beyond legal compliance, Vassar added, there’s a moral obligation.
In this spirit, CodeSignal has adopted its own rigorous fairness standards regardless of jurisdiction. To test itself, the company asks candidates to voluntarily disclose demographic data so it can evaluate outcomes by gender and other factors. “We want that data. We starve for that data,” Vassar said.
CodeSignal created its own version of the “Pepsi Challenge”: a blind comparison of AI interview outcomes versus human interview outcomes using the same rubric. Vassar expected wide gaps. “But we found alignment,” he said. In some cases, the overlap between humans and AI was about 85%. This is a good sign, he says.
“Humans still need to be in the loop,” Bussing said. “And we have to call out the reality of the situation, not pretend we can come up with a magic formula, and presto: change.” The future of hiring will hinge on disciplined oversight: humans checking the machines, machines checking the humans, and both held to the same rigorous standards. The goal isn’t to make hiring perfect, but to make it fairer and more consistent. A better outcome for both employer and candidate.
Editor’s note: From Day One thanks our partner, CodeSignal, for sponsoring this webinar.
Emily McCrary-Ruiz-Esparza is an independent journalist and From Day One contributing editor who writes about business and the world of work. Her work has appeared in the Economist, the BBC, The Washington Post, Inc., and Business Insider, among others. She is the recipient of a Virginia Press Association award for business and financial journalism. She is the host of How to Be Anything, the podcast about people with unusual jobs.
(Photo by SmileStudioAP/iStock)
The From Day One Newsletter is a monthly roundup of articles, features, and editorials on innovative ways for companies to forge stronger relationships with their employees, customers, and communities.