NYC is regulating the use of AI in hiring. Here’s what it all means – Employee Benefit News

From resume screeners to virtual interviews, technology has never been more critical to the hiring process. But is it working for everyone? 

Beginning on January 1, 2023, New York City employers will face newly implemented restrictions on the use of artificial intelligence and machine-learning tools in recruiting. The new law limits employers' use of AI tools that can substitute for human decisions about prospective candidates, and it mandates an annual "bias audit" whose results companies must publicly disclose.

“This is really important legislation that is going to force a greater degree of transparency in the way that AI and algorithm-based approaches are being used in business,” says John Winner, CEO of Kizen, a sales and marketing platform developer. “As AI and new technologies are being used more, they need to be explained and understood by the people that are using them.”

Read more: California law will require companies to post salary ranges on job listings

The concern, Winner explains, is that an unchecked algorithm can easily begin to produce biased results. For example, if a company limits an AI tool's search parameters to a certain area code, it could unintentionally create a racial disparity, excluding candidates from the hiring pool; employers would need to manually add information that enables the tool to account for ethnic diversity. 

“This forces people who use algorithms or machine learning to do the second stage of work, which is to explain what it is that influenced that output or score, that ‘yes’ or ‘no’ [to an applicant],” Winner says. “A bias audit will expose unintended bias, created if the algorithm is looking at certain pieces of data that it shouldn’t be.” 

The law defines the term “bias audit” as an “impartial evaluation by an independent auditor,” according to Jenn Betts, office managing shareholder of the Philadelphia branch of law firm Ogletree Deakins. The bias audits must include, but aren’t limited to, assessing the tool’s impact based on race, ethnicity and sex.  
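To make the idea concrete, one metric an auditor might examine when assessing a tool's impact by race, ethnicity and sex is a selection-rate comparison across groups. The sketch below is a hypothetical illustration, not the law's prescribed methodology: it computes each group's selection rate relative to the most-selected group, a ratio often checked against the "four-fifths" benchmark used in adverse-impact analysis.

```python
def impact_ratios(outcomes):
    """Compute each group's selection rate relative to the most-selected group.

    outcomes: dict mapping group name -> (number selected, total applicants).
    Returns a dict mapping group name -> impact ratio (1.0 for the top group).
    """
    rates = {group: selected / total for group, (selected, total) in outcomes.items()}
    top_rate = max(rates.values())
    return {group: rate / top_rate for group, rate in rates.items()}

# Illustrative numbers only: group_b's ratio of 0.6 falls below the
# 0.8 "four-fifths" benchmark, flagging a disparity worth investigating.
ratios = impact_ratios({
    "group_a": (40, 100),  # 40% selected
    "group_b": (24, 100),  # 24% selected
})
```

A real audit under the law would be performed by an independent auditor and cover the specific categories the regulations require; this only shows the kind of arithmetic such a disparity check involves.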

“The New York City law is groundbreaking,” Betts says. “It is the first in the country to impose specific process requirements on a wide array of automated employment decision-making tools. Previously, certain states like Illinois and Maryland had regulated niche uses of artificial intelligence, but no jurisdiction has thus far attempted the kind of broad law NYC will soon have in place.”

Read more: Can recruiting be like online dating? A new platform is testing it out

Though the law does not go into effect until January, there are many ways employers can prepare now, Betts says: evaluating which of their tools may be implicated by the law, determining whether those tools have already been audited and, if not, taking steps to secure an audit. 

For organizations that are not based in New York but have offices or clientele there, these regulations will still apply, Winner explains — similar to some of the consumer-protection laws California has implemented, which ended up having national reach and impact. 

“Regulations like this create such a massive positive ripple effect,” Winner says. “When you have a major city like New York saying, we need to make sure that this technology is transparent and that it’s doing what we want it to do, it takes our society to the next level.”
