AI regulation in the field of recruitment and selection in the workplace

Joining Illinois and Maryland, on November 10, 2021, the New York City Council approved a measure, Int. 1894-2020A (the "Bill"), regulating employers' use of "automated employment decision tools," with the aim of limiting inequalities in hiring and promotion. The bill, which awaits Mayor de Blasio's signature, is due to take effect on January 1, 2023. If the mayor neither signs nor vetoes the bill within thirty days of council approval (i.e., no later than December 10), it will become law.

Automated employment decision tools

The bill defines an "automated employment decision tool" as "any computational process derived from machine learning, statistical modeling, data analytics, or artificial intelligence" that scores, classifies, or otherwise makes a recommendation that is used to substantially assist or replace discretionary decision making. The bill exempts automated tools that do not materially impact individuals, such as a junk-mail filter, firewall, calculator, spreadsheet, database, dataset, or other compilation of data. It is unclear whether passive recruiting tools, such as LinkedIn's job suggestions, are covered by the bill.

The bill only applies to decisions to screen candidates for employment or employees for promotion in New York City, and does not apply to other employment-related decisions.

Employer requirements under the AI bill

The bill prohibits employers and employment agencies from using automated employment decision tools to screen candidates or employees for employment decisions unless: (1) the tool has undergone an independent bias audit no more than one year before its use; and (2) a summary of the audit results, along with the distribution date of the tool to which the audit applies, has been made publicly available on the employer's or employment agency's website. The bill is unclear as to whether and when the bias audit must be updated, or whether a new bias audit must be obtained before each "use" by the employer.

The bill defines an acceptable "bias audit" as an impartial evaluation conducted by an independent auditor that includes testing the tool to assess its disparate impact on persons in any federal EEO-1 "component 1 category," i.e., whether the tool would have a disparate impact based on race, ethnicity, or sex.

In addition, New York City employers using automated hiring decision-making tools must notify each employee or candidate residing in New York City of the following:

  • at least ten business days before such use, that the tool will be used to assess or evaluate the individual, and allow a candidate to request an alternative process or accommodation;

  • at least ten business days before such use, the job qualifications and characteristics that the tool will use to assess or evaluate the individual; and

  • if not disclosed on the employer's website, and within 30 days of a written request from a candidate or employee, information about the type of data collected for the tool and the source of such data.

Although the bill allows candidates to request an "alternative process or accommodation," it is silent as to what obligations, if any, an employer has upon receiving such a request.

Employers or employment agencies that fail to comply with any of the bill's requirements may be subject to enforcement by the New York City Corporation Counsel or the Department of Consumer Affairs, with a fine of up to $500 for an initial violation and $500 to $1,500 for each subsequent violation.

In anticipation of the bill's likely January 1, 2023 effective date, employers that use automated employment decision tools in their hiring and promotion practices in New York City should take steps to ensure compliance. These include:

  • ensuring that any automated employment decision tool has undergone an independent "bias audit" no more than one year before the tool's use, specifically to determine whether the tool would have a disparate impact based on race, ethnicity, or sex;

  • publishing the audit results on the employer’s public website;

  • developing a method for informing employees and candidates about the use of the tool and about the qualifications and characteristics that the tool will assess or evaluate; and

  • carefully designing job assessments to ensure that only key knowledge, skills, and abilities are considered, and weighing potential sources of disparate outcomes.

Employers seeking to implement AI tools in a responsible and ethical manner when recruiting and selecting talent should not limit their efforts to compliance with this bill alone. While not required under the New York City measure, employers should also consider the accessibility of such tools for people with disabilities and others. They should likewise consider whether and how to use passive recruiting tools that may not comply with the bill's candidate-notice requirements.

Employers may also wish to consult with counsel before implementing any digital employment or promotion platform to ensure compliance with this and other recently enacted employment laws addressing AI.

* Kamil Gajda, a Law Clerk – Admission Pending (not admitted to the practice of law) in the firm's New York office, contributed to the preparation of this post.

© 2021 Epstein Becker & Green, P.C. All rights reserved. National Law Review, Volume XI, Number 322
