01-20-2022
Beginning January 1, 2023, New York City employers and employment agencies that use automated tools to make employment decisions will have to look carefully at how those tools screen out individuals. With the growing use of AI, there is concern about unconscious bias baked into the algorithms used to make hiring and promotion decisions.
The NYC law covers “automated employment decision tools,” defined as “any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence (AI), that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making” for making employment decisions impacting “natural persons.” Before using such a tool, employers must subject it to a “bias audit.” These audits must assess the tool’s “disparate impact” based on an individual’s gender, race, or ethnicity, and employers must post summaries of the results on their websites.
In addition, employers must notify candidates and employees at least 10 days in advance that an automated decision tool will be used in connection with their assessment or evaluation. They must specifically disclose the “job qualifications and characteristics that such automated employment decision tool will use” to assess the employee or candidate, and candidates may ask for an alternative selection process or accommodation. Employers that fail to comply face civil penalties of up to $500 for a first violation and between $500 and $1,500 for each subsequent violation. The law also creates a cause of action for candidates or employees.