

Navigating the Use of Artificial Intelligence in Employment Decisions

Artificial Intelligence (“AI”) tools are increasingly being used by employers to, among other things, streamline the work of recruiting and evaluating candidates for vacant positions and to monitor employee productivity. In this article, we will define artificial intelligence and describe how it may be used in the employment context. We will also explain why human resources professionals need to be familiar with the advantages and disadvantages of using AI in the employment context and, most importantly, what the HR department in your company should be doing to avoid the pitfalls that can occur when AI tools are used to make day-to-day decisions.

What is Artificial Intelligence and How is it Used in the Employment Process?

Artificial Intelligence is “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.”[1] AI “research has been defined as the field of study of intelligent agents, which refers to any system that perceives its environment and takes actions that maximize its chance of achieving its goals.”[2]

The COVID-19 pandemic caused a major shift in the labor market, resulting in a marked increase in people working at home and a similar rise in companies recruiting and hiring using online technologies. Many companies are increasingly using AI in their hiring processes, including to scan applicants’ resumes for educational and experience backgrounds, as well as other characteristics.[3] Companies also use video interviewing software that measures individual speech patterns or facial expressions to evaluate a candidate’s traits, e.g., competence, conscientiousness or interpersonal skills. In addition, some employers are using AI to scan social media pages to identify potential candidates whom the company may wish to contact with an automated solicitation or application. In many cases, employers may use AI in an effort to find and reach out to qualified diverse candidates. Employers may also use AI to monitor employee job performance based on keystrokes or administer online tests to applicants or employees to assess job skills. One advantage of using AI, according to some HR professionals, is that it allows employers to process more resumes or applications than they otherwise could if humans were doing the scanning and evaluation.[4] In addition, some employers believe that taking out the human factor may not only create more efficiency in the recruitment and employee evaluation processes but also reduce bias and increase fairness.

Why Do Companies Need to Know About AI?

While the fact that a machine is doing the screening would seem to eliminate bias and increase the opportunities for companies to locate candidates who meet their requirements or to measure employee performance, AI has been found to be only as good as the humans who program it and create the algorithms underlying the system, which can result in errors. In addition, some HR professionals believe that using AI detracts from the human element and eliminates a more interpersonal process. “[W]ith AI, machines work to replicate human decisionmaking [sic]. Often the bias in AI systems is the human behavior it emulates. When employers seek to simply automate and replicate their past hiring decisions, rather than hire based on a rigorous analysis of job-related criteria, this can perpetuate historic bias.”[5]

In using AI, employers may unknowingly be screening out candidates or evaluating employees based on protected classifications and could run the risk of violating federal, state and local laws prohibiting discrimination in hiring and other employment-related decisions. Title VII of the Civil Rights Act (“Title VII”), as well as many other fair employment practice laws, prohibits discrimination on the basis of certain characteristics, including race, national origin, ethnicity, gender and religion. The Age Discrimination in Employment Act (the “ADEA”), and similar laws, protect individuals from discrimination in employment on the basis of their age. The Americans with Disabilities Act (the “ADA”), and other similar state and municipal laws, prohibit employers from discriminating against otherwise qualified employees or applicants on the basis of disability and require employers to provide reasonable accommodations to disabled applicants or employees so that they can perform the essential functions of the job at issue. Title VII, the ADEA, the ADA and many state and local laws not only protect individuals from intentional discriminatory treatment, but they also prohibit disparate impact discrimination: even though an employment practice is neutral on its face, if it has the effect of harming older workers, disabled persons or minority individuals, it may still violate the law. Therefore, “[a]n employer relying on AI to sort through applications could inadvertently disqualify an applicant or group of applicants based on a protected trait.”[6] For example, studies have shown that people who live closer to the office are likely to be happier at work. Relying on such studies, an employer may decide to use an AI screening application that disqualifies applicants who live outside a certain geographic radius. Such an algorithm could result in inadvertent discrimination if the neighborhoods outside that radius are predominantly populated by a particular racial or ethnic group.[7] AI also “can result in bias when a company tries to hire workers based on the profiles of successful employees at the company. If the majority or all the high-performing employees who are used as examples are men, any AI algorithm evaluating what makes a person successful might unlawfully exclude women.”[8]

While there are currently no federal laws that specifically prohibit or govern the use of AI by employers, in May of 2022, the U.S. Equal Employment Opportunity Commission (“EEOC”) issued a technical guidance document discussing, among other things, “how existing ADA requirements may apply to the use of artificial intelligence in employment related decision making . . . .” In that document, the EEOC noted that among the “most common ways that an employer’s use of algorithmic decision-making tools could violate the ADA” are where the employer does not provide a reasonable accommodation that would allow a disabled job applicant or employee to be rated fairly and accurately, and where the algorithmic decision-making tool intentionally or unintentionally screens out an individual with a disability, even if that individual could perform the job at issue with a reasonable accommodation.[9] Charlotte Burrows, Chair of the EEOC, said that while the Commission “totally recognize[s] that there’s enormous potential to streamline things . . . we cannot let these tools become a high-tech path to discrimination.”[10] The EEOC’s guidance document provides the following examples, among others, of how AI can screen out otherwise qualified individuals with a disability from the hiring process: where an employer uses video games to measure a candidate’s memory, a candidate who is blind may not score well on the assessment and may be rejected, even though the candidate may have a very good memory and be able to perform the essential functions of the job; and where the employer’s test measures the ability to ignore distractions, a candidate with post-traumatic stress disorder may be rated poorly, even though, with a reasonable accommodation like noise-cancelling headphones or a quiet workstation, the candidate could be fully successful in the job.[11]

Some jurisdictions have enacted or are considering legislation to regulate the use of AI in the employment context. In 2022 alone, legislation regulating the use of AI in general was introduced in 17 states. Effective January 2020, Illinois employers have been required to notify applicants in writing and obtain their consent if AI may be used to analyze facial expressions during a job interview, and those employers using AI as a screening tool must provide applicants with detailed information about the AI tool used and how it will be used to evaluate them. The Illinois law was recently amended to require employers that rely solely on AI to determine whether an applicant will qualify for an in-person interview to gather and report certain demographic information to the state. In addition, in 2022, several states, including Colorado, Illinois, Vermont and Washington, created task forces to study AI, and California proposed legislation that would impose liability on companies or third-party agencies administering AI tools that have a discriminatory impact.

New York City’s Automated Employment Decision Tool (“AEDT”) Law, which goes into effect on January 1, 2023, requires employers to perform a bias audit of the AI tools they use. On September 23, 2022, the City released proposed rules to implement the law, initiating a comment period that ended October 24, 2022, and final rules will soon be published.[12] An AEDT is defined under the proposed rules as “any computational process derived from machine learning, statistical modeling, data analytics or artificial intelligence; that issues simplified output, including a score, classification or recommendation; and that substantially limits employment decisions being made by humans.” The City law requires bias audits, which include assessing the AEDT for disparate impact on persons in the EEOC-designated protected categories, including race, gender, national origin, and ethnicity, to be conducted by an independent auditor. The proposed rules set out two metrics for employers to use in conducting the audit: a selection rate and an impact ratio. The selection rate “is the rate at which individuals in a protected category are either selected to move forward in the hiring process or assigned a classification by the AEDT – for example, how many Asian women a resume screening tool recommends for an interview . . . . So, if 100 people applied for a nursing position, and 10 of those applicants were Asian women, and three of those Asian women were selected for an interview, the [s]election [r]ate for Asian women by the AEDT for that position would be 0.3 or 30%.”[13] The impact ratio is either the selection rate for a particular protected category divided by the selection rate of the most selected category, or the average score of all individuals in a particular protected category divided by the average score of individuals in the highest scoring category.
“So, continuing with the above example, if 10 of the 100 applicants were white men, and five of those men were selected for an interview, the [s]election rate for white men would be 0.5 or 50%. Assuming white men had the highest selection rate of any category, the impact ratio for white men would be 0.5/0.5 or 1.0. The impact ratio for Asian women would be 0.3/0.5 or 0.6.”[14] The New York City law also requires that the results of the bias audit be made “publicly available on the careers or job section of [the employer’s] website in a clear and conspicuous manner,” and employers must include the date of the audit, the distribution date of the AI tool, and the selection rates and impact ratios for all categories. The proposed rules also require employers to provide notice at least 10 business days before the use of the AEDT to individuals who reside in New York City. In addition, those individuals must be advised of their opportunity to request an alternative selection process or accommodation, of the job qualifications or characteristics that the AEDT will use in the assessment, of the employer’s retention policy, and of the type and source of the data collected for the AEDT.
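The selection-rate and impact-ratio arithmetic above can be sketched in a few lines of Python. The category names and counts below are the hypothetical numbers from the example, not real audit data:

```python
# Sketch of the NYC AEDT bias-audit metrics, using the hypothetical
# numbers above: 100 applicants, 10 Asian women (3 selected),
# 10 white men (5 selected).

def selection_rate(selected: int, applicants: int) -> float:
    """Rate at which individuals in a category are selected by the AEDT."""
    return selected / applicants

rates = {
    "Asian women": selection_rate(3, 10),  # 0.3, i.e. 30%
    "White men": selection_rate(5, 10),    # 0.5, i.e. 50%
}

# Impact ratio: each category's selection rate divided by the
# selection rate of the most selected category.
highest = max(rates.values())
impact_ratios = {cat: rate / highest for cat, rate in rates.items()}

print(impact_ratios)  # {'Asian women': 0.6, 'White men': 1.0}
```

The same structure works for the score-based variant of the impact ratio: replace each selection rate with the category's average score and divide by the highest-scoring category's average.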

What Should Our Company Be Doing Going Forward?

“The answer to these concerns is not to simply return to human decisionmaking [sic],” which, as we know, has been found in several cases to be flawed and to perpetuate discrimination.[15] “Technology can help employers learn when and how bias occurs. . .. Algorithms can play a powerful role in improving decisionmaking by identifying job-related criteria and behaviors, as well as patterns of hidden bias.”[16]

Going forward, employers using AI in making employment-related decisions should consider the following steps:

  1. Be mindful of and comply with any legislation governing such use in the jurisdictions in which they operate or in which their employees reside.
  2. Regardless of whether existing legislation applies to the company and its decisions with regard to employees or applicants, employers should look at the demographics of the pool that the AI is trying to replicate and ensure they do not inadvertently exclude individuals in a protected classification. Further, employers should examine the algorithms underlying their AI tools, ensuring that they are tailored to the skills and abilities required of the specific position. A company may want to ask the software vendor it is using to create the AI tool whether the tool was developed with individuals with disabilities in mind; an employer developing its own tool should engage an expert to do the same.
  3. If using a company to create the AI tool, employers should include in the contract negotiations a clause that requires the AI vendor, among other things, to defend and indemnify the employer from liability for discriminatory impact and require cooperation by the vendor in the event the AI tool is challenged.
  4. In addition, employers should conduct periodic audits, perhaps performed by independent auditors, to determine if the algorithms result in the disqualification of applicants or disadvantage employees of a particular protected classification. Of course, if bias is discovered in the use of the AI system at issue, the algorithm should be changed to correct that defect.
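As an illustration of the kind of periodic audit check described in step 4, the sketch below flags any category whose impact ratio falls below the EEOC’s “four-fifths” (80%) guideline, a common heuristic for identifying potential disparate impact. The threshold, function name, and audit numbers here are illustrative assumptions, not requirements of any particular law:

```python
# Illustrative periodic-audit check: flag categories whose impact ratio
# (selection rate divided by the highest category's selection rate) falls
# below the EEOC's "four-fifths" (80%) guideline. Data is hypothetical.

FOUR_FIFTHS = 0.8

def flag_disparate_impact(selection_rates: dict[str, float]) -> list[str]:
    """Return the categories whose impact ratio is below the 80% threshold."""
    highest = max(selection_rates.values())
    return [cat for cat, rate in selection_rates.items()
            if rate / highest < FOUR_FIFTHS]

# Hypothetical quarterly audit data: selection rate per demographic category.
audit = {"Category A": 0.50, "Category B": 0.45, "Category C": 0.30}
print(flag_disparate_impact(audit))  # ['Category C']
```

A flagged category is a signal for closer review of the underlying algorithm, not by itself proof of unlawful discrimination.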

When implementing any new technology to hire or evaluate applicants and employees, employers should be aware of any relevant laws and regulations that govern the use of AI in the workplace and take steps to eliminate any potential impact of such technology on their compliance with Title VII, the ADEA, the ADA, as well as any other applicable fair employment practice laws.


[1] National Artificial Intelligence Initiative Act of 2020, Section 5002(3).


[3] L. Barrett, Georgetown Journal on Poverty Law & Policy, Using Artificial Intelligence to Reimagine Enforcement of Workplace Discrimination Laws, May 10, 2021.

[4] A. Rogers and M. Reed, American Bar Association Newsletter, Discrimination in the Age of Artificial Intelligence, Summer 2021.

[5] J. Yang, Urban Institute, Three Ways AI Can Discriminate in Hiring and Three Ways Forward, February 12, 2020.

[6] Id.

[7] A. Smith, J.D., HR Daily Newsletter, Society of Human Resources Management, AI: Discriminatory Data In, Discrimination Out, December 12, 2019.

[8] Id.

[9] The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees, EEOC-NVTA-2022-2, May 12, 2022.

[10] The NPR Daily Newsletter, U.S. Warns of Discrimination in Using Artificial Intelligence to Screen Job Candidates, May 12, 2022.

[11] Id.

[12] NYC Dep’t Consumer & Worker Prot., Notice of Public Hearing and Opportunity to Comment on Proposed Rules.

[13] Id.

[14] Id.

[15] See J. Yang, supra.

[16] Id.