
AI in the Workplace: Uses and Risks

In the last two years, the use of artificial intelligence (AI) has exploded across platforms and industries. While many employers have been implementing AI for years, the breadth of AI applications has grown enormously. What has not appeared is any comprehensive federal legislation or rulemaking on permissible uses of AI, leaving employers with little guidance on how and when AI is used in their workplaces. While the advantages and efficiencies of AI can be significant, the risks and pitfalls of unmonitored AI can be sizeable and sometimes costly. What do employers need to be aware of when implementing AI in the human resources function? Some tips and guidelines are outlined below.

Uses and Risks of AI in the Workplace

AI refers to computer systems that are capable of performing complex tasks that historically only a human could do, such as reasoning, making decisions or solving problems.[1]

AI applications can increase efficiency and improve outcomes in areas such as:

  • Screening resumes and creating interview questions.
  • Streamlining the onboarding function.
  • Drafting or updating policies and job descriptions.
  • Summarizing internal data and feedback on employees to create performance summaries.
  • Researching and summarizing laws and regulations.
  • Recording and summarizing meeting or interview notes.
  • Parsing large data sets such as employee engagement surveys and salary surveys.
  • Creating content for communications and presentations.
  • And much more.

All of these applications carry risk, some apparent and some not, that must be considered to maximize the benefit of AI and minimize the likelihood of challenges by employees or third parties.

Risks of a Biased Algorithm

AI software must be “taught” to perform the requested task. That teaching is done by training an algorithm on historical data, often referred to as training data, that is relevant to the task. Once trained on an adequate amount of historical data, the algorithm can evaluate new data and make the relevant decision. Here is where the proverbial “garbage in, garbage out” problem can occur: if the algorithm is trained on data that is biased or too narrowly focused, it will reproduce that bias. For example, a resume screening algorithm trained only on data associated with white males might penalize, or fail to credit, words more commonly found in the resumes of other groups.
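To make the “garbage in, garbage out” problem concrete, below is a minimal, hypothetical Python sketch (all names and data are invented for illustration, not drawn from any real system). It mimics a keyword screener whose weights are learned solely from the resumes of past hires; because that historical pool is homogeneous, vocabulary common in other groups’ resumes earns no weight, and an equally qualified candidate scores lower.

    from collections import Counter

    # Hypothetical training data: resumes of past hires drawn from a
    # narrow, homogeneous pool.
    past_hires = [
        "varsity football captain java developer",
        "fraternity treasurer java developer",
        "golf team captain java developer",
    ]

    # "Training": weight each word by how often it appears among past hires.
    weights = Counter(word for resume in past_hires for word in resume.split())

    def score(resume: str) -> int:
        # Sum the learned keyword weights; words never seen in the
        # training data contribute nothing.
        return sum(weights[word] for word in resume.split())

    # Two equally qualified applicants; only extracurricular vocabulary differs.
    print(score("varsity football captain java developer"))  # 10
    print(score("women's soccer captain java developer"))    # 8

No one programmed this screener to prefer any group; the disparity comes entirely from what the training data did and did not contain.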

Two recent cases illustrate the risk associated with a biased algorithm. In EEOC v. iTutorGroup, Inc. (E.D.N.Y. 2023), the EEOC alleged that iTutorGroup used application software that screened out older applicants (females over 55 and males over 60) in violation of the Age Discrimination in Employment Act. The case was settled, and a five-year consent decree was entered in August 2023. The consent decree imposed several obligations on the company, including adopting EEOC-approved AI policies and retaining a third party to conduct training for all management and any non-management employees involved in screening, hiring, or supervising applicants or employees.

Mobley v. Workday, Inc., 4:23-cv-00770-YGR (N.D. Cal. Jul. 25, 2023), is the first proposed class action to challenge the use of AI screening software. Mobley alleged that Workday’s AI applicant screening tool discriminates against applicants based on race, age, and disability. The case is significant in that the presiding judge has allowed the plaintiff to proceed on the theory that Workday is an “agent” of the employer and therefore potentially directly liable for discriminatory decisions made by the screening software.

Risks of AI Mistakes or Inaccuracies

A second risk involves the accuracy of information, specifically AI “hallucinations”: sometimes the algorithm simply makes things up. This is allegedly what occurred in Walters v. OpenAI LLC, Ga. Super. Ct., No. 23-A-04860-2. Walters, a host of Armed American Radio, claimed that ChatGPT (owned by OpenAI) defamed him when it produced to a third party a fabricated legal complaint accusing Walters of embezzling money from the Second Amendment Foundation. Walters said he had never been accused of embezzlement and had never worked for the group. The trial court denied OpenAI’s motion to dismiss in January 2024.

Risks of Non-Compliance with Laws or Regulations

There is limited existing regulation in the United States dealing specifically with AI. The federal government has issued some guidance documents, and legislation introduced in Congress has made little progress. Employers will need to closely monitor state, federal, and international developments in this area to ensure compliance with applicable authorities.

Legislation, Litigation and Regulations Governing AI 

There is no United States federal legislation specifically governing the use of AI in the workplace. However, the Equal Employment Opportunity Commission (EEOC), through guidance documents[2], has stated that a decision or practice that violates the Americans with Disabilities Act (ADA) or Title VII is still unlawful even if produced by software that incorporates algorithmic decision-making or artificial intelligence. The guidance documents specifically note the continued requirement of reasonable accommodation in the application process and state that the EEOC will assess whether AI tools used in the selection process have a disparate impact as defined in Title VII.

In 2023, President Biden issued an executive order regarding AI.[3] As directed by this executive order, the Department of Labor has issued several guidance documents regarding AI best practices.[4] While instructive, these guidance documents carry no enforcement authority.

While a majority of states have introduced AI legislation, only a few bills have actually passed and been signed into law. In May 2024, Colorado enacted the first comprehensive state legislation governing AI.[5] The Colorado Act has a broad scope across many AI uses, including employment, and places responsibility for compliance on both the developers and deployers of AI systems. The Act takes effect on February 1, 2026, and the Colorado Attorney General has exclusive enforcement authority to address violations.

A New York City law, in effect since 2023, regulates the use of automated employment decision tools (AEDTs). Among other things, the law requires employers that use an AEDT to: have an independent third party audit the technology for bias; publish the results of the audit; and give employees and job candidates advance notice before subjecting them to the technology. Illinois’s Artificial Intelligence Video Interview Act, in effect since 2020, imposes certain requirements on employers that use AI to analyze video interviews of applicants. A Maryland law enacted in 2020 requires employers that use facial recognition services during an interview to obtain the applicant’s written informed consent.

Some state governors have issued executive orders regarding the use of AI. These executive orders address a range of concerns including the ethical development and use of AI by state agencies as well as attracting AI businesses to the particular state.

There are very few reported cases involving allegations of AI misuse in the employment context. Other than iTutorGroup and Mobley, discussed above, most cases center on copyright infringement or trade secret misappropriation. There is also a growing body of caselaw and standing court orders governing the use (or misuse) of AI in the preparation of legal filings.

Unlike the United States, the European Union (EU) has promulgated a legal framework governing AI. The EU AI Act went into effect in August 2024, with enforcement scheduled to begin in August 2026. The Act governs both providers and deployers of AI systems that are used in the EU, regardless of where the provider or deployer is based. The Act imposes greater compliance requirements on AI systems that pose an “unacceptable” or “high” risk to safety or that are intrusive or discriminatory. AI systems that employers use for recruitment or selection, as well as those that make employment decisions like promotion or termination, are considered “high risk.” Employers who operate AI systems “intended to be used” in the EU will need to meet the Act’s compliance obligations, including notification to those subject to the system, human oversight and monitoring of the system, preservation of logs generated by the system, and compliance with any applicable data privacy laws.[6]

What Can Employers Do? 

In the absence of a clear and comprehensive legal framework, what can employers do?

  • Monitor Legislative and Regulatory Developments: This should include an assessment of existing laws like the Colorado Act and the EU AI Act. An increasingly global marketplace and applicant pool create the possibility that even a U.S.-based employer has compliance obligations under these laws.
  • Develop Policies and Procedures to Govern AI Use: An AI policy should clarify which areas of operations and which employees may use AI, and for what purposes; consider also stating which employees may not use AI in their work. The policy should ensure human review and oversight and require that all relevant employees are trained on the use of AI tools in accordance with the policy. To mitigate bias that could arise from AI tools, maintain human review of AI-assisted decision-making and implement disclosure and informed consent when necessary and appropriate.
  • Exercise Diligence with AI Vendors: Be careful and thorough when purchasing software or engaging an AI vendor. Ask whether the vendor routinely audits its systems, whether it has identified any discriminatory outcomes, and what actions it has taken in response to such findings. Ask specific questions about its range of clients and request references. If engaging a company to create the AI tool, employers should negotiate a contract clause that requires the AI vendor, among other things, to defend and indemnify the employer against liability for discriminatory impact, and that requires the vendor’s cooperation in the event the AI tool is challenged.
  • Conduct Audits: Employers should conduct periodic audits, perhaps performed by independent auditors, to determine whether the algorithms disqualify applicants or disadvantage employees of a particular protected classification; a sketch of one such check follows this list. If bias is discovered in the AI system at issue, the algorithm should be corrected.
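
As an illustration of what such an audit might look for, below is a minimal Python sketch of the “four-fifths rule,” a longstanding EEOC rule of thumb for flagging possible disparate impact: a selection rate for any group that is less than 80% of the highest group’s rate warrants review. All figures are hypothetical, and a flagged ratio is a signal for deeper analysis, not proof of unlawful discrimination.

    # Hypothetical audit data: (selected, applicants) per demographic group,
    # e.g., pulled from an AI screening tool's logs.
    outcomes = {
        "group_a": (48, 100),
        "group_b": (30, 100),
    }

    # Selection rate = share of applicants the tool advanced.
    rates = {group: sel / apps for group, (sel, apps) in outcomes.items()}
    highest = max(rates.values())

    # Four-fifths rule: flag any group below 80% of the highest rate.
    for group, rate in rates.items():
        ratio = rate / highest
        status = "REVIEW" if ratio < 0.8 else "ok"
        print(f"{group}: rate={rate:.2f}, ratio={ratio:.2f} -> {status}")

Here group_b’s selection rate (0.30) is roughly 62% of group_a’s (0.48), so the tool’s outcomes for group_b would be flagged for closer review.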

As with any rapidly evolving technology that affects human resources processes and decision-making, employers would be well advised to stay abreast of unfolding legal regulation and guidance and to ensure that their policies and procedures keep pace with technological developments.


 

[1] “What is Artificial Intelligence? Definitions, Usage, and Types,” Coursera Staff, April 3, 2024, https://www.coursera.org/articles/what-is-artificial-intelligence.

[2] The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees, May 12, 2022; Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964, May 18, 2023.

[3] Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, October 30, 2023.

[4] Department of Labor, “Artificial Intelligence and Worker Well-being: Principles and Best Practices for Developers and Employers”; Office of Disability Employment Policy, “AI & Inclusive Hiring Framework,” September 24, 2024; Wage and Hour Division Field Assistance Bulletin No. 2024-1, “Artificial Intelligence and Automated Systems in the Workplace under the Fair Labor Standards Act and Other Federal Labor Standards,” April 29, 2024.

[5] Colorado SB 24-205 “Concerning Consumer Protections in Interactions with Artificial Intelligence Systems.”

[6] The EU’s General Data Protection Regulation, a strict privacy and security law, imposes obligations on all organizations, regardless of location, that target or collect data related to people in the EU. Significant fines can be assessed for violations.