The pace of innovation is astounding—jarring, really. Sure, “Artificial Intelligence” has been a topic of public conversation for decades. Many of us remember when Deep Blue beat Garry Kasparov at chess in 1997, when Watson beat champions Ken Jennings and Brad Rutter at Jeopardy in 2011, and when AlphaGo beat Lee Sedol at Go in 2016. Those early generations of AI were domain-specific and, while groundbreaking, had no real applications outside of those specific mind sports. A lot has changed since then, and new advanced forms of AI, like ChatGPT and Google Bard, are seemingly everywhere, including in the workplace. Employers and employees in all fields are increasingly asking about these Generative AI tools: whether and how they can use them in their work, and whether doing so raises any new legal concerns.
What is Generative AI and why should employers care?
Less than a year after their public release, Generative AI tools like ChatGPT (released by OpenAI in November 2022) and Bard (released by Google in March 2023) are already transforming industry and society. Unlike Deep Blue, AlphaGo, or Watson, these Generative AI tools are not domain-specific. They use large language models trained on vast amounts of data to produce text in response to prompts made in plain language.
Many thought leaders predict that Generative AI and its derivatives will be more disruptive than the internet and the iPhone. If you doubt those predictions, you either know something we do not, or have yet to spend much time interacting with these tools. Setting aside questions about the underlying datasets from which AI learns and whether these models understand the language they manipulate (see John Searle’s Chinese Room Argument and Cal Newport’s clearheaded New Yorker article), they are exceptionally powerful. In March 2023, GPT-4, the model underlying the latest version of ChatGPT, passed the bar exam, scoring in the top 10% of human examinees. True, some of the hype over ChatGPT has died down. But venture capital-backed programmers are rushing to develop APIs, Auto-GPTs, and AI Agents to take Generative AI to the next level. New enterprise applications are only a year or two away from making the issues addressed in this article exceedingly relevant.
Despite high-profile calls for a pause and for regulation, including from OpenAI’s CEO in testimony to Congress in May 2023, the pressure to capture these tools’ vast economic benefits puts employers in uncharted legal territory. Just ask ChatGPT.
We prompted ChatGPT with the following:
“Draft an article from the perspective of a labor and employment attorney explaining how advanced AI like ChatGPT will affect labor and employment laws.”
“Summarize the prior response in 250 words.”
This is what ChatGPT produced in less than 20 seconds:
Title: The Impact of Advanced AI like ChatGPT on Labor and Employment Laws: Insights from a Labor and Employment Attorney
Introduction: Advanced AI technologies, such as ChatGPT, are poised to have a significant impact on labor and employment laws. In this article, a labor and employment attorney examines the potential effects of AI on the legal landscape.
1. Automation and Job Transformation:
AI has the potential to automate tasks and raise concerns about job displacement. Labor laws must adapt by providing support to affected workers through retraining programs and potential revisions to employment classifications.
2. Algorithmic Bias and Fairness:
AI systems, including ChatGPT, can perpetuate biases and discrimination. Labor laws need to address algorithmic bias by ensuring transparency, accountability, and fairness in AI-driven decision-making. Regular audits and remedies should be established to rectify potential discriminatory outcomes.
3. Workplace Surveillance and Privacy:
The integration of AI enables extensive workplace monitoring, raising privacy concerns. Labor laws must strike a balance between employers’ monitoring needs and employees’ right to privacy. Regulations should govern data collection, use, retention, employee consent, and safeguards against misuse of AI-driven monitoring systems.
4. Safety and Liability:
AI integration raises questions about workplace safety and liability. Labor laws must address responsibility in accidents involving AI systems, establishing regulations and liability frameworks for accountability.
Conclusion: The integration of advanced AI systems like ChatGPT has profound implications for labor and employment laws. Adapting legal frameworks to address job displacement, algorithmic bias, privacy concerns, and safety considerations is crucial. Collaboration between policymakers, legal professionals, and industry stakeholders is essential to shape a legal landscape that balances the benefits of AI with the protection of workers’ rights. Proactive measures will pave the way for a fair and inclusive future of work that navigates the evolving relationship between AI and labor.
While short on depth and specificity, it gets the gist generally right. And you certainly cannot beat the speed. We would be fooling ourselves if we did not admit that we were tempted, for just a moment, to publish that answer as an article and move on to other work. But even ChatGPT recognizes the importance of many unanswered questions about fairness, privacy, safety, and liability as it transforms the workplace.
For example, could an employee’s entry of trade secret information into ChatGPT somehow make that information available to other users? Large language models are trained on user input and interactions, after all. The answer is unclear, but because the platform retains user history, a compromised account could reveal it. Could an employee’s use of Generative AI to create content unwittingly infringe a copyright or trademark and subject the employer to infringement liability? It has already happened. The takeaway here is clear: employers should proceed with caution.
Thus, there are good reasons—for now, and the foreseeable future—to prohibit the use of Generative AI in the workplace by default and to permit their use only after establishing limited-use policies and training employees accordingly. To that end, although state and federal governments have yet to issue guidance to employers regarding these Generative AI tools, the National Institute of Standards and Technology (NIST), the Equal Employment Opportunity Commission (EEOC), and the National Labor Relations Board (NLRB) have issued guidance regarding AI-enabled Human Resources tools, which raise similar concerns.
Unlawful Discrimination: “The AI told me to!” is no excuse.
The EEOC has issued forceful guidance against AI bias. According to NIST, an agency within the Department of Commerce, there are three dominant categories of AI bias: systemic, human, and statistical or computational. See NIST Special Publication 1270. Systemic biases such as institutional racism and sexism are present in the datasets used to train AI. Group and individual human biases are omnipresent across the AI lifecycle. And a host of statistical and computational biases stem from errors that result when a data sample is not representative of the population. See id., Figure 2. NIST advocates for adopting socio-technical standards for identifying and managing bias in AI. But these standards are just emerging, and some companies are deploying AI tools without robust vetting. Indeed, OpenAI and Google released ChatGPT and Bard as “experiments,” with disclaimers that they may produce inaccurate information. They do.
Thus, it is no surprise that the EEOC made clear in May 2023 technical assistance that Advanced AIs are “tools.” Suppose the use of an AI-enabled Human Resources tool has an adverse impact on individuals on the basis of a protected characteristic or a combination of characteristics. In that case, the use of the tool will violate Title VII unless the employer can show that such use is job-related and consistent with business necessity. Employers are responsible for their selection decisions (such as hiring, promoting, and firing) regardless of whether they merely follow the recommendation of the AI overlord (ahem, we mean, an AI-enabled Human Resources tool), regardless of the vendor’s assurances, and irrespective of whether they delegate a selection decision to a vendor with the authority to act on the employer’s behalf.
And in the case of AI, not even the vendor may be able to explain how the AI made its selection or verify that the AI’s recommendation would not result in disparate treatment or disparate impact. Thus, the four-fifths rule as evidence of adverse impact may take on renewed importance. Under that rule, a selection rate for a protected group that is less than four-fifths (80%) of the rate for the group with the highest selection rate is generally evidence of adverse impact. 29 C.F.R. § 1607.4(D). Say, for example, that an employer uses an AI-enabled personality test to screen job applicants, the selection rate for Black applicants is 30%, and the selection rate for White applicants is 60%. The ratio of the two rates is 50% (30/60), which is less than 80%. Under the four-fifths rule, the selection rate for Black applicants is substantially different from that for White applicants, which could be evidence of discrimination against Black applicants. Still, compliance with the four-fifths rule is no guarantee that a particular employment procedure does not have an adverse impact. Hence the latter part of this article’s title: proceed with caution.
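For readers who want the arithmetic spelled out, the impact-ratio comparison above can be sketched in a few lines of illustrative Python. The function name and the rates are our own hypotheticals, and this sketch is not legal advice or a substitute for a proper adverse-impact analysis:

```python
# Minimal sketch of the four-fifths (80%) rule arithmetic from
# 29 C.F.R. § 1607.4(D). Rates below are hypothetical and illustrative only.

def four_fifths_check(group_rate: float, highest_rate: float):
    """Return the impact ratio and whether it falls below the 0.8 threshold."""
    ratio = group_rate / highest_rate
    return ratio, ratio < 0.8

# Hypothetical AI-enabled personality test: 30% selection rate for Black
# applicants vs. a 60% selection rate for White applicants.
ratio, flagged = four_fifths_check(0.30, 0.60)
print(f"Impact ratio: {ratio:.0%}; possible adverse impact: {flagged}")
# Impact ratio: 50%; possible adverse impact: True
```

As the guidance itself cautions, a ratio at or above 80% does not immunize a selection procedure, and a ratio below 80% is evidence, not proof, of adverse impact.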
The EEOC encourages employers to conduct ongoing self-analyses to determine whether their employment practices have a disproportionately large negative effect on a basis prohibited under Title VII or treat protected groups differently.
New York City was the first, and likely not the last, to require such action. New York City’s Local Law 144 of 2021 prohibits employers and employment agencies from using automated employment decision tools to make hiring and promotion decisions unless the tool is audited annually for bias and the employer publicly publishes a summary of the audit. The employer must also provide certain notices to applicants and employees subjected to the tool.
It is also prudent for employers to remember their obligations under the Americans with Disabilities Act and analogous state laws to provide reasonable accommodations to applicants and employees with disabilities, including in connection with their use of AI-enabled HR tools.
Unlawful Labor Practices: the NLRB comes out swinging.
In late 2022, NLRB General Counsel (GC) Jennifer Abruzzo issued Memorandum GC 23-02, espousing her strong concern that employers could deploy AI-powered management tools to interfere with the exercise of employees’ Section 7 rights. Section 7 of the National Labor Relations Act (NLRA) guarantees the right to self-organization, to form, join, or assist labor organizations, to bargain collectively through representatives of their choosing, and to engage in other concerted activities for collective bargaining or other mutual aid or protection, as well as the right to refrain from any or all such activities. GC Abruzzo acknowledges that employers are using new technologies to monitor and manage employees and anticipates that AI-powered management tools will impair employees’ ability to engage in Section 7 activity confidentially.
Whereas the use of the most intrusive types of monitoring and managing has historically been limited by the employer’s ability to process the vast amounts of data that can be collected on employees, new AI-powered software can analyze and act on that data, and can do so very quickly. GC Abruzzo anticipates employers may run afoul of settled law, such as using AI to observe open, protected concerted, or public union activity, which can unlawfully intimidate employees from engaging in those activities. Surveillance is generally lawful if the need for it outweighs its tendency to interfere with an employee’s right to engage in concerted activity (although state law can vary). But what does this mean? We will surely see litigation over this justification–interference balance in the near term.
GC Abruzzo emphasizes that, in her view, it would be unlawful under established precedents to:
- implement an AI monitoring technology in response to protected activity,
- use it to discover or surveil protected activity,
- fail to report expenditures on surveillance technology,
- discipline employees who concertedly protest workplace surveillance,
- coercively question employees with personality tests designed to evaluate their propensity to seek union representation,
- preclude employee conversations to prevent Section 7 activity,
- screen job applicants or discipline employees based on protected activity,
- or even, generally, to create the impression that the employer was doing such things, as the impression can interfere with the right to engage in concerted activity.
She also advocates expanding NLRB precedent to protect employees from “intrusive or abusive forms of electronic monitoring and automated management” that interfere with Section 7 activity. GC Abruzzo would make it a presumptive violation if the “employer’s surveillance and management practices, viewed as a whole, would tend to interfere with or prevent a reasonable employee from engaging in activity protected by the Act.” For an employer’s business need to outweigh employees’ Section 7 rights, she would require the employer to show that the monitoring and surveillance are narrowly tailored to address a legitimate business need. But even if an employer made this showing, GC Abruzzo would require the employer to disclose to employees the technologies it uses to monitor and manage them, its reasons for doing so, and how it uses the information it obtains.
Proceed with Caution
The EEOC’s and NLRB’s stances on employers’ use of Advanced AI in the workplace bring to mind Jean-Baptiste Karr’s epigram in The Wasps: “The more things change, the more they stay the same.” We are in uncharted legal territory, but employers’ core obligations remain the same: to refrain from unlawful discrimination and unfair labor practices. As we have said many times already, employers should therefore proceed with caution with respect to Generative AI in the workplace, and consider implementing or updating internal policies and practices accordingly.
The labor and employment attorneys at Schwabe closely follow developments in this area and help employers navigate them.
This article summarizes aspects of the law and does not constitute legal advice. For legal advice for your situation, you should contact an attorney.