Testimony of ReNika Moore

Chair Burrows and Members of the Commission:

Thank you to Chair Burrows and the Commission for the invitation to testify at this public meeting. My remarks will be principally focused on the potential for employment discrimination when using algorithms, artificial intelligence (“AI”), and machine learning (“ML”) in automated decision-making (“ADM”) systems in hiring. 

I am the Director of the Racial Justice Program at the American Civil Liberties Union (“ACLU”). In my role, I lead the ACLU’s racial justice litigation, advocacy, grassroots mobilization, and public education to dismantle barriers to equality for people of color. Prior to joining the ACLU, I served as Labor Bureau Chief of the New York Office of the Attorney General (“NYAG”). During my tenure, the Labor Bureau was nationally recognized for aggressively enforcing labor standards on behalf of low-wage workers who were disproportionately people of color and immigrants.

Before joining the NYAG, I supervised and coordinated the NAACP Legal Defense Fund’s economic justice litigation, public education, and public policy efforts. I litigated high-impact racial justice cases tackling a variety of civil rights issues, including major class actions challenging racial discrimination in employment. I also practiced at the plaintiff-side employment law firm Outten & Golden LLP, representing workers who had been unlawfully discriminated against or had been unlawfully denied their earned wages.

The ACLU is a nationwide, non-profit, non-partisan organization of nearly 2 million members dedicated to defending the principles of liberty and equality embodied in the U.S. Constitution and our nation’s civil rights laws. Founded more than 100 years ago, the ACLU has participated in numerous cases in state and federal court, including the U.S. Supreme Court, involving the scope and application of employment discrimination and other federal civil rights laws. The ACLU’s Racial Justice Program advocates in a range of issue areas including employment, education, housing, and the criminal legal system. We also work closely with our ACLU colleagues who specialize in disability rights, women’s rights, technology, data science, and analytics.

Thank you to my ACLU colleagues who provided guidance, suggestions, and feedback, and a special thank you to Olga Akselrod and Marissa Gerchick, who assisted with preparing this testimony. 

In Part (I), I discuss the legacy and continuing reality of systemic discrimination in employment and the overarching ways in which bias and discrimination can infect technologically-driven ADM systems in employment. In Part (II), I detail the widespread use of tech-driven tools throughout the labor market and the specific tools used in employment, with a focus on hiring, and the ways these tools are vulnerable to bias based on protected characteristics. In Part (III), I offer recommendations to the Commission to improve employer compliance, transparency, and fairness for workers. 

I.               The legacy and continuing reality of systemic discrimination in employment

The COVID-19 pandemic led to sweeping changes in how huge numbers of jobs are filled. Technology led much of this massive change, with many employers dramatically expanding their use of technologically-driven ADM tools and products to recruit, hire, monitor, and evaluate workers. Yet, even as the use of employment-related technologies seems to become ubiquitous, the pandemic exposed that some of the oldest, most persistent dysfunctions of our labor markets and workplaces – discrimination, segregation, and exclusion based on race, ethnicity, gender, LGBTQ status, disability, and national origin – continue to limit opportunities for workers with marginalized identities. The history of discriminatory labor practices reaches far back and touches many different groups. The depth and breadth of this history demand that we prioritize equity and anti-discrimination protections for all workers. If we fail to acknowledge the pervasiveness of bias and discrimination in employment, we will fall short of taking the actions necessary, such as new guidance, research, and enforcement, to guarantee equal opportunity. We must have comprehensive public oversight, transparency, and accountability to guarantee that jobseekers and employees do not face the same old discrimination dressed up in new clothes. 

A.             A deeply entrenched legacy of employment discrimination based on race, gender, and other protected characteristics persists.

Since the earliest days of the United States with its violent displacement of Indigenous people and dependence on the chattel slavery of Africans and their descendants, the most important aspects of work, such as who worked, in what job, under what conditions, and for what compensation, have been determined too often by a person’s identity, e.g., their race, ethnicity, or gender, rather than by what they were qualified to do. Examples of race and ethnicity limiting opportunity have been the rule rather than the exception. In the South, even after slavery was abolished, Jim Crow laws and customs limited the jobs Black people could hold. In the western United States, as Chinese immigration rose through the 1800s, Chinese immigrants were limited to dangerous, low-paying work building railroads and were denied job opportunities in most other sectors. In the West and the Southwest, Mexican-Americans and immigrants also faced violence, discrimination, and exploitation and were disproportionately restricted to low-wage farm labor. The lowest-paid agricultural and domestic workers have been almost exclusively people of color, including Black, Mexican, Filipino, and Central American workers. The New Deal established new protections for most workers, but agricultural and domestic workers were excluded from the federal minimum wage, overtime, collective bargaining, and other protections.    

Title VII of the Civil Rights Act of 1964 outlawed employment discrimination based on race, gender, and other historically marginalized categories.[1] While employers began complying with the letter of the new law, almost immediately they began to undermine the spirit of Title VII. Employers began imposing educational and testing requirements to create new barriers for Black workers, barriers like those challenged in Griggs v. Duke Power Co.,[2] the seminal civil rights case first establishing the disparate impact theory of discrimination. The story of Griggs illustrates how new systems may appear at first glance to be unbiased or less-biased than the systems they replace, when in fact they may simply mask or worsen the same old discrimination. The Griggs example also highlights the critical, necessary role that the EEOC can play in protecting against evolving forms of discrimination. 

Willie Boyd, a Black man, was one of the thirteen plaintiffs in Griggs. Mr. Boyd was the son of sharecroppers and he grew up toiling on the family’s tobacco farm in North Carolina. When he began working at the Duke Power Company plant in the mid-1950s, he saw the job as a significant improvement over the farm, but he found that his position was not so different from that of his sharecropper parents because there were no opportunities for Black workers to advance. The plant had four departments, but Black workers were only permitted to work in one, “the labor department,” doing the most menial jobs in the plant for the lowest pay – in fact, the Black workers referred to themselves as janitors. The highest-paid Black worker made less than the lowest-paid white worker. Prior to the passage of Title VII, the workforce was explicitly segregated by race: Black workers were forced to use segregated bathrooms, water fountains, and lockers. After Title VII was passed, Duke Power shed its explicitly racist practices and segregation.[3] But it quickly adopted new requirements for working in every department except the labor department. The new requirements mandated that any employee who wanted to work in a department other than the labor department had to pass two general knowledge standardized tests. These new requirements effectively blocked all Black workers from transfers.

Mr. Boyd, who had become active in his local NAACP chapter, organized his Black coworkers and, with the help of the NAACP Legal Defense Fund, filed a charge with the then-newly established EEOC. The EEOC investigated and found that the tests were not job-related and discriminated against Black workers. The EEOC’s investigation laid the groundwork for the litigation that ultimately reached the U.S. Supreme Court. The combined efforts of the workers themselves, advocates, and the EEOC culminated in the Supreme Court ruling that the discriminatory tests were unlawful. Mr. Boyd went on to earn a promotion, becoming the first Black supervisor at Duke Power.  

Since Griggs, the EEOC, advocates, and workers themselves have sought to identify and root out systemic barriers that discriminate based on historically marginalized characteristics. These efforts are only possible when the selection devices are known and can be investigated and evaluated. 

There is also a long history of workers being denied opportunities because of their gender. Women have faced discrimination and segregation that cabined them into jobs in just a few sectors. Even when they have worked in male-dominated sectors, women have been paid less and had fewer opportunities for advancement. Employers, with the cooperation of newspapers, plainly advertised jobs to women and men separately.[4] The jobs for women were for administrative support, domestic work, and other stereotypical “women’s work,” and the positions were generally lower-paying, often part-time, and emphasized physical appearance as compared to jobs targeted to men.[5] Hiring ads also reflected the occupational segregation based on race and gender with ads targeted, e.g., to Black women for domestic work.[6]

Through the late 20th century, women were disproportionately concentrated in teaching, administrative support, and domestic work.[7] Black, Indigenous, Latina and other women of color fared even worse than white women and were consistently paid less than their white counterparts.[8] Disproportionately high numbers of Black and Latina women continue to hold minimum and sub-minimum wage jobs as home health aides, childcare providers, waiters, and domestic and janitorial workers.[9]  

As the data on women of color demonstrate, race compounds the disadvantage of other characteristics too. For example, overall in 2021, people with disabilities were far less likely (by about half) to be employed than people without disabilities, but Black and Latino people with disabilities faced a higher unemployment rate than white people with disabilities.[10] A survey of decades of data on people with disabilities found that people of color with disabilities were 40% less likely to be hired when unemployed than white jobseekers with disabilities.[11] Among LGBTQ people, those who are Black, and especially Black trans people, experience higher rates of unemployment.[12]

B.             Tech-driven ADM tools, on their own, will not address systemic discrimination in employment.

It has been 50 years since Mr. Boyd successfully challenged Duke Power’s hiring and promotion tests as unlawful. Despite this, anecdotal evidence and labor market data show that widespread employment discrimination based on race, gender, and other protected categories still exists. Throughout our labor markets, and most dramatically in the highest- and lowest-wage jobs, we still see disparities by race and gender in major employment indicators like unemployment rate, hiring, and pay. During the pandemic, Black and Latino workers experienced the highest rates of unemployment, with Black and Latina women experiencing the highest rates within those groups.[13] The most recent unemployment data from the U.S. Department of Labor show that the unemployment rate for all workers remains low, but for 2022, the unemployment rate for Black workers was still at least 90% higher than the rate for white workers, and in some months more than double it.[14] In hiring, a 2022 study using over 80,000 fictitious applications to large employers found that otherwise similar applicants with traditionally Black names were less likely to advance than those with more traditionally white names.[15]      

At the same time, since the start of the pandemic in early 2020, the use of tech-driven ADM tools for recruiting and hiring has skyrocketed. As described in more detail in Part II, these new tools use algorithms or preset rules, AI, and ML to automate recruiting, sourcing, interviewing, and monitoring, among other employment processes. These tools are marketed as cheaper, more efficient, and non-discriminatory or less discriminatory than their predecessors. While these tools may theoretically be able to help employers identify and hire more diverse pools of candidates, these benefits are not proven. In fact, there is a dearth of controlled testing comparing human-driven hiring processes with AI-driven processes to evaluate for discrimination. On the contrary, there is research showing that AI-driven tools can lead to more discriminatory outcomes than human-driven processes. One recent study comparing human-driven hiring with a typical AI-driven process found that the AI-driven tool selected 50% fewer Black applicants than human reviewers did.[16]     

Furthermore, research has shown that there are various ways that bias and discrimination can creep in when employers rely on algorithms and AI in the hiring process and during employment: 

  • Overrepresentation in negative, undesirable data: Black and Latino people are over-represented in data sets that contain negative or undesirable information, such as records from criminal legal proceedings, evictions, and credit history.[17] This is a consequence of many factors, including racial profiling of people of color by the police and harsher treatment within the criminal legal system that lead to longer and more serious consequences for Black, Indigenous, Latino, and other people of color once arrested. Similarly, Black women are more likely to be targeted for eviction by landlords than other similarly situated groups.[18] Data sets containing criminal records and eviction records are also of notoriously poor quality; they contain incorrect or incomplete names, old and out-of-date entries, and non-uniform terms to describe charges, dispositions, and other information necessary to understand outcomes.[19] Black people and many other people of color are similarly disadvantaged by credit history data. Though credit history data is not necessarily an undesirable source of data, employers generally only consider credit data to disqualify a candidate for a job opportunity.[20] A history of redlining, targeting of people of color for predatory subprime loans that were more likely to end in default, and other barriers to accessing mainstream financial institutions has led to disproportionately low credit scores for people of color.[21] Twenty-six million people in the U.S. have no credit history, 19 million have insufficient credit history, and Black and Brown individuals are overrepresented in both categories.[22] These realities result in people of color having relatively worse credit scores and histories than white people. As with criminal legal system and eviction records, this problem is compounded by data quality problems that have been documented in credit history data, including errors and misleading or incomplete information.[23] These data sets are used for background checks.[24] Thus, when employers rely on them, Black, Latino, and other people of color are more likely to be disadvantaged and to lose out on employment opportunities. 
  • Underrepresentation in the training data: Where training data does not draw from a sufficiently diverse pool and significantly underrepresents certain groups relative to the population for which the algorithm is used, the algorithm may be less accurate for people in the underrepresented group.[25] For example, factors that correlate negatively with the target outcome for white people may correlate positively for Black people, yet the algorithm may not have sufficient data from Black people to accurately gauge how the factor applies to them.[26] People with disabilities[27] and trans people[28] are more likely to be missing from data altogether. There are many reasons these groups may be invisible. People with disabilities are more likely to have gaps in schooling and employment.[29] Trans people and other LGBTQ people are more likely to use names and pronouns that do not match their government identification, thus obscuring their information in the data.[30]
  • Bias in the training data and target: Algorithms are trained with enormous amounts of data, including past hiring decisions. In general, many algorithms are developed through analyses of correlations between a specified target outcome (e.g., some quantification of strong work performance) and patterns in the data. Selection of the target may itself introduce bias.[31] For example, to the extent that the target is employees who will stay at the company for years and have good performance evaluations, those variables are the product of human decision-making and systems grounded in structural discrimination and subject to individual discrimination. Thus, inequities mar the outcomes in those systems, inequities such as prior discriminatory hiring decisions, subjective performance evaluations, effects of a hostile workplace, or reduced access for people in protected categories to social networks within a company.[32] Similarly, data used to train the algorithm will reflect the outcomes of those same discriminatory decisions and systems, yet be treated by the algorithm as ground truth.[33] (A minimal illustration of this dynamic appears in the sketch following this list.) 
  • Proxies in the training data and inputs: Even where race, gender, or other protected categories are withheld from the algorithm, many data points are proxies for those characteristics either in isolation or in combination, such as zip code, name, college attended, online browsing history, etc.[34]
  • Bias reinforcement through feedback loops: Many algorithms continue to learn after they are initially deployed, incorporating additional data as a kind of “feedback” through use of the algorithm.[35] For example, targeting algorithms make predictions about who is likely to click on an ad – to the extent a user clicks on the ad as predicted, the algorithm often incorporates that successful click data into subsequent predictions.[36] These feedback loops can reinforce discriminatory decisions, such as where an algorithm funnels predatory loan ads to Black users and their clicks on those ads lead to more such ads being delivered to those users.[37]
  • Impacts of the digital divide: Among the groups on the wrong side of the digital divide, Black, Indigenous, and Latino households are much less likely to have reliable high-speed internet access. Native Americans living on reservations have the lowest connectivity rates of any racial group.[38] People with disabilities are also less likely to have high-speed internet access.[39] Without reliable internet access, people are less likely to engage online with the many systems that produce data that is then used to train or otherwise develop these tools. This lack of access may also create barriers to employment opportunities, including learning about job opportunities, submitting applications, or requesting accommodations or assistance.
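
To make these mechanisms concrete, the following is a minimal, hypothetical sketch in Python, using invented data rather than any vendor’s actual product, of how a screening model trained on biased historical hiring outcomes can reproduce that bias through a proxy such as zip code, even when race is never provided as an input.

```python
# A minimal, hypothetical sketch (not any vendor's actual system) of how a
# screening model trained on biased historical hiring data can reproduce that
# bias through a proxy feature, even though race is never given to the model.

from collections import defaultdict

# Hypothetical historical records: race appears only so we can audit outcomes;
# the "model" below never sees it. In this invented data, zip code acts as a
# proxy because of residential segregation.
history = [
    {"zip": "10001", "race": "white", "hired": 1},
    {"zip": "10001", "race": "white", "hired": 1},
    {"zip": "10001", "race": "Black", "hired": 1},
    {"zip": "10002", "race": "Black", "hired": 0},
    {"zip": "10002", "race": "Black", "hired": 0},
    {"zip": "10002", "race": "white", "hired": 1},
]

# "Training": learn the historical hire rate for each zip code.
totals, hires = defaultdict(int), defaultdict(int)
for record in history:
    totals[record["zip"]] += 1
    hires[record["zip"]] += record["hired"]
hire_rate_by_zip = {z: hires[z] / totals[z] for z in totals}

def score(applicant):
    """Score a new applicant using only the zip-code proxy the model learned."""
    return hire_rate_by_zip.get(applicant["zip"], 0.0)

# Two new, equally qualified applicants are scored differently purely because
# of where they live, mirroring the historical disparity.
applicants = [
    {"name": "Applicant A", "zip": "10001", "race": "white"},
    {"name": "Applicant B", "zip": "10002", "race": "Black"},
]
for a in applicants:
    print(a["name"], a["race"], "score:", round(score(a), 2))
# Applicant A scores 1.0 and Applicant B scores 0.33, even though race was
# never an input to the model.
```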

II.            Prevalence of algorithms, AI, and ML in employment and the potential for bias and discrimination.

Recent reports indicate at least seven out of ten employers are using ADM tools in their hiring process, including 99% of Fortune 500 companies.[40] Media reports and employer announcements show increasing use of AI-driven hiring tools for lower wage jobs in sectors like retail, logistics, and food services.[41] Black and Latino workers are disproportionately concentrated in these sectors, and they may also interface with tech-driven ADM tools as they seek higher-paying managerial roles. At Amazon, the nation’s second largest employer, Black and Latino workers are clustered in entry-level positions and have struggled to advance to the corporate levels, where they are consistently underrepresented. Amazon has faced lawsuits and reports of systemic discrimination.[42] Against this backdrop, Amazon recently announced that it is moving to hire more employees through internally-developed AI-driven tools.[43] Given the racial stratification of its workforce, reliance on such tools to select for employment opportunities raises questions about how fair these processes will be for Black and Latino workers – particularly given that Amazon’s earlier attempt to use AI-driven tools for hiring is now one of the most frequently cited examples of algorithmic bias in employment because it discriminated against women applicants.[44]

Employers are using automated tools in virtually every stage of the employment process, from recruiting and hiring to managing and surveilling employees.[45] Often, workers may have little or no awareness that such tools are being used, let alone of how they work or that these tools may be making discriminatory decisions about them.[46] While these tools may seem attractive to employers as a way to reduce the cost and time of otherwise resource-intensive employer processes[47] and are marketed with claims that they are objective and less discriminatory,[48] many of these tools instead pose an enormous danger of amplifying existing discrimination in the workplace and labor markets and exacerbating harmful barriers to employment based on race, gender, disability, and other protected characteristics.[49]

This section discusses some of the tools that are being used, but this is by no means exhaustive. For a more detailed look at tools currently in use, please see the following sources:

  • Upturn, Help Wanted: An Examination of Hiring Algorithms, Equity, and Bias[50]
  • Upturn, Essential Work: Analyzing the Hiring Technologies of Large Hourly Employers[51]
  • Coworker, Little Tech is Coming for Workers[52]
  • Raghavan, et al., Mitigating Bias in Algorithmic Hiring: Evaluating Claims and Practices[53]
A.             Recruitment and sourcing tools

In the sourcing stage, when employers seek to find and attract candidates, automated processes have come to play a pivotal role in determining who will and will not learn of a job opportunity. These processes, which are invisible to most workers, can create major barriers to employment, especially for people from groups that are already historically excluded from certain industries.[54]

One example is the widespread use of targeted advertising for job opportunities, which funnels ads to individuals on job boards, social media, and other online sites based on data collected about their personal characteristics, online behaviors, interests, or location.[55] Employers use various tools to select who will be shown a job ad. Some tools allow employers to select attributes from a dropdown menu of personal characteristics of people to whom the ad would be targeted. Other tools allow employers to use so-called “lookalike” tools to upload a list of people, which an algorithm then uses to curate an audience list of people with perceived similar attributes or interests.[56] When ad targeting tools are used to show employment ads on the basis of people’s real or inferred personal characteristics and algorithmic predictions about their interests, others with different predicted characteristics or interests will never be shown the job opportunity.[57]

Ad targeting tools have repeatedly been a vehicle of both intentional and unintentional discrimination in violation of civil rights laws. In 2018, for example, the ACLU filed a charge with the EEOC against Facebook and several employers that advertised on its platform for the use of trait selection menus and “lookalike” tools that included gender and other protected characteristics or close proxies.[58] Employers were able to use the tools to directly exclude women and non-binary users from receiving their ads. Or, for example, employers could upload a list of current employees for use with a “lookalike” tool, and if that list was skewed towards white men due to historically biased hiring decisions, their ad would reach a primarily white male audience as the algorithm picked up on race and gender or proxies thereof in determining who would be similar to the list.[59] While Facebook agreed in 2019 to changes[60] to remove protected characteristics or close proxies from employers’ audience selection tools and to stop directly using them in Facebook’s determination of who would be “similar” to an audience the employer was seeking to reach, those changes were insufficient to remove discriminatory impact from the use of those tools – the algorithm continued to pick up on even distant proxies for protected characteristics.[61] Moreover, even when employers seek to reach a diverse audience, researchers have found that Facebook’s own ad-delivery algorithm and its predictions of what users “want” to see also continue to be biased and based in stereotypes. For example, a recent audit of Facebook’s ad-delivery system found that Facebook continues to withhold certain job ads from women in a way that perpetuates historical patterns of discrimination: ads for sales associates for cars were primarily shown to men, while ads for sales associates for jewelry were primarily shown to women.[62] While Facebook’s recent sweeping settlement with the Department of Justice (“DOJ”) and its agreement to expand the provisions in that settlement to employment ads will hopefully mean real progress in addressing discrimination on the platform,[63] discriminatory ad targeting is not unique to Facebook.[64]

Platforms such as LinkedIn,[65] ZipRecruiter,[66] Indeed,[67] CareerBuilder,[68] and Monster[69] also play a crucial role in many employers’ recruitment and sourcing processes and in many job seekers’ search processes. These platforms perform a kind of matching: employers advertise open positions, job seekers upload or post information about their professional interests and backgrounds, and the platforms make recommendations, often in the form of ranked lists, to both candidates and employers about jobs they should apply for or candidates they should consider. These recommendations may be based on information provided by each kind of user – such as resumes provided by candidates or job descriptions provided by employers – as well as data about the user’s prior activity on the platform – like which job ads candidates have clicked on in the past or which candidates employers have reached out to for interviews.[70] For employers, these platforms offer functionality that differs from the consumer-facing version with which job seekers interact. For example, LinkedIn’s offerings for employers include LinkedIn Recruiter, a tool that boasts usage by more than 1.6 million professionals and access to the more than 740 million users on LinkedIn.[71]

Despite the pervasiveness of these platforms and their integral role in sourcing and recruitment for many employers, these ranking and recommendation systems are largely black boxes to candidates and the general public.[72] What we do know about the candidate and job opportunity recommendations generated by these platforms raises serious concerns about the potential for these matching platforms to enable discrimination with little oversight or accountability, and demonstrates that there are multiple dangers with such recommender systems. For example, a predictive algorithm that assesses which jobseekers are similar to one another in making recommendations risks downplaying or even withholding job opportunities based on protected characteristics or proxies thereof.[73] In 2018, LinkedIn publicly shared that it had found that its recommendation system underpinning LinkedIn Recruiter generated results that unfairly ranked men over women, potentially enabling feedback loops in recruitment that perpetuated the gender bias.[74] While LinkedIn has stated that it has taken steps to address this issue,[75] the episode raises serious concerns because workers are wholly dependent on the company itself to disclose and address algorithmic bias. These kinds of biases are likely not limited to LinkedIn: researchers have found that recommender systems similar to those that comprise the core of job matching platforms can suffer from algorithmic bias in rankings and recommendations.[76] We cannot rely solely on companies – which may have little incentive to share negative findings about their algorithms – to regularly self-evaluate for algorithmic bias. The Commission should examine not only the tools of vendors or employers, but also sourcing platforms like LinkedIn, Monster, ZipRecruiter, Indeed, and CareerBuilder, among others.[77] Jobseekers need concrete protections that provide meaningful transparency and recourse, address algorithmic bias, and prevent discrimination enabled by these systems.
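
For illustration only, the sketch below uses invented data and a deliberately simple hypothetical ranking, not LinkedIn’s or any platform’s actual system, to show the kind of basic audit that meaningful transparency would make possible: comparing a group’s share of the top-ranked candidates a recruiter actually sees against that group’s share of the overall pool.

```python
# A minimal, hypothetical audit sketch: compare a group's share of the
# top-ranked results a recruiter sees against its share of the full pool.
# The data and scores are invented for illustration.

def top_k_share(candidates, k, group_key, group_value):
    """Share of the top-k ranked candidates who belong to the given group."""
    top_k = sorted(candidates, key=lambda c: c["score"], reverse=True)[:k]
    return sum(1 for c in top_k if c[group_key] == group_value) / k

# Hypothetical pool: women are half the candidates but are scored slightly
# lower by the recommender, so they rarely surface on the first page of results.
pool = (
    [{"gender": "woman", "score": 0.70 + i * 0.001} for i in range(50)]
    + [{"gender": "man", "score": 0.75 + i * 0.001} for i in range(50)]
)

pool_share = sum(1 for c in pool if c["gender"] == "woman") / len(pool)
visible_share = top_k_share(pool, k=20, group_key="gender", group_value="woman")

print(f"Share of women in the full pool:    {pool_share:.0%}")    # 50%
print(f"Share of women in the top 20 shown: {visible_share:.0%}")  # 0%
```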

B.             Screening and interviewing tools

ADM tools are also widely used at the screening stage, and applicants are now often rejected through algorithmic tools without any human review of their candidacy.[78] An overwhelming number of employers – 99% of Fortune 500 companies and the vast majority of mid-size and large companies – use an Applicant Tracking System (“ATS”),[79] many of which have built-in algorithmic tools that employers use to filter out or rank applicants with automated resume screening based on knockout questions, keyword requirements, or specific qualifications or characteristics.[80] Many employers have also incorporated chatbots and text apps into their online hiring processes, which steer people through the application process, schedule interviews, or ask jobseekers basic questions such as their available days, hours, or work history.[81] These chatbots (and indeed many screening and assessment tools) often do not have information about how to seek reasonable accommodations built into them or displayed in a way that is easy to find, creating additional barriers for persons with disabilities who want to ask for a reasonable accommodation.[82] Some of these tools are designed to encourage or discourage applications based on answers to questions, and people interacting with these chatbots often will not know the impact their answers will have on their ability to apply for a role or advance in the interview process.[83] Often, these automated screening tools create rigid rules requiring highly specific certifications, credentials, or particular descriptions of job experience, or screen for gaps in work history of more than six months. Such rules can weed out qualified candidates whom a human reviewer might otherwise have interviewed or hired, and they disproportionately create barriers for people with protected characteristics, such as pregnancy or a disability.[84]  
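
As a concrete illustration of the kind of rigid screening rule described above, the following is a minimal, hypothetical Python sketch, with an invented cutoff and invented dates rather than any ATS vendor’s actual logic, showing how an automated gap screen can reject a qualified applicant before any human ever reviews the application.

```python
# A minimal, hypothetical sketch of a rigid "knockout" rule of the kind
# described above: auto-reject anyone whose resume shows an employment gap
# longer than roughly six months, with no human review. The cutoff, dates,
# and logic are invented for illustration.

from datetime import date

MAX_GAP_DAYS = 183  # roughly six months; an arbitrary, rigid cutoff

def passes_gap_screen(employment_periods):
    """Return False if any gap between consecutive jobs exceeds the cutoff."""
    periods = sorted(employment_periods, key=lambda p: p[0])
    for (_, prev_end), (next_start, _) in zip(periods, periods[1:]):
        if (next_start - prev_end).days > MAX_GAP_DAYS:
            return False
    return True

# Hypothetical applicant: well qualified, but took roughly nine months away
# from work for pregnancy and caregiving; the rule rejects the application
# before anyone looks at it.
applicant_history = [
    (date(2015, 1, 5), date(2019, 6, 30)),
    (date(2020, 4, 1), date(2022, 12, 15)),  # gap of about nine months before this job
]

print("Advances to human review?", passes_gap_screen(applicant_history))  # False
```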

Employers also use various automated assessment tools to conduct personality testing. Some employers use online versions of multiple-choice personality tests that ask situational questions or questions about a person’s outlook or approach to assess amorphous traits such as work style, dependability, preference for working in a team, communication style, emotion, enthusiasm, or attention. Other employers use gamified assessments, video-game-style tools that claim to assess similar traits through automated analysis of how someone plays a game.[85]

Employers also assess candidates through online video interviewing, whereby a candidate records an interview online in response to a set of standardized question prompts. Some employers use these tools simply as a means of conducting interviews without a live interviewer, and humans later watch and evaluate the interview recording. Other employers use automated analysis tools so that a human never needs to watch the interview.[86] Vendors of these tools often claim to be able to measure potentially vague and subjective personality traits similar to those in online tests and gamified assessments, sometimes using voice analysis that assesses content and audio factors such as tone, pitch, and word choice and/or video analysis that assesses visual factors such as facial expressions, eye contact, and posture.[87] Some assessments are sold by vendors as standard applications for particular kinds of job functions.[88] Others train their algorithms based on data obtained from the employer about its current staff, often having people identified by the employer as its best employees take the tests or undergo the interviews and then using their answers or performance as a baseline for candidate evaluation.[89] 

There are numerous concerns with these assessment tools and other automated screeners.

First, as discussed previously, any tools that rely on existing employee data to train the algorithm may exacerbate discrimination. Predictive hiring tools often rely on training data regarding who would be a successful employee that reflects existing institutional and systemic biases in employment.[90] An employer’s existing workforce may lack diversity, and an employer’s decision as to whom to designate as a successful employee to serve as the baseline for training is itself subjective and can reflect institutional and systemic biases in the workplace.[91] The Amazon hiring algorithm that discriminated against women, cited above, supra note 44, is an example of this.[92]

Second, many ADM systems function by analyzing a large amount of data to uncover correlations and make predictions related to a target outcome, but the correlations that they uncover may not actually have a causal connection with being a successful employee, may not themselves be job-related, and may be proxies for protected characteristics.[93] For example, one resume screening company found that its model identified being named Jared and playing lacrosse in high school as indicators of a successful employee, and another determined that there was a correlation between job tenure and residing within a certain distance of the office.[94] Even when explicit consideration of race or other protected characteristics is removed from the model, the proxy-based correlations that an algorithm unearths to make its decisions can nevertheless lead to discriminatory decisions.[95] 

Moreover, as with traditional personality assessments, automated assessments are often designed to measure subjective and amorphous personality traits – characteristics such as optimism, positivity, ability to handle pressure, or extroversion – that are not clearly job related or necessary for the job, that may reflect standards and norms that are culturally specific, or that can screen out candidates with disabilities such as autism, depression, or attention deficit disorder.[96] These problems are exacerbated even further with predictive tools that rely on facial and audio analysis or gamified assessments. Of course, there is cause for great skepticism that personality characteristics can be accurately measured through things such as how fast someone clicks a mouse, the tone of a person’s voice, or facial expressions.[97] But even if the tools are somehow generally able to make those measurements accurately, predictive tools that rely on analysis of facial, audio, or physical interaction with a computer raise even more risk that individuals will be automatically rejected or scored lower on the basis of protected characteristics.[98] For example, there is a high risk that vocal assessments may perform more poorly on people with accents or with speech disabilities, and it has been established that video technology performs more poorly at recognizing the faces of women with darker skin.[99] Likewise, tools can be inaccessible to people with disabilities when they rely on detection of color or reactions to visual images, measure physical reactions and speed, require verbal responses to question prompts, or are incompatible with screen readers.[100] 

The lack of transparency in the use of these tools only adds to the harm. Applicants know that they are being subjected to an online recorded interview or test assessment, but are rarely provided information on the standards that will be used to analyze them or what the interviews and tests are seeking to measure.[101] As a result, applicants often do not have enough information about the process to know whether to seek an accommodation or alternative evaluation method.[102] This dynamic is compounded by the fact that reasonable accommodation notices on online hiring sites are often difficult to find or unclear.[103] Moreover, the lack of transparency makes it more difficult to detect discrimination, reducing the ability of individuals, the private bar, and government agencies to enforce civil rights laws.[104]

C.             Background checks

ATSs have made it easier than ever for employers to conduct background checks on applicants, allowing for easy integration of background check features for eviction and criminal legal records, financial records, and sometimes even social media searches, amongst others.[105] As I discussed above, reliance on criminal legal system, eviction, and credit records can inject discrimination into the hiring process.[106]

 

 

D.             Post-hiring tools impacting workers

The ACLU’s work on technologies used by employers has largely focused on the use of automated technologies for hiring, so my comments do not discuss in detail the tools employers use to evaluate and surveil their employees. But I will briefly mention those tools and refer the EEOC to some of the useful resources that discuss the tools that are in use.

AI tools are increasingly used in worker evaluation and surveillance, especially in low-wage jobs, and are being used by employers for key decisions such as setting hours, promotion, compensation, discipline, and termination.[107] These include tools that monitor workers’ movements and activity, such as keystroke logging, tracking of time spent on particular tasks and of breaks taken from those tasks, and GPS monitoring; tools that monitor worker communications on and sometimes off the job, such as email, phone, and social media monitoring; and tools that algorithmically evaluate performance, including vocal and sentiment analysis of recorded customer interactions.[108] Many of these tools raise concerns similar to those raised by the tools used for hiring, including discrimination based on disabilities and other protected characteristics, but they raise additional concerns as well, such as creating barriers to worker organizing, increasing encroachments on worker privacy, and setting unreasonable pace and productivity expectations that can lead to increased injuries and harm workers’ health. For a detailed discussion of these tools and the problems that they raise, I refer the Commission to the following resources:

  • Coworker.org, Little Tech is Coming for Workers[109]
  • Data and Society, The Constant Boss[110]
  • Data and Society, Algorithmic Management in the Workplace[111]
  • UC Berkeley Labor Center, Data and Algorithms at Work[112]
  • Center for Democracy & Technology, Warning: Bossware May Be Hazardous to Your Health[113]

III.           Recommendations to improve employer compliance, transparency, and fairness for workers

Discrimination in hiring and in the workplace is nothing new, and it has always been the EEOC’s mission to prevent and remedy such discrimination. But the digital tools that are the focus of this hearing are the new frontier of discrimination: they are more complex and less transparent than what workers have faced before, and they threaten to exacerbate existing systemic inequities. In order to ensure that the protections of Title VII, the Americans with Disabilities Act (“ADA”), the Age Discrimination in Employment Act (“ADEA”), and other federal laws are enforced in this new automated landscape, the EEOC will need to meet the moment with robust regulation and enforcement using all of the tools in the EEOC’s toolbox.

We applaud the EEOC for the work that it has undertaken to begin to address the harms of new technologies in the employment sphere. The EEOC’s creation of the Initiative on AI and Algorithmic Fairness and its collaboration with the DOJ to develop and issue guidance on the application of the ADA to new technologies are critical first steps.[114] This section lays out some recommendations for additional EEOC action that builds on that groundwork. I note that many of these recommendations are informed by the ACLU’s work in coalition with numerous civil rights and technology equity groups that have collaborated to advocate for federal government actors to center civil rights in their technology policies.[115]

A.             The EEOC should issue additional guidance on the application of Title VII and the ADEA to the use of tech-driven ADM systems in employment decisions.

The core guidance for employers and vendors on how to assess the fairness and validity of hiring and other selection procedures is the Uniform Guidelines on Employee Selection Procedures (“UGESP”), which were adopted 45 years ago, long before the advent of the kind of technological tools in use today. Many advocates and scholars have raised concerns that the UGESP are dated, including that the UGESP fail to address discrimination on the basis of disability, age, aspects of sex discrimination, or intersectional discrimination, and that the UGESP do not clearly state whether employers can establish the validity of a procedure through evidence based on correlations between certain characteristics and job performance without showing that such characteristics are necessary to perform the job.[116]
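
For context, the UGESP’s rule of thumb for flagging adverse impact, comparing each group’s selection rate to the rate of the most-selected group (the “four-fifths rule”), is a straightforward calculation. The sketch below uses invented numbers purely to illustrate the arithmetic; it is not a substitute for validity analysis or for the updated guidance discussed in this section.

```python
# A minimal sketch of the UGESP's selection-rate comparison (the "four-fifths
# rule"): a group's selection rate less than 80% of the highest group's rate
# is generally treated as evidence of adverse impact. Numbers are invented.

def adverse_impact_ratios(outcomes):
    """Ratio of each group's selection rate to the highest group's rate."""
    rates = {group: selected / applicants for group, (selected, applicants) in outcomes.items()}
    highest = max(rates.values())
    return {group: rate / highest for group, rate in rates.items()}

# Hypothetical outcomes from an automated screen: (selected, total applicants).
outcomes = {
    "white applicants": (60, 100),
    "Black applicants": (30, 100),
}

for group, ratio in adverse_impact_ratios(outcomes).items():
    flag = "below four-fifths threshold" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
# Black applicants: 0.30 / 0.60 = 0.50, well below 0.80, indicating adverse
# impact under the UGESP's rule of thumb.
```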

 

The EEOC should address the gaps in the application of the UGESP to new employment technologies. As a starting point, the EEOC should use its recent guidance on the application of the ADA to new AI and algorithmic technologies as a springboard for developing similar guidance on the application of Title VII and the ADEA, whether through technical assistance documents, Questions and Answers, or other guidance documents. Whatever the format, it is critical that the EEOC continue to educate employers – and software vendors – on how their use of these technologies can violate civil rights laws and advise on steps to take to come into compliance. The EEOC should also offer employers additional guidance under the ADA, Title VII and ADEA on the potential for discrimination in the use of technologies for monitoring worker performance and productivity, much of which directly impacts worker compensation, scheduling, benefits, termination, and other key employer decisions. 

B.             Any EEOC guidance should include more detailed and comprehensive best practice standards.

The EEOC’s recent guidance on the ADA contains some extremely important “promising practices” to help employers meet their obligations under the ADA, including providing reasonable accommodations and alternatives; using tools that have been designed with accessibility in mind; providing plain language notice to applicants and employees regarding what traits are being assessed and how they are being measured; ensuring that the tools being used “only measure abilities or qualifications that are truly necessary for the job – even for people who are entitled to an on-the-job reasonable accommodation” and that “necessary abilities or qualifications are measured directly, rather than by way of characteristics or scores that are correlated with those abilities or qualifications”; and inquiring with vendors whether the tool asks applicants or employees questions about disability information or is likely to lead to disclosure of such information.[117] The guidance also critically advises employers that they could be held liable for “the actions of their agents, which may include entities such as software vendors, if the employer has given them authority to act on the employer’s behalf.”[118]

These “promising practices” are indeed some of the critical steps needed to protect the rights of employees and applicants. Next, we recommend the EEOC further clarify how employers can ensure their tools conform with those principles – both for compliance with the ADA and for compliance with other civil rights laws. What kind of process will allow employers to determine whether their tools are following promising practices? What specifically should they ask of vendors? When does a tool pose too great a risk of discrimination and, therefore, should not be used? Robust evaluation of algorithmic systems is crucial here, and because there are currently no industry standards for such evaluations or for when mitigation or decommissioning measures should be employed, the EEOC can help to fill that void with research and detailed guidance about industry best practices for auditing and transparency measures, as well as guidance around what kinds of tools to avoid.

The EEOC can look to several existing sources for models on developing such standards.

First, the ACLU joined with the Center for Democracy & Technology (“CDT”) and a number of other civil society groups to draft the “Civil Rights Standards for 21st Century Employment Selection Procedures,” which were published in December.[119] The Civil Rights Standards provide a concrete, detailed road map for civil rights-focused guardrails for automated tools used in employment decisions, such as for pre- and post-deployment audits, short-form disclosures, procedures for requesting accommodations or opting out, record keeping, transparency and notice, and systems for oversight and accountability. The Standards also call for prohibition of “certain selection procedures that create an especially high risk of discrimination. These include selection procedures that rely on analyzing candidates’ facial features or movements, body language, emotional state, affect, personality, tone of voice, pace of speech, and other methods as determined by the enforcement agency.”[120] One of the lead drafters of the Standards, Matt Scherer of CDT, is likewise testifying before this Commission and will provide further details on what the Civil Rights Standards contain. 

Second, the EEOC should look to the White House’s recently released Blueprint for an AI Bill of Rights,[121] which contains comprehensive and robust measures that are very much in line with the growing consensus amongst civil society groups as to what is needed to address algorithmic discrimination and other harms from new technologies, including proactive measures throughout the entirety of an AI lifecycle, such as consultation with the communities directly impacted by system deployment, pre- and post-deployment testing and mitigation or decommissioning when necessary, independent auditing, transparent reporting, and notice and recourse measures for impacted individuals. The AI Bill of Rights framework includes useful discussions of five core principles: a right to safe and effective systems, protections from discriminatory or inequitable algorithmic systems, data privacy, notice and explanation, and human alternatives, consideration, and fallback.[122] 

Third, the EEOC can also look to the National Institute of Standards and Technology (“NIST”) proposal for “Managing Bias within Artificial Intelligence,” for an informative discussion of “technical characteristics needed to cultivate trust in AI systems: accuracy, explainability and interpretability, privacy, reliability, robustness, safety, and security (resilience) – and that harmful biases are mitigated.”[123] The ACLU cautions that it has raised concerns to NIST that its proposal was too tech-determinist and did not sufficiently include non-technical sociological and ethical considerations, and it remains a work in progress.[124] Nevertheless, NIST’s work provides the EEOC with an opportunity for inter-agency collaboration around the development of clear standards for assessments.

C.             Increased enforcement measures, including strategically selected targets.

While there has always been an information gulf between what job applicants or workers can see and the ways that employment practices, especially hiring, may be discriminatory, the increased use of hiring technologies has widened that gulf. Many hiring technologies are invisible to workers, or workers are aware that a technology is being used but do not know how it works or how it is affecting them. This has made it more challenging for individuals and the private bar to file complaints with the EEOC. It is therefore critical that the EEOC use the full force of its enforcement powers to proactively investigate discrimination in the use of hiring technologies. The EEOC can begin through research and information gathering to identify employers who are using the tools that are at greatest risk for discrimination, and where appropriate, use Commissioner charges under Title VII and the ADA,[125] and direct investigations under the ADEA and the Equal Pay Act,[126] to investigate systemic discrimination caused by these tools in the absence of individual complaints.

The ACLU is aware that the EEOC has published a draft of its Strategic Enforcement Plan and currently plans to separately submit comments on that draft during the open comment period.

D.             The EEOC should take additional steps, including technical studies, to make hiring tech tools more transparent.

The auditing and notice standards mentioned above are critical to addressing transparency. But the EEOC can also use its additional authority to “make such technical studies as are appropriate to effectuate the purposes and policies of [Title VII] and to make the results of such studies available to the public[.]”[127] We encourage the EEOC to use the full scope of its authority to conduct technical studies and examine other creative ways that it can encourage private industry to share information about its practices.[128] Public reporting on such studies is critical, but the EEOC could report such information in a summary or aggregated form where appropriate.

 

 

E.             The EEOC should issue guidance on when digital platforms or software vendors can be held directly liable for their role in violations of civil rights laws.

While the EEOC’s recent guidance on the application of the ADA to automated tools discusses how employers can potentially be held liable for the actions of their vendors, more clarity is needed on when digital platforms or software vendors across the employment spectrum can themselves be liable under Title VII, the ADA, the ADEA, and other civil rights laws.[129] The ACLU and others have argued that in targeting and delivering employment ads, Facebook could be held liable as an employment agency.[130] In a recent complaint before the EEOC against Facebook, the complainants also argued that Facebook could be held liable for aiding and abetting employment discrimination, and could also be deemed an “employer” for its actions on behalf of an employer.[131] Similar arguments apply to other sourcing and recruiting platforms, and may likewise apply to vendors of other kinds of digital tools used in hiring and employment decisions. The EEOC should issue guidance that provides clarity in this area.

For additional recommendations, including adoption of the internet applicant rule, increased employer recordkeeping and reporting requirements, particularly for disability-related data, and others, please see the July 13, 2021 letter from the ACLU and coalition partners to the EEOC.[132]

IV.          Conclusion

Thank you to the Commission for convening this meeting to further explore and understand the challenges that these new technologies pose to equal employment opportunity. We look forward to working with the Commission to chart a course forward that protects the rights of all workers. 

 

[1] 42 U.S.C. §§ 2000e, et seq.

[2] 401 U.S. 424 (1971).

[3] Robert Belton, The Crusade for Equality in the Workplace: The Griggs v. Duke Power Story (2014).

[4] See Pittsburgh Press Co. v. Pittsburgh Comm’n on Hum. Rels., 413 U.S. 376 (1973) (upholding ordinance prohibiting segregated employment ads); Laura Tanenbaum & Mark Engler, Help Wanted - Female, The New Republic (Aug. 30, 2017), https://newrepublic.com/article/144614/help-wantedfemale.

[5] Tanenbaum & Engler, supra note 4.

[6] Id.

[7] Marina Zhavoronkova, Rose Khattar & Mathew Brady, Occupational Segregation in America, Ctr. for Am. Progress (Mar. 29, 2022) https://www.americanprogress.org/article/occupational-segregation-in-america/.

[8] Id.

[9] Id.

[10] Disability Employment Statistics, U.S. Dep’t of Lab.: Off. of Disability Emp. Pol’y, https://www.dol.gov/agencies/odep/research-evaluation/statistics (last visited Jan. 15, 2023) (white people with disabilities had a 9.2% unemployment rate, while Black people with disabilities had a 15.2% unemployment rate, and Latino people with disabilities had a rate of 13.9%).

[11] Edward Yelin & Laura Trupin, Successful Labor Market Transitions for Persons with Disabilities: Factors Affecting the Probability of Entering and Maintaining Employment, 1 Rsch. in Soc. Sci. and Disability 105–29 (2000).

[12] Movement Advancement Project, Center for American Progress, Human Rights Campaign, Freedom to Work & National Black Justice Coalition, A Broken Bargain for LGBT Workers of Color (Nov. 2013), at i, https://www.lgbtmap.org/file/a-broken-bargain-for-lgbt-workers-of-color.pdf.

[13] Bearing the Cost: How Overrepresentation in Undervalued Jobs Disadvantaged Women During the Pandemic, U.S. Dep’t of Lab., 7 (Mar. 15, 2022), https://www.dol.gov/sites/dolgov/files/WB/media/BearingTheCostReport.pdf.

[14] Economic News Release: Table A-2, Employment Status of the Civilian Population by Race, Sex, and Age, U.S. Bureau of Lab. Stats. (Jan. 6, 2023), https://www.bls.gov/news.release/empsit.t02.htm.

[15] Patrick Kline, Evan K. Rose & Christopher R. Walters, Systemic Discrimination Among Large U.S. Employers, 137 Q. J. of Econ. 1963, 1963 (2022), https://academic.oup.com/qje/article/137/4/1963/6605934.

[16] Learning Collider, Hidden Bias in Hiring: Examining Applicant Screening Technologies, 12 (2022),  https://static1.squarespace.com/static/60d0c05ace34212ef5a1131b/t/62ab8039e3a4642b49f2f730/1655406650864/Learning+Collider%27s+White+Paper+-+Hidden+Bias+in+Hiring+-+2022+Master.pdf.

[17] Valerie Schneider, Locked Out by Big Data: How Big Data, Algorithms and Machine Learning May Undermine Housing Justice, 52.1 Colum. Hum. Rts. L. Rev. 251, 270-74 (2020), https://hrlr.law.columbia.edu/hrlr/locked-out-by-big-data-how-big-data-algorithms-and-machine-learning-may-undermine-housing-justice/.

[18] Peter Hepburn, Renee Louis & Matthew Desmond, Racial and Gender Disparities Among Evicted Americans, 7 Socio. Sci. 649, 655 (2020), https://sociologicalscience.com/download/vol-7/december/SocSci_v7_649to662.pdf.

[19] Search Group Inc. & Bureau of Justice Statistics, Data Quality of Criminal History Records, Dep’t of Just., 1-5 (Dec. 1, 1985) https://bjs.ojp.gov/library/publications/data-quality-criminal-history-records; Léon Digard & James Kang-Brown, Yes, the New FBI Data is Poor Quality. But We’ve Always Needed Better, Vera Inst. (Oct. 12, 2022), https://www.vera.org/news/yes-the-new-fbi-data-is-poor-quality-but-weve-always-needed-better.   

[20] Elizabeth Gravier, Can Employers See Your Credit Score? How to Prepare for What They Actually See When They Run a Credit Check, CNBC.com, (Nov. 18, 2022), https://www.cnbc.com/select/can-employers-see-your-credit-score/.

[21] Kristen Broady, Mac McComas & Amine Ouazad, An Analysis of Financial Institutions in Black-Majority Communities: Black Borrowers and Depositors Face Considerable Challenges in Accessing Banking Services, The Brookings Inst. (Nov. 2, 2021), https://www.brookings.edu/research/an-analysis-of-financial-institutions-in-black-majority-communities-black-borrowers-and-depositors-face-considerable-challenges-in-accessing-banking-services/.

[22] CFPB Explores Impact of Alternative Data on Credit Access for Consumers Who Are Credit Invisible, Consumer Fin. Prot. Bureau (Feb. 16, 2017), https://www.consumerfinance.gov/about-us/newsroom/cfpb-explores-impact-alternative-data-credit-access-consumers-who-are-credit-invisible/; Explaining the Black-White Homeownership Gap, Urb. Inst. 8 (Oct. 2019), https://www.urban.org/sites/default/files/publication/101160/explaining_the_black-white_homeownership_gap_2.pdf (“For those with scores, more than half of white people have FICO scores above 700 compared with just 20.6 percent of black people. About one-third of black people do not have a FICO score.”); Discriminatory Effects of Credit Scoring on Communities of Color, Nat’l Fair Hous. All. 15 (2012), https://nationalfairhousing.org/wp-content/uploads/2017/04/NFHA-credit-scoring-paper-for-Suffolk-NCLC-symposium-submitted-to-Suffolk-Law.pdf.

[23] Aaron Klein, The real problem with credit reports is the astounding number of errors, The Brookings Inst. (Sept. 28, 2017), https://www.brookings.edu/research/the-real-problem-with-credit-reports-is-the-astounding-number-of-errors/; Levi Kaplan, Alan Mislove & Piotr Sapieżyński, Measuring Biases in a Data Broker’s Coverage, Ne. Univ. (2022), https://www.ftc.gov/system/files/ftc_gov/pdf/PrivacyCon-2022-Kaplan-Mislove-Sapiezynski-Measuring-Biases-in-a-Data-Brokers-Coverage.pdf; Tenant Background Checks Market Report, Consumer Fin. Prot. Bureau (Nov. 15, 2022), https://www.consumerfinance.gov/data-research/research-reports/tenant-background-checks-market-report/ (examining problems with criminal background checks for tenants but applicable to employment-related criminal background checks as well).

[24] See Gravier, supra note 20.

[25] Solon Barocas & Andrew D. Selbst, Big Data’s Disparate Impact, 104 Cal. L. Rev. 671, 680–81 (2016), https://www.californialawreview.org/wp-content/uploads/2016/06/2Barocas-Selbst.pdf; see also Joy Buolamwini & Timnit Gebru, Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification, Conference on Fairness, Accountability and Transparency, 81 Proc. of Mach. Learning Rsch. 1 (2018), https://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf; Allison Koenecke, et al., Racial Disparities in Automated Speech Recognition, 117 Proc. Nat’l Acad. Scis. 7684, 7684 (2020), https://www.pnas.org/doi/epdf/10.1073/pnas.1915768117.

[26] Pauline Kim, Data-Driven Discrimination at Work, 58 Wm. & Mary L. Rev. 857, 878–79 (2017), https://scholarship.law.wm.edu/cgi/viewcontent.cgi?article=3680&context=wmlr.

[27] Shari Trewin, AI Fairness for People with Disabilities: Point of View, IBM Accessibility Research (Nov. 26, 2018), https://arxiv.org/pdf/1811.10670.pdf (explaining challenges of “gathering a balanced set of training data [for disabilities], because there are so many forms and degrees of disability.”).

[28] AI Now Institute, A New AI Lexicon: Gender: Transgender Erasure in AI; Binary Gender Data Redefining ‘Gender’ in Data Systems, Medium (Dec. 15, 2021), https://medium.com/a-new-ai-lexicon/a-new-ai-lexicon-gender-b36573e87bdc.

[29] Kevin Rockmael, Higher Education Can Help Bridge the LD Employment Gap, Nat’l Ctr. for Learning Disabilities (Oct. 13, 2021), https://www.ncld.org/news/higher-education-can-help-bridge-the-ld-employment-gap/ (citing lower high school and college graduation rates and lower employment rates for people with disabilities).

[30] See Sandy E. James, et al., The Report of the 2015 U.S. Transgender Survey, Nat’l Ctr. for Transgender Equal. 85–91 (2016), https://transequality.org/sites/default/files/docs/usts/USTS-Full-Report-Dec17.pdf (out of over 27,000 respondents, “[m]ore than two-thirds (68%) of respondents did not have any ID or record that reflected both the name and gender they preferred.”); Silver Flight, Name Changes: Do We Need Judicial Discretion?, U. of Cincinnati L. Rev. (Oct. 1, 2021), https://uclawreview.org/2021/10/01/name-changes-do-we-need-judicial-discretion/ (describing barriers to legal name changes for trans people).

[31] Barocas & Selbst, supra note 25, at 679–80.

[32] Id.

[33] Id. at 680–81.

[34] Id. at 690–92. See generally Using Publicly Available Information to Proxy for Unidentified Race and Ethnicity, Consumer Fin. Prot. Bureau (2014), https://files.consumerfinance.gov/f/201409_cfpb_report_proxy-methodology.pdf.

[35] Aaron Rieke & Miranda Bogen, Help Wanted: An Examination of Hiring Algorithms, Equity, and Bias, Upturn 44 (Dec. 10, 2018), https://www.upturn.org/work/help-wanted/.

[36] See generally Good Questions, Real Answers: How Does Facebook Use Machine Learning to Deliver Ads?, Facebook (June 11, 2020), https://www.facebook.com/business/news/good-questions-real-answers-how-does-facebook-use-machine-learning-to-deliver-ads.

[37] See, e.g., Nicol Turner Lee, Testimony Before the Task Force on Artificial Intelligence, House Committee on Financial Services: Hearing on “Perspectives on Artificial Intelligence: Where We Are and the Next Frontier in Financial Services,” 7 (June 26, 2019), https://docs.house.gov/meetings/BA/BA00/20190626/109735/HHRG-116-BA00-Wstate-Turner-LeePhDN-20190626.pdf.

[38] Hansi Lo Wang, Native Americans On Tribal Land Are 'The Least Connected' To High-Speed Internet, NPR (Dec. 6, 2018), https://www.npr.org/2018/12/06/673364305/native-americans-on-tribal-land-are-the-least-connected-to-high-speed-internet (reporting that one out of three Native Americans lacks access to high-speed internet, and that 47% of Native Americans living on reservations lack access).

[39] Disability and the Digital Divide: Internet Subscriptions, Internet Use and Employment Outcomes, Off. of Disability Emp. Pol’y & U.S. Dep’t of Lab. (June 2022), https://www.dol.gov/sites/dolgov/files/ODEP/pdf/disability-digital-divide-brief.pdf.

[40] Jennifer Alsever, AI-Powered Speed Hiring Could Get You an Instant Job, but are Employers Moving Too Fast?, Fast Co. (Jan. 6, 2023), https://www.fastcompany.com/90831648/ai-powered-speed-hiring-could-get-you-an-instant-job-but-are-employers-moving-too-fast (“70% of companies now rely on automated tools for scoring candidates and conducting background checks, and AI-enabled tools are matching skills to jobs.”); Joseph B. Fuller, et al., Hidden Workers: Untapped Talent, Harv. Bus. Sch. & Accenture 20 (Oct. 4, 2021), https://www.hbs.edu/managing-the-future-of-work/Documents/research/hiddenworkers09032021.pdf (noting that 99% of Fortune 500 companies use an applicant tracking system).

[41] See, e.g., Patrick Thibodeau, Food Industry Turns to AI Hiring Platform to Fill 1M Jobs, TechTarget (Apr. 9, 2020), https://www.techtarget.com/searchhrsoftware/news/252481461/Food-industry-turns-to-AI-hiring-platform-to-fill-1M-jobs.

[42] See Katherine Anne Long, New Amazon Data Shows Black, Latino and Female Employees are Underrepresented in Best-Paid Jobs, The Seattle Times (Apr. 14, 2021, 1:42 PM), https://www.seattletimes.com/business/amazon/new-amazon-data-shows-black-latino-and-female-employees-are-underrepresented-in-best-paid-jobs/; Jason Del Rey, Bias, Disrespect, and Demotions: Black Employees Say Amazon has a Race Problem, Vox (Feb. 26, 2021, 8:00 AM), https://www.vox.com/recode/2021/2/26/22297554/amazon-race-black-diversity-inclusion.

[43] Jason Del Rey, A Leaked Amazon Memo May Help Explain Why the Tech Giant is Pushing Out So Many Recruiters, Vox (Nov. 23, 2022, 4:21 PM), https://www.vox.com/recode/2022/11/23/23475697/amazon-layoffs-buyouts-recruiters-ai-hiring-software.

[44] See, e.g., Jessica Kim-Schmid & Roshni Raveendhran, Where AI Can – and Can’t – Help Talent Management, Harv. Bus. Rev. (Oct. 13, 2022), https://hbr.org/2022/10/where-ai-can-and-cant-help-talent-management (mentioning “the infamous Amazon AI tool that disadvantaged women applicants”).

[45] See Rieke & Bogen, supra note 35; Ifeoma Ajunwa, An Auditing Imperative for Automated Hiring, 34 Harv. J.L. & Tech. 2 (2021), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3437631; Mona Sloane, Emanuel Moss & Rumman Chowdhury, A Silicon Valley Love Triangle: Hiring Algorithms, Pseudo-Science, and the Quest for Auditability, 3 Patterns 1 (2022), https://www.sciencedirect.com/science/article/pii/S2666389921003081.  

[46] See generally Online but Disconnected: Young Adults’ Experiences with Online Job Applications, JobsFirstNYC 5 (Oct. 23, 2017), https://jobsfirstnyc.org/wp-content/uploads/2019/11/Online_but_Disconnected.pdf; Aaron Smith & Monica Anderson, Americans’ Attitudes Toward Hiring Algorithms, Pew Rsch. Ctr. (Oct. 4, 2017), https://www.pewresearch.org/internet/2017/10/04/americans-attitudes-toward-hiring-algorithms/.

[47] Fuller, et al., supra note 40, at 11.

[48] For examples of these types of claims, see, e.g., Table 2, Manish Raghavan, et al., Mitigating Bias in Algorithmic Hiring: Evaluating Claims and Practices, Cornell Univ. & Microsoft Rsch. 8 (2020), https://arxiv.org/abs/1906.09208.

[49] See, e.g., Rieke & Bogen, supra note 35, at 8; Algorithm-Driven Hiring Tools: Innovative Recruitment or Expedited Disability Discrimination?, Ctr. for Democracy & Tech. 7 (Dec. 2020), https://cdt.org/wp-content/uploads/2020/12/Full-Text-Algorithm-driven-Hiring-Tools-Innovative-Recruitment-or-Expedited-Disability-Discrimination.pdf.

[50] Rieke & Bogen, supra note 35.

[51] Aaron Rieke, et al., Essential Work: Analyzing the Hiring Technologies of Large Hourly Employers, Upturn (May 2021), https://www.upturn.org/work/essential-work/.    

[52] Wilneida Negrón, Little Tech is Coming for Workers, Coworker (2021), https://home.coworker.org/wp-content/uploads/2021/11/Little-Tech-Is-Coming-for-Workers.pdf.

[53] Raghavan, et al., supra note 48.

[54] Rieke & Bogen, supra note 35, at 14.

[55] Id. at 17.

[56] For more on lookalike audiences, see generally Piotr Sapiezynski, et al., Algorithms that “Don’t See Color”: Measuring Biases in Lookalike and Special Ad Audiences, Ne. Univ. (2019), https://arxiv.org/pdf/1912.07579.pdf.

[57] See generally Linda Morris & Olga Akselrod, Holding Facebook Accountable for Digital Redlining, ACLU (Jan. 27, 2022), https://www.aclu.org/news/privacy-technology/holding-facebook-accountable-for-digital-redlining; Ariana Tobin & Ava Kofman, Facebook Finally Agrees to Eliminate Tool That Enabled Discriminatory Advertising, ProPublica (June 22, 2022 4:30 PM), https://www.propublica.org/article/facebook-doj-advertising-discrimination-settlement; Muhammad Ali, et al., Discrimination through Optimization: How Facebook’s Ad Delivery Can Lead to Biased Outcomes, 3 Proc. of the ACM on Hum. Comput. Interaction 1 (2019), https://dl.acm.org/doi/abs/10.1145/3359301; Till Speicher, et al., Potential for Discrimination in Online Targeted Advertising: Conference on Fairness, Accountability, and Transparency, Proc. of Mach. Learning Rsch. 2 (2018), https://proceedings.mlr.press/v81/speicher18a/speicher18a.pdf.

[58] See Facebook EEOC Complaints, ACLU (Sept. 25, 2019), https://www.aclu.org/cases/facebook-eeoc-complaints.

[59] Ariana Tobin & Jeremy Merrill, Facebook is Letting Job Advertisers Target Only Men, ProPublica (Sept. 18, 2018, 6:39 PM), https://www.propublica.org/article/facebook-is-letting-job-advertisers-target-only-men; Ava Kofman & Ariana Tobin, Facebook Ads Can Still Discriminate Against Women and Older Workers, Despite a Civil Rights Settlement, ProPublica (Dec. 13, 2019, 5:00 AM), https://www.propublica.org/article/facebook-ads-can-still-discriminate-against-women-and-older-workers-despite-a-civil-rights-settlement.

[60] See, e.g., Facebook Agrees to Sweeping Reforms to Curb Discriminatory Ad Targeting Practices, ACLU (Mar. 19, 2019), https://www.aclu.org/press-releases/facebook-agrees-sweeping-reforms-curb-discriminatory-ad-targeting-practices.

[61] See Jinyan Zang, How Facebook’s Advertising Algorithms Can Discriminate By Race and Ethnicity, Tech. Sci. (Oct. 19, 2021), https://techscience.org/a/2021101901/.

[62] Karen Hao, Facebook’s Ad Algorithms are Still Excluding Women from Seeing Jobs, MIT Tech. Rev. (Apr. 9, 2021), https://www.technologyreview.com/2021/04/09/1022217/facebook-ad-algorithm-sex-discrimination/; see also Sara Kingsley, et al., Auditing Digital Platforms for Discrimination in Economic Opportunity Advertising, Carnegie Mellon Univ. 1 (June 2020), https://arxiv.org/ftp/arxiv/papers/2008/2008.09656.pdf; Nicolas Kayser-Bril, Automated Discrimination: Facebook Uses Gross Stereotypes to Optimize Ad Delivery, AlgorithmWatch (Oct. 18, 2020), https://algorithmwatch.org/en/automated-discrimination-facebook-google/.

[63] See Tobin & Kofman, supra note 57; Justice Department and Meta Platforms Inc. Reach Key Agreement as They Implement Groundbreaking Resolution to Address Discriminatory Delivery of Housing Advertisements, Dep’t of Just. Off. of Pub. Affs. (Jan. 9, 2023), https://www.justice.gov/opa/pr/justice-department-and-meta-platforms-inc-reach-key-agreement-they-implement-groundbreaking.

[64] See, e.g., Pauline Kim & Sharion Scott, Discrimination in Online Employment Recruiting, 63 St. Louis U. L.J. 1, 8 (2018), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3214898 (describing a study finding that simulated users identified as male or female who engaged in identical job-search browsing on Google were shown very different ads, with an ad for coaching on higher-paying executive jobs shown significantly more often to men).

[65] See LinkedIn, https://www.linkedin.com/ (last visited Jan. 17, 2023).

[66] See ZipRecruiter, https://www.ziprecruiter.com/ (last visited Jan. 17, 2023).

[67] See Indeed, https://www.indeed.com/ (last visited Jan. 17, 2023).

[68] See CareerBuilder, https://www.careerbuilder.com/ (last visited Jan. 17, 2023).

[69] See Monster, https://www.monster.com/ (last visited Jan. 17, 2023).

[70] Rieke & Bogen, supra note 35, at 19.

[71] See LinkedIn Recruiter, LinkedIn, https://business.linkedin.com/talent-solutions/recruiter (last visited Jan. 8, 2023).

[72] Rieke & Bogen, supra note 35.

[73] Rieke & Bogen, supra note 35, at 21.

[74] See Sheridan Wall & Hilke Schellmann, LinkedIn’s Job-Matching AI was Biased. The Company’s Solution? More AI., MIT Tech. Rev. (June 23, 2021), https://www.technologyreview.com/2021/06/23/1026825/linkedin-ai-bias-ziprecruiter-monster-artificial-intelligence/; Sahin Geyik, Stuart Ambler & Krishnaram Kenthapadi, Fairness-Aware Ranking in Search & Recommendation Systems with Application to LinkedIn Talent Search, ACM KDD 2221–31 (July 2019), https://arxiv.org/pdf/1905.01989.pdf.

[75] Wall & Schellmann, supra note 74; Sahin Cem Geyik & Krishnaram Kenthapadi, Building Representative Talent Search at LinkedIn, LinkedIn (Oct. 10, 2018), https://engineering.linkedin.com/blog/2018/10/building-representative-talent-search-at-linkedin.

[76] See generally Michael D. Ekstrand, et al., All the Cool Kids, How Do They Fit In?: Popularity and Demographic Biases in Recommender Evaluation and Effectiveness: In Conference on Fairness, Accountability and Transparency, 81 Proc. of Mach. Learning Rsch. 1, 172–86 (2018), https://proceedings.mlr.press/v81/ekstrand18b.html; Masoud Mansoury, et al., Feedback Loop and Bias Amplification in Recommender Systems: In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, Ass’n for Comput. Mach. 2145–48 (Oct. 2020), https://arxiv.org/abs/2007.13019; Kyle Wiggers, Researchers Find Evidence of Bias in Recommender Systems, VentureBeat (July 29, 2020, 12:35 PM), https://venturebeat.com/ai/researchers-find-evidence-of-bias-in-recommender-systems/.

[77] Alex Engler, Auditing Employment Algorithms for Discrimination, Brookings (Mar. 12, 2021), https://www.brookings.edu/research/auditing-employment-algorithms-for-discrimination/.

[78] Rieke & Bogen, supra note 35, at 13; see also Rieke, et al., supra note 51, at 20.

[79] Fuller, et al., supra note 40, at 20.

[80] Rieke & Bogen, supra note 35, at 26.

[81] Id.; see also Rieke, et al., supra note 51, at 13.

[82] Cf. Rieke, et al., supra note 51, at 24, 50.

[83] Rieke & Bogen, supra note 35, at 27.

[84] Fuller, et al., supra note 40, at 22.

[85] Rieke & Bogen, supra note 35, at 29; Rieke, et al., supra note 51, at 23; Algorithm-Driven Hiring Tools, supra note 49, at 6.

[86] Rieke & Bogen, supra note 35, at 36.  

[87] Algorithm-Driven Hiring Tools, supra note 49, at 6.

[88] See Rieke & Bogen, supra note 35, at 29.

[89] Pymetrics is one example of a vendor of this kind of tool. See id. at 33.

[90] Kim, supra note 26, at 876; Barocas & Selbst, supra note 25, at 729–32; Rieke & Bogen, supra note 35, at 8.

[91] Kim, supra note 26.

[92] See, e.g., Jeffrey Dastin, Amazon Scraps Secret AI Recruiting Tool that Showed Bias Against Women, Reuters (Oct. 10, 2018, 7:04 PM), https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G.

[93] Kim, supra note 26; Barocas & Selbst, supra note 25, at 729–32; Rieke & Bogen, supra note 35, at 35.

[94] Dave Gershgorn, Companies are on the Hook if their Hiring Algorithms are Biased, Quartz (Oct. 22, 2018), https://qz.com/1427621/companies-are-on-the-hook-if-their-hiring-algorithms-are-biased; Kim, supra note 26, at 863.

[95] Barocas & Selbst, supra note 25, at 729–32.

[96] See, e.g., Luke Stark & Jesse Hoey, The Ethics of Emotion in Artificial Intelligence Systems: In Proceedings of ACM Conference on Fairness, Accountability, and Transparency (FAccT’21), ACM (Mar. 1, 2021), https://doi.org/10.1145/3442188.3445939; Algorithm-Driven Hiring Tools, supra note 49, at 6; Rieke & Bogen, supra note 35; Lydia X. Z. Brown, How Opaque Personality Tests Can Stop Disabled People from Getting Hired, Ctr. for Democracy & Tech. (Jan. 6, 2021), https://cdt.org/insights/how-opaque-personality-tests-can-stop-disabled-people-from-getting-hired/.

[97] See generally Luke Stark & Jevan Hutson, Physiognomic Artificial Intelligence, Fordham Intell. Prop., Media & Ent. L.J. (forthcoming) (Sept. 24, 2021), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3927300.

[98] Id.

[99] See, e.g., Algorithm-Driven Hiring Tools, supra note 49, at 9; Rieke & Bogen, supra note 35, at 36; Buolamwini & Gebru, supra note 25, at 77–91.

[100] See, e.g., Algorithm-Driven Hiring Tools, supra note 49, at 10; Guidance on Web Accessibility and the ADA, ADA.gov: U.S. Dep’t of Just. Civ. Rts. Division (Mar. 18, 2022), https://www.ada.gov/resources/web-guidance/.

[101] See, e.g., Rieke, et al., supra note 51, at 24.

[102] See, e.g., Algorithm-Driven Hiring Tools, supra note 49, at 10; Rieke, et al., supra note 51, at 24.

[103] Id.

[104] See generally Rieke & Bogen, supra note 35; Engler, supra note 77.

[105] Rieke, et al., supra note 51, at 21–22.

[106] This Commission has previously recognized some of the ways that background checks may lead to disparate impact based on race or other protected characteristics. Background Checks: What Employers Need to Know, EEOC (Mar. 11, 2014), https://www.eeoc.gov/laws/guidance/background-checks-what-employers-need-know.

[107] Jodi Kantor & Arya Sundaram, The Rise of the Worker Productivity Score, N.Y. Times (Aug. 14, 2022), https://www.nytimes.com/interactive/2022/08/14/business/worker-productivity-tracking.html.

[108] Id.; see also Annette Bernhardt, Reem Suleiman & Lisa Kresge, Data and Algorithms at Work: The Case for Worker Technology Rights, U. Cal. Berkeley Lab. Ctr. (Nov. 3, 2021), https://laborcenter.berkeley.edu/data-algorithms-at-work/; Tom Simonite, This Call May be Monitored for Tone and Emotion, Wired (Mar. 19, 2018), https://www.wired.com/story/this-call-may-be-monitored-for-tone-and-emotion/.

[109] Negrón, supra note 52; see also Bossware and Employment Tech Database, Coworker (Nov. 17, 2021), https://home.coworker.org/worktech.

[110] Aiha Nguyen, The Constant Boss: Work Under Digital Surveillance, Data & Soc’y (May 2021), https://datasociety.net/wp-content/uploads/2021/05/The_Constant_Boss.pdf.

[111] Alexandra Mateescu & Aiha Nguyen, Algorithmic Management in the Workplace, Data & Soc’y (Feb. 2019), https://datasociety.net/wp-content/uploads/2019/02/DS_Algorithmic_Management_Explainer.pdf.

[112] Bernhardt, Suleiman & Kresge, supra note 108.

[113] Warning: Bossware May Be Hazardous to Your Health, Ctr. for Democracy & Tech. (July 29, 2021), https://cdt.org/insights/report-warning-bossware-may-be-hazardous-to-your-health/.

[114] The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees, EEOC (May 12, 2022), https://www.eeoc.gov/laws/guidance/americans-disabilities-act-and-use-software-algorithms-and-artificial-intelligence.

[115] See, e.g., Letter from ACLU, American Association of People with Disabilities, Bazelon Center for Mental Health Law, Center for Democracy & Technology, Center on Privacy & Technology at Georgetown Law, Lawyers’ Committee for Civil Rights Under Law, the Leadership Conference on Civil and Human Rights, and Upturn to EEOC, Coalition Memo: Addressing Technology’s Role in Hiring Discrimination (July 13, 2021), https://www.aclu.org/letter/coalition-memo-addressing-technologys-role-hiring-discrimination; Olga Akselrod, How Artificial Intelligence Can Deepen Racial and Economic Inequities, ACLU (July 13, 2021), https://www.aclu.org/news/privacy-technology/how-artificial-intelligence-can-deepen-racial-and-economic-inequities (discussing a letter to the federal administration, signed by two dozen partner organizations, asking the administration to take concrete action to address equity and civil rights concerns in AI and technology policy).

[116] See, e.g., Jenny Yang, Testimony before the House Civil Rights and Human Services Subcommittee: The Future of Work: Protecting Workers’ Civil Rights in the Digital Age, Urb. Inst. 9–10 (Feb. 5, 2020), https://www.urban.org/sites/default/files/publication/101676/testimony_future_of_work_and_technology_-_jenny_yang_0_2.pdf; Raghavan, et al., supra note 48, at 17; Rieke & Bogen, supra note 35, at 11, 46; Rieke, et al., supra note 51, at 29–30.

[117] The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees, supra note 114, at Q14.

[118] Id. at Q3.

[119] Matt Scherer & Ridhi Shetty, Civil Rights Standards for 21st Century Employment Selection Procedures, Ctr. for Democracy & Tech. (Dec. 5, 2022), https://cdt.org/insights/civil-rights-standards-for-21st-century-employment-selection-procedures/.

[120] Id. at 8.

[121] Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People, White House (Oct. 2022), https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf.

[122] Id.

[123] Reva Schwartz, et al., A Proposal for Identifying and Managing Bias in Artificial Intelligence, Nat’l Inst. Standards & Tech. Spec. Pub. 1270 i (June 2021), https://doi.org/10.6028/NIST.SP.1270-draft.

[124] ACLU Comment on NIST’s Proposal for Managing Bias in AI, ACLU (Sept. 10, 2021), https://www.aclu.org/letter/aclu-comment-nists-proposal-managing-bias-ai.

[125] 42 U.S.C. § 2000e-5(b). 

[126] See 29 U.S.C. § 626; 29 U.S.C. § 211(a). 

[127] 42 U.S.C. § 2000e-4(g)(5). Similar authority is granted by the ADA. 42 U.S.C. § 12117(a).

[128] See also Rieke, et al., supra note 51, at 37, 41.

[129] See, e.g., Rieke & Bogen, supra note 35, at 22 (discussing ambiguity in liability of recruiting platforms).

[130] Facebook EEOC Complaint – Charge of Discrimination, ACLU 6–7, 13 (Sept. 18, 2018), https://www.aclu.org/legal-document/facebook-eeoc-complaint-charge-discrimination; Real Women in Trucking v. Meta Platforms, Inc. Charge 41–42 (Dec. 1, 2022), http://guptawessler.com/wp-content/uploads/2022/12/Real-Women-in-Trucking-Meta-Charge.pdf.

[131] Meta Platforms, Inc. Charge, supra note 130, at 42.

[132] See, e.g., Letter from ACLU, supra note 115.