Testimony of Adam T. Klein

Good morning, Chair Burrows and members of the Commission. Thank you for this opportunity to share my insights with you about the use of AI and automated systems relating to sourcing, recruiting, and applicant selection, and ways that the EEOC can provide additional guidance in this area to protect against technology-sponsored discrimination. My name is Adam Klein and I serve as Managing Partner at Outten & Golden LLP.  In that capacity, I represent classes of workers in civil rights and workplace equity litigation, including cases focusing on discriminatory hiring selection procedures relating to criminal background records and social media/online job advertising.

As a starting point, there are numerous types of AI and automated systems used by employers situated on a spectrum of complexity and utility – from simple applicant tracking to complex gamified psychometrics and unsupervised machine learning deployed to source and recruit applicants on social media platforms.  In this context, AI/algorithms are specifically designed to process large data sets and efficiently differentiate applicants with limited or no human participation. For the remainder of this discussion, I will focus on the more complex use case for AI/algorithms in the workplace.

The advantages of using AI/algorithms by employers are clear and obvious: a computer algorithm can easily and cheaply source, recruit, and select applicants for employment. The disadvantages are equally obvious: there is a fundamental and profound lack of a theoretical or practical nexus between the key competencies or requirements of a target position – as established through a job analysis and competency model – and the actual selection criteria used by AI systems. Moreover, there is a serious concern that these AI sourcing/hiring selection systems will essentially automate ingrained biases that tend to perpetuate disturbing and longstanding patterns of hiring discrimination based on protected characteristics. I urge the EEOC to take additional proactive measures to address these emerging trends in the workplace.

Recommendations

First, as noted, there is a disturbing lack of scientific evidence supporting claims that machine-learning-driven automated hiring processes provide any practical utility other than user convenience. Predictive algorithms claim to identify the “best” or preferred candidates but may instead perpetuate biased representation rates and replicate traits and interests of “favored” incumbent employees that are not job relevant. The EEOC should issue guidance requiring employers to document the use of these emerging technologies and provide a sound scientific basis for their use in sourcing, recruiting, and hiring selection.

Second, employers must have the ability, and be incentivized, to audit data from AI systems and isolate each discrete selection step so they can monitor for adverse impact, as illustrated in the sketch below. This is important because the algorithmic “tests” used in selection are constantly changing (or “learning”), and typically proceed with no underlying conceptual framework. Consequently, adverse impact is difficult to detect and to attribute to any one step, and it should be eliminated to the extent possible. Moreover, many of the AI systems are maintained by outside vendors with no real accountability.
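
To make this recommendation concrete, below is a minimal sketch, in Python, of the kind of step-by-step adverse impact monitoring contemplated, assuming the employer retains applicant-level records at each discrete selection stage. It applies the “four-fifths rule” from the Uniform Guidelines (29 C.F.R. § 1607.4(D)); the step, groups, and counts are hypothetical illustrations, not data from any actual system.

    # Minimal sketch: monitor one discrete selection step for adverse impact
    # under the four-fifths rule (29 C.F.R. § 1607.4(D)). Hypothetical data.

    from collections import Counter

    def selection_rates(applicants, selected, group_of):
        """Per-group selection rates for a single selection step."""
        applied = Counter(group_of(a) for a in applicants)
        chosen = Counter(group_of(a) for a in selected)
        return {g: chosen.get(g, 0) / n for g, n in applied.items()}

    def four_fifths_flags(rates):
        """Flag any group selected at less than 80% of the top group's rate."""
        top = max(rates.values())
        return {g: (r / top, r / top < 0.8) for g, r in rates.items()}

    # Hypothetical audit of one step, an automated resume screen:
    applicants = [{"group": "A"}] * 100 + [{"group": "B"}] * 100
    selected = [{"group": "A"}] * 60 + [{"group": "B"}] * 40

    for g, (ratio, flagged) in four_fifths_flags(
            selection_rates(applicants, selected, lambda a: a["group"])).items():
        print(g, f"impact ratio {ratio:.2f}", "FLAG" if flagged else "ok")

The arithmetic is trivial; the point is the record-keeping it presupposes. None of this monitoring is possible unless the employer or its vendor preserves applicant-level data at every stage and can isolate the algorithm’s state long enough to audit it.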

Third, applicants exposed to AI hiring selection systems should be informed of their use and be provided with disclosures sufficient to understand whether a potential violation of federal anti-discrimination statutes may have occurred.

Fourth, the federal government has a unique role to play to address the use of these emerging technologies. I recommend a coordinated government response – including drawing resources from federal agencies with particular subject-matter expertise in the use of AI and machine learning systems.

Growing Technologies in Pre-Employment

Legal scholars and practitioners have written at length about the discriminatory impact on workers resulting from the increase in employers’ use of AI or algorithmic/automated decision-making in pre-employment recruitment, selection/screening, and assessment practices.[1] 

First, Online Recruitment/Job Advertising. Online advertising is big business, reaching ever-widening audiences. Social media platforms mine user data, and their algorithms employ that information to target particular audiences for job advertisements. Social media platforms such as Meta (Facebook) previously required job advertisers to select filters—location (as a proxy for race), age, and gender—to target their ads, until this practice was challenged in several class action lawsuits brought by my law firm, co-counsel, and nonprofit organizations. In a landmark settlement, Facebook agreed to resolve these legal challenges and discontinue the underlying practices. In addition, the employers who published the discriminatory job ads settled as well, agreeing to discontinue job ad targeting based on protected characteristics.[2]

Notwithstanding such changes, researchers have commented that Facebook’s AI in job advertisements may still target audiences in ways consistent with gender-, race-, or age-based stereotypes (for example, male users disproportionately receiving ads for lumberjack or truck driver jobs). In response to newly filed legal challenges and DOJ investigations, earlier this month, Facebook announced its plan to create a “Variance Reduction System” to advance the equitable distribution of ads, including employment ads, with the goal of reducing the variance between the eligible and actual audiences along perceived sex and race/ethnicity identifiers in the delivery of ads. Time will tell if this newest intervention is successful in diminishing unlawful targeting.[3]

It appears that Facebook is just one of several job advertising platforms that utilize AI in delivering ads to prospective applicants—others appear to include LinkedIn, ZipRecruiter, CareerBuilder, and Monster. While it remains largely opaque how the AI operates, journalists report that these sites utilize AI “matching engines,” which are optimized to produce applications based on categories of user-provided information; data assigned to the user based on other users’ skill sets, experiences, and interests; and behavioral data based on how a user interacts or responds to job postings. Another example is the rise of TikTok and its foray into the job recruitment space through the pilot “TikTok Resumes” program, which invites applicants to submit a TikTok video resume for employer review. The clear concern is that this type of video resume technology may perpetuate age-based, appearance-based (implicating gender discrimination), and race-based discriminatory hiring practices.[4]

Second, Applicant Screening Tools. Employers also use AI-driven automated screening tools to sort, rank, and select applicants for employer review. Job applicants often interact with automated hiring platforms to submit their employment application, including providing personal information, agreeing to background checks, and completing personality/skills assessments. These automated hiring platforms then utilize algorithms to sort the applications; they are ubiquitous in certain industries, including retail. One example of bias is when AI-based hiring programs screen out applicants with gaps in employment history, disparately impacting groups who have taken time off for caregiving responsibilities. Holistically, we should be asking whether the predictive algorithms are designed to select the perceived “best” candidates or qualified candidates based on actual job-related criteria. It should be the latter: if an applicant is qualified based on the ability to perform the job, the applicant should have the opportunity to compete and not be washed out early in the process.

Further, third-party vendors harvest online information to create datasets of attributes and behaviors, then build automated decision-making programs that analyze the datasets to find statistical relationships between variables. What these vendors are doing is predicting who is a good match for an employer by identifying patterns and inferring characteristics from a dataset of information about candidates. These algorithms make predictions based upon statistical correlations or observed patterns, not causal factors, and further lack any connection with job performance, making them prone to error and bias. Such screening and selection procedures should be validated, at a minimum, under the SIOP Principles, which note that “[v]ariables chosen as predictors [for employment] should have theoretical, logical, or empirical foundation.”[5] When selection procedures are challenged as having a disparate impact, employers bear the burden of demonstrating that the selection procedure is job-related and consistent with business necessity.[6] In this way, employers bear the burden of demonstrating that the model is statistically valid and substantively meaningful, as opposed to merely job related.
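
To illustrate the kind of empirical foundation the SIOP Principles contemplate, the following is a minimal sketch, using hypothetical numbers, of a criterion-related validity check: whether an algorithm’s scores at hire actually correlate with later job performance. A defensible validation study would require far more, including an adequate sample, a proper job analysis, and adverse impact analyses across protected groups.

    # Minimal sketch of a criterion-related validity check, on hypothetical
    # data: algorithm scores assigned at hire versus supervisor performance
    # ratings one year later. Requires Python 3.10+ for statistics.correlation.

    from statistics import correlation

    algorithm_scores = [72, 85, 90, 60, 78, 95, 66, 88, 74, 81]
    performance_ratings = [3.1, 3.8, 2.9, 2.7, 3.5, 4.2, 3.0, 3.6, 3.2, 3.4]

    r = correlation(algorithm_scores, performance_ratings)
    print(f"criterion validity coefficient r = {r:.2f}")
    # A coefficient near zero would suggest the "predictor" lacks the empirical
    # foundation the SIOP Principles call for, however efficient the tool is.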

Lastly, Psychometric Assessments. Newer psychometric assessments include personality tests, video interviews, and gamified assessments. Of course, psychometric testing has been around for decades but is currently making a comeback with the help of AI. In the 1950s, psychometric tests began to be used in the workplace by companies outside of the armed services. In the 1960s and 1970s, IO psychologists began reintroducing personality tests based on new behavioral and social science research and techniques.

Vendors selling these services promote the idea that certain tests can accurately assess candidates for certain job competencies, values, and intelligence. But personality tests have a long history of legal challenges, including privacy concerns, accessibility and disability discrimination, and disparate impact concerns. The shortcomings of facial recognition programming are now well documented. Before HireVue discontinued the practice in 2021, it was publicly reported that its virtual interview program would sort and grade video job applicants, using AI algorithms to analyze the interview and predict the applicant’s job performance.[7]

“Gamification” in psychometric assessment goes beyond personality questions by “add[ing] features such as rules; competition; scores; medals, badges, or trinkets won; levels of progress; and comparisons of performance against other ‘players,’ typically in work-related scenarios.”[8] For example, it is reported that one widely-used vendor provides “an online technology platform that enables hiring managers to hold blind audition challenges,” in which “job applicants are given mini assignments that are designed to assess the applicant for the specific skills required for the open position.”[9]

To address this new wave of assessment, the Society for Industrial and Organizational Psychology (SIOP) published a white paper recognizing that gamified testing for hiring has not developed enough to be scientifically studied and needs “further empirical testing in accuracy of job performance predictivity and accuracy in general.”[10]

Vendors of this type of testing have offered voluntary audits of their AI assessments, which raise new questions about the AI-auditing industry. These voluntary audits have been criticized for being self-funded, creating “a risk of the auditor being influenced by the fact that this is a client,” for failing to account for intersectionality, and for leaving open whether auditing reveals if AI products actually help employers make better hiring choices.[11]

EEOC Regulatory Enforcement and Proposed Actions

The EEOC’s October 2021 launch of the Artificial Intelligence and Algorithmic Fairness Initiative and its May 2022 technical assistance document[12] about AI and disability discrimination have been important steps in engaging stakeholders and the public as the Commission works to update its Uniform Guidelines on Employee Selection Procedures. The DOL’s Office of Federal Contract Compliance Programs (OFCCP) also issued guidance in 2019, stating that AI-based pre-employment screening and selection programs would be subject to the Uniform Guidelines if an adverse impact was found, and that contractors would be required to validate the selection procedure.[13] Generally, as attorneys, we may not be best equipped to propose technical solutions and guidance; we should first be informed by I-O psychologists, including those in SIOP (which informed the Uniform Guidelines), mathematicians, computer scientists, social scientists, and others.

Commissioner Sonderling has summarized proposed solutions and approaches to addressing employment discrimination in AI-based pre-employment tools, including:[14]

  • The Algorithmic Accountability Act, granting FTC authority to promulgate regulations to require large companies to assess AI tools for potential bias.
  • State-level proposals to expand liability for employers and third-party vendors using, selling, or administrating AI tools used in employment decision-making.
  • Model risk management (MRM), or self-audits.
  • Improved data collection efforts.
  • Looking to the European Union’s Artificial Intelligence Act’s risk-based approach to regulation, which also focuses on vendor liability rather than solely employer liability—which will impact companies doing business in both the US and the EU.

I agree with Commissioner Sonderling in his proposal that “the EEOC should consider using Commissioner charges and directed investigations to address AI-related employment discrimination” because they can “facilitate and may expedite the initiation of targeted bias probes.”[15] For example, “the EEOC can initiate [directed] investigations without an underlying charge from an identifiable victim” and “Commissioner charges are useful for identifying and remedying possible systemic or pattern-or-practice discrimination rather than single plaintiff discrimination because they are initiated from a broader enforcement perspective.”[16] These proposals contemplate the difficulty that potential plaintiffs may face when the source of the discrimination—AI in the various stages of recruitment and hiring—is largely invisible to the applicant as the reason for the employment decision.

Other preventative measures that can be taken include voluntary compliance by employers (facilitated by an EEOC-created voluntary compliance program), along with attorneys’ adherence to professional responsibility duties of technology competence in advising clients to use AI-based technologies responsibly, ethically, and legally. These approaches are tied to suggestions of auditing (whether self-auditing or third-party auditing), and, as OFCCP Director Jenny R. Yang recognized, “the EEOC could be empowered to establish standards for auditors concerning qualifications and independence” and the “government could establish an auditing framework and set core requirements for retention and documentation of technical details, including what training data must be disclosed for review during an investigation.”[17]

Workplace advocates agree that the EEOC should provide more frequent and consistent guidance, including more opinion letters on the topic, to clarify the law and encourage technology vendors and employers to be proactive in preventing discriminatory effects, and that it should work with state and local agencies as new laws directed at AI become more prevalent. The EEOC could work collectively with localities that are out in front protecting workers, including New York City, whose Local Law Int. No. 144 took effect on January 1, 2023, and will be enforced starting April 15, 2023, as the NYC Department of Consumer and Worker Protection (DCWP) considers proposed rules to implement the law.[18] This new law regulates the use of “automated employment decision tools” in hiring and promotion decisions within NYC. The law, which applies to employers and employment agencies alike, requires that:

  • any AI tool undergo an annual, independent “bias audit,” with a publicly available summary (see the sketch following this list);
  • employers provide each candidate (internal or external) with 10 business days’ notice prior to being subject to the tool;
  • the notice list the “job qualifications and characteristics” used by the tool to make its assessment;
  • the sources and types of data used by the tool, as well as the applicable data-retention policy, be made available publicly (or upon written request from the candidate); and
  • candidates be able to opt out and request an alternative selection process or accommodation.
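
On one plausible reading of the proposed DCWP rules, the required bias audit centers on an “impact ratio”: each category’s selection rate divided by the selection rate of the most-selected category. The following is a minimal sketch with hypothetical categories and counts; the actual categories and methodology will be fixed by the final rules.

    # Minimal sketch of an "impact ratio" computation for a bias audit, on one
    # reading of the proposed DCWP rules under Local Law 144: each category's
    # selection rate divided by the rate of the most-selected category.
    # Categories and counts are hypothetical.

    def impact_ratios(applicant_counts, selected_counts):
        rates = {c: selected_counts.get(c, 0) / n
                 for c, n in applicant_counts.items()}
        top = max(rates.values())
        return {c: rate / top for c, rate in rates.items()}

    print(impact_ratios({"Female": 120, "Male": 130},
                        {"Female": 30, "Male": 52}))
    # -> {'Female': 0.625, 'Male': 1.0}; an auditor would publish such ratios
    #    in the required summary of results.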

As a workplace fairness advocate, I’m particularly attuned to how marginalized workers are most disadvantaged by these new technologies. As mathematician Cathy O’Neil recognized in her book, Weapons of Math Destruction, algorithms have a destructive disparate impact on poor candidates because wealthier individuals are more likely to benefit from personal, human input. “A white-shoe law firm or an exclusive prep school will lean far more on recommendations and face-to-face interviews than will a fast-food chain or cash-strapped urban school district. The privileged . . . are processed more by people, the masses by machines.”[19]

Conclusion

AI/algorithm technologies that are deployed for sourcing, recruitment, and hiring selection are designed to discriminate, that is, to differentiate and select prospective job applicants and candidates based on complex statistical analyses. These tools function primarily as time-saving and cost-effective ways to sort and hire workers and have become ubiquitous in low-wage industries. For example, Workstream, a hiring and onboarding platform, states on its website: “Workstream is the mobile-first hiring and onboarding platform for the deskless workforce. Powered by automation and two-way texting, our platform enables businesses to source, screen and onboard hourly workers faster. More than 24,000 businesses trust Workstream to hire - and save up to 70% of time on hiring.”[20]

However, cost-effectiveness cannot drive employment decision-making if it runs afoul of anti-discrimination laws. We need to construct the means for testing the validity and reliability of these models. For any algorithmic decision-making, the algorithm should be tested by experts and validated for the type of job at issue before it is applied. “Congress or state legislatures could codify, with stiff penalties, the Uniform Guidelines approach that before using a selection tool for hiring, an employer should perform a job analysis to determine which measures of work behaviors or performance are relevant to the job or group of jobs in question. Then the employer must assess whether there is ‘empirical data demonstrating that the selection procedure is predictive of or significantly correlated with important elements of job performance.’”[21]

Moreover, people who are exposed to this technology should be given adequate notice of its use and sufficient information to assess whether their civil rights have been implicated or violated.  Finally, a coordinated federal and state/local inter-agency government response is clearly warranted to develop the technical expertise required to evaluate and regulate these new technologies in the workplace to protect against systemic violations of our nation’s civil rights statutes.


[1]             See, e.g., Pauline Kim and Matthew T. Bodie, Artificial Intelligence and the Challenges of Workplace Discrimination and Privacy, 35 ABA Journal of Labor & Employment Law 289 (2021); Nantiya Ruan, Attorney Competence in the Algorithm Age, 35 ABA Journal of Labor & Employment Law 317 (2021); Jenny R. Yang, Adapting Our Anti-Discrimination Laws to Protect Workers’ Rights in the Age of Algorithmic Employment Assessments and Evolving Workplace Technology, 35 ABA Journal of Labor & Employment Law 207 (2021).

[2]             See https://www.aarp.org/work/age-discrimination/facebook-settles-discrimination-lawsuits/ and https://www.propublica.org/article/facebook-ads-discrimination-settlement-housing-employment-credit.

[3]             See https://about.fb.com/news/2023/01/an-update-on-our-ads-fairness-efforts/ (“As a part of our settlement with the Department of Justice (DOJ), representing the US Department of Housing and Urban Development (HUD), we announced our plan to create the Variance Reduction System (VRS) to help advance the equitable distribution of ads on Meta technologies. After more than a year of collaboration with the DOJ, we have now launched the VRS in the United States for housing ads. Over the coming year, we will extend its use to US employment and credit ads. Additionally, we discontinued the use of Special Ad Audiences, an additional commitment in the settlement.”), last visited on January 28, 2023.

[4]             See https://newsroom.tiktok.com/en-us/find-a-job-with-tiktok-resumes (“Interested candidates are encouraged to creatively and authentically showcase their skillsets and experiences, and use #TikTokResumes in their caption when publishing their video resume to TikTok.”), last visited on January 28, 2023.

[5]             SIOP, Principles for the Validation and Use of Personnel Selection Procedures (5th ed. 2018), https://www.apa.org/ed/accreditation/about/policies/personnel-selection-procedures.pdf at 12.

[6]             Griggs v. Duke Power Co., 401 U.S. 424 (1971).

[7]             See Joe Avella & Richard Feloni, We Tried the AI Software Companies Like Goldman Sachs and Unilever Use to Analyze Job Applicants, Bus. Insider (Aug. 29, 2017), https://www.businessinsider.com/hirevue-uses-ai-for-job-interview-applicants-goldman-sachs-unilever-2017-8 [https://perma.cc/6YJL-ZNXM].

[8]             Jessica M. Walker & Don Moretti, Visibility Comm., Soc’y for Indus. & Org. Psych., Recent Trends in Psychometric Assessment 4 (2018), http://www.siop.org/Portals/84/docs/White%20Papers/PreAssess.pdf (“The intent is to provide a more captivating candidate experience that assesses specific skills while keeping the applicant engaged.”).

[9]             Stephanie Bornstein, Reckless Discrimination, 105 Cal. L. Rev. 1055, 1102 (2017) (citing Marianne Cooper, The False Promise of Meritocracy, Atlantic (Dec. 1, 2015), http://www.theatlantic.com/business/archive/2015/12/meritocracy/418074; Discover Great Talent “The Voice” Way, GapJumpers, https://www.gapjumpers.me).

[10]            SIOP Statement on the Use of Artificial Intelligence (AI) for Hiring: Guidance on the Effective Use of AI-Based Assessments, Society for Industrial and Organizational Psychology (January 29, 2022), https://www.siop.org/Portals/84/docs/SIOP%20Statement%20on%20the%20Use%20of%20Artificial%20Intelligence.pdf?ver=mSGVRY-z_wR5iIuE2NWQPQ%3d%3d. Such advice includes: (1) AI-based assessments should produce scores that are considered fair and unbiased; (2) the content and scoring of AI-based assessments should be clearly related to the job; (3) AI-based assessments should produce scores that predict future job performance (or other relevant outcomes) accurately; (4) AI-based assessments should produce consistent scores that measure job-related characteristics (e.g., upon re-assessment); and (5) all steps and decisions relating to the development and scoring of AI-based assessments should be documented for verification and auditing.

[11]            Hilke Schellmann, Auditors are testing hiring algorithms for bias, but there’s no easy fix, MIT Technology Review (February 11, 2021), https://www.technologyreview.com/2021/02/11/1017955/auditors-testing-ai-hiring-algorithms-bias-big-questions-remain/ (quoting Professor Pauline Kim).

[12]            Press Release, U.S. Equal Emp. Opportunity Comm’n, U.S. EEOC and U.S. Department of Justice Warn Against Disability Discrimination (May 12, 2022), https://www.eeoc.gov/newsroom/us-eeoc-and-us-department-justice-warn-against-disability-discrimination. 

[13]            See Off. of Fed. Contract Compliance Programs, Validation of Employee Selection Procedures, U.S. Dep’t of Labor (2019), https://www.dol.gov/agencies/ofccp/faqs/employee-selection-procedures.

[14]             Keith E. Sonderling, Bradford J. Kelley, and Lance Casimir, The Promise and The Peril: Artificial Intelligence and Employment Discrimination, 77 U. Miami L. Rev. 1, 53-61 (2022). Available at: https://repository.law.miami.edu/umlr/vol77/iss1/3.

[15]            Id. at 66.

[16]            Id. at 67. 

[17]            Jenny R. Yang, Adapting Our Anti-Discrimination Laws to Protect Workers’ Rights in the Age of Algorithmic Employment Assessments and Evolving Workplace Technology, 35 ABA Journal of Labor & Employment Law 207, 227 (2021).

[18]            NYC Admin. Code §§ 20-870 to 20-871.

[19]            Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy 8 (2016).

[20]             https://www.workstream.us/, last visited January 23, 2023.

[21]            Lori Andrews and Hannah Bucher, Automating Discrimination: AI Hiring Practices and Gender Inequality, 44 Cardozo L. Rev. 145, 200–01 (2022).