Testimony of Dr. Ifeoma Ajunwa

 

  I. Introduction

Greetings to all. Chair Burrows and Commissioners, thank you for inviting me to testify at this important public meeting on employment discrimination in AI and automated systems. My name is Dr. Ifeoma Ajunwa. I am a law professor at the University of North Carolina School of Law, where I am also the founding director of the AI Decision-Making Research Program. I am also a founding board member of the Labor Tech Research Network, an international group of scholars dedicated to research on the ethical and legal issues associated with AI in the workplace. I have published several law review articles on automated hiring systems, and I have a forthcoming book, The Quantified Worker, which discusses pressing legal issues arising from the use of AI and automated decision-making technologies in the workplace. I previously testified before this Commission in 2016, and in 2020 I also testified before the U.S. House Committee on Education and Labor on the issue of workers’ rights in the digital age.

 

  II. Automated Hiring’s Potential for Discrimination

In several writings, I have documented the capability of automated hiring programs both to replicate unlawful employment discrimination and to obfuscate it.[1] First, I note the business trend toward the use of automated hiring programs. In an informal survey that a co-author and I conducted, we found that the top twenty private employers in the Fortune 500 list all made use of automated hiring systems.[2] Notably, this list was composed mostly of retail companies with large numbers of workers in low-wage or entry-level positions. While it is true that many businesses turn to automated hiring in an attempt to diversify their workplaces and eliminate the human bias that might stand in the way of that goal, there is evidence that such algorithmic decision-making processes may stymie the progress made by antidiscrimination laws and may also serve to magnify inequality in the workplace.[3] In a study of 135 archival texts tracing the development of hiring platforms from 1990 to 2006, my co-author and I found that although one presumed reason for automated hiring was to reduce hirer bias, the automated hiring platforms were initially advertised as a way to “clone your best worker,” a slogan that, in effect, promised to replicate existing bias.[4] This is precisely because automated hiring systems take the nebulous concept of “cultural fit” for a given corporation or organization and concretize it into select algorithmic variables that serve as stand-ins for protected categories.[5] The creation of these proxy variables can produce racial, gender, and age discrimination in contravention of the antidiscrimination laws this Commission enforces, such as Title VII of the Civil Rights Act and the Age Discrimination in Employment Act (ADEA). Furthermore, the design of automated hiring systems, in both their user interfaces and their data retention protocols, can further enable unlawful employment discrimination in a manner that is both discreet and difficult to document.[6]

       I’ll illustrate these problems first with an example shared by an employment and labor law attorney who had been hired to audit an automated hiring system. As reported by the lawyer, when the automated system was presented with training data (presumably the resumes of top performers) and then queried as to which two variables it had found most relevant, the system reported back the name “Jared” and whether an applicant had played high school lacrosse.[7] As I detail in my forthcoming book, the significance of this result is that both may be considered proxy variables. As confirmed by Social Security records, the name “Jared” is highly correlated with individuals who are both white and male.[8] Furthermore, lacrosse is not offered at all high schools; rather, it is an expensive sport found mainly in well-funded high schools located in affluent neighborhoods that, given the history of racial segregation in the United States, are more likely to be predominantly white.[9] Thus, in this insidious manner, proxy variables embedded in automated hiring systems can enact unlawful racial and gender employment discrimination.
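To make the mechanism concrete, consider how an auditor might surface such proxy variables. The sketch below measures how strongly each candidate feature is associated with a protected attribute in a system's training records; the feature names and the data are hypothetical illustrations of my own, not real audit results, but a near-perfect association of the kind shown would mark a feature as a likely proxy.

```python
# A minimal sketch of proxy-variable detection: correlate each candidate
# feature with a protected-attribute flag in the training data. All names
# and values here are hypothetical illustration data.

import pandas as pd

# Hypothetical training records (1 = yes, 0 = no).
df = pd.DataFrame({
    "named_jared":     [1, 1, 0, 0, 1, 0, 0, 0],
    "played_lacrosse": [1, 1, 1, 0, 1, 0, 0, 0],
    "white_male":      [1, 1, 1, 0, 1, 0, 0, 0],  # protected-attribute flag
})

# Pearson correlation of each feature with the protected attribute; values
# near 1.0 flag a feature that can stand in for the protected category.
for feature in ["named_jared", "played_lacrosse"]:
    r = df[feature].corr(df["white_male"])
    print(f"{feature}: correlation with protected attribute = {r:.2f}")
```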

      I share two more stories as examples of how automated hiring systems can enable unlawful discrimination through the platform authoritarianism of their user interfaces and through their data retention rules. The first story is that of a man in Massachusetts who found that he was unable to complete an application because he could not register his college graduation year in a drop-down menu that limited choices to more recent graduation years.[10] In this case, the year of college graduation, which is highly correlated with age, operates as a proxy variable for age discrimination. The second story comes from a field experiment in which my co-author and I found that we could not complete the application for a major retail employer. Although we had selected a part-time position, the automated hiring platform still required the applicant to indicate unlimited availability in order to submit the application.[11] What made this stealthy is that the design of most automated hiring systems would not retain any record of the failed attempt to complete the application.[12] Requiring unlimited availability effectively bars applicants with caregiving obligations from applying, a barrier that falls disproportionately on women, who are more likely to be caregivers.
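Because failed attempts leave no trace, the disparate impact of interface requirements like the unlimited-availability rule is nearly impossible to document after the fact. The sketch below shows one way a system could be required to retain those records, logging every attempt whether or not it is completed; the field names and log format are assumptions of mine, not features of any existing platform.

```python
# A minimal sketch of the data retention this testimony calls for: log every
# application attempt, including those blocked by interface requirements, so
# auditors can later detect screens such as an "unlimited availability" rule.
# All field names here are hypothetical.

import json
from datetime import datetime, timezone

ATTEMPT_LOG = "application_attempts.jsonl"

def record_attempt(applicant_id: str, form_data: dict,
                   completed: bool, blocked_by: str | None) -> None:
    """Append every attempt, completed or not, to an append-only log."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "applicant_id": applicant_id,
        "form_data": form_data,
        "completed": completed,
        "blocked_by": blocked_by,  # the validation rule that stopped submission
    }
    with open(ATTEMPT_LOG, "a") as log:
        log.write(json.dumps(entry) + "\n")

# A part-time applicant without unlimited availability is blocked, but the
# attempt is still retained for any later audit.
record_attempt(
    applicant_id="A-1001",
    form_data={"position": "part-time", "unlimited_availability": False},
    completed=False,
    blocked_by="unlimited_availability_required",
)
```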

      Finally, I point to the use of automated video interviewing with facial analysis and emotion detection as an inherently flawed and discriminatory automated hiring practice.[13] In 2018, 60% of organizations were using video interviews, and the use of automated video interviews sharply increased to 86% in 2020 due to the Covid-19 pandemic.[14] This automated hiring practice is akin to the discredited pseudoscience of phrenology and thus should be banned. Automated video interviews that are scored by speech algorithms create opportunities for accent discrimination,[15] and those that claim to detect emotion from facial analysis further enable racial and gender discrimination, given cultural and gender differences in how emotions are expressed.[16]

 

  III. Proposals for Governance of Automated Hiring

Given the demonstrated capability of automated systems to be used in the service of unlawful discrimination, I offer four proposals that the EEOC should consider as part of its enforcement of employment antidiscrimination laws.

 

  A. Discrimination Per Se

First, the EEOC should consider the addition of a third cause of action under Title VII of the Civil Rights Act. Currently, there are two causes of action: disparate treatment and disparate impact. Both present high burdens of proof for the applicant. In the first, the applicant must in essence find the “smoking gun” that proves direct and intentional discrimination. In the second, the applicant must provide the requisite statistics to show that a specific hiring practice had a disparate impact on a protected category. However, courts have been inconsistent in deciding which statistics are appropriate for a disparate impact cause of action, with the effect that the plaintiff’s burden of proof has become unreasonably high.[17] A third cause of action, which I call discrimination per se, would shift the burden of proof from applicant to employer whenever an applicant can point to a feature or requirement of an automated hiring system that appears egregiously discriminatory toward a particular protected class.[18] The employer would then bear the burden of proving, with the statistical results of audits of its automated hiring system, that the identified feature does not in fact have a disparate impact on a protected class.[19]
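The statistical showing this burden-shifting framework contemplates already has a familiar benchmark: the “four-fifths rule” of the Uniform Guidelines on Employee Selection Procedures, 29 C.F.R. § 1607.4(D), under which a group’s selection rate below four-fifths of the highest group’s rate is generally regarded as evidence of adverse impact. The sketch below computes that ratio from hypothetical selection counts; the group labels and numbers are illustrative, not real audit data.

```python
# A minimal sketch of the "four-fifths rule" adverse-impact check from the
# Uniform Guidelines (29 C.F.R. § 1607.4(D)). Group names and counts are
# hypothetical illustration data.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants that the system advanced."""
    return selected / applicants

def adverse_impact_ratios(groups: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Compare each group's selection rate to the highest group's rate.

    `groups` maps a group label to (selected, applicants). A ratio below 0.8
    is generally regarded as evidence of adverse impact under the Guidelines.
    """
    rates = {g: selection_rate(s, a) for g, (s, a) in groups.items()}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical outcomes from an automated screen.
    outcomes = {"group_a": (48, 100), "group_b": (27, 100)}
    for group, ratio in adverse_impact_ratios(outcomes).items():
        flag = "ADVERSE IMPACT" if ratio < 0.8 else "ok"
        print(f"{group}: impact ratio {ratio:.2f} -> {flag}")
```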

 

  B. Mandated Audits of Automated Hiring Systems

This brings me to my second proposal: the EEOC should mandate employer audits of any automated hiring systems in use.[20] There is some question as to whether this should be an internal audit or whether the mandate should require an external audit by an independent third-party auditor. I argue that the EEOC should require external audits, as internal audits alone are not enough. The EEOC could choose to conduct these external audits itself, or it could certify third-party vendors to provide them.
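One technique an external auditor might use, sketched below, is matched-pair testing: score pairs of otherwise identical candidate profiles that differ in exactly one suspect feature and measure the resulting score gap. The scoring function here is a hypothetical stand-in for a vendor’s model, and the feature names are assumptions of mine; the point is the testing protocol, not any particular system.

```python
# A minimal sketch of matched-pair testing for an external audit.
# `score_candidate` simulates a vendor's resume-scoring model for
# demonstration only; feature names are hypothetical.

import random

def score_candidate(profile: dict) -> float:
    """Hypothetical stand-in for a vendor's scoring model."""
    base = 0.5 + 0.3 * profile["years_experience"] / 10
    # Simulated proxy effect, built in so the audit has something to find.
    return base + (0.1 if profile["played_lacrosse"] else 0.0)

def paired_gap(n_pairs: int = 1000) -> float:
    """Average score gap between matched profiles differing in one feature."""
    gaps = []
    for _ in range(n_pairs):
        years = random.randint(0, 10)
        with_feature = {"years_experience": years, "played_lacrosse": True}
        without_feature = {"years_experience": years, "played_lacrosse": False}
        gaps.append(score_candidate(with_feature) - score_candidate(without_feature))
    return sum(gaps) / len(gaps)

# A persistent nonzero gap flags the feature for disparate-impact analysis.
print(f"Mean score gap attributable to the suspect feature: {paired_gap():.3f}")
```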

 

  C. The EEOC Should Create Its Own Audit Tools

A third proposal follows from the second, as I argue that the EEOC (perhaps in cooperation with the FTC) should develop its own automated governance tools in the form of AI or automated systems that could be used to audit automated hiring systems.[21] The EEOC could then provide audit services to corporations deploying automated hiring systems, or it could use such an audit tool in its investigation and enforcement actions.[22]

 

  D. The Uniform Guidelines on Employee Selection Procedures Should Govern

Finally, my fourth proposal relates to automated video interviewing and other automated hiring practices that rest on shaky science and have no probative value for hiring. I propose that the EEOC release an advisory notice stating that the Uniform Guidelines on Employee Selection Procedures govern both the selection of variables in algorithmic hiring and the design of automated hiring systems, including the retention of full records of completed applications as well as application attempts.[23] Every variable selected for an automated hiring system should then be supported by evidence of criterion-related, content, or construct validity for the specified job position:

“Evidence of the validity of a test or other selection procedure by a criterion-related validity study should consist of empirical data demonstrating that the selection procedure is predictive of or significantly correlated with important elements of job performance. Evidence of the validity of a test or other selection procedure by a content validity study should consist of data showing that the content of the selection procedure is representative of important aspects of performance on the job for which the candidates are to be evaluated. Evidence of the validity of a test or other selection procedure through a construct validity study should consist of data showing that the procedure measures the degree to which candidates have identifiable characteristics which have been determined to be important in successful performance in the job for which the candidates are to be evaluated.”[24]

 This would lessen opportunities for proxy variables to be deployed for unlawful employment discrimination against protected categories of workers and would dissuade the use of facial analysis for emotion recognition.
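Of the three validation strategies, criterion-related validity is the most directly testable against an automated system’s output. The sketch below illustrates the basic check with hypothetical placeholder data: whether the scores an automated screen assigns are significantly correlated with a later measure of actual job performance.

```python
# A minimal sketch of a criterion-related validity check under the Uniform
# Guidelines: test whether an automated screen's scores are significantly
# correlated with actual job performance. The data are hypothetical
# placeholders for an employer's real records.

from scipy.stats import pearsonr

# Hypothetical: selection scores assigned by the hiring system at the time
# of application, paired with later supervisor performance ratings.
selection_scores    = [62, 75, 58, 90, 81, 67, 73, 88, 55, 79]
performance_ratings = [3.1, 3.8, 2.9, 4.5, 4.0, 3.2, 3.6, 4.4, 2.7, 3.9]

r, p_value = pearsonr(selection_scores, performance_ratings)
print(f"correlation r = {r:.2f}, p = {p_value:.4f}")
# A variable that cannot show a significant, job-related correlation
# (think "played high school lacrosse") fails this validity test.
```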

 

Once again, my gratitude to Chair Burrows and her fellow commissioners for the opportunity to present these remarks and proposals in support of the EEOC mission of equal employment opportunity.

 

 

 

[1] See Ifeoma Ajunwa, Automated Video Interviewing as the New Phrenology, 36 Berkeley Tech. L.J. 1173 (2022); Ifeoma Ajunwa, The Auditing Imperative for Automated Hiring, 34 Harv. J.L. & Tech. 621 (2021); Ifeoma Ajunwa, The Paradox of Automation as Anti-Bias Intervention, 41 Cardozo L. Rev. 1671 (2020); Ifeoma Ajunwa, Protecting Workers’ Civil Rights in the Digital Age, 21 N.C. J.L. & Tech. 1 (2020); Ifeoma Ajunwa, Age Discrimination by Platforms, 40 Berkeley J. Emp. & Lab. L. 1 (2019); Ifeoma Ajunwa & Daniel Greene, Platforms at Work: Automated Hiring Platforms and Other New Intermediaries in the Organization of the Workplace, in Work and Labor in the Digital Age (Research in the Sociology of Work, 2019), 61-91.

[2] Ifeoma Ajunwa & Daniel Greene, Platforms at Work: Automated Hiring Platforms and Other New Intermediaries in the Organization of the Workplace, in Work and Labor in the Digital Age (Research in the Sociology of Work, 2019), 61-91.

[3] “Advocates applaud the removal of human beings and their flaws from the assessment process.” Danielle Keats Citron & Frank Pasquale, The Scored Society: Due Process for Automated Predictions, 89 WASH. L. REV. 1, 4 (2014). Algorithms or automated systems are often seen as fair because they are “claimed to rate all individuals in the same way, thus averting discrimination.” Id.

[4] Ifeoma Ajunwa & Daniel Greene, Platforms at Work: Automated Hiring Platforms and Other New Intermediaries in the Organization of the Workplace, in Work and Labor in the Digital Age (Research in the Sociology of Work, 2019), 61-91, at 77.

[5] Ifeoma Ajunwa, The Paradox of Automation as Anti-Bias Intervention, 41 Cardozo L. Rev. 1671, 1712-1715 (2020).

[6] Ifeoma Ajunwa, The Auditing Imperative for Automated Hiring, 34 Harv. J.L. & Tech. 621, 622 (2021).

[7] Ifeoma Ajunwa, The Paradox of Automation as Anti-Bias Intervention, 41 Cardozo L. Rev. 1671, 1690 (2020).

[8] Ifeoma Ajunwa, The Quantified Worker (forthcoming from Cambridge University Press, April 2023).

[9] Ifeoma Ajunwa, The Quantified Worker (forthcoming from Cambridge University Press, April 2023).

[10] Patricia G. Barnes, Behind the Scenes, Discrimination by Job Search Engines, AGE DISCRIMINATION EMP. (Mar. 29, 2017), https://www.agediscriminationinemployment.com/behind-the-scenes-discrimination-by-job-search-engines

[11] Ifeoma Ajunwa & Daniel Greene, Platforms at Work: Data Intermediaries in the Organization of the Workplace, in WORK AND LABOR IN THE DIGITAL AGE (2019).

[12] See generally CATHY O’NEIL, WEAPONS OF MATH DESTRUCTION: HOW BIG DATA INCREASES INEQUALITY AND THREATENS DEMOCRACY (2016).

[13] “Akin to how phrenology sought to quantify human character through observable, physical traits, video interview technologies also seek to quantify and objectively understand human behavior as it relates to job success. An inherent underlying assumption of these technologies is that there exist observable, physical manifestations that give insight into the character and behavioral traits that define a successful individual. Video interviewing technology is purportedly motivated by objectivity, yet it ranks candidates based on judgments rooted in normative comparisons. The automated video interviewing algorithms are trained to search for certain traits deemed to be valuable, but these normative conclusions are based on samples of existing employees, and these samples are not random and may not be representative.” Ifeoma Ajunwa, Automated Video Interviewing as the New Phrenology, 36 Berkeley Tech. L.J. 1173, 1190 (2022). See also Evan Selinger & Woodrow Hartzog, The Inconsentability of Facial Surveillance, 66 LOY. L. REV. 101, 105 (2019).

[14] Nilam Oswal, The Latest Recruitment Technology Trends and How to Really Use Them, PC WORLD (Feb. 9, 2018, 4:56 PM), https://www.pcworld.idg.com.au/article/633219/latest-recruitment-technology-trends-how-really-use-them/; Gartner HR Survey Shows 86% of Organizations Are Conducting Virtual Interviews to Hire Candidates During Coronavirus Pandemic, GARTNER: NEWSROOM (Apr. 30, 2020), https://www.gartner.com/en/newsroom/press-releases/2020-04-30-gartner-hr-survey-shows-86--of-organizations-are-cond

[15] “Accent discrimination is a credible threat of automated video systems given that many video interview algorithms employ vocal analysis. In fact, a recent audit of HireVue’s algorithms suggest[ed] that accent discrimination may already be present in the company’s assessment outcomes.” Ifeoma Ajunwa, Automated Video Interviewing as the New Phrenology, 36 Berkeley Tech. L.J. 1173, 1199 (2022).

[16] In Mitigating Bias in Algorithmic Hiring: Evaluating Claims and Practices, Raghavan and his coauthors point to “[a] wave of studies [which have] shown that several commercially available facial analysis techniques suffer from disparities in error rates across gender and racial lines.” Ifeoma Ajunwa, Automated Video Interviewing as the New Phrenology, 36 Berkeley Tech. L.J. 1173, 1190 (2022); see also Manish Raghavan, Solon Barocas, Jon Kleinberg & Karen Levy, Mitigating Bias in Algorithmic Hiring: Evaluating Claims and Practices, 2020 CONF. ON FAIRNESS, ACCOUNTABILITY, AND TRANSPARENCY (FAT*) 469, 475 (2020); Joy Buolamwini & Timnit Gebru, Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification, 81 PROC. MACH. LEARNING RSCH. 1, 2 (2018); Lauren Rhue, Racial Influence on Automated Perceptions of Emotions, RACE, AI, & EMOTIONS 1, 1 (2018).

[17] Charles A. Sullivan, Disparate Impact: Looking Past the Desert Palace Mirage, 47 WM. & MARY L. REV. 911, 989 (2005).  

[18] Ifeoma Ajunwa, The Paradox of Automation as Anti-Bias Intervention, 41 Cardozo L. Rev. 1671, 1726-1728 (2020).

[19] Id.

[20] Ifeoma Ajunwa, The Auditing Imperative for Automated Hiring, 34 Harv. J.L. & Tech. 621, 622 (2021).

[21] Ifeoma Ajunwa, Automated Governance, 101 N.C. L. Rev. 355 (2023).

[22] I offer several mechanisms to serve as guardrails to automated governance. These mechanisms include: 1) a Standing Advisory Council of Technologists and Social Scientists, 2) Stakeholder and Constituency Engagement, and 3) Congressional Oversight and Review. Ifeoma Ajunwa, Automated Governance, 101 N.C. L. Rev. 355, 398-402 (2023).

[23] Ifeoma Ajunwa, The Auditing Imperative for Automated Hiring, 34 Harv. J.L. & Tech. 621, 675-677 (2021).

[24] Charles Sullivan, Employing AI, 63 VILL. L. REV. 395, 423 (2018). See also 29 C.F.R. § 1607.5B (2018).