Testimony of Alex C. Engler

My name is Alex Engler. I am a fellow at the Brookings Institution, an associate fellow at the Center for European Policy Studies, and an adjunct professor at Georgetown University. In these roles, I primarily study the interaction between algorithms and social policy. This research is informed by a decade of experience working as a data scientist in government, think tanks, and academia.

First, I would like to commend the EEOC on last year’s technical assistance, detailing how AI hiring tools can discriminate against people with disabilities, and how employers might comply with the Americans with Disabilities Act. The EEOC guidance is well reasoned and attuned to its underlying goal of meaningfully improving the market of AI hiring software for people with disabilities. I applaud this work, and will continue to hold it up as an example to other federal agencies, especially for how it considers the entire socio-technical process of hiring, and not just the algorithms alone.1 Further, I commend the EEOC on providing guidance not just for non-discrimination, but also for implementing principles of disclosure—that people with disabilities deserve to know when they are being evaluated by an algorithm—as well as the availability of a reasonable accommodation and/or an alternative, non-algorithmic process.

This work from the EEOC is especially encouraging, because the story of AI hiring is not unique. Almost all critical decisions in employment are experiencing ‘algorithmization’—meaning the steady expansion of algorithms to more and more tasks. This now includes AI’s application to targeted job ads, recruitment, background checks, task allocation, evaluation of employee performance, wage-setting, promotion, and, at times, even termination, among others.

Unfortunately, while many of these AI systems have significant value when used responsibly, they have too often been deployed with inflated promises and insufficient testing or worker protections. Much like AI hiring, this can lead to discriminatory outcomes, worker disenfranchisement through black-box AI decisions, and unjust decisions resulting from algorithmic mistakes.

The most comprehensive U.S. federal document on AI harms, the Blueprint for an AI Bill of Rights, states that these AI applications pose meaningful risks to equal opportunity, and warrant government scrutiny. The European Union’s draft AI Act also recognizes this, and when it passes, it will categorize nearly all of these AI applications as “high-risk,” and will create significant new regulatory requirements, as well as government enforcement capacity.

While AI hiring is perhaps the most visible in the media and the best analyzed by academics and civil society, these other AI employment systems are used by thousands of businesses and affect millions of Americans. It is difficult to precisely interpret the limited survey evidence about the market penetration of AI employment tools. Still, the prevailing evidence suggests that, for medium- and large-sized businesses, algorithmic systems contribute significantly to, or perform outright, the majority of all employment decisions in the categories mentioned above.

That most employment decisions will be assisted by, or made by, an AI system is a sea change in the employer-employee relationship, and in turn, requires profound change at the EEOC. Continuing the work of the Artificial Intelligence and Algorithmic Fairness Initiative, the EEOC should systematically consider these AI applications, develop tailored guidance for each under all of the EEOC’s legal authorities, and build the necessary enforcement capacity.

I understand that this is an enormous undertaking, and that it will take time and resources. I also expect that, over time, it will affect the structure and core functions of the EEOC. While a great challenge, this is the appropriate response to the new algorithmic paradigm in employment.

Beyond new policy, the EEOC must also develop new capacity. An important takeaway from my research is that, while the transition to AI employment systems presents a possibility for a fairer and more just labor market, these better outcomes are absolutely not guaranteed. In a Brookings Institution paper, I argue that the market incentives around AI hiring are not sufficient to produce fair outcomes on their own. Further, an effective and independent auditing market that might self-regulate AI hiring systems will not emerge without government enforcement.2

The European Union recognizes this challenge, and the EU AI Act will enable significant government oversight, notably by requiring developers to make data and documentation available to regulators, which will enable algorithmic audits to ensure conformity with the EU AI requirements. The EU AI Act will also require registration of all covered AI employment systems in a public database, potentially creating an informative resource for U.S. policymakers.

I was encouraged to see “Technology-related employment discrimination” mentioned in the EEOC’s Draft Strategic Enforcement Plan. In order to provide meaningful enforcement, the EEOC should actively build capacity, such as by hiring data scientists who specialize in regulatory compliance, as well as algorithmic auditors, who will be essential in the investigation and litigation of AI employment systems. Even before any specific enforcement actions, the EEOC should look to acquire and evaluate AI employment systems in order to improve public knowledge. This effort might be modeled after the National Institute of Standards and Technology’s Face Recognition Vendor Test program, which evaluates facial recognition software and publishes results from this testing. In total, this development of new EEOC capacity for algorithmic oversight will be as critical as the development of policy guidance and technical assistance.

To summarize, I urge the EEOC to:

  1. Consider a wide range of AI employment systems, not just in hiring, but also targeted job ads, recruitment, task allocation, evaluation of employee performance, wage-setting, promotion, and termination.
  2. Encourage and enforce the whole range of AI principles on these AI employment systems, as advocated in exemplar policy documents, such as the Blueprint for an AI Bill of Rights or the EU AI Act, to the extent possible under the EEOC’s legal authorities.
  3. Develop the capacity to provide oversight, such as by using investigations to audit these critical AI systems and ensure their compliance with federal law, as well as by using information-gathering authorities to inform the EEOC and the public on their proliferation and impact.
1 Alex C. Engler, “The EEOC wants to make AI hiring fairer for people with disabilities.” May 26th, 2022. https://www.brookings.edu/blog/techtank/2022/05/26/the-eeoc-wants-to-make-ai-hiring-fairer-for-people-with-disabilities/

2 Alex C. Engler, “Auditing employment algorithms for discrimination.” March 12th, 2021. https://www.brookings.edu/research/auditing-employment-algorithms-for-discrimination/