Testimony of Suresh Venkatasubramanian

Chair Burrows, and Commissioners of the EEOC, thank you for the opportunity to provide testimony to the Commission today. My name is Suresh Venkatasubramanian, and I am a professor at Brown University and director of the Center for Technological Responsibility. I am a computer scientist who has for the last decade studied the ways in which automated systems, and especially those that use artificial intelligence, may produce discriminatory outcomes in employment and performance evaluation. Most recently, I served as the Assistant Director for Science and Justice in the White House Office of Science and Technology Policy in the Biden-Harris Administration, where I coauthored the Blueprint for an AI Bill of Rights[1], a document that lays out five key protections for those meaningfully impacted by the use of automation, along with a detailed technical companion describing how these protections can be realized.

Automated systems, fueled by vast quantities of data, innovative machine learning algorithms, and fast computing resources, hold out the promise of faster, more efficient, and more accurate approaches to evaluating candidates for employment: natural language processing algorithms that screen candidate resumes and identify salient factors, game-based interview sessions that seek to identify key cognitive traits that make an individual a good fit for a job, or multimedia analysis procedures that score candidates based on video interviews. These systems promise to make the interview process more seamless for candidates and recruiters, eliminate biases in judgment, and allow a broader pool of candidates to be recruited and evaluated fairly.

The key word above is ‘promise’: the tremendous hype surrounding the development of new technology, especially technology that uses artificial intelligence-based approaches, has obscured many documented problems that arise when these algorithms are deployed in actual employment settings. These include differential outcomes for people from different demographic groups, inferences based on psychological premises (such as emotion recognition) that are unsound or unvalidated, and a lack of accountability arising from the shifting of responsibility between the vendors who develop such software and the companies that procure it for use in hiring.

Over the decades, every new technology introduced into society – cars, medical treatments, airplanes, a host of consumer products – has been accompanied by rigorous testing regimes to ensure that the technology works, is safe, and does not cause harm. These guardrails build trust in the technology and create an environment in which innovation flourishes without fear of liability. Indeed, we have already seen that in the case of data-driven automated technologies such as machine learning, the insistence on guardrails to protect against discrimination and make the workings of systems more transparent has fostered a whole new area of innovation in the tech industry described as ‘Responsible AI’. Guardrails feed further innovation rather than hampering it: those who frame this as a zero-sum game are in effect advocating for sloppy, badly engineered, and irresponsible technologies that would never be deployed in any other sector.

So what should these guardrails look like? The aforementioned Blueprint for an AI Bill of Rights, which I note was developed in consultation with agencies across the Federal government, including the EEOC, as well as with the private sector, civil society advocates, and academics, provides several relevant suggestions.

First, a note about scope. The Commission correctly mentions both AI and automated systems in the title of this event, recognizing the varied nature of the systems that are used to assist in the employment process. As a computer scientist, I have seen the term ‘AI’ morph and evolve – going out of favor during AI winters[2] and coming back into vogue as money and investment began to pour into the field. It is therefore important that, when the Commission provides guidance, it focus on the impacts and harms to individuals rather than on the (rapidly evolving) technologies themselves, and thus retain within scope any automated system as defined in the Blueprint.

Just as the Commission has done in the context of algorithms for employment and the Americans with Disabilities Act[3], it should issue enforcement guidance and recommended questions that the designers and developers of such systems should answer as they develop them.

The Commission should direct the creators of automated systems used in employment to perform:

  • detailed validation testing that covers both the specific technology being used and the system’s interaction with human operators or reviewers whose actions might impact overall system effectiveness. The results of this testing should be made available for review.
  • risk identification and mitigation, which can be based on the National Institute of Standards and Technology AI Risk Management Framework[4].
  • disparity assessments to determine whether their systems exhibit unjustified differential outcomes based on protected characteristics, together with mitigation of any such disparities as far as possible, with the results of the assessment and mitigation made available for review (an illustrative sketch of a basic disparity assessment follows this list).
  • ongoing monitoring of the developed systems to ensure that the mitigations and validations continue to hold, since automated systems can “drift” away from their training over time, especially if the underlying models are retrained on new data.
  • evaluation of the data used to build models (in the case of AI or machine learning-based models) to ensure that only relevant, high-quality data, tailored to the specific context of employment, is used. Relevance itself should be determined based on research-backed demonstration of the causal influence of the data on the outcome, rather than via an appeal to historical practices.
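
To make the recommendation on disparity assessments concrete, the following is a minimal sketch, in Python, of what a first step of such an assessment might look like: comparing selection rates across demographic groups and flagging ratios below the four-fifths threshold often used as a rule of thumb in adverse impact analysis. The column names, the data, and the threshold are assumptions chosen purely for illustration; a real assessment would also need to examine intersections of characteristics and investigate whether any observed disparity is justified.

```python
# Illustrative sketch only: a minimal disparity assessment comparing selection
# rates across demographic groups. The column names, the hypothetical data, and
# the four-fifths (0.8) threshold are assumptions for illustration, not a
# prescribed methodology.
import pandas as pd

def selection_rate_disparities(df: pd.DataFrame,
                               group_col: str = "group",
                               outcome_col: str = "selected") -> pd.DataFrame:
    """Report each group's selection rate and its ratio to the highest group rate."""
    rates = df.groupby(group_col)[outcome_col].mean()
    ratios = rates / rates.max()
    return pd.DataFrame({
        "selection_rate": rates,
        "ratio_to_highest_rate": ratios,
        "flagged": ratios < 0.8,  # flag groups falling below the four-fifths rule of thumb
    })

# Hypothetical screening outcomes: 1 = advanced to interview, 0 = screened out.
outcomes = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "selected": [1,   1,   1,   0,   1,   0,   0,   0],
})
print(selection_rate_disparities(outcomes))
```

The same kind of summary, recomputed on a regular schedule over newly collected outcomes, is one simple way to support the ongoing monitoring recommended above.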

The Commission should strongly encourage the following best practices by entities seeking to develop automated systems for use in employment contexts.

  • The use of transparent and explainable models. Complex and opaque models make it difficult to understand why model predictions take the form that they do, and can render the system liable to make mistakes that go undetected. Models that are simple enough to be easily explained, or that are augmented with procedures that can accurately explain the result of a prediction in a way that is tailored to the individual asking for the explanation, are likely to be more accurate and less prone to unexpected errors or differential group outcomes (a sketch of one such interpretable model follows this list).
  • The inclusion of human oversight. Systems should provide timely human consideration and remedy through a fallback process for cases in which the automated system fails. This is important because automated systems are fallible, especially when presented with scenarios far removed from those used to train them. It is also important to ensure that the use of the system does not prevent individuals with accessibility challenges from participating in the hiring process.
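
To illustrate the point about transparency, the sketch below shows one possible form an inherently interpretable model could take: a logistic regression whose score for an applicant decomposes into additive, per-feature contributions that can be reported back to the individual. The features, data, and model choice here are hypothetical and purely illustrative; this is a sketch of the general idea, not an endorsement of any particular scoring approach.

```python
# Illustrative sketch only: an inherently interpretable scoring model whose
# individual predictions can be explained term by term. Features and data are
# hypothetical; this is not a recommended hiring model.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["years_experience", "skills_test_score"]
X = np.array([[1, 55], [3, 70], [5, 80], [7, 90], [2, 60], [6, 85]])
y = np.array([0, 0, 1, 1, 0, 1])  # hypothetical past screening outcomes

model = LogisticRegression(max_iter=1000).fit(X, y)

def explain(applicant: np.ndarray) -> None:
    """Print each feature's additive contribution to the decision score (log-odds)."""
    contributions = model.coef_[0] * applicant
    for name, value, contribution in zip(FEATURES, applicant, contributions):
        print(f"{name} = {value}: contributes {contribution:+.2f} to the score")
    print(f"baseline (intercept): {model.intercept_[0]:+.2f}")

explain(np.array([4, 75]))
```

Because each prediction is a simple sum of such terms, a reviewer or the affected applicant can see exactly which inputs drove the outcome, which is far harder with a complex and opaque model.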

In conclusion, I once again would like to thank the Commissioners for giving me the opportunity to testify at this hearing and commend the Commission for taking up this complex and important civil rights challenge presented by modern technology.


[1] The White House Office of Science and Technology Policy. Blueprint for an AI Bill of Rights. Oct 2022. https://www.whitehouse.gov/ostp/ai-bill-of-rights/

[2] Wikipedia. AI Winter. https://en.wikipedia.org/wiki/AI_winter

[3] EEOC. The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees. https://www.eeoc.gov/laws/guidance/americans-disabilities-act-and-use-software-algorithms-and-artificial-intelligence

[4] NIST. AI Risk Management Framework. https://www.nist.gov/itl/ai-risk-management-framework