Testimony of Gary D. Friedman

I. Introduction

Chair Burrows and Commissioners, thank you for inviting me to testify before the Equal Employment Opportunity Commission (“EEOC” or “Commission”) on this important emerging topic. I am a senior partner at Weil, Gotshal & Manges LLP in its Employment Litigation Practice Group, and I represent employers in a wide range of employment-related matters, including discrimination and other complex employment class and collective actions, trade secrets and restrictive covenant litigations, and internal investigations. I have handled scores of matters on behalf of employers before the EEOC, including matters involving Commissioner’s charges and Commission-initiated investigations. I have also testified before public bodies on behalf of management on myriad topics, including proposed changes to the discrimination and harassment laws. I have also written and spoken on artificial intelligence (“AI”) issues in the employment context and have been advising global businesses across sectors on the use of AI in the workplace.

A 2022 study by the Society for Human Resource Management (“SHRM”) found that “nearly 1 in 4 organizations report using automation or artificial intelligence (AI)[1] to support HR-related activities,” including recruitment and hiring.[2] The proliferation of AI shows no signs of abating. In 2021, one market research firm valued the AI market at $59.67 billion, and estimated that number would grow to $422.37 billion by 2028.[3] In fact, just last week, Microsoft announced that it was making a multi-billion dollar investment in OpenAI, the start-up behind the viral ChatGPT chatbot. Because of AI’s enormous growth potential, and the potential issues it raises with respect to bias and other workplace concerns, I applaud the Commission for taking proactive steps to help ensure employers’ use of AI tools complies with our existing federal employment discrimination laws.

In my testimony, I hope to bring forward the perspective of employers which use, or may in the future use, automated decision-making tools. I do not intend to speak on behalf of the management bar per se, Weil, or any particular client, but I can synthesize what I see as the important considerations for employers on this topic. I think what you will find is that, as a general matter, the objectives of employers, employees and applicants, and the Commission on AI in the workplace are aligned in many ways.

II. Companies want to use AI responsibly.

A.   Companies are increasingly using AI.

We’ve seen that more and more companies are using AI in their recruitment and hiring practices, performance evaluations, and general decision-making regarding employment. Survey statistics support this anecdotal evidence. A recent study showed that up to 83% of large employers surveyed are using some form of AI in employment decision-making.[4] According to a February 2022 survey from SHRM, 79% of employers use AI and/or automation for recruitment and hiring.[5] Modern Hire, a vendor of AI hiring technology, advertises that its clients include more than half of the Fortune 100 companies, including FedEx, LG, Macy’s, PepsiCo, Delta, Starbucks, Sysco, Volvo, and Roche.[6] In my experience, corporate employers are not only using AI more frequently, but are also focused on using it responsibly, recognizing the benefits of AI in, among other things, reducing unconscious biases that are often present in human decision-making.

Companies’ responsible use of AI to reduce unconscious bias in employment decision-making should not come as a surprise. In the past few years, a growing number of public and private companies have prioritized diversity, equity, and inclusion initiatives. After the death of George Floyd and the subsequent protests around the country, businesses pledged $200 billion to increase efforts toward racial justice.[7] In 2022, a survey by the American Productivity & Quality Center showed that, in the previous year, 36% of respondents increased staff dedicated to Diversity, Equity, and Inclusion (“DEI”), 32% increased their DEI budgets, and 30% disclosed DEI metrics publicly and invested more in employee resource and affinity groups.[8] Moreover, pay equity audits are on the rise, with 58% of organizations reporting in 2021 that they have reviewed their pay structures and decisions.[9] Businesses are incentivized now, more than ever, to take action to improve diversity and reduce bias in the workplace.

B.   Companies use AI in the hiring and employment context.

AI is here, and it’s not going anywhere. A survey of 7,300 human resources managers worldwide found that the proportion who said their department uses predictive analytics nearly quadrupled, from 10% in 2016 to 39% in 2020.[10] That is not surprising because there are many obvious incentives for companies to use AI. AI can reduce costs through efficiencies in hiring and decision-making processes. And, more importantly, AI can help foster a more diverse workforce and minimize unconscious human biases, which are key goals for 21st century employers.[11]

Companies today are using AI to assist in a variety of contexts, including anonymizing resumes and interviewees, performing structured interviews, and using neuroscience games to identify traits, skills, and behaviors.[12] During the interviewing stage, some companies will conduct video interviews of applicants and then use AI to analyze factors including facial expression, eye contact, and word choice.

Companies also use AI to monitor employee productivity and performance, and to make decisions regarding promotion and salary increases.[13] For example, UPS uses AI to monitor and report on driver safety and productivity by tracking driver movement and when drivers put their trucks in reverse.[14] Other companies may use AI to track employee login times, and monitor whether employees are paying attention to their computer screens using webcams and eye-tracking software.[15]

C.   Companies are trying to use AI technology to mitigate bias and improve diversity.

The use of AI technology can help avoid decisions that treat similarly situated applicants and employees differently based on entrenched bias or even just the whims of individual decision makers. For example, if the criteria for hiring or promotion are set in advance, using an algorithm to assess employees can help reduce the bias of individual managers by applying the criteria uniformly. A Yale study showed that when evaluating candidates for police chief, human evaluators justified choosing men without college degrees over women with college degrees because “street smarts” purportedly was the most important criterion. However, when the names on the applications were reversed, evaluators chose men with college degrees over women without college degrees, claiming that degrees were the more important criterion. If the criteria had been set in advance, unconscious biases against women could have been mitigated because evaluators would not have been able to justify their decisions post hoc. Importantly, AI can be trained on certain criteria and, unlike humans, AI tools won’t deviate from pre-selected criteria to rationalize a biased decision.[16] Shortly, I will discuss some illustrative examples of how AI has been shown to reduce bias in real-world hiring decisions.

The McKinsey Global Institute has reported that AI can reduce the effect of humans’ subjective interpretation of data because machine-learning algorithms learn to consider only variables that improve predictive accuracy.[17] For example, algorithms can consider various characteristics on a resume—including a candidate’s name, prior experience, education, and hobbies. AI algorithms can be trained to consider only those characteristics or traits that predict a desired outcome, such as whether a candidate will perform well once on the job.[18]
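
For illustration only, the following is a minimal sketch, in Python with synthetic data and invented feature names (such as a “hobby_code” column), of the kind of screening described above: attributes that do not improve predictive accuracy are simply dropped before a model is trained. It is a generic illustration of the technique, not a depiction of any particular vendor’s product.

```python
# Minimal sketch with synthetic data: keep only the features that carry
# predictive signal for the outcome of interest (here, an invented
# "performs_well" label); uninformative attributes are screened out.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(1)
n = 1000
years_experience = rng.normal(5, 2, size=n)
skills_test = rng.normal(70, 10, size=n)
hobby_code = rng.integers(0, 5, size=n).astype(float)  # unrelated to performance

# Hypothetical outcome: driven by experience and test score, not by hobby.
performs_well = (0.3 * years_experience + 0.05 * skills_test
                 + rng.normal(0, 1, size=n)) > 5.0

X = np.column_stack([years_experience, skills_test, hobby_code])
selector = SelectKBest(score_func=f_classif, k=2).fit(X, performs_well)

for name, score, kept in zip(["years_experience", "skills_test", "hobby_code"],
                             selector.scores_, selector.get_support()):
    print(f"{name}: F-score = {score:.1f}, retained = {kept}")
```

In this toy example, the irrelevant “hobby_code” attribute receives a negligible score and is excluded, while the two attributes that actually predict performance are retained.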

AI can also be instrumental in detecting existing workplace discrimination. Professors Kleinberg, Ludwig, Mullainathan, and Sunstein provide a useful hypothetical on this issue. Consider a firm that is trying to decide which of its sales professionals it will steer toward its most lucrative clients based on two predictive inputs: (1) past sales levels; and (2) manager ratings. Suppose that for men, the firm’s managers provide meaningful assessments that convey useful information about employee performance not fully captured in the past sales data, but that for women, the same managers provide meaningless assessments infused with negative bias, assigning them lower performance scores. An algorithm that is designed to be cognizant of gender would be able to identify the discriminatory manager ratings. If the algorithm is tasked with determining whether manager ratings are predictive of future sales proficiency, it will flag the discriminatory assessments of women, because those ratings would not be predictive of women’s future sales proficiency as measured against the more objective data input of past sales levels.[19]
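
A brief, hypothetical sketch of how this check might look in code follows. The data, column names, and effect sizes are entirely invented; the point is only that when manager ratings are informative for one group but uninformative (and depressed) for another, adding the ratings improves predictive accuracy for the first group and not the second, which is exactly the kind of signal an algorithm or an auditor can surface.

```python
# Hypothetical sketch: compare, within each gender group, how much manager
# ratings add to the prediction of future sales beyond past sales alone.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 2000
gender = rng.choice(["M", "F"], size=n)
past_sales = rng.normal(100, 15, size=n)
future_sales = past_sales + rng.normal(0, 10, size=n)

# Invented ratings: track performance for men, biased noise for women.
manager_rating = np.where(
    gender == "M",
    0.05 * future_sales + rng.normal(0, 1, size=n),
    rng.normal(2.5, 1, size=n),
)

for g in ["M", "F"]:
    mask = gender == g
    y = future_sales[mask]
    X_base = past_sales[mask].reshape(-1, 1)
    X_full = np.column_stack([past_sales[mask], manager_rating[mask]])
    r2_base = r2_score(y, LinearRegression().fit(X_base, y).predict(X_base))
    r2_full = r2_score(y, LinearRegression().fit(X_full, y).predict(X_full))
    print(f"group {g}: R^2 past sales only = {r2_base:.3f}, "
          f"with manager rating = {r2_full:.3f} (lift = {r2_full - r2_base:.3f})")
```

Under these invented assumptions, the ratings add predictive value for men but essentially none for women, flagging the ratings as suspect rather than silently baking them into the model.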

Despite its advantages, AI technology can also perpetuate discrimination depending on the data sets used to train the AI tool. A well-publicized cautionary tale involves Amazon. Amazon began working on an AI tool to screen job applicants in 2014, and in 2018 news broke that Amazon scrapped the tool because it determined that the tool was biased against female applicants.[20] As reported, Amazon fed the tool resumes submitted to the company over the course of the prior 10 years as “training data.” The tool then recognized patterns among the resumes, constructed an image for itself of the “ideal candidate,” and then searched an applicant pool and scored applicants on a scale of 1 to 5. Most of the resumes in the training data were those of men, reflecting the disproportionate number of men in the technology sector. The AI tool taught itself that men were preferable candidates because of patterns in the training data. The tool then attributed a lower score to resumes of people who attended “women’s” colleges or who played on the “women’s” chess team. Importantly, Amazon scrapped the tool when it realized the adverse consequences of the algorithm.

Recognizing the risks associated with AI, some companies have collaborated to develop policies to mitigate its potential discriminatory effects. Data & Trust Alliance is a corporate group that has developed “Algorithmic Bias Safeguards for Workforce” with the goal of detecting, mitigating, and monitoring algorithmic bias in workforce decisions.[21] It has signed up major employers such as American Express, CVS Health, Deloitte, General Motors, Humana, IBM, MasterCard, Meta, Nike, and Walmart.[22] According to a recent New York Times article reporting on the group, “[c]orporate America is pushing programs for a more diverse work force.”[23]

Data and Trust Alliance’s proposed safeguards include 55 questions for companies to ask an AI vendor, education and assessment for evaluating vendor responses, a scorecard to compare vendors, and guidance for integrating the safeguards.[24] For example, some questions ask about the use of “proxy” data in AI, including cellphone type (which could be indicative of class or age), sports affiliations, and social club memberships, in otherwise seemingly neutral datasets.[25] Other questions ask how bias is minimized during training models, what steps are used to remediate bias, and what practices are used to mitigate bias during deployment.

D.   Studies show that AI can help mitigate bias.

As referenced above, there is growing evidence that AI can be used to mitigate unconscious bias. In a forthcoming paper, Bo Cowgill at Columbia Business School studied the performance of a job-screening algorithm in hiring software engineers. A large company trained an algorithm to predict which candidates would pass its interview. Cowgill found that a candidate picked by the machine (and not by a human) is 14% more likely to pass an interview and receive a job offer and 18% more likely to accept a job offer when extended. He found the algorithm also increases hiring of what he calls “non-traditional” candidates—i.e., women, racial minorities, candidates without a job referral, candidates from non-elite colleges, candidates with no prior work experience, and candidates who did not work for competitors. He concluded that while completely eliminating bias may be extremely difficult, reducing bias is more feasible.[26]

In an example outside of the employment context, Professors Kleinberg, Lakkaraju, Leskovec, Ludwig, and Mullainathan studied the use of AI in predicting the risk of a criminal defendant’s failure to appear in court in the future. The study compared judges’ decisions to those made by an algorithm based on three factors: (1) age, (2) current offense, and (3) prior criminal record. The professors found that, as compared to the algorithm, judges detain many low-risk people and release many high-risk people. The judges were overweighting the severity of the charge, but the machine learned that a person’s prior record matters far more in terms of future risk, which judges were not considering. The professors concluded that using the algorithm’s release recommendations would reduce jail population by up to 42% without any increase in crime.[27] The authors did not use race as an input in their prediction, but recognized that other variables could correlate with race and result in an algorithm that aggravated racial disparities. However, they found that, in this case, the algorithm could reduce crime and jail populations while simultaneously reducing racial disparities in detention rates.[28]

A study at the Fisher College of Business analyzed the use of machine learning in selecting board directors by comparing human-selected boards with the predictions of machine learning algorithms.[29] The main measure of performance was based on the shareholder support that directors receive in annual director re-elections. The study found that, in comparison to algorithm-selected directors, management-selected directors were more likely to be male, had larger networks, and held many past and current directorships. By contrast, the machine algorithm found that directors who were not friends of management, had smaller networks, and had backgrounds different from management’s were more likely to be effective directors, including by monitoring management more rigorously and offering potentially more useful opinions about policy. This suggests that directors from diverse backgrounds would be more effective.[30]

Moreover, AI vendors are self-reporting about their products’ ability to improve diversity. Pymetrics, an AI vendor, has conducted several case studies on the effectiveness of its product. In one study, Pymetrics worked with a top food production company that was looking to more effectively review 6,000 job applications for 40 job openings. The recruiting team had been using indicators such as GPA, past experience, and/or extracurricular activities to screen applications. Pymetrics selected a group of top performers at the company and built a predictive model based on game play. Pymetrics then developed a candidate evaluation process whereby candidates completed the same set of Pymetrics core exercises, a numerical and logical reasoning assessment, and a digital interview process with standardized questions. The company was able to review 16% of the applications that were pre-screened by Pymetrics (as opposed to 40% when it was screening manually using CVs). Importantly, for the first time, the gender split of the finalists was 50:50.[31]

In another case study, Pymetrics evaluated the use of AI in hiring at a large investment firm. When the firm first started working with Pymetrics, it was receiving 20,000 applications for its job vacancies. Pymetrics again developed an assessment based on data collected from gameplay of current employees. The Pymetrics-recommended candidates were 97% more likely to ultimately receive a job offer. The Pymetrics evaluation process also expanded the diversity of universities represented (from 9 universities to 66 different schools), and increased female representation among recommended candidates by 44% and minority representation by 9%.[32]

Of course, AI is not perfect, and it will likely be unable to completely eradicate bias and discrimination in the hiring and employment context. But the studies evaluating AI so far are promising, and suggest that AI can be developed to improve diversity in the workplace. My experience in advising corporate clients has been that companies are making good faith efforts to use AI responsibly, thus contributing to the development of more equitable and efficient uses of AI.

III. The use of AI brushes up against a number of concerns in the employment context.

Most employers are aware that federal antidiscrimination laws prohibit them from making employment decisions based on race, color, religion, sex, national origin, age, disability, or genetic information (i.e., a protected class). And although outside of the EEOC’s purview, employers are also forbidden from making employment decisions based on military veteran status or union membership.

The use of AI and automation tools implicates federal antidiscrimination laws in a variety of ways.

First, employers are not permitted to disfavor individuals based on their membership in a protected class. Aggrieved individuals assert these claims under either a disparate treatment or disparate impact theory.

A disparate treatment claim may arise as the result of an employer’s use of a tool if the employee can show that an employer intentionally programmed or used a tool to disadvantage individuals in a protected class. A tool programmed, for instance, to filter out candidates above a certain age would fall into this category. This issue was at the heart of the EEOC’s complaint filed last year against iTutorGroup, Inc.[33]—a provider of online English language tutoring services to students in China—where it was alleged that iTutorGroup programmed its application software to automatically reject female applicants over the age of 55 and male applicants over the age of 60. Although the technology allegedly used by iTutorGroup was not technically artificial intelligence but rather a form of automated screening, the allegations in the complaint illustrate the perils of using impermissible data inputs to develop a hiring algorithm that would clearly discriminate against members of a protected class.

Second, a disparate impact claim arising out of an employer’s use of an AI tool may be a high-tech version of a familiar problem, namely facially neutral factors that have adverse consequences, but the theory is certainly not novel. In a disparate impact claim, an employee would show that an employer’s practice—such as the use of an AI tool to screen applicants—results in members of a protected class being disfavored at a higher rate than members of another class. This would be the case even where an employer had no intention to discriminate. To avoid disparate impact liability, an employer using such a policy or practice must show that the practice is “job related for the position in question and consistent with business necessity” and that no alternative requirement would suffice.[34] A company that uses an AI tool that, for instance, disproportionately disfavors women may be subject to liability on a disparate impact theory.

The ADA poses a host of other issues, as the EEOC pointed out in its May 2022 guidance. The major issue here is that employers must, under the ADA, provide employees and job seekers with reasonable accommodations that allow them to perform the essential functions of their positions so long as the accommodations are not an undue hardship on employers. Individuals with disabilities, for instance, might have limited dexterity that results in programs assigning them lower scores on computer “games” that some employers use to build personality profiles of candidates. If an employer can provide an alternative testing format, and doing so would not constitute an undue hardship, the employer must offer this accommodation. Employers may also be required to offer more time to complete such assessments or might be required to provide assessments compatible with accessible technology such as screen readers.

Given these potential legal concerns, the EEOC has an interest in keeping companies on the right side of the line. And it should go without saying that most employers want to avoid discrimination, and its associated liability, as well. It should be noted that although Amazon’s tool disproportionately disfavored certain protected classes, Amazon properly stress-tested the tool and scrapped it because it realized it was producing biased results. This illustrates the important point that errors in designing an AI program used to assist with workplace decision-making can be easily identified and corrected, as compared to human decision-making, where rooting out conscious or unconscious bias can be far more challenging.

IV. So far, regulations are a mixed bag from the employer perspective.

Although some recently proposed and enacted state and local regulations have been a step in the right direction, there is still ambiguity that complicates compliance for employers. I am hopeful that states and municipalities can act as laboratories for regulating AI, but it is premature to draw any conclusions from these regulations with respect to their impact in actually reducing workplace bias.

Two states, Maryland and Illinois, have enacted statutes regulating the use of AI. Illinois’ Artificial Intelligence Video Interview Act requires employers using AI analysis of applicant-submitted video interviews to (1) notify applicants that such AI will be used to analyze an applicant’s video interview, (2) provide applicants with “information before the interview explaining how the artificial intelligence works and what general types of characteristics it uses to evaluate applicants,” and (3) obtain the applicant’s consent to be evaluated by the AI program.[35] The Illinois law was amended on January 1, 2022 to require employers who “[rely] solely upon an artificial intelligence analysis of a video interview to determine whether an applicant will be selected for an in-person interview” to report annually to the Illinois Department of Commerce and Economic Opportunity (“DCEO”) the race and ethnicity of applicants who are not offered in-person interviews after the use of AI analysis and the race and ethnicity of applicants who are actually hired.[36] The DCEO will then analyze the reported data and create a report discussing whether the data discloses racial bias in the use of AI. The first report has not yet been issued, but is due by July 1, 2023. Maryland’s law requires only that applicants consent to the use of facial recognition technology during an interview.[37]

On the plus side, these laws notify applicants that an AI tool will be used. This gives applicants the opportunity to conduct research to better understand how AI functions in the interview process and request ADA accommodations, if needed, and hopefully this transparency will foster trust in these tools. Also, since applicants know the tools will be used, applicants can challenge the use of these tools as discriminatory. This incentivizes employers to ensure the tools do not disproportionately disadvantage members of protected classes. On the other hand, these laws are somewhat vague and provide employers minimal guidance. By way of example, the Illinois statute fails to even define “AI.”

California’s Fair Employment and Housing Council has proposed regulations with regard to the use of “Automated Decision Systems.”[38] These proposed regulations would apply to any computational process that “screens, evaluates, categorizes, recommends, or otherwise makes a decision or facilitates human decision making that impacts employees or applicants.”[39] At bottom, the proposed regulations clarify that California’s antidiscrimination laws apply equally to decisions made by AI tools. For instance, “[t]he use of and reliance upon automated-decision systems that limit or screen out, or tend to limit or screen out, applicants based on protected characteristic(s)…may constitute unlawful disparate treatment or disparate impact.”[40] This clarification that antidiscrimination laws apply equally to employment decisions made in reliance on AI tools should motivate employers to evaluate tools for any discriminatory impact.

Significantly, California’s proposed regulation also takes a page from the European Union’s General Data Protection Regulation and would impose liability on vendors of AI tools, as the proposed regulation applies to “agents” of employers, defined to include those that “provide[ ] services related to recruiting, applicant screening…or the administration of automated-decision systems for an employer’s use in recruitment, hiring, performance evaluation, or other assessments…”[41] This type of regulation imposing liability on AI tool vendors would likely be outside of the EEOC’s scope, but serious consideration should be given to ensuring accountability of the software developers in this space.

The New York City law imposes requirements on the use of AI tools that are somewhat burdensome and may dissuade employers from using these tools. Regulators must be careful to strike the right balance between encouraging the correct use of AI tools—reaping the previously discussed benefits of eliminating unconscious bias and fostering diversity in the workplace—and incentivizing employers to ensure the tools do not disadvantage members of protected classes. The New York City law, set to take effect April 15, 2023, covers “Automated Employment Decision Tools” (“AEDTs”), defined as those tools that use a computational process that issues a “simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making” in employment decisions.[42] It requires that an independent party conduct bias audits of these tools, which is a major positive of the New York City law and could help employers ensure that their tools do not unintentionally discriminate. I’ll touch on the potential limits of such independent auditing in a bit.

However, the law also requires summaries of the bias audits to be published, which may discourage employers from adopting these tools. Employers already have plenty of incentive to ensure their tools do not screen out protected classes because they want to avoid liability under federal and state antidiscrimination laws. Under the New York City law, employers must also inform applicants that an AEDT tool will be used to evaluate them “no less than 10 business days before” the tool is used.[43] This requirement is burdensome, and perhaps prohibitive, for employers who want to have an expeditious hiring process. If employers have to wait 10 business days before using AI tools, they may lose candidates who find employment elsewhere in the meantime and will be unable to fill open roles quickly. The danger with these sorts of laws is that companies may decide to scrap AI tools altogether despite their potential to reduce unconscious bias and subjectivity in the hiring process.

V. Some recommendations for approaching AI in the employment context.

It is unlikely that there is going to be a one-size-fits-all approach to using AI effectively and responsibly. Guidelines will need to be tailored to different sectors. For example, the types of considerations relevant for AI tracking the productivity of truck drivers will be different from those relevant for AI tracking the performance of sales representatives. Of course, sectors will certainly be able to learn from each other.

Regardless of the industry, there are some key guideposts that can help companies use AI responsibly and help mitigate the risk of violating antidiscrimination laws. First, transparency—companies should be upfront about the use of AI, as required by some of the state and city laws we’ve discussed today. At this time, there is no federal requirement for employers to disclose the AI technology they use. Nevertheless, applicants and employees should know when they are being evaluated by a machine algorithm as opposed to a human reviewer. Companies should not need to provide excruciating detail of how they are using AI, but general notice will give applicants the opportunity to request more information and help identify instances of potential discrimination.

The second guidepost is auditing—whether it is self-auditing or third-party auditing, it is important that companies are proactive in mitigating potential biases of AI. As mentioned above, New York City’s AI law will require independent parties to conduct bias audits of AI tools, and will require employers to post summaries of the bias audit findings on their websites. Although well-intentioned, independent auditing may be difficult to implement effectively in practice. Recently, Pymetrics paid a team of computer scientists from Northeastern University to audit its AI hiring algorithm.[44] The audit evaluated the “fairness” of Pymetrics’ algorithm under the EEOC’s four-fifths rule, under which the selection rate for any race, sex, or ethnic group should generally be at least four-fifths (80%) of the selection rate for the group with the highest rate. That is, if 100% of men are passing a test, at least 80% of women must pass it.

Northeastern University’s audit showed that Pymetrics’ algorithm satisfies the four-fifths rule, but it did not show that the tool was bias-free or that it chose the most qualified candidates. The audit compared men versus women, and one racial group against another, but did not address disparities between people who belong to more than one protected class. The audit could not determine whether the algorithm was biased against Asian men or Black women, for example. Moreover, the audit was funded by Pymetrics, which creates a risk of the auditor being influenced by the client. As independent auditing companies pop up in response to the New York City law or client demand, companies should be cautious not to take their assessments at face value. They should look at which metrics independent auditing companies are using to evaluate AI technology, and consider how the auditing companies are compensated.
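
For illustration, the arithmetic behind a four-fifths check is straightforward. The sketch below uses entirely hypothetical applicant counts and shows how the same calculation can be extended to intersectional subgroups, which, as just noted, a simple men-versus-women comparison can miss.

```python
# Hypothetical counts only: selection rates and four-fifths impact ratios,
# including intersectional subgroups that an aggregate comparison can miss.
outcomes = {
    "men":         (60, 100),   # (selected, applied)
    "women":       (50, 100),
    "Asian men":   (14, 25),
    "Black women": (8, 25),
}

rates = {group: selected / applied for group, (selected, applied) in outcomes.items()}
highest_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest_rate
    status = "passes" if impact_ratio >= 0.8 else "below the four-fifths threshold"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {impact_ratio:.2f} ({status})")
```

In this invented example, the overall comparison of women to men clears the 80% threshold while a smaller intersectional subgroup does not, precisely the kind of gap a narrowly scoped audit can overlook.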

To date, there is a lack of consensus on which metrics and data auditors should use to audit AI technology. There are no clear standards for which biases to test for in AI, and it could be difficult to define which data points are the most useful in detecting bias. IBM has suggested that it become standard practice for auditing companies to disclose the assumptions used for determining the relevant protected characteristics examined in a bias audit.[45] As companies conduct audits to comply with NYC’s recent AI legislation (and potentially future legislation), more standards may develop as to what constitutes a valid and effective bias audit of AI technology.

Relatedly, vendor vetting can also help companies decrease the likelihood that the AI tools they are using will perpetuate bias. First, companies can ask vendors questions such as those proposed by Data and Trust Alliance: (1) what measures are taken to detect and mitigate bias; (2) what approaches are used to remediate bias; (3) what measures have been taken to demonstrate that the system performs as intended, and as claimed; and (4) what are the vendor’s commitments to ethical practices. Companies can also ask the vendors about the types of data used to train models: (1) how is the data collected, (2) where does that data come from, (3) how often is the data updated, and (4) how often is the data audited. In short, companies should have a sense of how vendors are developing AI algorithms and what steps they are taking to regularly mitigate potential biases.

Finally, companies should develop their own internal policies to regulate and mitigate biases in AI technology. Just as companies have had to recently assess and develop social media policies, they will have to work with consultants and counsel to draft and implement best practices. Some major companies have already created such guidelines for mitigating bias in the use of AI technology. For example, Google has a set of guidelines to ensure that machine learning is “fair.” It encourages developers to (1) design models using concrete goals for fairness and inclusion (e.g., making tools accessible in different languages or for different age groups); (2) use representative datasets to train and test models; (3) check the system for unfair biases, including by organizing a pool of diverse testers to identify who may experience adverse impacts; and (4) analyze the performance of the machine learning model.

Similarly, IBM has developed five “pillars of trustworthy AI”: Explainability, Fairness, Robustness, Transparency, and Privacy.[46] It encourages companies and developers to (1) take accountability for the outcomes of their AI systems in the real world; (2) be sensitive to a wide range of cultural norms and values; (3) work to address biases and promote inclusive representation; (4) ensure humans can understand an AI decision process; and (5) preserve and fortify users’ power over their own data.

Companies can learn from each other and develop standardized industry regulations for using AI responsibly in hiring practices. As technology and auditing systems develop, we will get a sense of what works, and what doesn’t. Self-regulation may also emerge at the corporate board level, as boards become aware of how AI can be used effectively and in a non-discriminatory manner.

VI. Government’s Role

In terms of regulation, I think it is important to keep in mind a few over-arching concepts. First, employers do not want to use AI tools that discriminate. Second, these tools are new, and not going away, so regulators should leave room for experimentation and take into account employers’ efforts to get the use of these tools right. Third, these tools may be better than the alternative, which is human subjectivity. In contrast to the human mind, AI tools can at least be audited. Unconscious bias in humans is extremely difficult to audit.

The Commission could and should put employers on notice that AI tools are subject to all the same rules and regulations as other processes and procedures used to make employment decisions. The Commission need not look far to provide guidance on how to audit employment processes and procedures for disparate impact. The Commission’s 1978 Uniform Guidelines on Employee Selection Procedures apply equally to intelligence/aptitude tests—the original problem disparate impact theories of discrimination were meant to solve—and AI tools.[47] If an AI tool has a disparate impact on members of a protected class, employers must show that the selection procedure “is predictive of or significantly correlated with important elements of job performance.”[48] The Commission also uses statistical analysis in other areas such as in determining whether differences in pay between a protected class and a comparator group are statistically significant.[49]
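
As a purely illustrative example of the kind of statistical check involved, and not a representation of the Commission’s prescribed methodology, the sketch below applies a standard two-proportion z-test to hypothetical selection counts to ask whether a gap in selection rates is larger than chance alone would explain.

```python
# Hypothetical counts only: a standard two-proportion z-test on selection rates.
from math import sqrt
from statistics import NormalDist

selected_a, total_a = 60, 100   # invented comparator group
selected_b, total_b = 42, 100   # invented protected group

p_a, p_b = selected_a / total_a, selected_b / total_b
p_pool = (selected_a + selected_b) / (total_a + total_b)
std_err = sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
z = (p_a - p_b) / std_err
p_value = 2 * (1 - NormalDist().cdf(abs(z)))   # two-sided

print(f"selection rates {p_a:.0%} vs {p_b:.0%}: z = {z:.2f}, p-value = {p_value:.3f}")
```

With these invented counts, the eighteen-point gap in selection rates yields a p-value of roughly 0.01, the kind of result that would typically be treated as statistically significant and prompt further validation of the selection procedure.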

An approach to regulating this space that allows employers leeway to regularly audit, revise, and (if needed) scrap AI tools that result in discriminatory outcomes would allow for needed experimentation with AI. The concept of crediting employers that audit themselves and self-correct is not new in the law. For instance, in Massachusetts, employers have a defense to claims under the Massachusetts Equal Pay Act if, in the past three years, the employer “has both completed a self-evaluation of its pay practices in good faith and can demonstrate that reasonable progress has been made towards eliminating wage differentials based on gender for comparable work, if any, in accordance with that evaluation.”[50] Such measures encouraging employer self-correction recognize that it is not possible for employers to make completely unbiased decisions all the time, and have the salutary effect of encouraging regular auditing and self-correction. To be clear, I am in no way suggesting that an audit should shield intentional discrimination.

At bottom, due to the emergence of powerful social movements over the past half-decade and the expansive federal, state, and local regulation of issues that intersect with Title VII, the ADEA, the ADA, and the Equal Pay Act, employers’ awareness of and focus on matters of importance to legally protected groups is at an all-time high. As employers embark on deploying artificial intelligence tools in the workplace, they do so with these principles top of mind. This is an inflection point at which the EEOC can partner with the business community to issue guidance that will allow employers to continue to increase their focus on diversity, equity, and inclusion while giving them room to use these tools to enhance workplace culture and performance.

I want to thank the Commission for giving me this opportunity to share my perspectives, and I look forward to working with all of you on this important initiative.

 

[1] “Automation” and “AI” are two concepts that should not be conflated. As a general matter, “automation is a broad category describing an entire class of technologies,” which has been around for many decades, and is designed primarily to relieve humans of repetitive, monotonous and/or mundane tasks. See Michael Gaynor, Automation and AI Sound Similar, But May Have Vastly Different Impacts on the Future of Work, Brookings (Jan. 29, 2020), https://www.brookings.edu/blog/the-avenue/2020/01/29/automation-and-artificial-intelligence-sound-similar-but-may-have-vastly-different-impacts-on-the-future-of-work/. “AI,” however, is designed to simulate human thinking. It “refers to how computer systems can use huge amounts of data to imitate human intelligence and reasoning, allowing the system to learn, predict and recommend what to do next.” Or Shani, AI Automation: What You Need To Know, Marketing Artificial Intelligence Institute (Nov. 9, 2021), https://www.marketingaiinstitute.com/blog/automation-and-ai-what-you-need-to-know.

[2] Society for Human Resource Management, Automation & AI in HR (Apr. 2022), at 3, https://advocacy.shrm.org/SHRM-2022-Automation-AI-Research.pdf?_ga=2.112869508.1029738808.1666019592-61357574.1655121608.

[3] Zion Market Research, Global Artificial Intelligence (AI) Market to Register an Annual Growth of 39.4% During Forecast Period (June 22, 2022), https://www.zionmarketresearch.com/news/global-artificial-intelligence-market.

[4] Keith E. Sonderling, Op-ed: Artificial Intelligence Is Changing How HR Is Handled at Companies. But Do Robots Care About Your Civil Rights?, Chi. Trib. (Sept. 20, 2021), https://www.chicagotribune.com/opinion/commentary/ct-opinion-robots-ai-civil-rights-amazon-20210920-tef7m7az3rgjtacauazvw3u224-story.html.

[5] Society for Human Resource Management, Automation & AI in HR (Apr. 2022), at 4, https://advocacy.shrm.org/SHRM-2022-Automation-AI-Research.pdf?_ga=2.112869508.1029738808.1666019592-61357574.1655121608.

[6] See Modern Hire, https://modernhire.com/ (last visited January 12, 2023).

[7] Earl Fitzhugh, JP Julien, Nick Noel & Shelley Stewart, It’s Time For A New Approach to Racial Equity, McKinsey & Company, at 2 (May 25, 2021).

[8] Dale Buss, 12 Ways Companies Are Boosting Their DEI, Society for Human Resource Management (Mar. 9, 2022), https://www.shrm.org/resourcesandtools/hr-topics/behavioral-competencies/global-and-cultural-effectiveness/pages/12-ways-companies-are-boosting-their-dei.aspx.

[9] Dr. Brian Marentette, Pay Equity Audits: On the Rise and Becoming Essential, HR Daily Advisor (Jan. 13, 2023), https://hrdailyadvisor.blr.com/2022/11/17/pay-equity-audits-on-the-rise/#:~:text=More%20companies%20are%20adding%20at,level%20businesses%20taking%20the%20lead.

[10] Hilke Schellmann, Auditors Are Testing Hiring Algorithms for Bias but There’s No Easy Fix, MIT Tech. Rev. (Feb. 11, 2021), https://www.technologyreview.com/2021/02/11/1017955/auditors-testing-ai-hiring-algorithms-bias-big-questions-remain/.

[11] See Kimberly A. Houser, Can AI Solve the Diversity Problem in the Tech Industry? Mitigating Noise and Bias in Employment Decision-Making, 22 Stan. Tech. L. Rev. 290, 294 (2019).

[12] See Id. at 324.

[13] Keith E. Sonderling, Bradford J. Kelley & Lance Casimir, The Promise and The Peril: Artificial Intelligence and Employment Discrimination, 77 U. Mia. L. Rev. 1, 33 (2022), https://repository.law.miami.edu/umlr/vol77/iss1/3.

[14] Id.

[15] Id. at 15.

[16] See Kimberly A. Houser, Can AI Solve the Diversity Problem in the Tech Industry? Mitigating Noise and Bias in Employment Decision-Making, 22 Stan. Tech. L. Rev. 290, 326 (2019) (citing Eric Luis Uhlmann & Geoffrey L. Cohen, Constructed Criteria: Redefining Merit to Justify Discrimination, 16 Psychol. Sci. 474, 474 (2005)).

[17] Jake Silberg & James Manyika, Notes From the AI Frontier: Tackling Bias in AI (And In Humans), McKinsey Global Institute, at 2 (June 2019), https://www.mckinsey.com/~/media/mckinsey/featured%20insights/artificial%20intelligence/tackling%20bias%20in%20artificial%20intelligence%20and%20in%20humans/mgi-tackling-bias-in-ai-june-2019.pdf.

[18] Jon Kleinberg, Jens Ludwig, Sendhil Mullainathan & Cass R. Sunstein, Discrimination in the Age of Algorithms, at 33 (Feb. 5, 2019), https://ssrn.com/abstract=3329669.

[19] Id. at 33-34.

[20] See Jeffrey Dastin, Amazon Scraps Secret AI Recruiting Tool that Showed Bias against Women, Reuters (Oct. 10, 2018), https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G.

[21] Data and Trust Alliance, Algorithmic Safety: Mitigating Bias in Workforce Decisions, https://dataandtrustalliance.org/our-initiatives/algorithmic-safety-mitigating-bias-in-workforce-decisions (last visited Jan. 13, 2023).

[22] Data and Trust Alliance, https://dataandtrustalliance.org/ (last visited Jan. 13, 2023).

[23] Steve Lohr, Group Backed by Top Companies Moves to Combat A.I. Bias in Hiring, N.Y. Times (Dec. 8, 2021), https://www.nytimes.com/2021/12/08/technology/data-trust-alliance-ai-hiring-bias.html#:~:text=The%20Data%20%26%20Trust%20Alliance%2C%20announced,organization%20or%20a%20think%20tank.

[24] Data and Trust Alliance, Algorithmic Safety: Mitigating Bias in Workforce Decisions, https://dataandtrustalliance.org/our-initiatives/algorithmic-safety-mitigating-bias-in-workforce-decisions (last visited Jan. 13, 2023).

[25] Steve Lohr, Group Backed by Top Companies Moves to Combat A.I. Bias in Hiring, N.Y. Times (Dec. 8, 2021), https://www.nytimes.com/2021/12/08/technology/data-trust-alliance-ai-hiring-bias.html#:~:text=The%20Data%20%26%20Trust%20Alliance%2C%20announced,organization%20or%20a%20think%20tank.

[26] Bo Cowgill, Bias and Productivity in Humans and Machines, Colum. Univ. (Mar. 21, 2020), https://conference.iza.org/conference_files/MacroEcon_2017/cowgill_b8981.pdf.

[27] Jon Kleinberg, Himabindu Lakkaraju, Jure Leskovec, Jens Ludwig & Sendhil Mullainathan, Human Decisions and Machine Predictions, Q. J. Econ. 237 (2018).

[28] Jon Kleinberg, Himabindu Lakkaraju, Jure Leskovec, Jens Ludwig & Sendhil Mullainathan, Human Decisions and Machine Predictions, Q. J. Econ. 237 (2018); Kleinberg, Jon and Ludwig, Jens and Mullainathan, Sendhil and Sunstein, Cass R., Discrimination in the Age of Algorithms (Feb. 5, 2019), available at https://ssrn.com/abstract=3329669.

[29] Isil Erel, Lea H. Stern, Chenhao Tan & Michael S. Weisbach, Selecting Directors Using Machine Learning, Fisher College of Bus. Working Paper No. 2018-03-005, Charles A. Dice Ctr. Working Paper No. 2018-05, Eur. Corp. Governance Inst. (ECGI) - Finance Working Paper No. 605/2019 at 23 (Dec. 13, 2020), https://ssrn.com/abstract=3144080.

[30] Id.

[31] Pymetrics, Leading Food Production Company, https://www.pymetrics.ai/case-studies/highest-potential-talent-identified (last visited Jan. 16, 2023).

[32] Pymetrics, Leading Global Investment Firm, https://www.pymetrics.ai/case-studies/leading-global-investment-firm (last visited Jan. 17, 2023).

[33] See Amended Complaint, EEOC v. iTutorGroup, Inc., No. 1:22-cv-02565-PKC-PK (E.D.N.Y. Aug. 3, 2022).

[34] See 42 U.S.C. § 2000e-2(k)(1)(A).

[35] Artificial Intelligence Video Interview Act, 820 Ill. Comp. Stat. Ann. 42/5 (West 2022).

[36] Id. at § 42/20.

[37] Md. Code Ann. Lab. & Empl. § 3-717 (West 2020).

[38] Fair Employment & Housing Council Draft Modification to Employment Regulations Regarding Automated-Decision Systems, Cal. Code Regs. tit. 2, § 11008 et seq. (proposed Mar. 15, 2022), https://calcivilrights.ca.gov/wp-content/uploads/sites/32/2022/03/AttachB-ModtoEmployRegAutomated-DecisionSystems.pdf.

[39] Id. at § 11008(e).

[40] Id. at § 11016(c)(5).

[41] Id. at § 11008(a).

[42] N.Y.C. Admin. Code § 20-870.

[43] Id. at § 20-871(b).

[44] Hilke Schellmann, Auditors Are Testing Hiring Algorithms for Bias but There’s No Easy Fix, MIT Tech. Rev. (Feb. 11, 2021), https://www.technologyreview.com/2021/02/11/1017955/auditors-testing-ai-hiring-algorithms-bias-big-questions-remain/.

[45] Standards for Protecting At-Risk Groups in AI Bias Auditing, IBM (Nov. 2022), https://www.ibm.com/downloads/cas/DV4YNKZL.

[46] Everyday Ethics for Artificial Intelligence, IBM, https://www.ibm.com/watson/assets/duo/pdf/everydayethics.pdf (last visited Jan. 17, 2023).

[47] See 29 C.F.R. § 1607 (2022).

[48] Id. at § 1607.5(B) (2022).

[49] See U.S. Equal Emp. Opportunity Comm’n, EEOC-915.003, EEOC Compl. Man. § 10-III.

[50] Mass. Gen. Laws ch. 149, § 105A(d) (2018).