Testimony of Jordan Crenshaw

Chair Burrows and distinguished members of the Commission, thank you for your invitation to testify. My name is Jordan Crenshaw, and I serve as the Vice President of the U.S. Chamber Technology Engagement Center (“C_TEC”). C_TEC is the technology policy hub of the U.S. Chamber, and its goal is to promote the benefits of technology in the economy and advocate for rational policy solutions that drive economic growth, spur innovation, and create jobs.

Today's hearing titled "Navigating Employment Discrimination in AI and Automated Systems: A New Civil Rights Frontier" is an important and timely discussion, and the Chamber appreciates the opportunity to participate.

The world has quickly entered its fourth industrial revolution, in which technology and artificial intelligence (or, “AI”) are helping propel humanity forward. Americans are witnessing the benefits of AI daily, from its value in adapting vaccines to new variants to increasing patient safety during procedures like labor and delivery.1 Artificial intelligence is also rapidly changing how businesses operate. From helping employers find the candidates who can grow their businesses to new heights, to alleviating barriers that keep those with disabilities from participating fully in the workforce, the use of technology within organizations is a tremendous force for good in its ability to advance opportunities for all Americans.2

However, without public trust in the technology, its amazing benefits will never be fully realized. This is why the United States must lead globally in building trustworthy standards for artificial intelligence. These standards must be rooted in our unifying principles, such as individual liberties, privacy, and the rule of law. While the development and deployment of AI have become essential to facilitating innovation, this innovation will only reach its full potential, and enable the United States to compete, if the American public trusts the technology and the guardrails placed around its use are limited and well supported by facts and data. The business community understands that fostering this trust in AI technologies is essential to advancing their responsible development, deployment, and use. This has been a core understanding of the U.S. Chamber, as it is the first principle within the 2019 “U.S. Chamber’s Artificial Intelligence Principles”:

Trustworthy AI encompasses values such as transparency, explainability, fairness, and accountability. The speed and complexity of technological change, however, mean that governments alone cannot promote trustworthy AI. The Chamber believes that governments must partner with the private sector, academia, and civil society when addressing issues of public concern associated with AI. We recognize and commend existing partnerships that have formed in the AI community to address these challenges, including protecting against harmful biases, ensuring democratic values, and respecting human rights. Finally, any governance frameworks should be flexible and driven by a transparent, voluntary, and multi-stakeholder process.”3

AI also brings a unique set of challenges that should be addressed so that concerns over its risks do not dampen innovation or U.S. leadership in trustworthy AI. C_TEC shares the perspective of many leading government and industry voices, including the National Security Commission on Artificial Intelligence (NSCAI)4 and the National Institute of Standards and Technology (NIST)5, that government policy to advance the ethical development of AI-based systems, sometimes called “responsible” or “trustworthy” AI, can enable future innovation and help the United States be the global leader in AI.

Last year, the U.S. Chamber launched its Artificial Intelligence Commission on Competitiveness, Inclusion, and Innovation to advance U.S. leadership in using and regulating trustworthy AI technology.6 The Commission, led by co-chairs former Congressmen John Delaney and Mike Ferguson, is composed of representatives from industry, academia, and civil society who provide independent, bipartisan recommendations to aid policymakers on artificial intelligence policy as it relates to regulation, international competitiveness, research and development, and future jobs.

Over a span of multiple months, the Commission heard oral testimony from 87 expert witnesses7 over five separate field hearings. The Commission heard from individuals such as Jacob Snow, Staff Attorney for the Technology & Civil Liberties Program at the ACLU of Northern California. In his testimony, he told the Commission that the critical discussions on AI are “not narrow technical questions about how to design a product. They are social questions about what happens when a product is deployed to a society, and the consequences of that deployment on people’s lives.”8

Doug Bloch, then Political Director at Teamsters Joint Council 7, referenced his time serving on Governor Newsom’s Future of Work Commission: “I became convinced that all the talk of the robot apocalypse and robots coming to take workers’ jobs was a lot of hyperbole. I think the bigger threat to the workers I represent is the robots will come and supervise through algorithms and artificial intelligence.”9

Miriam Vogel, President and CEO of EqualAI and Chair of NAIAC, also addressed the Commission. She stated, "I would argue that it’s not that we need to be a leader, it’s that we need to maintain our leadership because our brand is trust.”

The Commission also received written feedback from stakeholders answering numerous questions posed in three separate requests for information (RFIs), which covered issues ranging from defining AI and balancing fairness and innovation10 to AI’s impact on the workforce.11 These requests for information outline many of the fundamental questions that the Commission looks to address in its final recommendations, which will help government officials, agencies, and the business community. The Commission is working on its recommendations, plans to release them this coming spring, and will make sure the EEOC receives a copy.

While the Chamber is diligently taking a leading role within the business community to address many of the concerns that continue to be barriers to public trust and consumer confidence in the technology, this testimony will highlight how industry is using the technology, the importance of regulatory balance, and, finally, specific areas in which government can provide the incentives necessary for the technology to be appropriately designed and deployed in a manner that helps all of society. Although these discussions do not all explicitly address matters under the EEOC’s purview, their broad applicability makes them relevant to furthering the EEOC’s understanding of AI’s place in the workplace.

The following issues are considered in this testimony:

  • Opportunities for the federal government and industry to work together to develop trustworthy AI
  • How different sectors are adopting governance models and other strategies to mitigate risks that arise from AI systems
  • Policy implications to consider while looking at regulating new technologies such as AI
  • Recommendations for how the federal government can strengthen its role in the development and responsible deployment of trustworthy AI systems

 

  1. Opportunities for the Federal Government and Industry to Work Together to Develop Trustworthy AI

 

A.       Support for Alternative Regulatory Pathways, Such As Voluntary Consensus Standards

New regulation is not always the answer for emerging or disruptive technologies. Non-regulatory approaches can often serve as tools to increase safety, build trust, and allow for flexibility and innovation. This is particularly true for emerging technologies such as artificial intelligence: the technology continues to evolve rapidly, while regulations are static and modifications are often obsolete upon issuance.

This is why the Chamber supports the National Institute of Standards and Technology’s (NIST) work to draft the Artificial Intelligence Risk Management Framework (AI RMF). The AI RMF is meant to be a stakeholder-driven framework, which is “intended for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.”

The AI RMF also looks to develop “profiles,” which enable organizations to “establish a roadmap for reducing the risk that is well aligned with organizational and sector goals, considers legal/regulatory requirements and industry best practices, and reflects risk management priorities.” These profiles are beneficial in allowing sector-specific best practices to be developed openly and voluntarily.

Another example of a non-regulatory approach is the National Highway Traffic Safety Administration’s (“NHTSA”) Voluntary Safety Self-Assessments (“VSSAs”). More than two dozen AV developers have submitted a VSSA to NHTSA, providing essential and valuable information to the public and to NHTSA on how developers are addressing safety concerns arising from AVs. The flexibility provided by VSSAs, complemented by existing regulatory mechanisms, provides significant transparency into the activities of developers without compromising safety.

Voluntary tools provide significant opportunities for consumers, businesses, and the government to work together to address many of the underlying concerns with emerging technology while providing the flexibility needed to keep standards from stifling innovation. These standards are also pivotal to the United States’ ability to maintain leadership in emerging technology and to ensuring our global economic competitiveness in this cutting-edge sector.

 

B.    Stakeholder Driven Engagement

The U.S. Chamber of Commerce stands ready to assist the EEOC in any opportunity to improve consumer confidence and trust in AI systems used for employment purposes. The business community has always viewed trust as a partnership, and only when government and industry work side by side can that trust be built. The opportunities to facilitate this work are great, but there are essential steps that industry and government can take today.

Last year the Chamber asked Americans about their perception of artificial intelligence. The polling results were eye-opening: there was a significant correlation between an individual’s trust and acceptance of AI and that individual’s knowledge and understanding of the technology.12 To build the consumer confidence necessary to allow artificial intelligence to grow for the betterment of all, every opportunity must be pursued for industry and governments to work together in educating stakeholders about the technology.

Inclusive stakeholder engagement and transparency between government and industry are vital to building trust. C_TEC has continued to highlight NIST’s stakeholder engagement on the AI RMF as a model that other agencies should strive to replicate. NIST has held three workshops during the process and has provided three opportunities for stakeholders to submit written feedback on the development, direction, and critique of the AI RMF. This engagement by NIST has allowed trust to develop between industry and the federal government. While extolling the virtues of the NIST process and its work on the AI RMF, it is prudent to note that NIST is only one entity within the federal government; as other agencies, such as the EEOC, look to receive crucial feedback from the business community, they should model this open and transparent process.

In contrast, policymakers should view more skeptically the Office of Science and Technology Policy (OSTP) and its development of the AI Bill of Rights, which lacked a transparent and open drafting process. Although OSTP in its “Blueprint”13 claims to highlight organizations with which OSTP met and from which it received feedback, the process for obtaining sufficient stakeholder input on these complex issues was substantively lacking. Furthermore, the only formal OSTP request for information relied upon in the Blueprint focused on biometrics14 and not a comprehensive “Bill of Rights.” OSTP failed to create a complete record of the use of the technology, which harmed stakeholders’ trust in being a part of these critical conversations.

C.       Awareness of the Benefits of Artificial Intelligence

 

Another excellent opportunity for industry and government to work together is highlighting the benefits and efficiencies of using technology within the government. The government’s utilization of AI ranges from enabling medical breakthroughs15 to helping predict risk for housing and food insecurity.16 AI is helping government provide better assistance to the American public and is becoming a vital tool. Developing these resources does not occur in a vacuum, and most of these tools are brought about in partnership with industry.

The Office of Personnel Management (OPM) estimates that the federal workforce currently has 1.9 million17 employees. Fourteen percent of government employees are eligible for retirement, with that number jumping to 30% in 2023.18 This significant reduction in the federal workforce and loss of knowledge could affect the government’s ability to operate. However, the federal government has an excellent opportunity to work alongside the private sector to hire the future workforce through the use of Automated Employment Decision Tools (AEDTs), which could provide significant efficiencies to the process and allow the benefits of the technology to be seen by all.

Highlighting these workstreams and benefits, such as AEDTs and the efficiency they deliver for the American public, can foster trust in technology and build overall consumer confidence in technology use outside of government. However, this would also require a foundational change in how government works, including addressing the “legacy culture” that has stifled the necessary investment in and buildout of 21st-century technology solutions and the harnessing of data analytics.

  2. How are Different Sectors Adopting Governance Models and Other Strategies to Mitigate Risks that Arise from AI Systems?

 

AI is a tool that does not exist in a legal vacuum. Policymakers should be mindful that activities performed and decisions aided by AI are often already subject to existing laws. Most notable for EEOC would be Title VII of the Civil Rights Act of 1964,19 which already protects employees and applicants against discrimination based on race, color, sex, national origin, and religion, as well as the Americans with Disabilities Act20.

However, where new public policy considerations arise, governments should consider maintaining a sector-specific approach while removing or modifying those regulations that act as a barrier to AI’s development, deployment, and use. In addition, governments should avoid creating a patchwork of AI policies at the state and local levels and should coordinate across governments to advance sound and interoperable practices.

To begin with, companies are very risk-averse regarding potential legal liabilities associated with their use of technology. Further, companies have a market incentive to address the risks associated with using artificial intelligence. This is why the Chamber applauds NIST’s development of a “Playbook,” which is “designed to inform AI actors and make the AI RMF more usable.”21 The Playbook will provide a great resource for the business community in evaluating risk.

 

Every sector will have different risks associated with using AI, which is why it is important to maintain a sector-specific approach. However, policymakers must conduct the oversight necessary to identify current legal gaps. These critical assessments provide lawmakers and industry with a comprehensive baseline understanding of relevant regulations already in place and foster dialogue on where additional guidance may be necessary.

 

A.       Standards and Best Practices have yet to be developed for audits.

As legislatures and agencies look to potentially regulate the use of AI, they must be aware that the technology is continuing to develop and that current processes to mitigate potential bias or other concerns could soon be obsolete, which is why the Chamber discourages one-size-fits-all solutions such as third-party audits. While outside assistance should never be discouraged, there is a well-documented risk22 in engaging third-party auditors. Given that there are currently no standards and certifications for third-party auditors, there is no guarantee that reviewers can deliver verifiable measurement methods that are valid, reliable, safe, secure, and accountable.

 

B.       Perspective is key

Lawmakers need to be cognizant of the alternative of not using such technology and the implications that choice can have. One of the critical benefits of AI is that it provides society with a tool that complements the workforce and delivers efficiency and insights, which have led to increased productivity and better outcomes. For this reason, it is essential that lawmakers consider this when framing the risks associated with the use of AI. Lawmakers should understand the importance of the “human-baseline approach,” which asks that the outcomes of using a system be compared to the alternative of the same task being done by a human, not measured against vague AI-related risks without meaningful context. Leaving out the human-baseline comparison could ultimately limit AI adoption, as organizations would see only the risks of the technology and not the totality of AI’s benefits for the specific application.

Furthermore, policymakers might be tempted to rush to regulate or ban the use of artificial intelligence in practices like hiring and employment, but it is important to understand that AI can be used to provide opportunities to communities that have historically suffered from bias. Regulation should not hinder the very tools that promise to further equality of opportunity.

 

C.       Too much information on the system can be a bad thing

While transparency around the creation of automated decision systems and their outputs is critical for building public trust in AEDTs, policymakers should avoid any federal mandate requiring the internal workings of these systems to be fully divulged, as doing so could lead to the systems being gamed and harm overall trust in them. For this reason, should lawmakers look at how companies provide transparency around the use of these systems, the Chamber would encourage them to require only summaries or a set of takeaways that would provide the transparency necessary to create public confidence in the system while protecting intellectual property.

 

  3. What Recommendations do you Have for how the Federal Government can Strengthen its Role for the Development and Responsible Deployment of Trustworthy AI Systems?

 

The federal government has the ability to take a leading role in strengthening the development and deployment of artificial intelligence. We believe that the following recommendations should be acted on now.

First, the federal government should conduct fundamental research in trustworthy AI. The federal government has played a significant role in building the foundation of emerging technologies by conducting fundamental research, and AI is no different. A recent report released by the U.S. Chamber Technology Engagement Center and the Deloitte AI Institute,23 which surveyed business leaders across the United States, found that 70% of respondents supported government investment in fundamental AI research. The Chamber believes the enactment of the CHIPS and Science Act was a positive step, as the legislation authorizes $9 billion for the National Institute of Standards and Technology (NIST) for research and development and for advancing standards for “industries of the future,” which include artificial intelligence.

Furthermore, the Chamber has been a strong advocate for the National Artificial Intelligence Initiative Act, led by then-Chairwoman Eddie Bernice Johnson and Ranking Member Lucas, which established the National AI Initiative Office (NAIIO) to coordinate the federal government’s activities, including AI research, development, demonstration, and education and workforce development.24 The business community strongly advises Congress to fully appropriate funding for these efforts.

Second, the Chamber encourages continued investment in Science, Technology, Engineering, and Math (STEM) education. The U.S. Chamber earlier this year polled the American public on their perception of artificial intelligence. The findings were clear: the more the public understands the technology, the more comfortable they become with its potential role in society. Education continues to be one of the keys to bolstering AI acceptance and enthusiasm, as a lack of understanding of AI is the leading indicator of push-back against AI adoption.25

The Chamber strongly supported the CHIPS and Science Act, which made many of these critical investments, including $200 million over five years to the National Science Foundation (NSF) for a domestic workforce buildout to develop and manufacture chips, as well as $13 billion to NSF for AI scholarship-for-service. However, the authorization within the legislation is just the start; Congress should appropriate the funding for these important investments.

Third, the government should prioritize improving access to government data and models. High-quality data is the lifeblood of developing new AI applications and tools, and poor data quality can heighten risks. Governments at all levels possess a significant amount of data that could be used to improve the training of AI systems and create novel applications. When C_TEC asked leading industry experts about the importance of government data, 61% of respondents agreed that access to government data and models is important. For this reason, the Chamber encourages the EEOC to look at opening up government data that can assist with the training of AEDTs.

Fourth, the government should increase widespread access to shared computing resources. In addition to high-quality data, the development of AI applications requires significant computing capacity.

However, many small startups and academic institutions lack sufficient computing resources, which in turn prevents many stakeholders from fully accessing AI's potential. When we asked stakeholders within the business community about the importance of shared computing capacity, 42% of respondents supported encouraging shared computing resources to develop and train new AI models. Congress took a critical first step by enacting the National AI Research Resource Task Force Act of 2020. Now, the National Science Foundation and the White House's Office of Science and Technology Policy should fully implement the law and expeditiously develop a roadmap to unlock AI innovation across all stakeholders.

Fifth, the government should enable open source tools and frameworks. Ensuring the development of trustworthy AI will require significant collaboration between government, industry, academia, and other relevant stakeholders. One key method to facilitate collaboration is encouraging the use of open source tools and frameworks to share best practices and approaches to trustworthy AI. An example of how this works in practice is the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework (RMF), which is intended to be a consensus-driven, cross-sector, and voluntary framework, akin to NIST’s existing Cybersecurity Framework, that stakeholders can leverage as a best practice to mitigate risks posed by AI applications. Policymakers should recognize the importance of these types of approaches and continue to support their development and implementation.

Conclusion

AI leadership is essential to global economic leadership in the 21st century. According to one study, AI will have a $13 trillion impact on the global economy by 2030.26 The federal government can play a critical role in incentivizing the adoption of trustworthy AI applications through the right policies. The United States has an enormous opportunity to transform our economy and society in positive ways by leading in AI innovation. As other economies around the world contemplate their approach to trustworthy AI, it is imperative that U.S. policymakers pursue a wide range of options to advance trustworthy AI domestically and empower the United States to maintain global competitiveness in this critical technology sector. The United States must be the global leader in AI trustworthiness for the technology to develop in a balanced manner that takes into account fundamental values and ethics. The United States can only be a global leader if the public and private sectors work together on a bipartisan basis.

 

1 https://www.5newsonline.com/article/news/health/northwest-health-introducing-new-technology-to-enhance-maternal-and-fetal

2 https://americaninnovators.com/research/data-for-good-promoting-safety-health-and-inclusion/

3 https://www.uschamber.com/technology/us-chamber-releases-artificial-intelligence-principles

4 https://www.nscai.gov/wp-content/uploads/2021/03/Full-Report-Digital-1.pdf

5 https://www.nist.gov/artificial-intelligence

6 http://www.americaninnovators.com/aicommission

7 http://www.americaninnovators.com/aicommission

8 https://www.uschamber.com/technology/ai-for-all-experts-weigh-in-on-expanding-ais-shared-prosperity-and-reducing-potential-harms

9 https://www.uschamber.com/technology/ai-for-all-experts-weigh-in-on-expanding-ais-shared-prosperity-and-reducing-potential-harms/

10 https://americaninnovators.com/wp-content/uploads/2022/04/CTEC_RFI-AIcommission_2.pdf

11 https://uschambermx.iad1.qualtrics.com/jfe/form/SV_cMw5ieLrlsFwUP

12 https://americaninnovators.com/wp-content/uploads/2022/01/CTEC-US-Outlook-on-AI-Detailed-Analysis.pdf

13 https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf

14 Federal Register

15 https://www2.deloitte.com/content/dam/Deloitte/us/Documents/deloitte-analytics/us-ai-institute-government-and-public-dossier.pdf

16 https://www2.deloitte.com/content/dam/Deloitte/us/Documents/deloitte-analytics/us-ai-institute-government-and-public-dossier.pdf

17 https://www.opm.gov/policy-data-oversight/data-analysis-documentation/federal-employment-reports/#url=Overview

18 https://federalnewsnetwork.com/mike-causey-federal-report/2021/06/retirement-tsunami-this-time-for-sure/

19 42 U.S.C. § 2000e-2(a).

20 42 U.S.C. § 12101.

21 https://www.nist.gov/itl/ai-risk-management-framework/nist-ai-rmf-playbook-faqs

22 https://www.gmfus.org/sites/default/files/2022-11/Goodman%20%26%20Trehu%20-%20Algorithmic%20Auditing%20-%20paper.pdf

23 https://www.uschamber.com/technology/investing-trustworthy-ai

24 https://www.ai.gov/naiio/

25 https://americaninnovators.com/wp-content/uploads/2022/01/CTEC-US-Outlook-on-AI-Detailed-Analysis.pdf

26 https://www.mckinsey.com/featured-insights/artificial-intelligence/notes-from-the-ai-frontier-modeling-the-impact-of-ai-on-the-w