
Compliance Plan for OMB Memorandum M-25-21

September 2025

 

Subject: Compliance Plan for OMB Memorandum M-25-21

Prepared by: Sivaram Ghorakavi
Deputy Chief Information Officer & Chief AI Officer

Issued by: Andrea R. Lucas
Acting Chair

PURPOSE:

Executive Order 14179 and OMB Memorandum M-25-21, Accelerating Federal Use of AI through Innovation, Governance, and Public Trust, direct each agency to submit to OMB, and post publicly on the agency's website, either a plan to achieve consistency with M-25-21 or a written determination that the agency does not use, and does not anticipate using, covered AI.

In accordance with this requirement, the Equal Employment Opportunity Commission (EEOC or "Agency") is actively working to align its internal principles, guidelines, and policies to ensure the responsible and trustworthy deployment and use of AI. This document outlines the Agency's plans to meet the requirements applicable to a non-CFO Act federal agency under M-25-21's three main goals:

  1. Driving AI Innovation
  2. Improving AI Governance
  3. Fostering Public Trust in Federal Use of AI

1. DRIVING AI INNOVATION

In recent years, the EEOC has invested in strategic IT modernization efforts, including establishing scalable cloud infrastructure and building a knowledge base in AI techniques. These investments, along with existing policies for identifying and deploying additional software tools, will facilitate AI adoption.

Removing Barriers to the Responsible Use of AI

The EEOC recognizes the importance of cultivating AI knowledge and skills to effectively meet the Agency's mission and strategic objectives. To that end, the Agency is committed to fostering an environment where AI technologies can be used, developed, and deployed responsibly to enhance operations and serve the public.

Barriers: AI expertise and skills, funding, and data readiness.

Mitigation Strategy: We have been conducting reviews to identify barriers to AI adoption, including issues related to data, infrastructure, organizational readiness, AI skills, and funding. Additionally, we are assessing our current and future needs for AI expertise across the Agency and are actively reviewing professional development and learning opportunities for our staff to support AI and other technology-related enforcement and policy-making activities. As the Agency's AI strategy and approach evolve, we will continue to monitor developments and remain open to collaborative opportunities to attract, retain, and develop talent within the agency.

Sharing and Reuse

EEOC is streamlining its processes, from business-need intake through technical solutioning, to expedite value delivery. The Office of the Chief Information Officer (OCIO) determines suitable technical solutions, prioritizing existing capabilities such as AI code and models, or developing new ones for the Agency. To enhance sharing and reuse, EEOC will expand its Agile CI/CD operations to AI pipelines integrated with GitHub.
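As a purely illustrative sketch of what a GitHub-integrated CI/CD pipeline for an AI workload can look like, the following GitHub Actions workflow trains, evaluates, and publishes a model artifact on each push. The workflow name, script paths, and artifact locations are hypothetical, not the EEOC's actual configuration:

```yaml
# Hypothetical GitHub Actions workflow for an AI model pipeline.
name: ai-model-pipeline
on:
  push:
    branches: [main]
jobs:
  train-and-evaluate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - name: Install dependencies
        run: pip install -r requirements.txt      # hypothetical path
      - name: Run model tests and evaluation
        run: python scripts/evaluate_model.py     # hypothetical script
      - name: Publish model artifact for reuse
        uses: actions/upload-artifact@v4
        with:
          name: model
          path: artifacts/model.pkl               # hypothetical artifact
```

Publishing the evaluated model as a build artifact is one way such a pipeline can make AI components discoverable and reusable across teams.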

EEOC is also building an enterprise data inventory and strategy to improve data sharing and provide AI-ready data for broader reuse of AI solutions.

AI Talent

The Agency is dedicated to developing a workforce skilled in AI to assist with deployment and oversight. The plan involves identifying workforce requirements, recruiting AI talent, enabling current staff to acquire AI skills, training and upskilling employees, and providing growth opportunities through various assignments and detail positions.

2. IMPROVING AI GOVERNANCE

AI Governance Board

The Agency has an established AI Governance Board led by the Chief AI Officer, with members drawn from the Office of the Chair (OCH), the Office of the Chief Information Officer (OCIO), the Office of Federal Operations (OFO), the Office of Field Programs (OFP), the Office of General Counsel (OGC), the Office of Legal Counsel (OLC), and the Office of Communications & Legislative Affairs (OCLA). Board members serve as stewards of responsible AI adoption across the agency, commit to ongoing engagement, and collaborate and communicate effectively with their Program Offices. They exercise independent judgment, stay current with AI developments, and ensure decisions are based on objective risk assessments and robust mitigation strategies.

Following the guidance in M-25-21, the Board ensures appropriate guardrails are in place to embrace innovation in a way that assists, complements, and enables the work of Agency staff in service of its mission. It also ensures the Agency manages AI systems properly and shares information clearly with the public when needed. To accomplish these goals, the Board collaborates closely across offices, seeking advice from external experts as needed.

Agency Policies

The Agency has established AI Principles to guide any development and use of AI and other emerging technologies by the EEOC. Furthermore, the Chief AI Officer, with support from the AI Governance Board and other relevant stakeholders, is reviewing and updating internal policies on IT infrastructure, data, cybersecurity, privacy, and procurement where needed to accommodate the nuances of AI use cases and systems.

The Agency is revising its internal AI guidelines and processes to be consistent with M-25-21 and M-25-22. EEOC is also improving and streamlining its overall system development lifecycle, which includes improved technical solutioning as well as integration with the new AI high-impact review process and the streamlined AI acquisition process aligned with OMB M-25-22.

AI Use Case Inventory

The EEOC has a centralized AI Use-Case Tracker to log and monitor AI use cases, ensuring compliance with protocols and supporting the Agency AI Strategy. The tracker helps manage risks by evaluating AI use based on identified risks, alignment with principles, and safeguards. Outcomes may include approval, requests for more information, or rejection. The AI Governance Board can assign actions to Use Case Owners to maintain governance, including risk management, performance briefings, and regular reviews.

  1. The Use Case Owner submits a request using the AI Governance Board Review Request Form with the following information:
    1. Business Case
    2. Detailed Impact Assessment (required for “High Impact” use cases)
    3. Testing Summary
    4. Development/Implementation Plan
    5. Product or vendor documentation (e.g., solution overview, case studies, impact assessments)
    6. Market research results
    7. Customer references
  2. The Chief AI Officer reviews the request for completeness and notifies the AI Governance Board for review.
  3. The Board meets bi-weekly to review use case submissions. It evaluates AI use cases based on risks and alignment with federal and agency principles and guidelines.
  4. If risks are identified and managed, the Board will approve the use case, document requirements or next steps for governance, and allow it to proceed.
  5. If the use case does not align with AI principles or risks are too high, the Board will reject it and provide its rationale.
  6. In other scenarios, the Board will request additional information or make recommendations before re-engaging.
  7. If the use case is considered high-impact, the system owner will be required to follow specified risk management requirements. These outputs will be reviewed during the appropriate gate review(s) in the system development lifecycle process. The AI Use Case Inventory will reflect these updates.
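The decision rules in the steps above can be sketched in code. This is a minimal illustrative model, not the EEOC's actual tracker: every name here (`UseCaseRequest`, `ReviewOutcome`, `review_use_case`, the field names) is hypothetical, and the real review involves human judgment the sketch cannot capture.

```python
"""Illustrative sketch of the AI use-case review flow described above.

All class, function, and field names are hypothetical; the EEOC's
tracker and review forms are internal systems.
"""
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import List, Optional


class ReviewOutcome(Enum):
    APPROVED = auto()
    REJECTED = auto()
    MORE_INFO_REQUESTED = auto()


@dataclass
class UseCaseRequest:
    business_case: str
    high_impact: bool
    impact_assessment: Optional[str] = None  # required only for High-Impact cases
    risks_managed: bool = False
    aligned_with_principles: bool = True
    governance_actions: List[str] = field(default_factory=list)


def review_use_case(req: UseCaseRequest) -> ReviewOutcome:
    """Apply the Board's decision rules from steps 3-7 above."""
    # Steps 1-2: completeness check; High-Impact cases need an impact assessment.
    if req.high_impact and not req.impact_assessment:
        return ReviewOutcome.MORE_INFO_REQUESTED
    # Step 5: reject if the use case does not align with AI principles.
    if not req.aligned_with_principles:
        return ReviewOutcome.REJECTED
    # Step 4: approve when risks are identified and managed, recording
    # follow-up governance requirements (step 7 for High-Impact cases).
    if req.risks_managed:
        if req.high_impact:
            req.governance_actions.append(
                "follow minimum risk management practices"
            )
        return ReviewOutcome.APPROVED
    # Step 6: otherwise, request more information before re-engaging.
    return ReviewOutcome.MORE_INFO_REQUESTED
```

For example, a High-Impact request with a completed impact assessment and managed risks would be approved with a governance follow-up recorded, while one missing its impact assessment would come back as a request for more information.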

3. FOSTERING PUBLIC TRUST IN FEDERAL USE OF AI

As the primary federal agency charged with enforcing the federal laws against employment discrimination, the EEOC evaluates its potential uses of AI in light of applicable civil rights laws and the agency's mission, among other criteria.

As the Agency explores new use cases, the AI Governance Board will evaluate each intended use of AI to determine whether it is a High-Impact use case. While the Agency encourages Use Case Owners (the EEOC staff members accountable for the mission outcomes of AI use cases and their risks) to consider and implement appropriate safeguards for any use case, the AI Governance Board will ensure that minimum risk management practices are required for High-Impact use cases. Given its mission and commitment to protecting civil rights and civil liberties, the Agency currently has no intention of issuing any waivers for required safeguards. If the minimum risk management practices cannot be met for a High-Impact use case, the use case will not be implemented. However, the Chief AI Officer will continue to review the needs and criteria for waivers in the future. High-Impact designations and, if applicable, any associated waivers will be captured in a centralized tracking system and reviewed annually prior to publication of the AI use case inventory.

In addition, the AI Governance Board will document minimum risk management practices proposed and implemented by the Use Case Owners. Guidance for evaluating risks and identifying risk management practices has been established to support Use Case Owners. For High Impact use cases, the AI Governance Board will oversee the implementation and monitor status to ensure that the safeguards are in place and effectively managing risks. If safeguards are deemed to be ineffective or the use case is non-compliant during or after deployment, the AI Governance Board will work with the Use Case Owner to pause deployment to either implement additional safeguards or decommission the use case. The Agency is also establishing an escalation process to address any issues that arise from the use of AI technologies, which will minimize any negative impact by ensuring that the appropriate parties are informed of and can quickly respond to issues.

The Agency is committed to fostering public trust in its use of AI. Therefore, the Chief AI Officer, AI Governance Board, and agency staff will continue to refine the Agency’s approach to risk management to preserve its mission and trust with the American public.
