November 2024
Introduction
National Credit Union Administration
Created by the U.S. Congress in 1970, the NCUA is an independent federal agency that insures deposits at federally insured credit unions, protects the members who own credit unions, charters and regulates federal credit unions, and promotes financial education and consumer financial protection. The NCUA is responsible for the regulation and supervision of 4,499 federally insured credit unions with more than $2.31 trillion in assets across all states and U.S. territories.[1]
The NCUA Artificial Intelligence Compliance Plan (plan) outlines the agency’s approach to managing the use of artificial intelligence (AI), as required by the AI in Government Act of 2020[2] and Office of Management and Budget (OMB) Memorandum M-24-10, Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence.[3] The plan was developed under the direction of the NCUA’s acting Chief Artificial Intelligence Officer, Amber Gravius.
AI Governance
The NCUA is developing AI-specific guidelines and policies to ensure its use of AI is consistent with the requirements and standards set forth in OMB Memorandum M-24-10. This includes aligning with the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF)[4] and the NCUA’s information technology policies to foster responsible use and governance of AI.
Governance Bodies
The NCUA has several governance bodies, composed of senior executives and management, that play an important role in ensuring responsible AI use, managing risks, monitoring data management practices, and ensuring transparency and accountability in AI implementations.[5] These include:
- An Information Technology Oversight Council responsible for setting the direction for information technology by prioritizing projects and ensuring alignment with the NCUA mission.
- A Data Governance Council responsible for establishing data standards, facilitating development of strategic objectives, and championing prudent data management practices.
- A Cybersecurity Council responsible for evaluating internal and external information security risks to the NCUA and credit unions.
- An Enterprise Risk Management Council responsible for oversight of the NCUA’s risk management framework and functions.
The NCUA also employs a rigorous review and approval process for all guidance and instructions issued to staff.
The NCUA will continue to seek guidance from external experts, as appropriate. The NCUA actively participates in several interagency working groups and engages with external stakeholders. These include the Financial Stability Oversight Council, the Federal Financial Institutions Examination Council, the Financial and Banking Information and Infrastructure Committee, the Federal Chief Data Officer Council, and the Chief Artificial Intelligence Officer Council (CAIO Council).
The NCUA encourages open discussions with the credit union industry and has regular meetings with industry trade organizations. The NCUA periodically participates in roundtables and panels on industry topics, including AI.
Use Case Inventory
The NCUA has established a centralized process to solicit and collect AI use cases from all offices. Each office is required to provide comprehensive details of its AI applications, ensuring the inventory is complete and up to date. Additionally, the NCUA requires offices to share proposed use cases with the Office of Business Innovation and the Office of the Chief Information Officer. This process is designed to ensure each use case undergoes a thorough security, privacy, and technical review and mitigates the risk of duplicative initiatives. The NCUA will also solicit annual updates to the AI use case inventory to ensure the agency captures changes to existing use cases.
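As a purely illustrative sketch, the structured record below shows the kind of information a centralized collection process of this sort might capture for each submitted use case, including the status of the security, privacy, and technical reviews and the timing of annual updates. It is a conceptual example only; the field names, types, and one-year refresh threshold are assumptions and do not describe the NCUA’s actual systems.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class ReviewStatus(Enum):
    """Possible outcomes of a security, privacy, or technical review."""
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"


@dataclass
class AIUseCaseRecord:
    """Hypothetical inventory entry for a single proposed AI use case."""
    submitting_office: str
    title: str
    description: str
    security_review: ReviewStatus = ReviewStatus.PENDING
    privacy_review: ReviewStatus = ReviewStatus.PENDING
    technical_review: ReviewStatus = ReviewStatus.PENDING
    last_updated: date = field(default_factory=date.today)

    def all_reviews_approved(self) -> bool:
        """A use case proceeds only after all three reviews are approved."""
        return all(
            status is ReviewStatus.APPROVED
            for status in (self.security_review, self.privacy_review, self.technical_review)
        )

    def needs_annual_update(self, as_of: date) -> bool:
        """Flag entries that have not been refreshed within the past year."""
        return (as_of - self.last_updated).days > 365
```

Keeping review status alongside each entry supports both the pre-deployment reviews and the annual refresh described above.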
Reporting on AI Use Cases Not Subject to Inventory
As part of the NCUA AI use case inventory data collection, the agency will review all use cases and determine if any qualify for exclusion from individual inventory as specified in Section 3(a)(v) of OMB Memorandum M-24-10. This process includes periodic reviews and validation of use cases to ensure they still meet the exclusion criteria, if applicable. The criteria for exclusion include considerations such as the sensitivity of the data, potential impact on privacy, and strategic importance.
Advancing Responsible AI Innovation
Barriers and Mitigation Steps
The NCUA is taking a methodical approach to AI. The NCUA is focused on identifying the use cases of greatest utility to help the agency effectively and efficiently achieve its mission of protecting the system of cooperative credit and its member-owners. The NCUA carefully evaluates investments in new technologies to ensure the agency has the financial and operational capacity to sustain any such investment.
The NCUA is evaluating its staffing needs, opportunities for AI to improve processes, and policies to ensure the ethical and responsible use of AI. Offices use the NCUA annual budgeting process to request funding for software tools and information technology development activities, including AI. The Information Technology Oversight Council provides a prioritized recommendation to the NCUA Board for approval.
The NCUA has issued guidance to staff about the responsible use of AI, focusing on risk management, data privacy, and ethical considerations. The NCUA is updating various policies and procedures as needed to facilitate the responsible use of AI under a sound governance framework.
Talent
The NCUA is evaluating its staffing and training needs to ensure the agency has the necessary skills and knowledge to effectively implement and use AI technologies. As part of that process, the NCUA Board authorized the hiring of a total of three AI officers as part of the agency’s 2025 and 2026 budgets. The Office of Human Resources is evaluating the use of special AI hiring authorities to acquire AI talent.
Harmonization of Requirements
The NCUA participates in several interagency working groups and attends the CAIO Council meetings. These groups and councils document and share best practices in AI governance, innovation, and risk management. Some groups, such as the CAIO Council, are focused on the safe and responsible implementation of AI within the government. Other interagency groups have a regulatory focus on related risks within the financial industry.
Managing Risks from the Use of Artificial Intelligence
The impacts or consequences of the use of AI systems can be positive, negative, or both, and can result in opportunities or threats. AI risk management is a key component of the responsible development and use of AI systems. Responsible AI practices can help align decisions about AI system design, development, and use with intended aims and values.
Minimum Risk Management Practices
While risk management processes generally address negative impacts, the NIST AI RMF offers approaches to minimize the anticipated negative impacts of AI systems and to maximize positive impacts. Effectively managing the risk of potential harm could lead to greater trust in AI systems and unleash potential benefits for people, organizations, and the government. The NCUA is creating a set of controls based on the NIST AI RMF, including the following (illustrated conceptually after the list):
- Preventive Controls: Stringent assessment processes and preventive controls to ensure that non-compliant safety-impacting or rights-impacting AI systems are not deployed to the public.
- Monitoring and Auditing: Continuous monitoring of AI systems and auditing mechanisms to ensure ongoing compliance with risk management practices and promptly detect deviations.
- Termination Procedures: Clear procedures for terminating non-compliant AI systems, including immediate deactivation and remediation steps.
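The simplified sketch below illustrates one way the preventive, monitoring, and termination controls listed above could be expressed as checks on an AI system’s compliance status. It is a conceptual example under assumed names and attributes, not a description of the NCUA’s implementation.

```python
from dataclasses import dataclass


@dataclass
class AISystem:
    """Hypothetical view of an AI system's compliance attributes."""
    name: str
    is_safety_or_rights_impacting: bool
    minimum_practices_met: bool
    deployed: bool = False


def preventive_gate(system: AISystem) -> bool:
    """Preventive control: block deployment of non-compliant
    safety-impacting or rights-impacting AI systems."""
    if system.is_safety_or_rights_impacting and not system.minimum_practices_met:
        return False  # deployment not permitted
    return True


def monitor_and_enforce(system: AISystem) -> None:
    """Monitoring and termination controls: deactivate a deployed system
    that falls out of compliance and flag it for remediation."""
    if system.deployed and not system.minimum_practices_met:
        system.deployed = False  # immediate deactivation
        print(f"{system.name}: deactivated pending remediation")


# Example usage with a hypothetical system
if __name__ == "__main__":
    summarizer = AISystem("document-summarizer",
                          is_safety_or_rights_impacting=True,
                          minimum_practices_met=False)
    if preventive_gate(summarizer):
        summarizer.deployed = True
    monitor_and_enforce(summarizer)
```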
In certain circumstances, it may be necessary to issue waivers for one or more of the minimum risk management practices. The NCUA’s process will include:
- Criteria for Waivers: Develop criteria to guide the decision to waive risk management practices, ensuring that waivers are granted only when necessary and justified.
- Issuance and Revocation: Establish procedures for issuing, denying, revoking, tracking, and certifying waivers, with oversight from the CAIO and appropriate NCUA officials and governance bodies.
- Documentation and Transparency: Maintain detailed records of all waiver decisions to ensure transparency and accountability.
As previously indicated in the description of its Use Case Inventory process, the NCUA has established procedures to conduct security, privacy, and technical reviews on all proposed use cases prior to deployment. Additionally, the NCUA’s governance councils provide oversight from an information technology, cybersecurity, data, and risk perspective.
Safety-Impacting or Rights-Impacting Use Cases
The NCUA is reviewing its AI use cases to determine if any are safety-impacting[6] or rights-impacting[7] based on definitions in Section 6 of OMB Memorandum M-24-10. The NCUA has not created additional criteria for when an AI use case is safety-impacting or rights-impacting.
Termination of Non-Compliant AI
Preparedness for potential incidents involving AI is vital to managing risks effectively. If needed, the NCUA will terminate any AI that fails to meet compliance standards. Supporting measures include:
- Incident Response Plans: Develop and maintain incident response plans tailored explicitly for AI systems, outlining the roles and responsibilities, communication protocols, and remediation actions.
- Redress Mechanisms: Establish mechanisms for redress to address any harms caused by AI systems, ensuring that affected individuals or entities can report issues and seek resolution.
- Continuous Improvement: Regularly review and update incident response and redress protocols based on lessons learned from past incidents and emerging best practices.
In addition to the NCUA’s existing information technology, security, and governance procedures, these processes help ensure the safe and responsible use of AI internally at the NCUA. The NCUA currently does not have any AI use cases deployed to the public.
Footnotes
[1] As of September 30, 2024.
[2] Pub. L. No. 116-260, div. U, title I, § 104 (40 U.S.C. § 11301 note), https://www.congress.gov/116/plaws/publ260/PLAW-116publ260.pdf.
[3] OMB Memorandum M-24-10, Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence (March 28, 2024), https://www.whitehouse.gov/wp-content/uploads/2024/03/M-24-10-Advancing-Governance-Innovation-and-Risk-Management-for-Agency-Use-of-Artificial-Intelligence.pdf.
[4] National Institute of Standards and Technology, Artificial Intelligence Risk Management Framework (AI RMF 1.0), NIST AI 100-1 (nist.gov).
[5] The Councils include representatives from across the NCUA, including the Office of the Chief Information Officer, Office of Business Innovation, Office of Examination and Insurance, Office of the Executive Director, Office of the Chief Financial Officer, Office of General Counsel, and the regions.
[6] Safety-Impacting AI refers to AI whose output produces an action or serves as a principal basis for a decision that has the potential to significantly impact the safety of: 1) human life or well-being; 2) climate or environment; 3) critical infrastructure, including the critical infrastructure sectors defined in Presidential Policy Directive 21; or 4) strategic assets or resources, including high-value property and information marked as sensitive or classified by the federal government.
[7] Rights-Impacting AI refers to AI whose output serves as a principal basis for a decision or action concerning a specific individual or entity that has a legal, material, binding, or similarly significant effect on that individual’s or entity’s: 1) civil rights, civil liberties, or privacy, including but not limited to freedom of speech, voting, human autonomy, and protections from discrimination, excessive punishment, and unlawful surveillance; 2) equal opportunities, including equitable access to education, housing, insurance, credit, employment, and other programs where civil rights and equal opportunity protections apply; or 3) access to or the ability to apply for critical government resources or services, including healthcare, financial services, public housing, social services, transportation, and essential goods and services.