
NIST's Responsibilities Under New AI Executive Order

President Biden Calls for Artificial Intelligence Guidelines

On Oct. 30, 2023, President Biden issued an Executive Order (EO) on Safe, Secure, and Trustworthy Artificial Intelligence (AI). The EO charges multiple agencies, including the National Institute of Standards and Technology (NIST), with producing guidelines and taking other actions to advance the safe, secure and trustworthy development and use of artificial intelligence (AI).

The executive order tasks numerous government agencies with gathering information to support future legislative and regulatory activity. It is a clear indicator of the profound, disruptive impact that AI is likely to have across industries, governments and societies worldwide. Over the next year, these activities will likely shape the direction of future legislation and regulation while also helping private businesses better understand the potential benefits and risks posed by AI.

What Areas Does the Executive Order Cover?

There are six areas covered within the EO:

  1. Technology Initiatives: Standards and guidelines for AI safety and security.
  2. Compliance Considerations: Privacy, intellectual property and consumer protections.
  3. Workforce and Employer Considerations: Labor standards, nondiscrimination, training, AI talent development and workplace rights.
  4. Government Use of AI.
  5. Industry Considerations: Healthcare, financial institutions, education, critical infrastructure, transportation, housing, energy, supply base management and defense.
  6. International Engagement: Collaborating to establish international standards for managing AI benefits and risks.

To accomplish these objectives, government agencies were directed not to implement an outright ban on the use of AI but to evaluate the potential risks and benefits across the government. Numerous agencies were charged with developing guidance on how AI could be used responsibly to meet their goals while educating government workers on the appropriate use of AI. In tandem, agencies with regulatory authority were directed to understand the potential impact on current regulations and to consider whether AI risks warrant new regulations within their authority.

The initial research will benefit both governments and businesses by helping them understand the potential benefits and risks that AI brings. AI has the potential to accelerate our ability to understand our world in ways we never imagined, increasing productivity and efficiency so that we can solve complex problems, generate new content and make discoveries that were previously limited by the human capacity to process information.

Let’s think back to the creation of nuclear capabilities. We have the ability to wield great power to produce energy, but with it comes the ability to inflict great harm, whether deliberate or accidental. Our businesses will face new financial, operational, technology, strategic and compliance risks posed by AI, and we must be prepared to embrace them as well as manage them.

AI will have staying power as an emerging disruptive technology across virtually all industries. To prepare businesses and our workforce, organizations will need a framework to evaluate and make decisions regarding AI. NIST, through public-private partnerships, will be at the forefront of further developing the methodology and resources to enable government agencies and businesses to understand and manage the benefits and risks arising from AI. Additionally, we can expect the U.S. federal government to continue positioning itself as a global leader in adopting the benefits of AI while managing emerging risks and providing guidance on its safe and secure use.

The NIST AI Risk Management Framework is referenced throughout the executive order and will provide a foundation for guidance on governing, building, monitoring and managing responsible, trustworthy and transparent AI. The speed of AI accelerates the rate and impact of the errors and bias it inherits from its underlying data, its learning model and the people who trained it. Our businesses are still responsible for meeting contractual, regulatory and compliance requirements and will be on the hook to demonstrate that our AI services are compliant. The initiatives that NIST oversees will provide a path to ensure trustworthy and transparent AI.

What Are NIST’s Responsibilities per the EO?

NIST has detailed a number of its specific responsibilities under the EO, which we’ve provided below:

  • Developing guidelines and best practices to promote consensus industry standards that help ensure the development and deployment of safe, secure and trustworthy AI systems.
  • Developing a companion resource to the AI Risk Management Framework focused on generative AI.
  • Developing a companion resource to the Secure Software Development Framework to incorporate secure-development practices for generative AI and dual-use foundation models.
  • Launching a new initiative to create guidance and benchmarks for evaluating and auditing AI capabilities, with a focus on capabilities that could cause harm.
  • Establishing guidelines and processes for developers of generative AI to conduct AI red-teaming tests.
  • Coordinating or developing guidelines related to assessing and managing the safety, security and trustworthiness of dual-use foundation models and related to privacy-preserving machine learning.
  • Developing and helping to ensure the availability of testing environments, such as testbeds, to support the development of safe, secure and trustworthy AI technologies, as well as support the design, development and deployment of associated privacy-enhancing technologies (PETs).
  • Engaging with industry and relevant stakeholders to develop and refine specifications for effective nucleic acid synthesis procurement screening, best practices for managing sequence-of-concern databases, technical implementation guides for effective screening, and best practices and mechanisms for conformity assessment.
  • Developing a report to the Director of the Office of Management and Budget (OMB), identifying existing standards, tools, methods and practices, as well as the potential development of further science-backed standards and techniques, for authenticating content and tracking its provenance, labeling synthetic content, detecting synthetic content, preventing generative AI from producing Child Sexual Abuse Material or producing non-consensual intimate imagery of real individuals, testing software used for the above purposes and auditing and maintaining synthetic content.
  • Creating guidelines for agencies to evaluate the efficacy of differential-privacy-guarantee protections, including for AI (see the illustrative sketch after this list).
  • Developing guidelines, tools and practices to support agencies’ implementation of minimum risk-management practices.
  • Assisting the Secretary of Commerce in coordinating with key international partners and standards development organizations to drive the development and implementation of AI-related consensus standards, cooperation, and information sharing.
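
For readers less familiar with the differential-privacy guarantees referenced in the list above, the following is a minimal illustrative sketch in Python of the classic Laplace mechanism: a counting query is released with noise calibrated to a privacy budget epsilon, so the published figure carries an epsilon-differential-privacy guarantee. The function, data and parameter values are hypothetical examples and are not drawn from the executive order or from NIST guidance.

    import numpy as np

    def laplace_count(records, predicate, epsilon, rng=None):
        """Release a counting query with an epsilon-differential-privacy guarantee.

        A counting query has L1 sensitivity 1 (adding or removing one record
        changes the true count by at most 1), so Laplace noise with scale
        1 / epsilon is sufficient for epsilon-differential privacy.
        """
        rng = rng or np.random.default_rng()
        true_count = sum(1 for record in records if predicate(record))
        noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
        return true_count + noise

    # Hypothetical usage: publish how many records were flagged, under epsilon = 0.5.
    data = [{"flagged": True}, {"flagged": False}, {"flagged": True}]
    noisy = laplace_count(data, lambda r: r["flagged"], epsilon=0.5)
    print(f"Noisy count: {noisy:.2f}")

Smaller values of epsilon add more noise and therefore give a stronger privacy guarantee at the cost of accuracy; evaluating that trade-off is the kind of efficacy question the forthcoming agency guidelines are meant to address.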

Guidance from NIST will become the foundational support structure for other agencies tasked with modernizing current regulations and structures to account for both the potential benefits and the risks of AI. Through public-private partnerships, the guidance will be pivotal in enabling businesses to understand and manage AI risk in order to realize its promised benefits, and it will hopefully provide a framework for testing and third-party assurance of AI models.

NIST has been directed to complete most of these tasks within 270 days of the issuance of the EO. NIST also plans to engage with the private sector, academia and civil society, in addition to working with government agencies, to produce guidance called for within the EO.

NIST’s work on these tasks is meant to help ensure that AI is developed and used safely and securely. Windham Brannon will continue to monitor updates from NIST and other agencies on AI and other cybersecurity topics as they arise. For more information, contact your Windham Brannon advisor.