April 7, 2023
Deep Dive into the NIST AI Risk Management Framework
Take a closer look at the NIST AI Risk Management Framework and learn how BreezeML can help your organization follow these guidelines.

The AI Risk Management Framework (AI RMF), created by the National Institute of Standards and Technology (NIST) under the U.S. Department of Commerce, serves as voluntary guidance to help organizations improve their ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.

Recently, the NIST AI RMF has gained widespread attention and adoption among American enterprises that seek guidance from the U.S. government on AI governance, risk, and compliance as they await a federal law regulating AI use. While the NIST AI RMF serves as a helpful guide for improving AI trust and safety, how can companies practically apply the framework’s recommendations to their own AI development process? Moreover, what application or software can they use to establish governance over and manage risks posed by their AI models, code, and datasets?

To answer these questions, in the table below, we explain how BreezeML — a governance tool that helps enterprise organizations document, manage, and continually monitor their AI models and datasets to ensure compliance with internal and external regulations — enables organizations to meet the guidelines listed under each of the framework’s four functions (Govern, Map, Measure, and Manage).

The following are key highlights of the full table below:

  • For the Govern function, BreezeML enables organizations to establish internal AI policies aligned with regulations, document processes, assign user roles/responsibilities, and integrate stakeholder feedback.
  • For Map, BreezeML allows organizations to specify system context, risks, impacts, and requirements in policy documents. It also provides model cards to document AI system details.
  • For Measure, BreezeML offers bias/fairness testing to evaluate AI systems and continuous monitoring to track risks over time. Organizations can document approaches for measurements in policies.
  • For Manage, BreezeML notifies users of compliance violations, enables policy documentation of risk prioritization/mitigation strategies, and facilitates the regular monitoring of risks.
  • Overall, BreezeML centralizes and relates AI artifacts, automates compliance monitoring, facilitates collaboration between teams, and allows thorough policy documentation to address NIST AI RMF’s requirements.
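To make the bias/fairness testing mentioned under Measure more concrete, the sketch below computes one common fairness metric, the demographic parity difference (the gap in positive-prediction rates between groups), and compares it against a policy threshold. This is a generic illustration of the kind of check a governance tool might run, not BreezeML’s actual API; the function name, threshold, and data are assumptions.

```python
# Hypothetical illustration of a bias/fairness check: demographic parity
# difference, i.e., the gap in positive-prediction rates across groups.
# This is a generic sketch, not BreezeML's actual interface.

def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-prediction rates between the two groups.

    predictions: list of 0/1 model outputs
    groups: group label for each prediction (e.g., "A" or "B")
    """
    rates = {}
    for label in set(groups):
        selected = [p for p, g in zip(predictions, groups) if g == label]
        rates[label] = sum(selected) / len(selected)
    lo, hi = sorted(rates.values())
    return hi - lo

preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_difference(preds, groups)
print(f"demographic parity difference: {gap:.2f}")  # 0.75 vs 0.25 -> 0.50

# A governance policy might flag any model whose gap exceeds a threshold:
THRESHOLD = 0.2  # illustrative value; real thresholds are policy decisions
if gap > THRESHOLD:
    print("WARNING: fairness threshold exceeded")
```

Continuous monitoring, as described above, amounts to re-running checks like this on fresh predictions over time so that drift in a model’s behavior surfaces as a flagged risk rather than going unnoticed.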

In summary, the table above highlights BreezeML’s value as a one-stop AI governance platform that can help organizations follow the guidelines put forth by the NIST AI RMF. By serving as a nexus between compliance/legal teams and data scientists, and employing a “governance from the ground up” methodology, BreezeML enables compliance teams to effortlessly specify and monitor custom governance policies over every AI workflow in their organization without relying on manual and tedious coordination with data science teams. BreezeML integrates with common machine learning systems (e.g., MLOps platforms, data stores, etc.) to version and track provenance for all ML artifacts in an organization. Along the way, data scientists and compliance teams benefit from BreezeML’s real-time guardrails, which continually perform compliance-related checks as ML artifacts are developed and deployed. Any violations or detected risks are flagged as warnings with the appropriate context and potential mitigation strategies, and all compliance efforts are documented for internal or external use.
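The guardrail workflow described above can be pictured as a policy check that runs whenever an ML artifact is registered or updated: the artifact’s recorded metadata is compared against the policy, and any gaps are surfaced as warnings. The sketch below is a minimal, hypothetical version of that idea; the `Artifact` structure, policy fields, and check logic are illustrative assumptions, not BreezeML’s actual interface.

```python
# Hypothetical sketch of a "governance guardrail": a policy check run each
# time an ML artifact (model or dataset) is registered. All names and
# policy fields here are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Artifact:
    name: str
    kind: str                      # "model" or "dataset"
    metadata: dict = field(default_factory=dict)

def check_policy(artifact, policy):
    """Return human-readable violations for one artifact under one policy."""
    violations = []
    for key in policy.get("required_metadata", []):
        if key not in artifact.metadata:
            violations.append(
                f"{artifact.name}: missing required field '{key}'"
            )
    return violations

# Example policy: every model must document its owner, intended use,
# and training data before deployment.
policy = {"required_metadata": ["owner", "intended_use", "training_data"]}

model = Artifact(
    name="credit-scorer-v2",
    kind="model",
    metadata={"owner": "risk-team", "intended_use": "loan screening"},
)

for violation in check_policy(model, policy):
    print("WARNING:", violation)   # flags the missing 'training_data' field
```

In a real deployment, checks like this would be triggered automatically by integration with MLOps pipelines and data stores, with each warning carrying enough context for the responsible team to remediate and for the result to be logged as compliance documentation.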

Interested in learning more about how BreezeML can help your organization with AI governance, risk management, and compliance? For more information, please visit breezeml.ai or contact info@breezeml.ai. To schedule a demo, please submit a request here.