October 17, 2024
Why AI Governance Is Now a Must to Sell AI Products to Enterprise Customers

Companies are increasingly turning to artificial intelligence to enhance the appeal of their products. However, integrating AI into a product introduces various risks related to the quality and fairness of the service it provides, particularly when AI is involved in automated decision-making. These risks are often subject to regulatory scrutiny, as the global trend towards governing and regulating AI gains momentum. A case in point is the recent provisional agreement on the EU AI Act, which proposed fines of up to 7% of a company's annual global revenue for non-compliance.

At first glance, the need to mitigate these risks might appear to be a problem limited largely to companies that develop foundational AI products, such as large language model providers like OpenAI. In reality, companies that purchase those foundational products, or build services on top of them, face the same issue. Using an AI product from a third-party vendor exposes an enterprise to hidden risks in the underlying model, because its quality and fairness are difficult to assess accurately from the outside. Companies that deploy AI in their services are therefore responsible not only for validating that their own in-house model training practices follow the latest responsible AI frameworks and regulations, but also for ensuring that third-party vendors build and operate their AI products in a compliant manner. As a result, a growing number of enterprises insist that vendors furnish proof of an established AI governance system and disclose information about their model training and validation processes.

As a reflection of this trend, several Fortune 500 companies have recently created internal AI governance boards, consisting of legal and technical experts tasked with examining AI solutions offered by vendors or the open-source community. The evaluation usually takes the form of an iterative question-and-answer process, with particular emphasis on whether the vendor has implemented an AI governance system that continuously monitors model development pipelines, stays in line with the best practices set forth by leading AI policy documents such as the EU AI Act and the NIST AI Risk Management Framework, and promptly identifies risks of non-compliant behavior. To better understand the criteria that AI vendors are assessed against when selling to enterprise customers, we asked the AI governance boards of several large U.S. companies for the key evaluation questions they put to vendors, which include:

What is your philosophy on ethics and AI?

Do you have policies and standards intended to mitigate risks in AI or machine learning systems, such as adopting the best practices outlined in the NIST AI Risk Management Framework or the EU AI Act? If yes, please provide evidence of such policies.

Are models assessed before being put into production? If yes, how?

Do you maintain an inventory of all your models and assess them regularly? If yes, how?

What actions are taken when risks in a model are identified? Please describe the tools you use for model risk identification and remediation.

Could you confirm whether generative AI tools such as ChatGPT are used in your service?

Could you indicate any sensitive data categories or access permissions you may need for model training and validation (e.g., PHI, PII, PCI, Medicare, or Medicaid data)?

Were your AI models trained with properly licensed content? If yes, please provide evidence.

Does your company face any pending AI-related litigation, threats of litigation, or regulatory inquiries?

Given the increasingly serious consequences that non-compliance with AI regulations could have for their sales revenue, vendors of AI products must prepare their answers to the questions above carefully and thoroughly. In particular, they should provide robust evidence that they can monitor the data flowing into a model and promptly raise an alarm when issues arise anywhere in the model's production and deployment pipeline. These are exactly the capabilities supported by BreezeML, a platform that helps enterprise organizations build their AI governance system from the ground up.

BreezeML tracks the development history of AI models, centralizes related documentation and data, and generates instant and customizable reports for auditors, allowing vendors of AI products to quickly and easily demonstrate to customers that their AI is transparent and trustworthy. Because BreezeML notifies its users of any non-compliant AI training behavior as soon as it occurs, vendors can have confidence from the very beginning of their model training process that they are developing technology that meets the standards required to earn their clients’ trust.
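
To make the idea of "monitoring a model inventory and flagging non-compliant behavior" more concrete, here is a minimal sketch of what such an automated check can look like in principle. The record fields, names, and alert logic below are illustrative assumptions only; they are not BreezeML's actual API or data model.

```python
# Minimal, hypothetical sketch of an automated governance check.
# All names and record fields are illustrative assumptions,
# not BreezeML's actual interface or data model.

from dataclasses import dataclass, field
from typing import List

@dataclass
class ModelRecord:
    """One entry in a model inventory (illustrative fields only)."""
    name: str
    training_data_licensed: bool       # evidence of properly licensed training content
    evaluated_before_production: bool  # pre-deployment assessment performed
    sensitive_data_used: List[str] = field(default_factory=list)      # e.g. ["PHI", "PII"]
    approved_sensitive_data: List[str] = field(default_factory=list)  # categories the vendor is cleared to use

def compliance_issues(record: ModelRecord) -> List[str]:
    """Return the governance issues found for a single model."""
    issues = []
    if not record.training_data_licensed:
        issues.append("no evidence of properly licensed training data")
    if not record.evaluated_before_production:
        issues.append("no pre-production assessment on record")
    unapproved = set(record.sensitive_data_used) - set(record.approved_sensitive_data)
    if unapproved:
        issues.append(f"unapproved sensitive data categories: {sorted(unapproved)}")
    return issues

def audit(inventory: List[ModelRecord]) -> None:
    """Flag non-compliant models as soon as they appear in the inventory."""
    for record in inventory:
        for issue in compliance_issues(record):
            print(f"[ALERT] {record.name}: {issue}")

if __name__ == "__main__":
    audit([
        ModelRecord("churn-predictor", True, True, ["PII"], ["PII"]),
        ModelRecord("claims-triage", False, True, ["PHI"], []),
    ])
```

In a production governance system, checks of this kind would run continuously against the model development pipeline and be backed by auditable evidence and reports, rather than a standalone script.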

Interested in learning more about BreezeML? Please visit breezeml.ai for more information. To book a demo, please contact info@breezeml.ai.
