Scale-Specific AI Governance for Deployment

AI and Its Governance - Not All Scales Are Equal

· 14 min read

There is no precedent for informationally agentic technology operating within an organization. It's all brand new. With more than two million models at last count, all evolving before our very eyes, the effects reverberate throughout the AI supply chain. So why would it be even remotely reasonable to expect that traditional technological governance techniques within enterprises are sufficient to manage these accelerating dynamics? Answer: it isn't - because they're not.


When things go wrong, whether due to model drift or external drift, malicious weaponization, human error, hallucination or reckless disregard, interested and affected parties quite reasonably demand explanations and detailed receipts. Direct victims, lawyers, customers, investors, insurers, regulators, and the market in general, all want to know the how and why. And that presents a real and unprecedented challenge for enterprises. It's one thing to adjust what you know about; it's quite another to detect what you don't. The ground is shifting beneath our feet, which is more than unsettling for risk management, but there is one thing we know for sure - a “static” response will not make the grade.


One of my go-to paradigms is what I call SCOPE. It's a mnemonic of mine I use to analyze pretty much anything: S for Scale, CO for Context, and PE for Perspective. It's a useful lens, my starting point for interpretation, and it is also how I perceive the overall system of AI and its governance. Applying SCOPE to the AI superstructure and the issues of safety, security and reliability, we see that there are many layers. Each layer is its own level, or Scale, with differing COntexts of measurement and interaction, and similarly varying observer standpoints, or PErspectives, on needs and expectations.


What are those layers? As described by the insightful Jensen Huang, CEO of Nvidia, it is helpful to think of the AI ecosystem as a layered cake: the energy and resources layer; the AI microchip layer; the AI server infrastructure hosting the models themselves; cloud-based and platform architectures; and the integration layer connecting data sources and AI applications. Huang suggests that we need to think about this ecosystem holistically, and I agree. Notwithstanding this geopolitical appreciation, however, I would add that, from a governance perspective, differentiation between the layers is also critical in order to be competitive. Both approaches need to be taken simultaneously. None of these layers should be analyzed in exactly the same fashion as the others in terms of safety, security and reliability; anything else would seem to be a failure of logic. In my view, the scale-invariant ubiquity of these concerns does not equate to their uniform application. For all I know, this may well be his thinking also.


Rather, though forming part of the overall gestalt, each layer embodies its own recipe of risk and reward and requires appropriate treatment. There seems to be a pervasive category error in understanding this issue of scale specificity. This is a sadly all-too-familiar, simplistic and false reductionism, often combined with false binaries. But no problem can be solved by pretending it isn't there. Governance is not an appendage to AI, nor is it in conflict with or a constriction on AI - it is the approach that enables its very use. Ethical, legal, functional, computational, social, temporal, energy-related, ecological, reputational, economic, geopolitical, macroscopic and microscopic, and even civilizational concerns, may all be involved to varying degrees, and they all demand attention. Reality is a stubborn condition, with nuance requiring discernment!


We see this conflation in many areas, the AI arena being no exception. There’s no time here for the wider discussion, but perhaps the most notorious example in this context is the use of the term “agent”. It is used to describe everything from smart back office automation to fully autonomous planning and decision-making bots. As such, the term is used and misused to encompass so many scenarios as to become functionally useless and very often misleading. Marketing hype is not to be confused with the specifics of operation on the ground.


Similarly, the terms “Ethical”, “Responsible” and “Trustworthy” AI are also good cases in point: they are often used interchangeably but, as Kyle David, PhD, helpfully points out, they are usefully distinct in their intended meaning. Ethical AI is fundamentally aspirational. Its principles act as a guide for determining how closely AI systems align with the values underpinning them, such as transparency and fairness. Responsible AI is more grounded in the implementation of these aspirations. It shows up in the actions taken, the teams established for operationalization, and the infrastructure installed to implement and concretize the values of Ethical AI. Finally, Trustworthy AI, as a concept, is focused on the outcomes of AI use. Trustworthy AI, therefore, is the result of attention to the first two categories.


Distinguishing these terms helps ensure that minds are focused and that required actions are not glossed over. The crucial bridge between Ethical AI and Trustworthy AI is the doing of Responsible AI. Where the sausage is made. It helps prevent death by vacuous, unrealized policy that ultimately serves no one. Definitions clearly matter. With this in mind, and turning our attention to AI safety, security and reliability at the deployment level within enterprises, a vital resource is the U.S. NIST AI Risk Management Framework (RMF), with its core functions of Govern, Map, Measure and Manage. The purpose of the deployment-agnostic AI RMF is to support organizations to ‘prevent, detect, mitigate, and manage AI risks’. The framework characterizes Trustworthy AI systems as valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful bias managed.
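
To make that more concrete, here is a minimal sketch, assuming a simple Python review record, of how a deployment team might track evidence against those characteristics. The characteristic names follow NIST's list; the class, fields and gap logic are my own illustration, not an official NIST artifact.

```python
from dataclasses import dataclass, field

# NIST AI RMF trustworthiness characteristics (names from the framework).
# Everything else below - the review record, evidence links, gap check - is illustrative.
NIST_CHARACTERISTICS = [
    "valid_and_reliable",
    "safe",
    "secure_and_resilient",
    "accountable_and_transparent",
    "explainable_and_interpretable",
    "privacy_enhanced",
    "fair_with_harmful_bias_managed",
]

@dataclass
class TrustworthinessReview:
    system_name: str
    evidence: dict = field(default_factory=dict)  # characteristic -> test/audit artifact

    def gaps(self) -> list[str]:
        """Characteristics with no documented evidence yet."""
        return [c for c in NIST_CHARACTERISTICS if not self.evidence.get(c)]

# A deployment review is incomplete until every characteristic has evidence behind it.
review = TrustworthinessReview("contract-summarizer-v2")
review.evidence["privacy_enhanced"] = "dpia-2024-07.pdf"
print(review.gaps())  # the characteristics still awaiting evidence
```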


So what exactly are the Responsible AI action steps required to reach these outcomes at the deployment level necessary for Trustworthy AI? As noted, the set of considerations, tasks and multi-disciplinary collaboration required are by necessity scale-specific. Granted, some organizations are both developers of AI models and deployers, and are considered to be such in the eyes of the law. Many are deployers only, however. Even in a dual role capacity, the roles are quite distinct. For the developer, the AI model is its market offering and utility, with safety driving its starting value.


In the context of enterprise deployment, however, the key to AI value lies not just in these elements but also in clear business objectives; rigorous and ongoing testing and monitoring; model choice flexibility; incentivized and trained multidisciplinary and authorized teams; role-based and configurable access controls; robust threat detection and containment; thorough workflow integration; precise data labeling, mapping and selection; privacy by design coupled with standardized use procedures; effective third party management; human oversight and look-back auditability of outputs from the AI system. These are the necessary systemic features and action steps - the true implementation of Responsible AI - that enable organizations to use AI as a business asset as opposed to a disconnected technology. In other words, it is AI governance that informs the value of AI in deployment, turning a “thing” into a process and thereby generating both positive and measurable results.
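
To give a flavor of just two of these features, role-based access controls and look-back auditability, here is a minimal sketch in Python. The roles, tasks and policy are hypothetical placeholders, not a recommended production design; a real deployment would draw roles from an identity provider and write to an append-only, access-controlled log.

```python
import datetime
import json

# Hypothetical role-to-capability policy.
ROLE_POLICY = {
    "analyst": {"summarize", "classify"},
    "engineer": {"summarize", "classify", "code_review"},
}

AUDIT_LOG = []  # stand-in for an append-only, access-controlled store

def call_model(user: str, role: str, task: str, prompt: str, model_fn) -> str:
    """Enforce role-based access and record an auditable trace of each AI call."""
    if task not in ROLE_POLICY.get(role, set()):
        raise PermissionError(f"Role '{role}' is not authorized for task '{task}'")
    output = model_fn(prompt)
    AUDIT_LOG.append(json.dumps({
        "timestamp": datetime.datetime.utcnow().isoformat(),
        "user": user,
        "role": role,
        "task": task,
        "prompt": prompt,
        "output": output,
    }))
    return output

# Example: call_model("a.chen", "analyst", "summarize", "Summarize the Q3 risk memo", some_model)
```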


This leads us to another important idea - the nature of information itself. Information is the lifeblood of any organization. Indeed, of society itself! After all, we humans are storytellers. But information is to be distinguished from its raw material - data. It is interpretation, framing and decision-making that turns data into information. Assumptions, values, language, logic and reasoning are involved. And here’s the rub. Since information can be so influential, the potential for harm by compromised, biased or inaccurate AI systems is great. Of course, not all information is of equal value. Chatbot interactions do not carry the same weight as mission-critical analysis, complex and interdependent computer code, professional advice or other sensitive matters. Risk stratification is required. However, it is important to understand that system inaccuracy is not the only concern. Since AI is agentic and convincingly persuasive, it is a powerful tool in the hands of cybercriminals in social engineering and other attacks.
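
A rough sketch of what that stratification might look like in practice, assuming hypothetical tiers and controls that each organization would define for itself:

```python
from enum import Enum

class RiskTier(Enum):
    LOW = 1     # e.g. general chatbot Q&A
    MEDIUM = 2  # e.g. drafts of customer communications
    HIGH = 3    # e.g. professional advice, production code, regulated decisions

# Illustrative policy only: the higher the tier, the heavier the controls.
TIER_CONTROLS = {
    RiskTier.LOW: {"human_review": False, "logging": "sampled"},
    RiskTier.MEDIUM: {"human_review": True, "logging": "full"},
    RiskTier.HIGH: {"human_review": True, "logging": "full", "second_reviewer": True},
}

def controls_for(tier: RiskTier) -> dict:
    """Look up the controls a given class of information requires."""
    return TIER_CONTROLS[tier]
```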


Weaponization of information by bad actors and the potential for misalignment of AI systems in deployment scenarios stem in part from the fact that AI is, at root, a seemingly authoritative source of information. Outputs seem convincing even when the information is fabricated. The best advice can only be to proceed with caution. Some even consider AI itself to be a potential rival intelligence in competition with human flourishing, raising the possibility of human extinction! Whether time shows this to be true or not, the extremity of the risk hardly recommends a roll of the dice.


So for these reasons and others, it is surely necessary to think differently about the risks associated with AI use. Applying the NIST rubric, true “Management” demands a highly proactive approach in addition to advanced reactive capability. It is not enough simply to seek to contain AI risks when they arise. We are compelled to develop a fit-for-purpose control infrastructure to govern and monitor AI use in real time. An interface with AI alone is not adequate for the job at hand. This is far more than training AI models and personnel in advance and hoping for the best. No one wants to be putting out fires at the eleventh hour - and no one can at scale.
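
As one illustration of proactive, real-time control rather than eleventh-hour firefighting, consider a simple circuit-breaker sketch that suspends an AI integration automatically when failures or policy violations spike. The thresholds and behavior are assumptions for illustration only.

```python
import time

class CircuitBreaker:
    """Illustrative real-time containment: block further AI calls when recent
    failures exceed a threshold, instead of waiting for a post-mortem."""

    def __init__(self, max_failures: int = 5, window_seconds: int = 60):
        self.max_failures = max_failures
        self.window_seconds = window_seconds
        self.failures: list[float] = []
        self.open = False  # "open" means calls are blocked

    def record_failure(self) -> None:
        now = time.time()
        self.failures = [t for t in self.failures if now - t < self.window_seconds]
        self.failures.append(now)
        if len(self.failures) >= self.max_failures:
            self.open = True  # containment: stop traffic and alert the owning team

    def allow_call(self) -> bool:
        return not self.open
```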


What principles, then, should inform this governance approach? How about: don’t trust completely, and verify continuously. Active prevention. Investment in infrastructure for mitigation and explainability for when things do go wrong, which inevitably they will at times. Building for the unknown. This is a vital shift in understanding. In this context, while legal compliance is important, the issue of AI governance runs far deeper than meeting legal standards. Ethical, social, market and reputational risks are present no matter the legal framework. Without guardrails and a high degree of real-time observability for risk management, there is arguably no measurable value to be derived from AI at all - only risk. I argue that using AI is rather like driving a vehicle - reward and risk are inseparable.
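
Here is a minimal sketch of the “verify continuously” idea: every output passes through explicit checks before release, and the reasons for any rejection are retained for explainability. The specific checks and regular expressions are crude placeholders, not vetted detectors.

```python
import re

def contains_possible_pii(text: str) -> bool:
    # Placeholder heuristic: a US SSN-like pattern. Real systems would use vetted detectors.
    return bool(re.search(r"\b\d{3}-\d{2}-\d{4}\b", text))

def contains_unverified_citation(text: str) -> bool:
    # Placeholder heuristic: bracketed citation markers flagged for human verification.
    return bool(re.search(r"\[\d+\]", text))

def verify_before_release(output: str) -> tuple[bool, list[str]]:
    """Don't trust completely, verify continuously: check every output and keep the reasons."""
    reasons = []
    if contains_possible_pii(output):
        reasons.append("possible PII detected")
    if contains_unverified_citation(output):
        reasons.append("citation requires human verification")
    return (not reasons, reasons)

ok, reasons = verify_before_release("Revenue grew 4% [3] and the SSN 123-45-6789 was found.")
print(ok, reasons)  # False, with both reasons recorded for the audit trail
```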


Before we wrap, let's examine briefly the main arguments for why governance is the key to the effective deployment of AI systems within enterprises, the key features required, and the benefits to be derived:


  • Strategic Alignment and Value Creation:


Governance ensures that all AI initiatives are aligned with the organization's broader business strategy and objectives. Specificity matters: deployment should be matched to clearly defined use cases.


  • Trust and Accountability:


Clear governance establishes accountability mechanisms within organizations, defining who is responsible when an AI system is used incorrectly or produces errors, security threats, violations or biased outcomes.


  • Ethical Decision-Making:


AI systems perpetuate existing biases present in data, absent modification. Governance provides the necessary checks and balances, including bias detection and mitigation strategies, to ensure AI outputs are fair, transparent, and ethical (a minimal illustration follows this list).


  • Risk Mitigation and Compliance:


International governance standards such as ISO/IEC 42001, in conjunction with regular bias and other impact assessments and technical control measures, help enterprises identify, assess and mitigate risks associated with AI. Such risks extend beyond ethical violations to reputational harm, data privacy breaches, and regulatory non-compliance.


  • Ensuring Data Quality and Security:


Governance protocols mandate stringent and standardized data management practices, ensuring AI systems are trained on and are used with high-quality, secure, and appropriate data.


  • Operational Efficiency and Scalability:


Frameworks and control architectures, combined with targeted and ongoing training, provide clear processes for deploying and monitoring AI systems across configurable teams, including in conjunction with third parties. Technical privacy guardrails, mechanisms for human oversight in workflow as needed, and audit-trail explainability are all essential.
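
To ground the bias-detection and data-quality points above, here is a minimal, illustrative check that compares positive-outcome rates across groups. The groups, data and 0.1 threshold are hypothetical; a real program would use validated fairness metrics and documented, reviewed thresholds.

```python
# Compare positive-outcome rates across groups (a simple demographic-parity gap).
# All names, data and the threshold below are illustrative assumptions.

def positive_rate(outcomes: list[int]) -> float:
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def demographic_parity_gap(outcomes_by_group: dict[str, list[int]]) -> float:
    rates = [positive_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

outcomes = {"group_a": [1, 1, 0, 1, 0], "group_b": [0, 1, 0, 0, 0]}
gap = demographic_parity_gap(outcomes)
if gap > 0.1:  # illustrative threshold
    print(f"Parity gap {gap:.2f} exceeds threshold: trigger review and mitigation")
```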



R. Scott Jones

About R. Scott Jones

R. Scott Jones, Esq. CIPP/US is a New York attorney with an increasing focus on privacy law issues. He is also lead partner of The Generative Company, a generative AI consulting firm, and co-founder of an AI Governance SaaS platform currently in development.

DISCLAIMER

The content here is for informational purposes only and does not constitute tax, business, legal nor investment advice. Protect your interests and consult your own advisors as necessary.