USING AI IS A LOT LIKE DRIVING….
Imagine AI as an automobile. Not as a driverless car (even though such a vehicle is closely monitored by centralized control systems) but as a regular vehicle that transports its passengers from A to B with a driver.
Such vehicles come in all shapes and sizes, of course, because they are used for vastly different purposes. Yet all of them carry common risks, both to their occupants and to those around them as they move through their environment. AI is very similar. And in the deployment context, the parallel with the automobile extends even further. AI systems and automobiles are not merely things; they are more akin to processes. Tools require users.
Let’s explore the features of the automobile and their AI counterparts from the standpoint of capability, performance, safety, security and reliability in the context of enterprise deployment. In this vein, it’s helpful to think in terms of three distinct categories of features - Production Essentials, Driving Controls (i.e. operational use) and Incident Safety Mechanisms (damage limitation for when things go wrong).
PRODUCTION ESSENTIALS
- Design and Purpose - Goal
The vehicle could be anything from an everyday commuter car (comparable to a chatbot, for example), to a working pickup truck (perhaps a classification or generative model), to a high-end sports car (think healthcare diagnosis, finance and defense). Both the automobile and the AI model must be fit for purpose. For AI, what is the business objective? What is the applicable use case?
- Chassis - Architecture
Similarly, the chassis of a vehicle is designed for a particular kind of performance. AI models have different architectures, each optimized for different functions. Some specialize in visual outputs, some provide decision-tree analysis, some focus on generative natural language processing.
- Suspension - Flexibility and Quality
Any driver knows to expect the unexpected. The vehicle itself needs to withstand shocks, roll with the road and absorb the odd pothole, some of which can feel like chasms. Suspension avoids disintegration. So too should the use of AI soften the impact of its potential hazards. Workflow architectures, model flexibility, training and standardized procedures all play a role in maintaining momentum and output quality.
- Engine - Compute Capability
The size and capability of the engine are comparable to the number of parameters in the model and the volume and breadth of its training data. Some models are so extensive that they are considered foundation models. Others may be derivatives (smaller models) built for specific functions. Don’t make the mistake of thinking smaller models are less effective, however. They are carrying less baggage too.
- Wheels - Traction
The wheels are literally where the rubber meets the road - where the driver and vehicle become one, the net effect of all the vehicle’s components and the driver’s behaviors working in concert. Model features, AI fluency, training, processes, observation and repeatability all play their role in allowing AI use to gain traction. Truly driving forward.
-------------------------------------------------
DRIVING CONTROLS
- Fuel - Data
Every automobile runs on fuel. The fuel’s quality dictates the health of the system and the quality of the ride. Garbage in, garbage out. Data is just the same. The data used to fine-tune a model for a particular function is critical. If the need calls for premium fuel, then rigorous cleansing, labeling and high-grade classification are required; nothing less will do.
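For readers who like to look under the hood, here is a minimal sketch, in Python, of what basic "fuel quality" checks might look like before fine-tuning. The record fields and the label set are purely illustrative assumptions, not a prescribed schema.

from collections import Counter

ALLOWED_LABELS = {"approve", "reject", "escalate"}  # hypothetical label set

def check_records(records):
    """Flag common 'low-grade fuel' problems: empty text, unknown labels, duplicates."""
    issues, seen = [], set()
    for i, rec in enumerate(records):
        text = (rec.get("text") or "").strip()
        label = rec.get("label")
        if not text:
            issues.append((i, "empty text"))
        if label not in ALLOWED_LABELS:
            issues.append((i, f"unknown label: {label!r}"))
        if text and text in seen:
            issues.append((i, "duplicate text"))
        seen.add(text)
    return issues

if __name__ == "__main__":
    sample = [
        {"text": "Refund request for order 1234", "label": "approve"},
        {"text": "", "label": "reject"},
        {"text": "Refund request for order 1234", "label": "maybe"},
    ]
    for idx, problem in check_records(sample):
        print(f"record {idx}: {problem}")
    print("label balance:", Counter(r["label"] for r in sample))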
- Oil - Ongoing Monitoring
As we all know, engines seize up without the lubrication of oil. Algorithmic testing, continuous monitoring, and periodic re-evaluation of performance and applicability are all best practices for a well-oiled operation.
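As a rough illustration, the "oil check" can be as simple as comparing recent performance against a deployment-time baseline and flagging when it drifts. The metric, the threshold and the source of the recent figure in this sketch are all assumptions.

# Baseline figures below are illustrative assumptions.
BASELINE_ACCURACY = 0.92      # measured at deployment
DEGRADATION_TOLERANCE = 0.05  # acceptable drop before re-evaluation

def needs_reevaluation(recent_accuracy: float) -> bool:
    """Return True when performance has drifted enough to warrant review."""
    return (BASELINE_ACCURACY - recent_accuracy) > DEGRADATION_TOLERANCE

if __name__ == "__main__":
    recent = 0.84  # e.g. accuracy on last week's human-reviewed sample
    if needs_reevaluation(recent):
        print(f"Drift detected ({recent:.2f} vs {BASELINE_ACCURACY:.2f}): schedule re-evaluation")
    else:
        print("Performance within tolerance")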
- Driver’s License - User Training
Responsible use of a car requires knowledge of the vehicle's condition, judgment about speed and constant situational awareness, combined with observation of road markings and signs as appropriate. All drivers are required to reach a minimum standard of proficiency before being licensed. Similarly, AI use within an enterprise requires training and a minimum degree of AI fluency. In operation, attention to model capabilities, prompting techniques, data privacy standards, and any warning signs in outputs is necessary behavior for the safe AI driver.
- Mirrors - Conformity with Model Cards
Mirror - Signal - Maneuver, as they say, which is code for Check, Notify and Act. It may be questionable whether all actual drivers are signed up to this mantra, but failing to follow the rubric with AI will likely end badly. It starts with testing but also extends to the active user: is the AI model doing what it says on the tin? Is the architecture behind the model doing what is intended? Alignment is required. Model cards should provide clear developer instructions on how to use the models in question. Deviation may result not simply in poor results but also in a determination that a deployer should be considered a developer in the eyes of the law, with additional legal implications.
- Dashboard - Control Panel
The dashboard is the automobile’s monitoring system. It reports not just speed but also engine efficiency, fuel consumption, system health, distance traveled, and even the route navigated. So it is with AI. Observability and access to metrics are vital. Return on investment, key performance indicators on the personnel involved and the value generated, and indicators of compliance health are all critical information in deployment. What architecture is in place to automate and share this information?
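As a purely illustrative sketch, such a dashboard snapshot might combine usage, value and compliance-health figures into a single record that can be automated and shared. The field names here are assumptions, not a standard schema.

from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class AIDashboardSnapshot:
    period_end: date
    active_users: int
    tasks_completed: int
    estimated_hours_saved: float     # a rough proxy for ROI
    policy_violations_flagged: int   # compliance-health indicator
    incidents_open: int

if __name__ == "__main__":
    snapshot = AIDashboardSnapshot(
        period_end=date(2024, 6, 30),
        active_users=48,
        tasks_completed=1250,
        estimated_hours_saved=310.0,
        policy_violations_flagged=2,
        incidents_open=0,
    )
    # In practice this would feed a reporting tool; here we simply print it.
    print(asdict(snapshot))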
- Lights - Blueprints and Protocols
Lights are important not only for driving in the dark but also for daytime visibility. In the world of AI, we can ask (and confirm through workflow monitoring) whether blueprints are being used for AI prompts in collaborative projects. Blueprints help leverage best practices across the enterprise for economies of scale. Adherence to protocols demonstrates that AI Use Policies are being properly and consistently applied.
- Indicator Signals - Disclosures
Just as a car signals its intent to turn or alerts others to a hazard, it is best practice to disclose the use of AI and to abide by ethical principles of fairness and transparency when embedding AI in enterprise operations. Public-facing AI disclosures, the watermarking of AI-generated multimedia content, and explainability charters detailing how AI is being deployed are all important signals to the world at large.
- Keys and Locks - Cybersecurity
An automobile without locks lacks basic security. In the AI world, multi-factor authentication, role-based access controls, and encryption techniques, along with zero-trust architectures and AI threat detection, are all central to cybersecurity. Oh, and without the keys you cannot even start the vehicle!
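To make the keys and locks concrete, here is a minimal, hypothetical sketch of a role-based access check placed in front of a model call. The roles, permissions and placeholder model invocation are illustrative assumptions only.

ROLE_PERMISSIONS = {
    "analyst":  {"summarize", "classify"},
    "engineer": {"summarize", "classify", "fine_tune"},
    "viewer":   set(),
}

def authorize(role: str, action: str) -> bool:
    """Only start the engine if the user holds the right key."""
    return action in ROLE_PERMISSIONS.get(role, set())

def call_model(role: str, action: str, prompt: str) -> str:
    if not authorize(role, action):
        raise PermissionError(f"role {role!r} may not perform {action!r}")
    # Placeholder for the real model invocation (assumed to exist elsewhere).
    return f"[{action}] {prompt[:40]}..."

if __name__ == "__main__":
    print(call_model("analyst", "summarize", "Quarterly supplier contract review notes"))
    try:
        call_model("viewer", "fine_tune", "confidential HR data")
    except PermissionError as exc:
        print("blocked:", exc)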
- Rear-view Camera - Audit Trails
We don't have eyes in the back of our heads - unless we are in an automobile! In AI, as we run multi-party, multi-discipline projects, understanding workflow, interventions and modifications is crucial for real-time responsiveness. The ability to look back after the fact, once any given project is complete, is also required. Audit assessment likewise depends on recorded data for compliance purposes.
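As an illustration, the rear-view camera might be nothing more than an append-only log of who did what, with which model, and when. The fields recorded and the file location in this sketch are assumptions.

import json
from datetime import datetime, timezone

AUDIT_LOG = "ai_audit_log.jsonl"  # hypothetical file path

def record_event(user: str, model: str, action: str, detail: str) -> None:
    """Append one auditable event; nothing is ever overwritten."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "model": model,
        "action": action,
        "detail": detail,
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as log:
        log.write(json.dumps(entry) + "\n")

if __name__ == "__main__":
    record_event("jdoe", "contract-summarizer-v2", "prompt_submitted",
                 "Summarize supplier agreement #1142")
    record_event("jdoe", "contract-summarizer-v2", "output_edited",
                 "Removed incorrect clause reference before filing")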
- Accelerator Pedal and Gears - Innovation and Scaling
Sometimes you just want to give it some gas. Even if the pedal is not to the metal, however, knowing when to hold 'em and when to fold ‘em makes all the difference. The accelerator in AI could, for example, be exploring the varied connections between different subject areas or using chain-of-thought reasoning to penetrate deeper layers. When baseline procedures are in place and positive results have been achieved on relatively straightforward use cases, perhaps in marketing, then it may be time to kick it up a gear. Increasing complexity and scale require collective confidence and trust. Developing reliable techniques and understanding how to leverage blueprints both create space for further experimentation.
- Steering Wheel - Governance
Speaking of wheels, imagine driving without one of these! The AI counterpart has to be the AI Governance Policy. A classic misconception around governance is that it constricts AI use, but that is nonsensical. Far from it - governance presupposes use! AI is an innate synthesis of risk and reward. As such, governance, as expressed in a formal policy for implementation and ongoing evaluation, is simply the other side of the same coin as use. Steering also keeps you in your lane. There may be danger ahead. The AI model may have drifted from the reality it is intended to serve. Or conditions may have changed enough that a course correction is required. Keep your eyes on the road.
- Brakes - Self-Regulation
It's all very well moving forward, but do you know how to slow down? AI presents a broad canvas of possibilities, but it is important to maintain focus and bring the entire team along for the journey. The augmentation of human cognition is empowering; substituting for it is the road to intellectual atrophy within the organization. Knowing how to self-regulate is essential. The importance of making discriminating decisions about whom to involve, and when, cannot be overstated. That means a flexible but secure system for workflow management. It also requires centralized oversight of AI operations. And once the wheels are in motion, users need a facility to make an emergency stop if an AI model is misbehaving - terminating operations and escalating the issue. Cybersecurity can then take over and, with luck, provide an alternative vehicle in short order to minimize disruption for the driver and carry on the journey.
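To make the emergency stop tangible, here is a hypothetical sketch of a shared kill switch that any user can trip, halting further AI operations and escalating the issue. The notification mechanism is a placeholder assumption.

import threading

class EmergencyStop:
    """A simple kill switch checked before every AI operation."""

    def __init__(self):
        self._stopped = threading.Event()
        self._reason = None

    def trip(self, reason: str) -> None:
        self._reason = reason
        self._stopped.set()
        self._notify(reason)

    def check(self) -> None:
        if self._stopped.is_set():
            raise RuntimeError(f"AI operations halted: {self._reason}")

    def _notify(self, reason: str) -> None:
        # Placeholder for paging the security / oversight team.
        print(f"[escalation] emergency stop tripped: {reason}")

if __name__ == "__main__":
    stop = EmergencyStop()
    stop.check()                      # normal operation proceeds
    stop.trip("model returning fabricated citations")
    try:
        stop.check()                  # any further operation is blocked
    except RuntimeError as exc:
        print(exc)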
- Reckless Driving - Reckless Use of AI
It speaks for itself. In AI terms, disregard for AI use disclosure, negligence around deepfakes, shadow AI (unauthorized use within organizations), unmoderated bias in training data or outputs, and automated decision-making without human review are all high-risk activities.
-------------------------------------------------
INCIDENT SAFETY MECHANISMS
- Seat Belts - Damage Limitation
With all the best will in the world, when driving we cannot legislate for that which we cannot control. In AI, the seat belts that enable damage limitation come in the form of technical controls, audit trails that make outputs explainable, and human oversight of decision-making and content where necessary. They demonstrate reasonable efforts to minimize loss and also act as the record of processing activity.
- Airbags - Incident Response
We covered the role of brakes - but accidents do happen. At the point of impact, airbags can be lifesavers. They should spring into action at the very moment they are required. In AI terms, they equate to the mobilization of incident response to suddenly emerging threats. The important point to remember is that, in order to act, pre-installed sensors are required. To be truly effective, incident response should also be accompanied by real-time, coordinated communication strategies, including with third parties.
------------------------------------------------
Ultimately, of course, this is nothing more than an analogy, a thought experiment. We should not take these things literally nor too seriously (although it wouldn’t be the first time that a metaphor has been mistaken for reality). Nevertheless, there are some interesting parallels here. Both vehicles and AI are agents abroad in the world. Both have drivers. Both have financial incentives pushing their industries forward. Both are now ubiquitous, found in every location and performing almost every function in their respective domains. For the vehicle, the transportation of the physical; for AI, the communication of the mental. The challenge with both is to extract their benefits while managing their respective risks.
About R. Scott Jones
R. Scott Jones, Esq. CIPP/US is a New York attorney with an increasing focus on privacy law issues. He is also lead partner of The Generative Company, a generative AI consulting firm, and co-founder of an AI Governance SaaS platform currently in development.

