The Horse, the Car, and the AI Doctor
The recent news out of Utah feels like a glitch in the Matrix—or at least a glitch in the regulatory system.
Last week, the Utah Medical Licensing Board called for the immediate suspension of a first-in-the-nation pilot program involving Doctronic, an AI startup authorized by the state to autonomously renew prescriptions for chronic conditions. The twist? The Board claims it only found out the pilot existed after it was already live.
It is a classic collision between "move fast and break things" tech culture and the "do no harm" ethos of medicine. But while the procedural failure is obvious, the reaction from the medical community reveals a deeper tension about the future of care.
The Power vs. Predictability Problem
Think back to the early 1900s. For centuries, the horse was the predictable standard. A horse has biological safeguards: it has a "mind of its own," it won't walk off a cliff, and it generally stops when it senses danger.
Then came the car. In its early, unregulated state, the car was undeniably more dangerous than a horse. It had no instinct to stop for a pedestrian, no standardized brakes, and at high speeds its failures were catastrophic. But the car was also vastly more effective. It eliminated the urban sanitation crises of the horse era and unlocked a level of societal mobility that biology simply couldn't provide.
In healthcare, we are currently at that 1908 inflection point.
Why the "Proper Channels" Matter
Doctronic and the Utah Office of AI Policy attempted to drive a Ferrari through a crowded intersection without first installing stop signs. By bypassing the Medical Licensing Board, they ignored the fact that in medicine, regulation is the infrastructure of trust.
An unregulated AI "doctor" can cause systemic harm at a velocity a human practitioner never could. When 11 out of 14 board members sign a letter demanding a halt, they aren't just being "anti-tech"—they are pointing out that you cannot have a road system without a highway code.
From Gatekeepers to Architects
The temptation for medical boards is to issue a blanket "no" to autonomous AI. But the reality is that our current "horse and buggy" system is straining under the weight of physician burnout and massive provider shortages.
We didn't progress as a society by banning the car to protect the horse; we progressed by inventing traffic lights, speed limits, and driver’s licenses.
The Medical Board's true opportunity here isn't to stop the clock, but to become the Department of Transportation for AI care. Instead of a reactive veto, we need it to proactively define the "rules of the road":
What is the "clinical driving test" for an algorithm?
How do we standardize the "brakes" (escalation protocols to human doctors)?
What does a "speed limit" look like for autonomous prescribing?
The Bottom Line
Regulation shouldn't be the enemy of innovation; it is the safety frame that makes innovation viable for the public. If we treat AI like a faster horse, we miss the point. If we treat it like a car, we realize that we don't need fewer cars—we need better roads.
Utah’s pilot may be on ice for now, but the conversation it started is just beginning. The question isn't whether AI belongs in the clinic, but who is going to write the manual on how to drive it safely.