Utah opens the door a little wider for AI in medicine

Utah has decided that an AI system can renew some psychiatric prescriptions without a doctor directly handling each refill. It is only the second time that Utah, or any state, has handed this kind of clinical authority to artificial intelligence. In other words, the machine now gets a say, at least for a very specific slice of care.

State officials argue the move could lower costs and ease staffing shortages. Physicians, naturally, are less enthusiastic. Their concerns center on the same questions that tend to follow AI into any room: how it works, what it misses, and whether the promised efficiency is just a polished way of saying "good luck."

What the pilot actually allows

The one-year pilot, announced last week, lets Legion Health's AI chatbot renew certain psychiatric medications in limited cases. The San Francisco startup says Utah patients can get "fast, simple refills" through a $19-a-month subscription. The program is set to begin sometime in April, though for now the company is only running a waitlist.

The scope is intentionally narrow. Under Legion's agreement with Utah's Office of Artificial Intelligence Policy, the chatbot can renew only 15 lower-risk maintenance medications that were already prescribed by a clinician. Those include:

  • fluoxetine (Prozac)
  • sertraline (Zoloft)
  • bupropion (Wellbutrin)
  • mirtazapine
  • hydroxyzine

These drugs are commonly used for anxiety and depression.

Patients also have to fit a narrow clinical profile. They must be considered stable, and anyone with a recent dose or medication change, or a psychiatric hospitalization within the last year, is excluded. A healthcare provider must check in after every 10 refills or after six months, whichever happens first.

The chatbot cannot issue new prescriptions. It also cannot handle medications that require closer clinical monitoring, including drugs that need blood-test supervision. Controlled substances are off limits too, which rules out many ADHD medications.

That leaves out benzodiazepines, antipsychotics, and lithium, which is widely regarded as a gold-standard treatment for bipolar disorder. So the pilot is not dealing with the complicated cases, which is reassuring in the way a pilot program is often reassuring: by avoiding most of the things that might go wrong.
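Taken together, the rules above amount to a set of hard gates. As a rough sketch of that logic (the drug names and thresholds come from the article's description of the pilot; the function, its parameters, and the idea that Legion implements it this way are all hypothetical):

```python
# Illustrative only: five of the 15 allowed lower-risk maintenance
# medications named in the article. The real list is longer.
ALLOWED_MEDS = {"fluoxetine", "sertraline", "bupropion",
                "mirtazapine", "hydroxyzine"}

def eligible_for_ai_refill(med, stable, recent_dose_change,
                           recent_med_change, hospitalized_past_year,
                           refills_since_review, months_since_review):
    """Return True only if every gate described in the pilot passes."""
    if med not in ALLOWED_MEDS:
        return False  # not on the lower-risk maintenance list
    if not stable:
        return False  # patient must be considered stable
    if recent_dose_change or recent_med_change or hospitalized_past_year:
        return False  # excluded by the pilot's clinical profile
    if refills_since_review >= 10 or months_since_review >= 6:
        return False  # human check-in due, whichever comes first
    return True
```

Note that every gate is a refusal: the function can only say no for any of several reasons, which is the shape of a pilot designed to avoid, rather than manage, risk.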

How the system is supposed to work

To use the service, patients must opt in, verify their identity, and prove they already have a prescription. Legion says that can be done with a photo of the label or pill bottle.

After that, the chatbot asks about symptoms, side effects, and whether the medication is working. It also screens for suicidal thoughts, self-harm, severe reactions, and pregnancy so it can flag possible problems.

If the answers fall outside the pilot's low-risk criteria, the case is supposed to go to a clinician before any refill is issued. Patients and pharmacists can also ask for a human review.
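The routing described above can be sketched as a simple triage function. Everything here is an illustration: the flag names, the `triage` function, and its return values are inventions, not Legion's actual implementation.

```python
# Red flags the article says the chatbot screens for.
RED_FLAGS = {"suicidal_thoughts", "self_harm", "severe_reaction", "pregnancy"}

def triage(answers, human_review_requested=False):
    """Route a refill request to a clinician or to the automated path.

    answers: dict of screening responses, e.g. {"pregnancy": False, ...}
    """
    if human_review_requested:
        return "clinician_review"  # patients and pharmacists can ask for this
    if any(answers.get(flag) for flag in RED_FLAGS):
        return "clinician_review"  # screening hit: outside low-risk criteria
    if not answers.get("medication_working", False):
        return "clinician_review"  # symptoms or side effects need a human look
    return "ai_refill"
```

The design point worth noticing is that the automated path is the narrow default-off branch: any flag, any doubt, or any request routes to a human before a refill is issued.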

When the pilot was announced, state officials said that by "safely automating the renewal process for maintenance medications," they can let patients get care "much more quickly and affordably." They also said the program could free providers to "focus their time on more complex, higher-risk patient needs" and help address shortages that have left 500,000 Utah residents without access to mental health care.

Legion's cofounder and CEO, Yash Patel, has described the effort in even bigger terms. He has called it a global first that could dramatically expand access to healthcare and mark "the beginning of something much bigger than refills."

Why psychiatrists are not sold

Not everyone is ready to hand psychiatric refills to a chatbot and call it progress.

Brent Kious, a psychiatrist and professor at the University of Utah School of Medicine, told The Verge that the "advantages of an AI-based refill system may be overstated." He said the tool "will not increase access for those who are most in need of care." The people most likely to use it, he noted, would already need to be on a treatment plan with a psychiatrist.

He also raised a broader concern: automation could contribute to what he called an "epidemic of over-treatment" in psychiatry, where some patients remain on medication longer than they actually need to.

John Torous, director of digital psychiatry at Beth Israel Deaconess Medical Center and a professor of psychiatry at Harvard Medical School, made a related point. Some patients do need long-term psychiatric medication, he said, while others benefit from reducing or stopping it. That kind of decision, he said, "require[s] more active management, changes, and careful consideration." A chatbot is not exactly famous for nuance.

Torous also questioned whether any current AI system can understand the full context of a person's medication plan. Prescribing is not just a matter of checking a box for interactions. Kious put it bluntly: "This is something that could be safe in principle, but it all depends on the details."

Both doctors pointed to how new and opaque these systems remain. Kious said, "It feels a bit like alchemy right now." He added, "It would be better if there were greater transparency, more science, and more rigorous testing before people are asked to use this."

The familiar risks of a chatbot, now with prescriptions

There are also more immediate safety problems to worry about.

Kious said the system could miss something during screening. It might fail to ask the right question, a patient might not recognize a side effect, or someone might answer inaccurately. Some patients, he noted, may simply tell the system what they think it wants to hear to speed things along.

That is not a problem unique to chatbots. Psychiatry already depends heavily on self-report. But human clinicians can usually rely on more than the patient's answers alone. Kious said that when he sees patients, he pays attention not just to what they say, but also to what they do not say and how they present themselves. A chatbot, for its part, does not usually notice much beyond the text box.

Patients can also mislead human clinicians, of course, but Kious said a chatbot may make it easier to tweak answers until the software produces the desired result. Efficient, in the least comforting sense.

Torous said there are additional risks that will sound familiar to anyone who has watched chatbots misbehave in public. Utah's rollout with Legion is the state's second AI prescribing experiment, following a broader primary care pilot with Doctronic that began in December. Within weeks of launch, security researchers managed to push Doctronic's system into spreading vaccine conspiracy theories, generating meth-cooking instructions, and tripling a patient's opioid dose.

State officials say the Legion pilot is different because it is much narrower and aimed directly at Utah's mental health shortage.

Guardrails, reports, and a lot of trust

Legion says it is operating under tight restrictions. The company says the pilot includes what it calls "conservative eligibility gates." Under its agreement with the state, Legion must provide detailed monthly reports and have the first 1,250 requests closely reviewed by human physicians. After that, it says, about 5 to 10 percent of requests will be sampled periodically.
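The oversight schedule described here reduces to two regimes: full review, then sampling. A minimal sketch, using the numbers from the article (the function itself, its parameter names, and the 7.5 percent midpoint rate are assumptions for illustration):

```python
import random

def needs_physician_review(request_index, sample_rate=0.075, rng=random):
    """Decide whether a refill request gets human physician review.

    request_index: 1-based position of the request in the pilot.
    sample_rate: assumed 7.5%, the midpoint of the stated 5-10% band.
    rng: anything with a .random() method, so sampling can be made
         deterministic for testing.
    """
    if request_index <= 1250:
        return True  # the first 1,250 requests are all closely reviewed
    return rng.random() < sample_rate  # periodic sampling thereafter
```

How that 5 to 10 percent sample is drawn, and what triggers a deeper audit, is exactly the kind of detail the monthly reports to the state would have to pin down.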

Legion cofounder and president Arthur MacWaters told The Verge that "risks exist in any remote care model, whether AI-assisted or fully human-led." He said the company's "workflow does not rely on a single self-reported answer to unlock treatment." According to MacWaters, the main safeguards are the pilot's narrow medication list, patient eligibility rules, built-in AI safety checks, pharmacist involvement, and the ability to escalate to a clinician.

"We see this as critical to expand access to hundreds of thousands of people in Utah who live in mental health shortage areas, as well as an important proving ground for AI in medicine," he said.

MacWaters would not discuss additional uses, more medications, or expansion to other states. He also declined to give a timeline for broader rollout. Still, Legion's public signals suggest it is thinking far beyond Utah. Its refill site says the service will be available "nationwide 2026," and MacWaters has said it "will be in every state very very quickly."

The basic question behind the experiment

For the psychiatrists who commented on the pilot, the real issue is simpler than the marketing suggests: what problem is this actually solving?

Kious said established patients often do not even need an appointment to get a refill. Most psychiatrists, he said, are probably "happy to refill prescriptions for free and without an appointment" unless they are worried about the patient or the medication carries meaningful risk. The ironic part is that those higher-risk situations are exactly the ones Legion's AI is not allowed to handle.

Torous said he would "personally avoid it for now," adding that if a patient has found a treatment plan that works, it is probably best to stay with that clinician.

Utah's experiment may end up as a useful test of whether AI can safely handle routine psychiatric refills. It may also end up as a reminder that some jobs look simpler from a startup slide deck than they do from a clinic.