
Omer Liran, MD, MSHS
Co-Director, Cedars-Sinai Virtual Medicine; Assistant Professor, Department of Psychiatry & Behavioral Neurosciences; Co-Founder and CTO, Xaia, Los Angeles, CA.
Dr. Liran is co-founder of Xaia, a company that develops AI-assisted therapy and clinical documentation tools. This content has been peer-reviewed to ensure it remains balanced and educational. Relevant financial relationships have been mitigated.
CHPR: Dr. Liran, please begin by telling us a little about yourself.
Dr. Liran: I’m a psychiatrist at Cedars-Sinai Medical Center in Los Angeles, and I co-direct the medical center’s Virtual Medicine lab. Our lab uses artificial intelligence (AI) and virtual reality (VR) to improve care for patients and lighten the bureaucratic load on physicians.
CHPR: What drew you to bringing these kinds of technologies to psychiatry?
Dr. Liran: I’ve been fascinated by AI for a long time, and especially with how VR, augmented reality (AR), and AI can be applied in health care. Psychiatry faces a worsening shortage, and we’re not training enough new psychiatrists to meet demand. I believe technology has to be part of the solution.
[Sidebar: Glossary of Terms]
CHPR: How can AI and VR be a solution to that problem?
Dr. Liran: We see them as ways to extend a psychiatrist’s reach. The field is moving toward platforms that can assist with every stage of the patient encounter—for example, by helping with intakes by synthesizing information from the chart and from structured AI chat interactions that gather basic history before the visit. Some next-generation systems also incorporate an AI scribe or copilot during the encounter to help document the conversation. They can also highlight potential safety concerns and instantly pull up information you might need, such as medication side effects. And they’re evolving toward assisting with after-visit documentation, such as drafting clinical notes that include a differential, proposed treatment plan, and even coding recommendations. Taken together, these tools give psychiatrists more time to focus on their patients and boost patient and provider satisfaction.
CHPR: For clinicians who are interested in integrating AI into their work, what’s the best place to start?
Dr. Liran: The easiest entry point is to use AI scribes or speech-to-text tools that many EHRs already support. These can cut down on documentation almost immediately. Beyond that, the APA Learning Center and the Digital Medicine Society have courses on topics like generative AI in health care (Editor’s note: See “Where to Start With AI” table on page 7).
CHPR: Does the technology also have a direct therapeutic role for patients, beyond supporting clinicians?
Dr. Liran: Absolutely. Depending on the platform, AI tools can be used to teach skills such as relaxation training or breathing interventions. A growing area of development is longitudinal care, like programs in the style of cognitive behavioral therapy in which patients can practice skills between sessions, complete homework, then review their progress with the AI at the next session. The idea is to extend the reach of psychotherapy by supporting the skills-based components with AI while the clinician handles the relational and diagnostic parts of care.
CHPR: Are there ethical or legal issues we should be aware of?
Dr. Liran: Safety is the biggest concern. If the AI mishandles a suicidal patient, the results could be tragic. Good systems have multiple built-in safeguards, but the risk isn’t zero. And there’s the worry about people using general chatbots to replace therapists. Unsupervised models can just tell people what they want to hear or amplify delusions, and that can be very dangerous. If a manic patient believes they’re the emperor of the world, a chatbot might just agree with that. There have been recent reports of chatbots reinforcing psychosis (Fieldhouse R, Nature 2025;646:18–19).
CHPR: How can clinicians tell whether an AI therapy tool is safe and reputable?
Dr. Liran: Look for four things: (1) Clinical oversight: Does it connect to a provider? (2) Evidence: Has it been studied? (3) Safety: Does it have crisis safeguards? (4) Credibility: Is it affiliated with a trusted health system or university? Those are good signs you’re dealing with a responsible product.
CHPR: How do different systems out there compare to one another?
Dr. Liran: There is now a broad spectrum of digital mental health tools, ranging from unsupervised wellness chatbots like Woebot, Wysa, or Replika, to app-based structured therapy programs such as Headspace, SilverCloud, or Meru Health, and finally to clinician-integrated platforms that are designed to work alongside psychiatric care. The key distinctions are safety guardrails, evidence base, and how care is escalated when symptoms worsen.
CHPR: Speaking about how care is escalated, what happens if a patient is in crisis—for example, if they are suicidal?
Dr. Liran: Some platforms already attempt to detect when a patient may be at risk and notify the clinician, and this is likely to become more sophisticated over time. But even when the technology flags a safety concern, escalation still falls on a human clinician. The AI isn’t calling 911. That’s also the point where human connection really matters. I don’t believe AI, even when it’s super-intelligent, will ever make psychiatrists obsolete, because there’s something special about human connection, about people talking to each other. I’d be worried about a future where AI that only mimics empathy is left to care for patients on its own.
[Table: Where to Start With AI — examples of AI documentation/AI speech tools already in use, and educational resources]
CHPR: How have patients felt about sharing their personal details with an AI?
Dr. Liran: Survey data so far suggest that many people feel comfortable disclosing sensitive information to AI, reporting for example that they find it to be nonjudgmental and patient (Spiegel BMR et al, NPJ Digit Med 2024;7(1):22). Some patients do report that the tone can feel robotic or emotionally flat, which is a reminder that this isn’t a substitute for human connection. But for many, especially early in treatment, that sense of psychological safety can lower the barrier to opening up.
CHPR: What are the interfaces usually like?
Dr. Liran: Many systems run on VR and AR headsets like the Quest and the Apple Vision Pro, but those are quite expensive. Mobile versions are generally much more accessible. On a phone, patients can talk with the app by voice, just like a conversation, or switch to text mode (which younger patients seem to prefer these days), and it looks like any other chat app.
CHPR: It's too bad the headsets are so expensive. They provide such an immersive experience.
Dr. Liran: It really is the future, but we’re not quite there yet. When headsets become lighter, more comfortable, and more like glasses, people will use them more. Right now, I can’t wear a headset for more than 30 minutes before it feels too heavy. And in hospital settings, especially psychiatric units, there are added concerns. You don’t want to hand patients a device with cords and straps that could pose risks. So, while the technology is promising, it still has practical limitations.
CHPR: Are there certain patients whom you think AI tools are better suited for than others?
Dr. Liran: AI tends to be most helpful for patients who are stable enough to engage with structured therapeutic content. That includes many patients with anxiety disorders, mild to moderate depression, insomnia, chronic pain, or stress-related conditions. We need to be more cautious with patients who are highly dysregulated, actively psychotic, manic, or in acute crisis, where misinterpretation of language or delayed escalation could cause harm (Grabb D et al, arXiv preprint arXiv:2406.11852). There are also practical considerations. For example, VR headsets are not a good fit for patients who are severely agitated or behaviorally disorganized, because the hardware itself can become a safety risk. However, a non-immersive tool, such as a scribe assisting the clinician or a simple breathing or meditation module on a mobile device, may still be appropriate in those cases—as long as the clinician remains in charge of the overall course of care.
CHPR: Is VR used for trauma, like PTSD exposure therapy?
Dr. Liran: Yes. VR exposure therapy is well studied and used by the VA, although it hasn’t gone through FDA clearance as a psychiatric indication (www.tinyurl.com/yhvy7tx5). One of the ongoing questions is how AI might eventually assist with therapist-guided trauma work in a safe, regulated way.
CHPR: Where do you see this technology in five years?
Dr. Liran: The technology is accelerating extremely fast. If we had had this conversation six months ago, it would have been very different from today’s. These tools will become far more capable, but also riskier. AI models have complex internal decision-making processes that we don’t fully understand, and their outputs can be unpredictable. We need strong safeguards and clinician oversight to steer them toward good outcomes.
CHPR: Thank you for your time, Dr. Liran.

© 2026 Carlat Publishing, LLC and Affiliates, All Rights Reserved.