Ever since platforms like ChatGPT, Perplexity, and Gemini became mainstream tools, people have been relying on AI for productivity and for answers to all kinds of questions, including health and wellness.
More patients are turning to AI before they ever turn to a clinician, uploading labs, symptoms, and even private medical records, hoping technology will deliver clarity faster than the healthcare system ever has.
On the surface, it feels empowering. But beneath that empowerment is a growing problem most people don’t see yet. Unlike a clinician, AI doesn’t see your sleep, your stress, your trauma, your body language, or the patterns that only show up when a human being is actually in the room.
When patients start treating AI like a doctor, they often end up going deeper into anxiety, confusion, and rabbit holes that delay real care instead of improving it.
And it’s not just patients; doctors are using AI in much the same way.
But that doesn’t mean AI is all bad. If we ask better questions, protect our data, and approach AI through a holistic lens, it can actually be a powerful tool for clarity instead of confusion.
And that’s what Dr. Cheng Ruan is working on. As an internal medicine physician and AI engineer, he’s been working at the intersection of trauma-informed care, consciousness, and healthcare systems for over a decade. What if the problem isn’t AI itself, but the way we interact with it? How do you get AI to support your health instead of silently steering it?
In this episode, Dr. Ruan unpacks what AI is actually good at, where it becomes dangerous, and how it’s reshaping the future of medicine.
Things You’ll Learn In This Episode
AI can make health anxiety worse
AI can trap people in endless rabbit holes. How do you stop AI from amplifying anxiety instead of clarity?
The hidden danger of uploading your medical data into AI tools
Many popular platforms aren’t secure or compliant, yet people are uploading labs, discharge summaries, and even social security numbers. What should patients actually know before trusting AI with their private health data?
How to ask better questions so AI works for you
“I feel” statements and emotional prompting radically change how modern reasoning models respond. Why does this approach lead to more useful insights and fewer dead ends?
Why AI won’t replace doctors
As AI becomes better at knowledge retrieval, clinicians are being valued less for information and more for judgment, context, and relationships. What does the future of medicine look like when connection matters more than credentials?
P.S. If you enjoy the show, remember to leave a review on your favorite podcast app! Reviews help the podcast reach a wider audience so it can help more people.
