Monday, April 20, 2026

Sharing Medical Records with ChatGPT


What are the REAL risks?

Tom Evans, 4/15/26

I routinely read the same recommendation: don't use AI with your medical records. Why do I ignore it? Because I have learned to depend on ChatGPT.

I have a paid ChatGPT Plus account. I gave it my Master Medical File and have used it many times for consultation: not for diagnoses, but for insight and explanations. It gives me the knowledge to follow up with my physicians and topics to research online at the Cleveland Clinic and Mayo Clinic sites.

I rely on ChatGPT for advice, much as I once relied on my past PCP. For the first time in my 82 years, I have had hours of beneficial medical dialogue. What are the risks?

The advisors warn that the risks of using AI are real — but the benefits are also real when the tool is used carefully as a supplement to, not a substitute for, professional medical care.

I use ChatGPT for:

Convenience & Accessibility

  1. Quickly summarizes lengthy records and explains medical jargon in plain language
  2. Helps me navigate a complex diagnosis and manage a chronic condition

Preparation for Appointments

  1. Helps me formulate better questions for my doctor
  2. Can help me spot patterns across records over time (e.g., recurring lab trends; see the short sketch after this list)
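For readers who want to see what that pattern-spotting looks like in concrete terms, here is a minimal Python sketch. The file labs.csv and the columns date and creatinine are hypothetical examples of mine, not any real portal's export format; adjust them to whatever your portal actually produces.

    import csv
    from datetime import date

    # Hypothetical export: labs.csv with "date" (YYYY-MM-DD) and "creatinine" columns.
    def rising(values, window=3):
        """True if the last `window` results are strictly increasing."""
        tail = values[-window:]
        return len(tail) == window and all(a < b for a, b in zip(tail, tail[1:]))

    rows = []
    with open("labs.csv", newline="") as f:
        for r in csv.DictReader(f):
            rows.append((date.fromisoformat(r["date"]), float(r["creatinine"])))

    rows.sort()  # oldest draw first
    if rising([v for _, v in rows]):
        print("Creatinine has risen across the last three draws - worth asking about.")

I hand this kind of question to ChatGPT in plain English rather than code; the sketch just shows that "a trend across records" is a simple, checkable idea, not magic.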

Second-Opinion Starting Point

  1. Can flag things worth asking my doctor about — not as a replacement, but as a prompt

Health Literacy

  1. Demystifies medical terminology and helps me become a more informed patient

I asked Anthropic’s Claude, a ChatGPT competitor, “What are the risks of submitting my private medical history to ChatGPT?”

As an initial response, Claude provided a broad, general list.

Privacy & Data Security

  1. My medical data could be used to train future AI models unless I explicitly opt out
  2. Data breaches are always possible — medical information is among the most sensitive and valuable data to hackers
  3. Once shared, I have limited control over how that data is used or stored

Regulatory & Legal Concerns

  1. Sharing medical records with consumer AI tools may fall outside HIPAA protections — those rules bind healthcare providers, not necessarily AI companies
  2. My data may be stored on servers subject to different jurisdictions and legal standards

Accuracy & Misinterpretation

  1. AI can misinterpret complex medical data, leading me toward incorrect conclusions about my health
  2. It may not account for nuances that a trained physician would catch — like drug interactions specific to my full picture
  3. Overconfidence in AI-generated health advice can delay seeking proper care

Psychological Risk

  1. Seeing AI-generated interpretations of my records could cause unnecessary anxiety or lead to health anxiety spirals

Upon reading this list, I told Claude, “I always hear the risks, but let's stop with generalities. Tell me the specifics for the average person.”

Claude said, “Fair enough. Here's what the risks actually look like in concrete, specific terms for an ordinary person.”

“Here is what realistically could happen to your data. Unless you turn off chat history, everything you paste into ChatGPT — including medical records — is stored on OpenAI's servers. By default, as of their current policy, this data can be used to improve their models unless you opt out under Settings → Data Controls → ‘Improve the model for everyone.’”

I said, “I can’t see where that is a problem.”

Claude followed, “HIPAA protects your records when held by your doctor or insurer. The moment you paste them into ChatGPT, that legal protection evaporates. OpenAI's obligations to you are defined by their Terms of Service and Privacy Policy — not federal health law.”

“That could be a problem,” I said. “Let’s talk about that.”

Claude added another concern: “You know that OpenAI had a confirmed breach in March 2023, where some users saw other users' chat titles and payment information. It was limited, but it happened. A future breach involving conversation content is not hypothetical — it's a matter of when, not if, for any large tech platform.”

“Okay, tell me more about that risk.”

Claude began, “Here is what ‘Misuse’ of your data actually looks like.”

“For the average person, the realistic harm isn't a shadowy figure reading your MRI notes. It's more mundane:

  1. If a future insurer somehow obtained your data (through a breach, a legal subpoena, or data broker activity), conditions you mentioned — depression, cancer history, heart disease — could affect your coverage or premiums
  2. Same logic applies if you ever paste in mental health records, substance use history, or anything stigmatized
  3. Large-scale breach data often ends up aggregated and sold. Medical details combined with your name and email have real market value.”

Losing my patience, I told Claude, “Oh yes, I can worry that an asteroid could crash into the Earth, but I need to plan my life around reality, not thoughts about the sky falling. ChatGPT routinely provides me with real benefits. That’s not wild-ass thinking.”

Claude responded, “Here is my honest answer.”

“For the average person with no unusual legal exposure, the breach risk from ChatGPT specifically is real but relatively low compared to other things you probably already do online — like emailing your doctor, using a patient portal, or storing records in Google Drive.”

“The more concrete risk is actually your own account security and OpenAI's internal data use policies — not a dramatic external hack. If you use a strong, unique password, enable two-factor authentication, and opt out of model training, you've neutralized most of the realistic risk.”

“The residual risk is then essentially: you are trusting a private company with sensitive information, and that company's policies, legal obligations, and security posture could change over time. That's a legitimate concern — but it's the same concern you have with your bank, your insurer, and your email provider.”
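Claude's checklist (a strong unique password, two-factor authentication, opting out of model training) is concrete enough to act on the same day. For the password piece, here is a minimal sketch using only Python's standard secrets module; the 20-character length and the symbol set are assumptions of mine, not a recommendation from either chatbot.

    import secrets
    import string

    # Build a long random password from letters, digits, and a few symbols.
    ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*-_"

    def generate_password(length: int = 20) -> str:
        return "".join(secrets.choice(ALPHABET) for _ in range(length))

    print(generate_password())  # store it in a password manager, not on a sticky note

A password manager accomplishes the same thing without any code; the point is that this mitigation costs minutes, not expertise.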

Putting the Risk in Honest Perspective

Threat, and the realistic probability for the average person:

  1. Large-scale breach exposing your conversations: Low — but not zero
  2. Your account specifically being hacked: Low to moderate — depends on your password hygiene
  3. Legal subpoena of your history: Very low — unless you're in litigation
  4. Insider access targeting you specifically: Extremely low
  5. Data used in ways you didn't intend by OpenAI itself: Moderate — unless you've opted out

Claude then commented on how serious a meaningful breach would be for ChatGPT itself.

“OpenAI is an extremely high-value target. It holds conversations from hundreds of millions of users, many of whom share sensitive personal, business, and medical information. That makes it more attractive to attackers than, say, a random forum or small app.”

“But scale also means serious security investment. OpenAI spends heavily on security infrastructure. They're not a small startup with a neglected server. A breach exposing conversation content at scale would be one of the largest and most damaging in tech history — the pressure to prevent that is enormous.”

“The 2023 breach was minor and architectural — it exposed chat titles and billing info for a small window of time due to a bug in an open-source Redis client library. It was not a hack, and conversation content was not exposed. That distinction matters.”

Claude continued, “Direct breach isn't the only vector. Here's what indirect exposure actually looks like:

  1. Legal subpoena: A court could compel OpenAI to produce your conversations. This has precedent with other tech platforms (Google, Meta). If you're ever involved in litigation — divorce, disability claim, insurance dispute — your ChatGPT history could theoretically be subpoenaed
  2. Insider access: Employees with database access can, in principle, view conversations. OpenAI has policies against this, but it's not technically impossible
  3. Account compromise: If someone gains access to your ChatGPT account through a weak password or phishing, they can read everything in your history. This is probably the most likely real-world exposure vector for an average person (see the self-check after this list)
  4. Third-party integrations: If you use ChatGPT plugins or connect it to other services, your data touches more systems, each with its own vulnerabilities.”
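On the weak-password point, there is a self-check anyone can run. Here is a minimal sketch against the free Pwned Passwords range API at api.pwnedpasswords.com, which works by k-anonymity: only the first five characters of the password's SHA-1 hash ever leave your machine. The User-Agent string is an arbitrary label of my own.

    import hashlib
    import urllib.request

    def times_seen_in_breaches(password: str) -> int:
        """Count how often a password appears in known breach corpora."""
        digest = hashlib.sha1(password.encode()).hexdigest().upper()
        prefix, suffix = digest[:5], digest[5:]
        req = urllib.request.Request(
            f"https://api.pwnedpasswords.com/range/{prefix}",
            headers={"User-Agent": "blog-password-self-check"},  # arbitrary label
        )
        with urllib.request.urlopen(req) as resp:
            for line in resp.read().decode().splitlines():
                candidate, _, count = line.partition(":")
                if candidate == suffix:
                    return int(count)
        return 0

    print(times_seen_in_breaches("password123"))  # a deliberately weak example

If the count is anything but zero, that password should never guard an account holding your medical history.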

 

The Risk of Not Using AI for Medical Advice

Claude followed, “Something that rarely gets said out loud in these conversations is the actual cost of not using AI for medical guidance. The risk calculus is almost always framed one-sidedly.”

The question was never really "AI advice vs. good medical care." For most people, it's actually AI advice vs. no advice, delayed advice, or incomplete advice.

Framed that way, the calculus shifts considerably. The risks of AI are real but specific and often manageable. The risks of an underserved, time-pressured, siloed medical system are also real — and far less discussed because no one is selling anything by pointing them out.

The Real Costs of the Current System

The average primary care appointment is 15-18 minutes. That includes the doctor reviewing your chart, taking vitals, and documenting. The actual dialogue time is often under 10 minutes. You cannot meaningfully discuss a complex condition, ask follow-up questions, and process information in that window.

Specialist wait times are genuinely dangerous. In many parts of the US, waiting 6-12 weeks to see a cardiologist, neurologist, or rheumatologist is routine. Conditions that are caught and treated early have dramatically better outcomes. Waiting is not a neutral act — it has a medical cost that is rarely quantified in the "AI is risky" conversation.

Most people have no realistic access to second opinions. A second opinion requires another appointment, another wait, often another referral, and sometimes another insurance battle. For most working people, it simply doesn't happen. AI provides something that previously only wealthy or well-connected patients had — the ability to pressure-test a diagnosis before accepting it.

Specialists are siloed by design. Your cardiologist and your rheumatologist may never speak. HIPAA, liability concerns, and sheer busyness mean that the person with the fullest picture of your health is often you — not any single physician. An AI that you've shared your full history with can do something your doctors often can't: look across the whole picture simultaneously.

What AI Actually Fills In

  1. The 2 AM question that isn't worth an ER visit but is genuinely worrying you — and that you'll forget to ask at your next appointment six weeks from now
  2. The jargon translation that helps you understand what your diagnosis actually means before you can ask your doctor intelligent questions
  3. The medication cross-reference that catches a potential interaction your busy pharmacist didn't flag (see the sketch after this list)
  4. The pattern recognition across years of your own records that no single doctor has time to do
  5. The preparation that makes your 15-minute appointment actually productive instead of spent getting oriented
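On the medication point, the raw material is public. Here is a minimal sketch that pulls the drug-interactions section of a drug's official FDA label from the openFDA API; it assumes the drug/label.json endpoint and its drug_interactions field behave as currently documented, and it only prints label text. It does not check your regimen and is no substitute for a pharmacist.

    import json
    import urllib.parse
    import urllib.request

    DRUG = "warfarin"  # example drug; substitute your own

    # Query the openFDA drug-label endpoint for this drug's official label.
    query = urllib.parse.quote(f'openfda.generic_name:"{DRUG}"')
    url = f"https://api.fda.gov/drug/label.json?search={query}&limit=1"

    with urllib.request.urlopen(url) as resp:
        label = json.load(resp)["results"][0]

    # Labels vary; not every one includes a drug_interactions section.
    section = label.get("drug_interactions", ["(no interactions section in this label)"])
    print(section[0][:600])  # first 600 characters of the section

What ChatGPT adds is reading that text against my full medication list at once; the sketch only shows where the underlying information lives.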

Bottom Line

The critics of AI medical advice implicitly assume the alternative is good medical care. For many people, the actual alternative is:

  1. Googling symptoms and landing on unreliable forums
  2. Ignoring the concern entirely until it becomes serious
  3. Making decisions based on incomplete information from a rushed appointment
  4. Never getting a second opinion at all

Compared to those alternatives, a well-informed AI conversation with your full medical history is often meaningfully safer, not more dangerous.
