#45 AI in Healthcare: What You Should Know About ChatGPT’s New Health Tool

ChatGPT Health is a new OpenAI feature (announced Jan 2026) that creates a dedicated “Health” tab in the ChatGPT interface. In this space, users can upload medical records and link wellness apps (Apple Health, MyFitnessPal, etc.) to get personalized answers to health questions. OpenAI promises that Health conversations are isolated: they’re stored in an encrypted, separate “Health” area and are not used to train the AI models. The company pitches ChatGPT Health as a tool to help patients “feel more informed, prepared, and confident” when discussing health and wellness. But while the interface (see illustration below) looks reassuringly slick, experts warn the reality may be more complicated.

Illustration: “OpenAI unveils ChatGPT Health, says 230 million users ask about health each week” (TechCrunch)

How ChatGPT Health Works

OpenAI designed ChatGPT Health as a “separate space” within the chatbot, dedicated to health-related queries. For example, you might upload lab results or fitness metrics and ask the AI to interpret them in context. OpenAI notes that if you ask about health topics in the regular ChatGPT, the app will encourage you to move into Health, and vice versa. Early reports say Health can recall your past health conversations and relevant data: for instance, the system might remember you’re a marathon runner and tailor its fitness advice accordingly.
According to OpenAI and tech reporters, ChatGPT Health integrates with third-party services for medical records and tracking data. The system uses a partner called b.well to access U.S. electronic health records, and it connects to apps such as Apple Health, Function, and MyFitnessPal. In practice, this means ChatGPT can automatically incorporate your doctor’s lab values, prescription history, and wearable stats into its answers. The company stresses that these health chats “operate as a separate space with enhanced privacy to protect sensitive data,” and that none of the Health conversations are used to improve (train) its underlying AI model.

Despite the marketing, OpenAI is careful to say ChatGPT Health is not a replacement for medical care. The service carries a disclaimer that it is “not intended for diagnosis or treatment,” but rather meant to help you navigate questions and prepare for doctor visits. In other words, it’s supposed to be a supportive tool, not a licensed clinician: in OpenAI’s words, the system should support, not substitute for, professional health advice.

Promises and Pitfalls of an AI Health Assistant

OpenAI pitches ChatGPT Health’s benefits aggressively. As the company notes, hundreds of millions of people already ask ChatGPT about health and wellness every week. The new feature is meant to harness that demand: by learning from your own records and devices, the AI could “explain medical jargon,” interpret lab results, or even weigh insurance choices based on your health history. In press statements, OpenAI executives compare it to having an informed health companion that never forgets details and works 24/7.

Some early testers have praised these capabilities. For instance, ChatGPT can summarize recent test results, suggest questions for your doctor, or craft a personalized diet plan. It even claims to “remember” previous discussions (unlike normal chat) so it can build a longitudinal view of your health journey. In theory, this helps continuity of care: for example, if you later mention a symptom, the AI might recall related details from earlier chats or apps.

However, experts immediately caution that there are major limitations and risks. By design, large language models (LLMs) like ChatGPT do not truly understand facts or medicine: they generate text by predicting plausible word sequences. This means they can hallucinate (confidently state false or nonsensical information) and have no real concept of accuracy. As TechCrunch notes, ChatGPT “operates by predicting the most likely response… not the most correct answer,” making it prone to mistakes. Medical professionals therefore emphasize that a friendly, confident AI answer is no guarantee of medical truth, and there have already been real-world cases of dangerous AI errors. The Guardian, for example, reported on a patient who developed bromide poisoning, complete with paranoia and hallucinations, after ChatGPT wrongly suggested he could replace table salt with sodium bromide.
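
To make the “most likely, not most correct” point concrete, here is a deliberately toy Python sketch. The candidate answers and probabilities are invented for illustration and have nothing to do with OpenAI’s actual models; the point is only that the decoding step picks the statistically most probable continuation, and nothing in that step checks whether the result is medically true.

```python
# Toy illustration only: invented candidates and probabilities, not a real model.
# A language model scores possible continuations of a prompt...
next_word_probs = {
    "sodium chloride": 0.46,   # the correct household answer
    "sodium bromide": 0.31,    # plausible-sounding but a toxic substitute
    "potassium iodide": 0.23,
}

# ...and greedy decoding simply returns the highest-probability candidate.
suggestion = max(next_word_probs, key=next_word_probs.get)
print(f"Model suggests: {suggestion}")

# Note what is missing: no fact-checking, no notion of safety or truth.
# If the training data or the conversation context tilt these numbers,
# the "confident" answer changes, without the model ever knowing better.
```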

ChatGPT Health amplifies these concerns in two ways. First, its specialized interface and use of personal data may lead users to over-trust the chatbot’s answers. As one AI researcher noted, it is “not obvious where general information ends and medical advice begins, especially when the responses sound confident and personalized.” In other words, Health’s conversational style could mask its errors. Second, because ChatGPT’s base models are trained to be agreeable, there is a risk of sycophancy: the AI might echo a patient’s fears or biases rather than challenge them. If someone is fixated on a symptom, for instance, a flattering chatbot might reinforce the worst-case scenario instead of offering a balanced perspective. Critics warn that a sycophantic chatbot could “meet a hypochondriac with a headache” and amplify their anxiety, or worse, encourage self-harm under the guise of sympathy.

In sum, even enthusiastic technologists stress that ChatGPT Health should be used with caution. The tool can be helpful for straightforward tasks such as explaining common symptoms or setting up reminders, but even its advocates concede it shouldn’t replace professional judgment. One tech commentator argues the AI’s strength lies in accessibility, calling its explanations “far more understandable than most medical jargon,” yet he also cautioned that relying on it extensively for decisions “isn’t a good idea.” The bottom line: patients should always verify AI suggestions with a doctor and not treat ChatGPT’s answers as definitive.

Privacy and Legal Safeguards: The U.S. and Europe

ChatGPT Health’s approach to user data has ignited a privacy debate. On one hand, OpenAI says the product was built with “privacy and security at the core”: all Health data is encrypted and compartmentalized, users must explicitly opt in to share each type of data, and OpenAI’s privacy documentation says third-party app data requires permission. On the other hand, no technology system is perfectly secure, and even private chat logs can be disclosed in legal or adversarial settings. In recent litigation unrelated to healthcare, courts have ordered OpenAI to preserve and hand over ChatGPT transcripts, overriding users’ deletion settings. This history reminds us that electronic records, even supposedly ephemeral ones, may become evidence if a subpoena compels them.

More fundamentally, who is legally responsible for your health data and its protection? In the United States, the Health Insurance Portability and Accountability Act (HIPAA) normally governs medical records, but only when they are handled by certain entities. HIPAA applies to “covered entities” such as doctors, hospitals, and insurers, along with their business associates, that transmit health information in standard electronic formats. A technology firm like OpenAI does not fall under that definition, so its products are not bound by HIPAA’s strict rules. As EPIC attorney Sara Geoghegan warns, uploading your medical records or symptoms to ChatGPT Health “would remove the HIPAA protection from those records, which is dangerous.”

In practice, this means OpenAI’s handling of your health data is governed only by its own terms of service and privacy policy, not by federal health privacy law. And those policies can change, with little external oversight.
The lack of comprehensive U.S. privacy law compounds the problem. In the U.S. there is no single law covering all consumer data, and HIPAA leaves a big gap for tech platforms. As one observer notes, without a general privacy rule, “it’s up to each company to set the rules” for sensitive data like health. This has led to calls for new legal frameworks. Notably, OpenAI’s CEO Sam Altman himself has suggested that certain AI conversations might need privileged treatment, akin to doctor-patient confidentiality. But for now, the default is: ChatGPT Health operates under standard tech terms, not medical confidentiality.

In contrast, the European Union offers a much stricter regime. The GDPR classifies health data as a “special category” of personal data, imposing strong restrictions. Processing health information generally requires a specific legal basis, typically explicit consent or use for medical care. Article 9 of the GDPR prohibits processing health data outright unless a limited exception applies, such as explicit consent, healthcare delivery, or public health purposes, and even then only with safeguards. In practice, this means any ChatGPT Health user in Europe would have to give clear, informed opt-in consent for each type of data (medical records, fitness metrics, etc.), and OpenAI would have to uphold all GDPR rights, including access, correction, and deletion, for those users.

OpenAI’s initial rollout acknowledges these hurdles. The company has explicitly excluded the entire European Economic Area (plus Switzerland and the UK) from the early tests of ChatGPT Health. OpenAI’s blog and press coverage emphasize that only users outside these regions can join the health beta so far. Critics interpret this exclusion as telling: these jurisdictions have “the strongest data protection laws in the world, including the GDPR,” notes privacy watchdog Conscious Digital. In other words, OpenAI appears to be avoiding regions where additional consent and compliance steps would be required.
Europe’s regulators have already proven vigilant about ChatGPT. In 2023 the Italian data protection authority temporarily banned the standard ChatGPT app for GDPR violations, and in late 2024 it fined OpenAI €15 million for collecting user data without proper consent or transparency. Other EU regulators (e.g., in France, Ireland, and Spain) have also announced or opened probes of AI chatbots under GDPR rules. These actions send a clear message: any handling of Europeans’ health data by ChatGPT would draw intense scrutiny. Under the GDPR and the EU AI Act, whose obligations are phasing in, a healthcare-focused AI like this would likely be classified as “high risk,” triggering obligations for risk assessments, human oversight, and detailed documentation. It remains to be seen how (or if) ChatGPT Health will launch in Europe once those legal issues are addressed. For now, it’s absent from the market where privacy rules are tightest.

Ethical and Safety Concerns

Legal compliance is only half the story; ethical questions loom large too. Notably, ChatGPT Health is not a regulated medical device. In most countries, any software that presents health advice as diagnostic or treatment guidance would have to meet medical safety standards. But OpenAI explicitly markets Health as a personal assistant, not a diagnostic tool. As one tech reporter observed, “ChatGPT Health is not regulated as a medical device or diagnostic tool. So there are no mandatory safety controls, no risk reporting, no post-market surveillance,” and no obligation to publish independent test data. In practice, this means there is no independent agency vetting ChatGPT’s health recommendations before people use them. (By contrast, even many health apps must conform to some FDA or CE marking standards.)

Another ethical issue is user understanding and education. Surveys suggest that non-experts often struggle to distinguish medical-grade information from general advice. A feature like ChatGPT Health could exacerbate this gap: patients may not notice the fine-print disclaimers about accuracy. Without clear guardrails and education, one health nonprofit leader warns, people “will take the advice at face value.” Consumer advocates also stress that tech companies are moving faster than governments, effectively setting their own rules on privacy and transparency in healthcare.

Finally, there is the question of liability. If ChatGPT Health suggests a harmful action and a patient follows it, who is legally responsible? OpenAI’s terms likely disclaim liability, but injured users may still look to courts or regulators for remedies. U.S. courts have rarely dealt with AI-related malpractice claims, and in Europe harm caused by an AI medical-advice tool could trigger product liability laws or emerging AI rules. In either case, the legal framework is unsettled, and until legislation catches up, the lack of clear accountability may leave consumers and professionals wary.

Final Thoughts

In sum, ChatGPT Health represents a bold step in AI-assisted healthcare, and one that blurs many lines. It promises greater patient empowerment, but it also sidesteps many of the legal and ethical guardrails that normally govern medical information. U.S. users handing over their health records are protected only by OpenAI’s user agreements; European users currently can’t use the feature at all, and if it does launch there, the GDPR will require explicit consent and guarantee them robust rights. In the absence of clear regulation or long-term studies, experts urge caution.

Even OpenAI acknowledges the need for oversight: its CEO has advocated new legal frameworks for sensitive AI interactions.
Ultimately, trust in ChatGPT Health will depend not only on code and controls, but on accountability. Users should remember that, for now, this AI is a tool, not a doctor. Patients and providers alike will have to watch carefully as the system evolves, keeping legal and ethical perspectives front and center.

Stay curious, stay informed, and let’s keep exploring the fascinating world of AI together.

This post was written with the help of different AI tools.

Check out previous posts for more exciting insights!