OpenAI’s introduction of ChatGPT Health points toward a future where people can access health-related information more easily than ever. At the same time, the rollout raises difficult questions about how safely and responsibly sensitive medical data can be handled outside traditional healthcare systems.

The company announced ChatGPT Health in early January, describing it as a dedicated experience that combines users’ health information with ChatGPT’s capabilities to help them better understand and manage their health. OpenAI says the product is meant to make users feel more informed and prepared, not to diagnose conditions or replace professional medical care.

Asking health-related questions is already one of the most common ways people use ChatGPT. According to OpenAI, the new product is intended to offer a more secure environment for those interactions, with protections designed specifically for health data. These include isolating health conversations from other chats and applying additional encryption measures to keep sensitive information compartmentalised.

OpenAI has also emphasised that data shared with ChatGPT Health will not be used to train its foundation models.

A Dedicated Space With High Stakes?

Given how widely AI chatbots are already used for health questions, a separate product focused on protecting sensitive personal data may seem like a positive step. Medical information is among the most private data individuals have, and a siloed environment could, in theory, reduce risk.

However, ChatGPT Health allows users to connect their medical records and link third-party wellness apps if they choose to do so. That means users are being asked to trust a private technology company with deeply personal health information in exchange for guidance and explanations.

Even with strong security controls, that decision carries inherent risk: health data can be exploited, and no system operates in a world free from breaches, misuse, or unforeseen vulnerabilities.

What We Know and Don’t Know About Security

OpenAI has said that conversations and files in ChatGPT are encrypted by default both while stored and while being transmitted, and that users can enable controls such as multi-factor authentication. Beyond that, details about how health data is protected at a technical and regulatory level remain limited.

To enable access to medical records from U.S. healthcare providers, OpenAI partnered with b.well, which it describes as the largest network of live, connected health data for U.S. consumers, one that adheres to high industry standards for data security and privacy. Those characterisations come from the companies themselves. Users can remove access to medical records at any time in the “Apps” section of Settings.

For third-party integrations, such as wellness apps, OpenAI says participating apps must meet its privacy and security requirements, collect only the minimum necessary data, and undergo additional review. Users can also disconnect apps at any time, which stops future access.

Still, digital rights advocates caution that once data is shared, it can be difficult if not impossible to fully reclaim control over it. Disconnecting an app may prevent further sharing, but it does not undo what has already been accessed or stored elsewhere.
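To make that limit concrete, here is a minimal Python sketch of revocation semantics. Every name in it is hypothetical and it mirrors no real vendor’s API; it only illustrates the general pattern: revoking a token blocks future reads, while copies made before revocation persist.

```python
# Hypothetical model of OAuth-style access revocation. All names are
# illustrative; this is not any real vendor's API.
class HealthRecordStore:
    def __init__(self, records):
        self._records = records
        self._revoked = set()

    def read(self, token):
        """Return records, unless the token has been revoked."""
        if token in self._revoked:
            raise PermissionError("token revoked: no further access")
        return list(self._records)

    def revoke(self, token):
        """Disconnecting an app revokes its token."""
        self._revoked.add(token)


store = HealthRecordStore(["2024-03-01 lipid panel", "2024-06-12 MRI report"])
token = "wellness-app-token"

exported_copy = store.read(token)  # the app copies data while connected
store.revoke(token)                # the user disconnects the app

try:
    store.read(token)              # future access is now blocked...
except PermissionError as exc:
    print(exc)

print(exported_copy)               # ...but the earlier copy persists untouched
```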

In complex data-sharing environments, consent alone may not be enough to protect users.

Encryption and Regulation Gaps in ChatGPT Health

Healthcare data typically demands the highest possible security standards. It is not clear whether conversations within ChatGPT Health are protected with end-to-end encryption. Encryption “at rest and in transit,” while important, does not provide the same guarantees: the provider still holds the decryption keys, so it can read the data on its own servers and can be breached, or legally compelled, into exposing it.
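The gap is easiest to see in code. The minimal sketch below, using Python’s `cryptography` package, contrasts the two models under simplified assumptions; it illustrates the general concepts only and says nothing about how ChatGPT Health is actually built.

```python
# Illustrative contrast between provider-held encryption ("at rest")
# and end-to-end encryption. Nothing here reflects OpenAI's systems.
from cryptography.fernet import Fernet

message = b"Patient reports chest pain after exercise."

# Encryption at rest: the provider generates and holds the key, so data
# is unreadable on disk, yet the provider can decrypt it at will (and
# can be breached or legally compelled into doing so).
provider_key = Fernet.generate_key()
stored = Fernet(provider_key).encrypt(message)
assert Fernet(provider_key).decrypt(stored) == message  # provider can read

# End-to-end encryption: the key is generated and kept on the user's
# device, so the provider stores ciphertext it has no way to decrypt.
user_key = Fernet.generate_key()             # never leaves the client
uploaded = Fernet(user_key).encrypt(message)
# Holding only `uploaded` (plus its own provider_key), the server cannot
# recover the plaintext.
```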

Regulatory protections are also uncertain. ChatGPT Health is positioned as a consumer-facing educational tool, not a clinical system, which places it outside many healthcare compliance frameworks. OpenAI has pointed to a separate product designed for healthcare organisations that does meet stricter regulatory requirements, but that offering is distinct from ChatGPT Health.

This distinction matters. While ChatGPT Health is not intended to diagnose or treat medical conditions, many users already share sensitive health information with general-purpose chatbots and rely on them for guidance. Some users may also overestimate the capabilities of large language models, treating them as authoritative sources rather than systems that can still produce incorrect or misleading responses.

Concerns about over-reliance, hallucinations, and the broader psychological impact of AI-driven health advice remain unresolved.

Legal, Behavioral, and Policy Risks for Users

Legal and privacy experts note that even companies with strong privacy commitments are subject to data breaches and legal demands, such as subpoenas and warrants. Sharing health data with any third party inevitably means giving up some degree of control.

AI-powered health assistants are also relatively new, and their long-term effects on users are not well understood. Questions remain about dependency, error recognition, and potential harm caused by incorrect information.

From a policy perspective, it is notable that ChatGPT Health is not initially launching in regions with stricter data protection regimes. That raises concerns about how consistently privacy principles such as data minimisation and purpose limitation are being applied.

Balancing Convenience, Privacy, and Trust

ChatGPT Health may offer practical benefits, especially for people trying to navigate a complex healthcare system or make sense of their own information. But the product also highlights the unresolved tensions between accessibility, privacy, and trust when AI systems handle medical data.

For now, users should approach AI health tools with caution, clear expectations, and an understanding of their limits. Convenience alone is not a substitute for strong safeguards, especially when the data involved is as sensitive as personal health information.
