Trust and privacy are at the core of our products.
We give you tools to control your data, including easy opt-outs and permanent removal of deleted ChatGPT chats and API content from OpenAI’s systems within 30 days.
The New York Times and other plaintiffs have made a sweeping and unnecessary demand in their baseless lawsuit against us: retain consumer ChatGPT and API customer data indefinitely.
This fundamentally conflicts with the privacy commitments we have made to our users. It abandons long-standing privacy norms and weakens privacy protections.
We strongly believe this is an overreach by the New York Times. We’re continuing to appeal this order so we can keep putting your trust and privacy first.
—Brad Lightcap, COO, OpenAI
Why are The New York Times and other plaintiffs asking for this?
- The New York Times is suing OpenAI. As part of their baseless lawsuit, they’ve recently asked the court to force us to retain all user content indefinitely going forward, based on speculation that they might find something that supports their case.
- We strongly believe this is an overreach. It risks your privacy without actually helping resolve the lawsuit. That’s why we’re fighting it.
Is my data impacted?
- Yes, if you have a ChatGPT Free, Plus, Pro, or Team subscription, or if you use the OpenAI API without a Zero Data Retention agreement.
- This does not impact ChatGPT Enterprise or ChatGPT Edu customers.
- This does not impact API customers who are using Zero Data Retention endpoints under our ZDR amendment.
What have you done to challenge this order to date?
- From the outset, we argued that the plaintiffs’ request to preserve “all output data” was vastly overbroad and conflicted with our privacy commitments. We filed a motion asking the Magistrate Judge to reconsider the preservation order, highlighting that indefinite retention of user data breaches industry norms and our own policies.
- When we appeared before the Magistrate Judge on May 27, the Court clarified that ChatGPT Enterprise is excluded from preservation.
- We have also appealed this order to the District Court Judge.
What if I am a business customer and I have a Zero Data Retention agreement?
- You are not impacted. If you are a business customer that uses our Zero Data Retention (ZDR) API, we never retain the prompts you send or the answers we return. Because that data is not stored, this court order doesn’t affect it.
If I delete my data from ChatGPT, will it still be retained under this order?
- The New York Times is demanding that we retain even deleted ChatGPT chats and API content that would typically be automatically removed from our systems within 30 days.
- This does not impact ChatGPT Enterprise or ChatGPT Edu customers.
How will you store my data and who can access it?
- The content covered by the court order is stored separately in a secure system. It’s protected under legal hold, meaning it can’t be accessed or used for purposes other than meeting legal obligations.
- Only a small, audited OpenAI legal and security team can access this data, and only as necessary to comply with our legal obligations.
Will this data be shared with the New York Times, other plaintiffs, or anyone else?
- This data is not automatically shared with The New York Times or anyone else. It’s locked under a separate legal hold, meaning it’s securely stored and can only be accessed under strict legal protocols.
- If plaintiffs continue to push for access in any way, we will fight to protect your privacy at every step.
How long will OpenAI keep this data? Is there a known end date or review period for the court order?
- Right now, the court order forces us to retain consumer ChatGPT and API content going forward. That said, we are actively challenging the order, and if we are successful, we’ll resume our standard data retention practices.
Does this court order violate GDPR or my rights under European or other privacy laws?
- We are taking steps to comply at this time because we must follow the law, but The New York Times’ demand does not align with our privacy standards. That is why we’re challenging it.
Does this change your training policies?
- Business customers: We don’t train our models on business data by default, and this court order does not change that.
- Consumer customers: You control in your settings whether your chats are used to help improve ChatGPT, and this order doesn’t change that either.
Will you keep us updated?
- Yes. We’re committed to transparency and will keep you informed. We’ll share meaningful updates, including any changes to the order or how it affects your data.
What are your data retention policies?
- ChatGPT Free, Plus, Pro: When you delete a chat (or your account), the chat is removed from your account immediately and scheduled for permanent deletion from OpenAI systems within 30 days, unless we are required to retain it for legal or security reasons or as described here.
- ChatGPT Team: Each of your end users controls whether their conversations are retained. Any deleted or unsaved conversations are removed from our systems within 30 days, unless we are legally required to retain them.
- ChatGPT Enterprise and ChatGPT Edu: Your workspace admins control how long your customer content is retained. Any deleted conversations are removed from our systems within 30 days, unless we are legally required to retain them.
- API: Business customers who build on top of our API control how long customer content is retained for application state, based on the endpoints they use and their configurations (see here). After 30 days, API inputs and outputs are removed from OpenAI logs, unless we are legally required to retain them.
- Zero Data Retention API: If a business customer is using Zero Data Retention endpoints, inputs and outputs are never logged and are not retained for application state.
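As the API bullet above notes, what gets retained for application state depends on the request configuration. As a minimal, hedged sketch (the `store` field and model name here are assumptions drawn from OpenAI's public API reference, and Zero Data Retention itself is an account-level contractual amendment rather than a per-request flag), a Chat Completions request can be configured so the completion is not kept for later retrieval:

```python
import json

def build_chat_request(prompt: str, store: bool = False) -> str:
    """Build a Chat Completions JSON payload.

    With `store` set to False, the response is not retained as a stored
    completion for later retrieval; this is separate from (and does not
    override) any legally required retention described above.
    """
    payload = {
        "model": "gpt-4o-mini",  # hypothetical model choice for illustration
        "messages": [{"role": "user", "content": prompt}],
        "store": store,
    }
    return json.dumps(payload)

# Construct (but do not send) a request body that opts out of storage.
request_body = build_chat_request("Summarize our retention policy.")
```

This only assembles the request body; actually sending it requires an API key and the usual HTTP client, and account-level agreements such as ZDR are what ultimately govern retention.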