Source: Gadgets Now
San Francisco / Global — On November 27, 2025, OpenAI disclosed a security incident involving a third-party analytics vendor, Mixpanel, which resulted in the exposure of limited data belonging to certain API customers, including names and email addresses.
What happened
According to OpenAI, the breach did not affect its own systems or the core infrastructure of ChatGPT. Instead, the attacker gained unauthorized access to Mixpanel’s systems and exported a dataset containing “non-sensitive analytics data” tied to some users of OpenAI’s API platform.
The exposed information may have included:
The name associated with the API account
The account’s email address
Approximate location (city, state, country) based on browser or IP metadata
Operating system, browser type, referring websites, and user or organization IDs linked to the API account
Importantly, OpenAI stressed that no sensitive data — such as chat contents, prompts/responses, passwords, payment information, credit-card details, government IDs, or API keys — was compromised.
Response from OpenAI
Upon learning of the breach, OpenAI promptly terminated its use of Mixpanel for analytics and began notifying affected organizations, administrators, and users directly. The company also announced expanded security reviews across its vendor ecosystem and raised the security standards required of all future partners.
In a blog post, OpenAI wrote:
> “Trust, security, and privacy are foundational to our products … We are committed to transparency … After reviewing this incident, OpenAI has terminated its use of Mixpanel.”
What users should do
While the compromised data was limited, OpenAI has warned that names and email addresses can be abused in phishing or social-engineering attacks. Affected users have been urged to:
Be cautious of unexpected emails or messages — especially those containing links, attachments, or requests for sensitive information.
Verify that any communication claiming to be from OpenAI comes from an official domain.
Enable multi-factor authentication (MFA) on their accounts as an extra layer of security.
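The domain-verification advice above can be automated with a simple allow-list check. The sketch below is illustrative only: the `OFFICIAL_DOMAINS` set is an assumption, not a list published by OpenAI, and a full defense would also validate SPF/DKIM headers rather than the display address alone.

```python
from email.utils import parseaddr

# Hypothetical allow-list of official sending domains. Confirm the real
# domains in OpenAI's own security guidance before relying on this check.
OFFICIAL_DOMAINS = {"openai.com", "email.openai.com"}

def is_official_sender(from_header: str) -> bool:
    """Return True only if the From: address resolves to an allow-listed domain."""
    _, addr = parseaddr(from_header)           # strip the display name
    domain = addr.rpartition("@")[2].lower()   # text after the last '@'
    return domain in OFFICIAL_DOMAINS

print(is_official_sender("OpenAI <no-reply@email.openai.com>"))   # True
print(is_official_sender("Support <help@openai-support.xyz>"))    # False
```

Lookalike domains such as `openai-support.xyz` are a common phishing tactic, which is why an exact allow-list match beats a substring check here.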
OpenAI reassured users that the incident does not affect regular ChatGPT users who do not use the API: their chats, history, credentials, and payment details remain untouched.
What this means in the bigger picture
The breach highlights the growing security risks associated with analytics tools and third-party services — even when the core platform remains secure. As more organizations adopt AI tools and APIs, ensuring the security of partner ecosystems is becoming as important as protecting the main infrastructure.
For the many developers who rely on OpenAI’s API to build apps, the incident is a reminder to follow best practices for data hygiene — collecting only the data you need, enabling MFA, and staying alert to phishing threats.
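The "minimal data collection" practice can be applied at the point where events leave your system, for example by stripping direct identifiers before a payload is sent to any analytics vendor. This is a generic sketch, not OpenAI's or Mixpanel's actual pipeline; the field names are hypothetical.

```python
# Fields treated as direct identifiers in this illustrative example.
PII_FIELDS = {"name", "email", "ip_address"}

def scrub_event(event: dict) -> dict:
    """Return a copy of an analytics event with direct identifiers removed."""
    return {k: v for k, v in event.items() if k not in PII_FIELDS}

event = {"org_id": "org_123", "email": "dev@example.com", "browser": "Firefox"}
print(scrub_event(event))  # {'org_id': 'org_123', 'browser': 'Firefox'}
```

Had analytics events been scrubbed this way before reaching a third party, a vendor-side breach like this one would have exposed far less identifying information.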
