Skip to main content
Secured AI - Protecting You in the AI Age
AI Security

ChatGPT Security: What Enterprises Need to Know in 2026

Your IT team spent three months evaluating ChatGPT Enterprise. Six weeks later, a compliance audit reveals that forty-seven employees are also using the free version on personal devices - pasting customer records, financial projections, and HR documents into it. The Enterprise deployment was secure. The actual behavior was not.

January 19, 202614 min read

TL;DR

ChatGPT's security depends on which version you use, how it is configured, and whether employees actually follow the policies you set. ChatGPT Enterprise and Team offer stronger data controls than the free or Plus tiers, including no model training on business data and admin visibility. But the real risk is data exposure at the prompt layer - the moment an employee types or pastes sensitive information into any AI tool. Addressing ChatGPT security requires both platform-level controls and data-layer protection that works regardless of which version or device employees use.

How ChatGPT Handles Your Data

This is the ChatGPT security problem in practice. The risk is not primarily in the tool's architecture. It is in the gap between how organizations configure ChatGPT and how employees actually use it - across devices, accounts, and versions that may not have the same protections.

Understanding ChatGPT security starts with understanding what happens to the data you put into it. OpenAI operates ChatGPT across several tiers, and each one handles data differently.

On the free tier, conversations are stored by OpenAI and may be used to improve their models unless the user manually opts out through the settings. Most users do not change default settings, which means most free-tier conversations are available for model training. This is the version your employees are most likely using on personal devices.

ChatGPT Plus provides the same model capabilities with a subscription, but the default data handling is similar to the free tier. Users can opt out of model training, but the opt-out is per-account and requires the user to navigate to the correct setting. There is no way for an organization to enforce this setting across employee accounts because Plus is an individual subscription.

ChatGPT Team is designed for small teams and provides a significant security improvement: conversations are not used for model training by default. The workspace admin has visibility into billing and some usage controls. However, data is still transmitted to and processed by OpenAI's servers, which means the data leaves your environment.

ChatGPT Enterprise is OpenAI's offering for large organizations. Business data is not used for model training. The product includes SSO integration, admin console access, domain verification, and data retention controls. Enterprise also provides a commitment to SOC 2 Type 2 compliance from OpenAI. This is the tier that security teams evaluate when they assess ChatGPT for organizational use.

The distinction matters because the security posture of "ChatGPT" depends entirely on which tier is in use. When a CISO says "we have deployed ChatGPT with appropriate controls," they typically mean the Enterprise tier. When an employee says "I use ChatGPT for work," they may mean the free version on their personal phone.

ChatGPT Tier Comparison

Here is how security features compare across the four main ChatGPT tiers that enterprises consider.

FeatureFreePlusTeamEnterprise
Model training on dataYes, unless opted outYes, unless opted outNo (default off)No (contractual)
Admin consoleNoNoLimitedYes
SSO integrationNoNoNoYes
Domain verificationNoNoNoYes
Data retention controlsNoNoLimitedYes
SOC 2 Type 2 complianceNoNoNoYes
Data Processing Addendum (GDPR)NoNoLimitedYes
Organizational visibilityNoneNonePartialFull
Intended audienceIndividualsIndividualsSmall teamsLarge organizations

The Seven ChatGPT Security Risks That Matter

ChatGPT security risks fall into categories that security teams need to address regardless of which tier is deployed.

1. Data exposure through prompts

Every time an employee types or pastes information into ChatGPT, that data leaves the organization's controlled environment and travels to OpenAI's servers. Even on Enterprise, where the data is not used for training, the information is still processed by OpenAI's infrastructure. The risk is not theoretical. Documented incidents include the March 2023 bug that exposed user chat histories to other users, and cases where organizations discovered sensitive data in employees' ChatGPT conversations that should never have left internal systems.

2. Version fragmentation

Most organizations that deploy ChatGPT Enterprise also have employees using free, Plus, or Team versions outside the enterprise deployment. These shadow accounts have weaker data protections and zero organizational visibility. The Enterprise deployment creates a false sense of security because it only covers the employees who use it.

3. Prompt injection attacks

Prompt injection is a technique where malicious instructions are embedded in content that ChatGPT processes, causing the model to ignore its system instructions and follow the injected commands instead. In enterprise settings, this risk is relevant when ChatGPT processes external content - emails, documents, or web pages - that may contain hidden instructions. The attack surface grows as organizations integrate ChatGPT into automated workflows where it processes untrusted input.

4. Data retained in conversation history

ChatGPT stores conversation history to provide continuity for users. This means that sensitive data entered in one conversation persists in the system and can be accessed by anyone with access to that account. In shared environments or when employees leave the organization, conversation history containing business-sensitive information may remain accessible.

5. Third-party integrations and plugins

ChatGPT supports integrations with external services through plugins and custom GPTs. Each integration creates a new data flow path where information from ChatGPT conversations can reach additional third parties. The security posture of these third parties is outside your organization's control, and the data sharing happens within the context of what feels like a single tool.

6. Compliance exposure

Depending on the data your employees enter into ChatGPT, usage may trigger regulatory obligations under GDPR, HIPAA, CCPA, and other frameworks. Under GDPR, sending personal data to OpenAI constitutes data processing that requires a lawful basis and appropriate safeguards. Under HIPAA, sending protected health information to ChatGPT without a Business Associate Agreement creates a potential violation. These obligations exist regardless of which ChatGPT tier is in use.

7. Output reliability and hallucination

While not a security risk in the traditional sense, ChatGPT's tendency to generate confident but incorrect responses creates operational risk when employees rely on AI output for business decisions without verification. In regulated industries, acting on hallucinated information can create compliance violations or liability exposure.

What ChatGPT Enterprise Actually Protects

ChatGPT Enterprise addresses several of these risks, but not all of them. Understanding what it covers and what it does not is essential for security planning.

What Enterprise does protect

  • Model training on your data. OpenAI's contractual commitment for Enterprise is that business data is not used to train models. This eliminates one of the primary concerns organizations have about ChatGPT.
  • Administrative controls. SSO integration connects ChatGPT access to your identity management system. Domain verification confirms that users accessing the Enterprise workspace are employees. The admin console provides visibility into usage patterns and the ability to manage users and settings. Data retention controls allow organizations to set how long conversations are stored.
  • A compliance framework. OpenAI publishes a SOC 2 Type 2 report for ChatGPT Enterprise, and the Enterprise terms include a Data Processing Addendum that addresses GDPR requirements.

What Enterprise does not protect

  • Data at the prompt layer. When an employee types sensitive information into ChatGPT Enterprise, that data is transmitted to and processed by OpenAI's servers. The contractual protections ensure OpenAI handles that data according to its commitments, but the data still leaves your environment. If the risk you are managing is data leaving your environment at all, Enterprise alone does not address it.
  • Employees using other versions. An employee with a ChatGPT Enterprise account can still use the free version on a personal device. Enterprise provides no mechanism to prevent this, and most organizations lack the visibility to detect it without additional tooling.
  • All AI tools. Employees who use ChatGPT Enterprise may also use Claude, Gemini, Copilot, or specialized AI tools for specific tasks. Enterprise protections apply only to ChatGPT. A security strategy that focuses exclusively on ChatGPT Enterprise leaves other AI data flows unaddressed.

The Gap Between Enterprise Controls and Actual Security

Think of ChatGPT Enterprise like a corporate credit card policy. The company issues approved cards with spending limits, receipt requirements, and fraud monitoring. But employees still have personal credit cards, and nothing stops them from putting a business expense on a personal card when the corporate card is inconvenient. The policy covers the corporate card. It does not control the personal ones.

The security gap is similar. ChatGPT Enterprise provides controls for the approved channel. But the data protection problem exists across every channel where employees interact with AI - approved or not, enterprise or free, ChatGPT or any other tool.

Closing this gap requires controls that operate at the data layer rather than the platform layer. Instead of trying to control which tools employees use - a strategy that fails reliably, as shadow AI demonstrates - data-layer controls protect the sensitive information itself before it reaches any AI tool.

This is where prompt-layer protection becomes essential. When sensitive data is detected and masked before it reaches the AI model, the protection applies regardless of which AI tool the employee uses, which tier they are on, which device they are using, or whether they followed the policy. The data is protected because the controls operate on the data itself, not on the tool.

Solutions like Secured AI work at this prompt layer, detecting sensitive data in real time and replacing it with context-preserving tokens before it reaches ChatGPT, Claude, Gemini, or any other AI model. The AI processes the masked data and generates a useful response. The original sensitive data never leaves the organization's controlled environment. When authorized users need the real data, Reveal Technology restores it with a full audit trail. This approach complements ChatGPT Enterprise's platform controls with data-layer protection that closes the gaps Enterprise cannot address on its own.

How to Build a ChatGPT Security Strategy

An effective ChatGPT security strategy addresses both the platform layer and the data layer. Here is how to build one.

Start with visibility

Before configuring controls, you need to know the current state. Which ChatGPT versions are in use across your organization? How many employees have Enterprise accounts versus personal accounts? What data categories are flowing into ChatGPT conversations? Which departments have the highest usage and the most sensitive data exposure? Network monitoring, SaaS management tools, and employee surveys can answer these questions.

Deploy the Enterprise tier where it fits

For organizations where ChatGPT is a primary AI tool, Enterprise provides meaningful platform-level protections. SSO integration, admin controls, and contractual commitments around data handling are valuable. Deploy Enterprise to the teams that need it, configure the admin controls, and document the protections it provides.

Add data-layer protection

Deploy detection and masking at the prompt layer to protect sensitive data before it reaches any AI model. This covers the gaps that Enterprise cannot address: personal accounts, other AI tools, mobile devices, and any scenario where an employee interacts with AI outside the controlled Enterprise channel.

Establish monitoring and reporting

Continuous monitoring of AI data flows provides the visibility you need to maintain security over time. Which data types are being detected and masked most frequently? Which teams generate the most sensitive AI interactions? Are there patterns that suggest policy gaps or training needs? This monitoring data also provides the compliance evidence that regulatory frameworks require.

Review and adapt quarterly

AI tools change rapidly. OpenAI updates ChatGPT's features, pricing, and data handling policies regularly. New AI tools emerge. Employee usage patterns evolve. A security strategy built for ChatGPT today needs quarterly review to remain effective.

ChatGPT Security Settings You Should Configure Today

If your organization uses ChatGPT, here are the settings to review and configure immediately, organized by tier.

Free and Plus accounts

For free and Plus accounts that your organization cannot eliminate but can influence, advise employees to disable model training by navigating to Settings, then Data Controls, then toggling off "Improve the model for everyone." This prevents their conversations from being used for model training. Also advise disabling chat history for sensitive conversations - though this requires employee discipline and does not prevent data from being transmitted to OpenAI during the conversation itself.

Team workspaces

For Team workspaces, confirm that the workspace admin has reviewed data handling defaults. Verify that model training is disabled at the workspace level. Review the member list regularly to ensure only current employees have access. Establish clear guidelines for what data categories are appropriate for Team conversations.

Enterprise deployments

For Enterprise deployments, enable SSO and connect it to your identity provider. Configure domain verification to prevent unauthorized access. Set data retention policies that align with your organization's requirements. Review the admin console regularly for usage patterns. Document the protections in place for compliance evidence.

The one control that applies across every tier

Across all tiers, the most impactful security measure is data-layer protection that detects and masks sensitive information before it reaches OpenAI's servers. No configuration setting eliminates the fundamental risk that data entered into ChatGPT is transmitted to and processed by external infrastructure. Data-layer masking addresses this risk at the source.

The Regulatory Dimension of ChatGPT Security

Using ChatGPT in a business context creates specific regulatory obligations that organizations must address.

GDPR

Under GDPR, sending personal data to OpenAI constitutes data processing. Organizations need a lawful basis for this processing under Article 6, must conduct a Data Protection Impact Assessment if the processing is high-risk under Article 35, and must ensure that OpenAI's Data Processing Addendum meets the requirements of Article 28. Italy temporarily banned ChatGPT in 2023 over GDPR concerns, signaling regulatory willingness to take enforcement action on AI data flows.

HIPAA

Under HIPAA, any protected health information sent to ChatGPT requires a Business Associate Agreement with OpenAI. Without a BAA, sending PHI to ChatGPT constitutes an unauthorized disclosure. OpenAI does not currently offer a BAA for ChatGPT, which means sending identifiable patient data to any ChatGPT tier is a compliance risk for covered entities. Data masking that removes PHI identifiers before data reaches ChatGPT reduces this exposure.

CCPA

Under CCPA, California residents' personal information sent to ChatGPT may require disclosure under the right-to-know provisions. If OpenAI is considered a service provider under CCPA, additional contractual requirements apply.

EU AI Act

Under the EU AI Act, organizations deploying ChatGPT for specific use cases may face transparency and risk management obligations depending on the risk classification of their use case.

The common thread: using ChatGPT with sensitive data creates regulatory obligations that are significantly easier to manage when the sensitive data is masked before it reaches OpenAI. Fewer regulated data flows mean fewer compliance requirements, fewer vendor agreements, and fewer potential violations.

Frequently Asked Questions

Is ChatGPT secure for business use?
It depends on which version you use and how you configure it. ChatGPT Enterprise provides the strongest business protections, including no model training on your data, SSO, admin controls, and SOC 2 Type 2 compliance. However, even Enterprise transmits data to OpenAI's servers for processing, and it cannot prevent employees from using less secure versions on personal devices. A complete security approach combines platform controls with data-layer protection.
Does ChatGPT use my data to train its models?
On the free and Plus tiers, conversations may be used for model training unless the user manually opts out. ChatGPT Team and Enterprise do not use business conversations for training by default. However, model training is only one data risk - the data is still processed by OpenAI's infrastructure regardless of the training setting.
What data should I never put into ChatGPT?
Without data-layer protection in place, avoid entering personally identifiable information such as names, Social Security numbers, and addresses. Avoid protected health information including diagnoses, medications, and patient identifiers. Avoid financial account numbers, passwords, API keys, and attorney-client privileged communications. With context-preserving masking in place, these data types are detected and replaced with tokens before reaching ChatGPT.
How is ChatGPT Enterprise different from ChatGPT Plus?
Enterprise provides no model training on business data by default, SSO integration, admin console, domain verification, data retention controls, and SOC 2 Type 2 compliance. Plus is an individual subscription with the same model capabilities but weaker data handling defaults, no admin controls, and no organizational visibility. The security difference is significant for business use.
Can ChatGPT comply with HIPAA?
OpenAI does not currently offer a Business Associate Agreement for ChatGPT, which is a prerequisite for HIPAA-compliant processing of protected health information. Organizations that need to use AI with PHI should either avoid sending identifiable patient data to ChatGPT or use data masking that removes PHI identifiers before the data reaches OpenAI's servers.
What is the biggest ChatGPT security risk for enterprises?
The biggest risk is version fragmentation - employees using free or Plus accounts alongside or instead of the enterprise deployment. These shadow accounts have weaker data protections and zero organizational visibility. The data exposure from one employee pasting customer records into a free ChatGPT account can undermine the entire enterprise security posture.

Protect sensitive data before it reaches ChatGPT

Secured AI detects and masks sensitive data at the prompt layer - before it ever reaches ChatGPT, Claude, Gemini, or any other model. Platform controls cover the approved channel. Data-layer protection covers everything else.