Is OpenAI Codex Safe for Freelance Devs Handling Client Code?

Transparency Notice: This article contains affiliate links. If you purchase through these links, we may earn a small commission at no extra cost to you. We only recommend products we genuinely believe in. Read our full disclosure.

When a freelance developer pipes a client’s proprietary repository through an AI coding agent, two questions become very practical very fast: where does that code go, and who else gets to see it? OpenAI Codex sits at the center of those questions because it has become the default agent for many indie devs and small consultancies after OpenAI’s May 2026 push to position it as a safe, audit-friendly coding partner. The pitch is sandboxes, approvals, and agent-native telemetry. The reality, for a one-person shop billing a client by the hour, is more layered. Free tier, paid Plus, business workspace, API key with Zero Data Retention — those are four very different privacy postures, and most freelancers don’t realize they’ve defaulted into the riskiest one. Verdict preview: use with caution.

What OpenAI Codex does with your data

Codex ships in two execution modes that matter for a freelancer: local CLI and cloud tasks. The data trail is different in each.

In local CLI mode, source code stays on the developer’s machine. When Codex needs to call an OpenAI model, it sends only the specific context and prompts the requested action requires — not the full repository tree (per coverage at milvus.io, retrieved 2026-05-09, paraphrasing OpenAI Codex documentation). The Codex client also pings OpenAI periodically with a small amount of anonymous usage and health data, described as “client analytics” in OpenAI’s own openai/codex GitHub Discussions thread #8291 on github.com (retrieved 2026-05-09).

In cloud-task mode, each task runs in a separate ephemeral cloud environment preloaded with the user’s repository, where the agent reads, edits, and runs code before returning results (per the OpenAI announcement “Running Codex safely at OpenAI”, abstract retrieved from the OpenAI RSS feed 2026-05-09 — the announcement page itself was inaccessible to direct retrieval at the time of this review due to a Cloudflare challenge). That means in cloud mode the full working repo is staged on OpenAI infrastructure for the duration of the task.

On training data, the consumer-versus-business split is the single most important fact. Per OpenAI’s Help Center article on how data is used to improve model performance, hosted at help.openai.com (retrieved 2026-05-09), consumer-tier conversations and Codex tasks may be used to train OpenAI models by default, and users can opt out through the privacy portal. Per coverage of OpenAI’s enterprise privacy posture by drainpipe.io (2026 “AI Privacy Trap” review, retrieved 2026-05-09), business-tier inputs and outputs (ChatGPT Team, Enterprise, Edu, the API, and business Codex usage) are not used for training by default, and Zero Data Retention is available to qualifying enterprise API customers.

Retention follows the same baseline split. Deleted ChatGPT and Codex consumer conversations are removed from OpenAI systems within 30 days of deletion. Workspace admins on Enterprise and Edu set retention windows for their workspaces. There is one live exception worth knowing about: the New York Times v. OpenAI litigation has at times required OpenAI to preserve deleted user data beyond the standard 30-day window under court order, per OpenAI’s own “Response to NYT Data Demands” post (retrieved 2026-05-09 via secondary citation).

For agent telemetry — separate from the anonymous client analytics ping — Codex offers OpenTelemetry export of every action the agent takes: code suggestions, file writes, tool approvals, MCP server calls, and network proxy events. Per coverage at artificialintelligenceherald.com of OpenAI’s security architecture announcement (retrieved 2026-05-09), this telemetry is off by default and must be opted into. Sandboxing uses platform-native enforcement (macOS Seatbelt, Linux/WSL2 namespaces, native Windows equivalents), per a deep-dive at medium.com by Micheal Lanham, “OpenAI Codex 2025: Inside the Sandbox That Keeps Your Code Safe” (retrieved 2026-05-09), and an approval policy gates sensitive operations behind explicit user confirmation.
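The cited coverage describes the telemetry opt-in but does not publish the exact configuration keys. As a sketch, the shape would be something like the following in Codex CLI's `config.toml` — the `[otel]` section and its key names are illustrative assumptions, not confirmed settings, so check the official Codex documentation before relying on them:

```toml
# ~/.codex/config.toml — sketch only; verify key names against current Codex docs.

# Gate sensitive operations behind explicit confirmation and constrain
# file-system access to the working tree.
approval_policy = "on-request"
sandbox_mode    = "workspace-write"

# Hypothetical telemetry section (assumed names): OTLP export of agent
# actions, pointed at a collector you control. Off by default.
[otel]
enabled  = true
endpoint = "http://localhost:4317"
```

The design point stands regardless of exact key names: the audit stream goes to an endpoint the operator chooses, and it stays off unless explicitly enabled.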

What this means for solo freelancers

The consumer-tier default is the trap. If you signed up for ChatGPT Plus or Pro and you use Codex on a client’s repository through that account, you are on the consumer side of the line. By default, your prompts and the code excerpts Codex sends to the model can be used to train OpenAI’s next model — unless you have explicitly toggled training off in the privacy portal. That toggle is a per-account switch, not a per-project one. Most solo devs never visit the privacy portal after signup.

Three concrete risk scenarios, framed for a one-person consultancy:

A debugging session on a client’s payment processor integration. You paste the failing function plus the surrounding context into Codex CLI. Even in local mode, the relevant prompt and code excerpt go to OpenAI’s servers. On a consumer account with training on, that excerpt — including any proprietary logic, internal API patterns, or commented-out keys — enters a corpus that may train the next model. Based on the policy as written, this carries a real client-IP exposure risk. It is not a generic “AI is risky” warning; it is a specific consequence of leaving the consumer training default in place.

A cloud-task workflow on a private NDA repository. You connect Codex cloud-mode to a client’s GitHub repo for an end-to-end refactor. The full working tree is staged on OpenAI infrastructure for the duration of the task. If your contract includes a “no third-party processing of source code” clause — common in finance, healthcare, and government subcontracting — running this task on a personal Plus account likely violates that clause, even if OpenAI’s cloud environment is well isolated. The risk is contractual, not technical.

A GDPR data-controller mismatch. Your client is the data controller for any personal data their codebase processes. If you, as a freelancer, route their code (which may reference customer schemas, log formats, or fixture data) through OpenAI on a consumer account, OpenAI is acting as a sub-processor that your client may not have authorized. Based on the policy as written, the consumer tier does not offer a Data Processing Addendum, which is the standard mechanism for adding a sub-processor to a controller’s chain. The business tier does. Solo freelancers using consumer-tier Codex on EU clients’ code carry a specific GDPR exposure risk that is hard to remediate after the fact.

How to use it safely

Five concrete steps that materially change the privacy posture, in order of cost and effort.

First, opt out of training. Sign in at the privacy portal linked from your account settings, find the “do not train on my content” control, and confirm that training is disabled for both ChatGPT conversations and Codex tasks. Per OpenAI Help Center coverage retrieved 2026-05-09, this stops new conversations from being used in training. It does not retroactively remove past conversations from training corpora — that is a separate, harder problem.

Second, switch to Temporary Chat for any session that touches client code. Temporary Chat is exempt from training by design and does not appear in chat history or memory. The trade-off is no continuity between sessions, which matters less for code review than for long research threads.

Third, move your client work to a paid business workspace. ChatGPT Team starts at low double digits per seat per month. A business workspace defaults to no training and gives you admin-controlled retention. For a freelancer who bills above the cost of one seat per month, the payback is immediate — and the contractual story you can tell a client (“your code is processed under OpenAI’s business terms, no-training default, with a DPA”) is materially stronger than the consumer story.

Fourth, for cloud-task mode, either use a business workspace or do not use it on client repos at all. The full-repo staging behavior is the differentiating factor; consumer-tier cloud tasks carry the same training-corpus risk as consumer chats but with a much larger payload.

Fifth, enable agent telemetry only when you need an audit trail. The OpenTelemetry export is useful for compliance work but introduces a separate observability surface. Default-off is the correct default for most freelancers; turn it on only when a client contract requires audit logs.

Privacy-friendlier alternatives

For the freelance dev who reads the trade-offs above and decides Codex’s consumer tier is not for them, the alternative stack depends on the use case.

For pure local AI assistance on client code, Cursor has a “Privacy Mode” that disables both training and prompt retention, set per-workspace. Cursor still routes prompts to model providers (OpenAI or Anthropic), but it does not retain them server-side and the providers in question have no-training enterprise terms. Pricing sits in the same band as ChatGPT Plus. Target user: devs who want a Codex-equivalent surface without the consumer-default training risk.

For a setup that keeps client code entirely off third-party servers, run a local model. Ollama plus a code-tuned model (Qwen 2.5 Coder, DeepSeek Coder) gives reasonable performance for refactor and review tasks on a modern laptop. No training, no telemetry, no retention, because there is no server-side anything. The trade-off is quality: local models trail frontier models by a real margin on hard tasks, and you carry the hardware cost. Target user: devs whose contracts require zero third-party processing and who can accept slower iteration.
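The local setup above is essentially two commands. A minimal sketch, assuming Ollama is installed and the model tag is current (`qwen2.5-coder` is a real tag in the Ollama library as of this writing, but check `ollama list` and the library for exact names; `handler.py` is a stand-in for whatever client file you want reviewed):

```shell
#!/bin/sh
# Fully local code review loop: prompts and code never leave the machine.
# Falls through cleanly when Ollama is not installed.
if command -v ollama >/dev/null 2>&1; then
  # One-time model download (several GB), then a review prompt over a file.
  ollama pull qwen2.5-coder
  ollama run qwen2.5-coder "Review this function for bugs and edge cases:
$(cat handler.py)"
else
  echo "ollama not found; install it from https://ollama.com"
fi
```

There is no API key and no network call to a model provider anywhere in that loop, which is exactly what a “no third-party processing” clause asks for.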

For password and credential storage that doesn’t end up pasted into a Codex session by accident, 1Password Business (roughly eight dollars per seat per month) plus the 1Password CLI’s `op run` pattern injects secrets into your shell at runtime without ever putting them in plaintext on disk or in your clipboard buffer. Bitwarden Teams Starter is the cheaper open-source equivalent at three dollars per seat per month and supports a similar secret-injection pattern via the `bw` CLI. Target user: every freelance dev, regardless of which AI tool they pick.
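The `op run` pattern in practice, as a sketch — it assumes the 1Password CLI is installed, and the vault, item, and field names (`Client-Vault`, `stripe`, `test_key`) are hypothetical placeholders you would adapt:

```shell
#!/bin/sh
# client.env holds op:// secret references, never plaintext values, e.g.:
#   STRIPE_KEY="op://Client-Vault/stripe/test_key"
# `op run` resolves the references and injects them as environment variables
# for the child process only; nothing lands on disk or in the clipboard.
if command -v op >/dev/null 2>&1; then
  op run --env-file=client.env -- npm test
else
  echo "1Password CLI (op) not installed; see developer.1password.com"
fi
```

Because the env file contains only references, it is safe to paste into a Codex prompt by accident — the agent sees `op://` paths, not credentials.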

For the network layer when you’re working from coffee shops or co-working spaces and your client’s repo is reachable only over a private network, a real VPN matters. NordVPN sits in the middle on price and is a known quantity for solo workers. Mullvad at five euros flat per month with no-account-needed signup is the privacy-maximalist choice. Either is better than no VPN on untrusted Wi-Fi; neither is a substitute for the training opt-out and tier choices above.

The verdict

Use with caution. OpenAI Codex on the business tier is a defensible choice for solo freelancers handling client code, with a no-training default, admin-controlled retention, optional Zero Data Retention on the API, and an audit-ready telemetry surface that is off by default. On the consumer tier, the default training posture and the lack of a Data Processing Addendum make it a poor fit for any work covered by an NDA, a “no third-party processing” clause, or GDPR controller-processor obligations. The technology is sound; the tier choice is what determines the privacy outcome. Pick deliberately, opt out of training, and move client work to a business workspace.

Frequently asked questions

Does OpenAI Codex train on my code by default?

On consumer accounts (ChatGPT Free, Plus, Pro), the default is yes — your prompts and Codex tasks may be used to train OpenAI models unless you opt out through the privacy portal. On business accounts (ChatGPT Team, Enterprise, Edu, and the API), the default is no, per OpenAI’s enterprise privacy posture as covered by drainpipe.io’s 2026 review (retrieved 2026-05-09). The per-account toggle in the privacy portal applies only to new conversations going forward; it does not retroactively remove past ones from training corpora.

Is OpenAI Codex safe for client code under an NDA?

Based on the policy as written, the consumer tier carries real exposure risk for NDA-covered code because the default training posture and the absence of a Data Processing Addendum make it hard to satisfy a “no third-party processing” clause. The business tier (Team, Enterprise, Edu) is materially safer because no-training is the default and a DPA is part of the standard business terms. Always check the specific NDA wording; some require enterprise-only agreements regardless of training defaults.

Can I use OpenAI Codex for HIPAA-covered data?

OpenAI offers a Business Associate Agreement (BAA) for qualifying enterprise customers, per Artificial Intelligence Herald coverage of OpenAI’s security architecture (retrieved 2026-05-09). A solo freelancer would need to be on an enterprise plan with a signed BAA before any HIPAA-covered code or data could touch Codex. Consumer-tier Codex is not a HIPAA-aligned option, and using it on protected health information would create exposure for both the freelancer and the client.

What happens to my Codex prompts if I delete them?

Per secondary coverage of OpenAI’s privacy policy retrieved 2026-05-09, deleted consumer conversations and Codex tasks are removed from OpenAI systems within 30 days. There is one live exception: the New York Times v. OpenAI litigation has at times required OpenAI to preserve deleted user data beyond the 30-day window under court order, per OpenAI’s own “Response to NYT Data Demands” post. This is in active dispute and may have moved by the time you read this.

Does Codex CLI send my code to OpenAI?

In local CLI mode, your full repository stays on your machine; Codex sends only the specific context the requested action needs, per Milvus AI Quick Reference coverage (retrieved 2026-05-09). The Codex client also sends a small amount of anonymous usage and health data (“client analytics”) periodically, per OpenAI’s openai/codex Discussions thread #8291. Cloud-task mode is different — there, the full working repo is staged on OpenAI infrastructure for the duration of the task.

Is the Codex telemetry stream a privacy risk?

The agent-native telemetry stream — separate from anonymous client analytics — is off by default and must be opted into via configuration, per Artificial Intelligence Herald coverage retrieved 2026-05-09. When enabled, it logs every agent action via OpenTelemetry export. For solo freelancers, the default-off posture means you’re not creating an extra observability surface unless you turn it on. Enable it only when a client contract requires audit logs.

Sources

  • OpenAI Privacy Policy, https://openai.com/policies/privacy-policy/ — primary URL, fetch returned HTTP 403 (Cloudflare anti-bot) on 2026-05-09; verified via secondary sources below.
  • OpenAI Help Center, “How your data is used to improve model performance”, https://help.openai.com/en/articles/5722486-how-your-data-is-used-to-improve-model-performance — retrieved 2026-05-09 via WebSearch.
  • drainpipe.io, “AI Data Privacy 2026: The AI Privacy Trap”, https://drainpipe.io/ai-data-privacy-2026-the-ai-privacy-trap/ — retrieved 2026-05-09.
  • Milvus AI Quick Reference, “Is Codex CLI secure, and how is my code or data handled during execution?”, https://milvus.io/ai-quick-reference/is-codex-cli-secure-and-how-is-my-code-or-data-handled-during-execution — retrieved 2026-05-09.
  • Artificial Intelligence Herald, “OpenAI Reveals Security Architecture for Codex: Sandboxing, Approvals, and Agent-Native Telemetry”, https://artificialintelligenceherald.com/news/openai-codex-security-architecture-sandboxing-telemetry-2026 — retrieved 2026-05-09.
  • Medium, Micheal Lanham, “OpenAI Codex 2025: Inside the Sandbox That Keeps Your Code Safe”, https://medium.com/@Micheal-Lanham/openai-codex-2025-inside-the-sandbox-that-keeps-your-code-safe-f8d88079a6b5 — retrieved 2026-05-09.
  • OpenAI GitHub, openai/codex Discussions #8291 (Codex Client Analytics), https://github.com/openai/codex/discussions/8291 — retrieved 2026-05-09.
  • OpenAI announcement, “Running Codex safely at OpenAI”, https://openai.com/index/running-codex-safely/ — page itself blocked by Cloudflare on 2026-05-09; abstract retrieved from OpenAI RSS feed openai.com/blog/rss.xml the same day.

Reviewed by Jérémy, founder of AidTaskPro and GreenBudgetHub. Based in central France. Privacy posture sourced from public policies and vendor documentation as of 2026-05-09.
