# Configuration (OpenAI Codex)
Complete Codex CLI configuration: API auth, model selection, MCP servers, approvals/permissions, and sane multi-directory patterns. Everything below is cross-checked against OpenAI's docs and the official repo.
## API Authentication
Codex supports two official auth paths:
**Option A — Sign in with ChatGPT (recommended).** Run `codex` and choose **Sign in with ChatGPT**. This works with Plus/Pro/Team/Edu/Enterprise plans and is the documented default.
**Option B — API key (Developer Platform).** Set the `OPENAI_API_KEY` environment variable and start Codex:
```bash
# macOS/Linux/WSL
export OPENAI_API_KEY="sk-...your-key..."
codex
```
The CLI page notes that API-key use is supported (with extra setup) and can be combined with `--model` for older models.
## Model Selection
By default, Codex runs GPT-5. For agentic coding, OpenAI recommends GPT-5-Codex; switch via the slash command, or pass a flag at launch.
**Switch inside a session:**

```
/model gpt-5-codex
```
**Pick at startup:**

```bash
codex --model gpt-5-codex
# example of using a smaller API model when on API-key auth:
codex --model o4-mini
```
Model guidance, defaults, and the `/model` command are documented on the Codex CLI page; the model catalog lives on the Models page.
## Approvals & Permissions (the safety rails)
Codex can read files, edit code, and run commands. Approval modes gate what's automatic vs. what needs your consent:
- **Auto (default):** reads, edits, and runs commands inside the working directory automatically; asks before going outside it or using the network.
- **Read Only:** plan and inspect only, with no writes or command execution.
- **Full Access:** broad permissions (use intentionally).

Switch modes anytime with the slash command:

```
/approvals
```
This behavior and the exact defaults are documented on the CLI page.
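If you want a default mode persisted rather than chosen per session, `config.toml` can carry it; a minimal sketch, assuming your Codex version supports the `approval_policy` and `sandbox_mode` keys described in the repo's config docs:

```toml
# ~/.codex/config.toml — persisted safety defaults
# (assumption: key names and values follow the repo's config docs;
# verify against your Codex version)
approval_policy = "on-request"      # ask before privileged actions
sandbox_mode    = "workspace-write" # allow writes inside the workspace only
```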
Tip: for CI-like runs, use `codex exec "..."` to run non-interactively (useful for scripted checks).
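A minimal sketch of such a scripted check (the prompt text is illustrative; substitute your own task):

```bash
# One-shot, non-interactive run — suitable for a CI step or a cron job.
codex exec "Run the test suite and summarize any failures"
```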
## Where configuration lives
Codex persists preferences in `~/.codex/config.toml`.
The CLI docs and repo point here for advanced options (models, MCP, sandbox/approval behavior, etc.). ([GitHub][2])
You can also override specific keys ad hoc (useful for scripts), e.g.:
```bash
codex -c model='"gpt-5-codex"'
```
(The `-c` override pattern is discussed in issues and doc comments in the repo.) ([GitHub][3])
## MCP (Model Context Protocol) Servers
Codex supports MCP stdio servers to add tools (filesystem, fetchers, security scanners, etc.). Enable them by adding an `mcp_servers` section to `~/.codex/config.toml`. ([GitHub][2])
Example (`~/.codex/config.toml`):
```toml
[mcp_servers.filesystem]
command = "npx"
args = ["-y", "@modelcontextprotocol/server-filesystem", "/Users/you/Desktop", "/path/to/allowed/dir"]

[mcp_servers.memory]
command = "npx"
args = ["-y", "@modelcontextprotocol/server-memory"]

[mcp_servers.fetch]
command = "npx"
args = ["-y", "@modelcontextprotocol/server-fetch"]
```
Once configured, restart Codex to pick up the new tools. Each MCP server adds specific capabilities (file operations, web requests, etc.) that Codex can invoke as needed.
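Newer CLI builds also expose an `mcp` subcommand for managing servers from the shell; whether your build includes it is an assumption worth verifying:

```bash
# List the MCP servers Codex has loaded
# (assumption: the `mcp` subcommand exists in your CLI version — check `codex --help`)
codex mcp list
```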
## Project-Level Configuration
### AGENTS.md
OpenAI officially supports `AGENTS.md` as a "README for agents": project-level guidance that steers Codex behavior. Place it in your repo root:
```markdown
# Project: MyApp

## Rules
- Use TypeScript strict mode
- Run `npm test` before committing
- Follow existing naming patterns

## Architecture
- /src/components/ for React components
- /src/utils/ for shared utilities
- /src/types/ for TypeScript definitions

## Forbidden
- No `any` types without explicit justification
- No direct DOM manipulation in components
```
### Directory-Specific Rules
Create `AGENTS.md` files in subdirectories for specialized rules; deeper files override shallower ones:
```
project/
├── AGENTS.md          # Global rules
├── src/
│   └── AGENTS.md      # Source-specific rules
└── tests/
    └── AGENTS.md      # Test-specific rules
```
## Multi-Directory Workflows
For teams working across multiple projects, create a consistent setup:
- **Global config:** `~/.codex/config.toml` for your personal MCP servers and preferred models
- **Per-project:** `AGENTS.md` in each repo for project-specific rules
- **Shared agents:** version-control custom agents in `.codex/agents/` directories
This pattern scales from individual workflows to team-wide standardization.
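As a concrete starting point, here is a minimal global-config sketch combining the pieces above; the model name and allowed directory are illustrative, so adjust them to your setup:

```toml
# ~/.codex/config.toml — personal defaults shared across projects
model = "gpt-5-codex"  # preferred model (illustrative)

# Personal MCP tooling available in every project
[mcp_servers.filesystem]
command = "npx"
args = ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/dir"]
```

Per-project rules then live in each repo's `AGENTS.md`, keeping the global file small and stable.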