# Context Inspection (Codex CLI)
**What it is.** In Codex CLI, the `/context` slash-command (recently added in the open-source repo) prints a breakdown of what's in your current context and how much headroom you have left. It's the quickest way to see whether tokens are going to instructions, tools, or history, so you can trim the right thing instead of guessing. ([GitHub][1])
## What you'll see
`/context` reports a per-component view of context usage (tokens/percent) along these lines:
- System prompt — core operating instructions
- Built-ins / system tools — Codex's own helper scaffolding
- MCP tools — Model Context Protocol servers you've attached
- Instruction files — e.g. your AGENTS.md / project guidance
- History — running conversation/messages
This mirrors the granularity requested and implemented in the repo (previously `/status` only showed totals). Use it to spot the big spenders fast. ([GitHub][2])
## How to use it
Inside an interactive Codex session:
/context
If you're planning before execution, stay in Read Only via `/approvals` so Codex analyzes without writing or running anything, and measure the footprint while you iterate. The CLI docs describe approval modes and switching via `/approvals`. ([OpenAI Developer][3])
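Putting the two slash-commands above together, a planning loop might look like the sketch below (comments are annotations, not CLI input, and output is omitted):

```text
/approvals    # pick Read Only so Codex analyzes without writing or running anything
/context      # see which component is the big spender
# trim that component (instructions, tools, or history), then run /context again to compare
```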
## Strategic context engineering (what to optimize first)
**Instruction budget (AGENTS.md).** Keep rules crisp; move long examples to separate files and link only what's required. Codex officially surfaces AGENTS.md as the place for agent guidance: use it, but keep it lean. ([OpenAI Developer][3])
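As a sketch of what "lean" can mean, an AGENTS.md might be little more than the following; the sections, rules, and linked file are illustrative, not a required format:

```markdown
# Agent guidance (keep this file short)

## Conventions
- Run the test suite before proposing changes.
- Never modify files under `vendor/`.

## More detail (read only when relevant)
- Longer examples live in `docs/agent-examples.md`.
```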
**MCP servers.** Each attached server can add overhead. Only mount what you need in your Codex configuration (`~/.codex/config.toml`).
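For illustration, mounting a single MCP server in `~/.codex/config.toml` looks roughly like the sketch below; the server name, command, and environment values are placeholders, and the exact keys may differ by CLI version, so check the configuration docs:

```toml
# One server, mounted only because it's actually needed.
[mcp_servers.docs-search]              # placeholder server name
command = "npx"
args = ["-y", "some-mcp-server"]       # placeholder package
env = { "API_KEY" = "YOUR_KEY_HERE" }  # placeholder secret
```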