Hooks (OpenAI Codex counterpart)

What "hooks" mean in the OpenAI stack

OpenAI's Agents SDK exposes a stream of lifecycle events during a run (e.g., response.created, response.completed, response.output_text.delta, response.error). You can subscribe to these events and execute your own code when they fire—functionally a "hook."
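The subscribe-and-react pattern can be sketched without the SDK at all. In this minimal sketch, the `StreamEvent` dataclass and the hand-built event list simulate what a real stream would deliver (real SDK events carry a `type` string like the ones shown); the `on`/`dispatch` helpers are illustrative, not part of any OpenAI API:

```python
from dataclasses import dataclass

# Stand-in for an SDK stream event; real Responses API stream events
# carry a `type` string like the ones used below.
@dataclass
class StreamEvent:
    type: str
    data: str = ""

# Register a handler per event type -- this registry is the "hook".
handlers = {}

def on(event_type):
    def register(fn):
        handlers[event_type] = fn
        return fn
    return register

@on("response.created")
def started(event):
    return "run started"

@on("response.output_text.delta")
def collect(event):
    return f"delta: {event.data}"

def dispatch(events):
    """Run the registered handler for each event; ignore unknown types."""
    results = []
    for ev in events:
        fn = handlers.get(ev.type)
        if fn:
            results.append(fn(ev))
    return results

# Simulated stream in place of a live run:
log = dispatch([
    StreamEvent("response.created"),
    StreamEvent("response.output_text.delta", "Hello"),
    StreamEvent("response.completed"),
])
```

In a real run you would iterate the SDK's event stream and call `dispatch` (or match on `event.type` directly) instead of feeding it a hand-built list.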

Outside the model runtime, deterministic pre/post actions are typically implemented with Git hooks or GitHub Actions that trigger on repo events (push, PR, tag, schedule). Those are the standard, supported "hooks" around deployments.

There is no first-party "pre-tool/post-tool" hook feature in OpenAI Codex/ChatGPT today; you wire your own logic to model event streams and CI/CD triggers. (Agents SDK events are the closest match.)

Real-world pattern: pre-/post-deploy checks

Goal: run validations whenever your site deploys and annotate the PR with model feedback.

Pre-deploy: use a GitHub Actions workflow that runs on push/workflow_dispatch, builds the site, lints JSON/links, and—optionally—calls an OpenAI agent to summarize issues. Actions are triggered by repo events.
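A minimal workflow along these lines might look as follows. This is a sketch: the job and step names are illustrative, and the build and lint commands are placeholders you would replace with your own:

```yaml
name: pre-deploy-checks
on:
  push:
    branches: [main]
  workflow_dispatch:

jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build site
        run: npm ci && npm run build   # replace with your build command
      - name: Lint JSON
        run: |
          find . -name '*.json' -not -path './node_modules/*' \
            -exec python -m json.tool {} \; > /dev/null
```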

During analysis: stream agent events and react in code (e.g., start a timer on response.created, collect deltas on response.output_text.delta, fail the job on response.error).
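That reaction logic might be sketched like this. The `(type, payload)` tuples simulate a stream; in a real job you would iterate the SDK's event stream and match on each event's type instead:

```python
import time

def analyze(events):
    """React to a stream of (type, payload) tuples the way a CI job might."""
    started_at = None
    chunks = []
    for etype, payload in events:
        if etype == "response.created":
            started_at = time.monotonic()   # start the timer
        elif etype == "response.output_text.delta":
            chunks.append(payload)          # collect streamed text
        elif etype == "response.error":
            raise RuntimeError(f"agent error: {payload}")  # fail the job
        elif etype == "response.completed":
            elapsed = time.monotonic() - started_at
            return "".join(chunks), elapsed
    raise RuntimeError("stream ended without response.completed")

summary, seconds = analyze([
    ("response.created", None),
    ("response.output_text.delta", "All links "),
    ("response.output_text.delta", "resolve."),
    ("response.completed", None),
])
```

Raising on `response.error` (rather than logging and continuing) is what makes the CI job fail fast.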

Post-deploy: a second job posts the agent's summary back to the PR.
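A sketch of that second job, assuming the first job uploaded the agent's summary as an artifact named `summary` containing `summary.md` (names illustrative):

```yaml
  comment:
    needs: validate
    if: github.event_name == 'pull_request'
    runs-on: ubuntu-latest
    permissions:
      pull-requests: write
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: summary
      - name: Post agent summary
        env:
          GH_TOKEN: ${{ github.token }}
        run: >
          gh pr comment ${{ github.event.pull_request.number }}
          -R ${{ github.repository }} --body-file summary.md
```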

This reproduces "hooks" deterministically with supported triggers + agent event callbacks.

Scope precisely (why it matters)

If you subscribe to every event or trigger Actions on every push, your pipeline slows down. Prefer:

  • Narrow GitHub events (e.g., only push to main or pull_request for paths: docs/**).
  • Filter agent events you actually need (e.g., ignore token-delta spam unless you stream a live log).

Where to "hook" in OpenAI

Run/Response lifecycle: subscribe to response.* and tool.* stream events and act when they occur (start timers, write logs, gate merges, etc.).

Tool calls (function calling): your server code is the "hook"—OpenAI emits a tool_call and your function executes with the provided arguments. (That's the supported way to run side-effects.)
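A sketch of that server-side "hook". The tool name, the `DeployArgs` dataclass, and the hand-written arguments string are illustrative; in a real app the name and JSON arguments arrive in the model's tool call, and your dispatch table maps them to typed handlers:

```python
import json
from dataclasses import dataclass

@dataclass
class DeployArgs:
    environment: str
    dry_run: bool = True

def run_deploy(args: DeployArgs) -> str:
    # The real side effect would go here; the typed dataclass is the boundary.
    mode = "dry-run" if args.dry_run else "live"
    return f"deploy to {args.environment} ({mode})"

# Dispatch table: tool name -> (argument type, handler)
TOOLS = {"run_deploy": (DeployArgs, run_deploy)}

def handle_tool_call(name: str, arguments: str) -> str:
    """Deserialize model-provided JSON into typed arguments, then execute."""
    arg_type, fn = TOOLS[name]
    args = arg_type(**json.loads(arguments))   # raises on unexpected fields
    return fn(args)

# Simulated tool call, shaped like the name/arguments pair the API emits:
result = handle_tool_call("run_deploy", '{"environment": "staging"}')
```

Because the dataclass constructor rejects unknown fields, malformed or adversarial arguments fail before any side effect runs.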

Outside the agent: use Git hooks (pre-commit, pre-push) or GitHub Actions for pre/post steps around builds and releases.

Good practices

  • Deterministic triggers: prefer repo events & explicit API calls over trying to infer intent from free-form prompts.
  • Lightweight streaming: only process the event types you need (e.g., response.completed, response.error).
  • Fail fast: treat response.error/tool errors as hard failures in CI.
  • Idempotency: make tool functions safe to retry—streaming clients may reconnect.
  • Security: never run shell commands from model output without allow-lists/argument validation (function calling should deserialize into typed arguments).
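The security point above can be sketched as a small allow-list gate (the allowed commands and subcommands are illustrative; tighten them for your environment):

```python
import subprocess

ALLOWED_COMMANDS = {"ls", "git"}               # illustrative allow-list
ALLOWED_GIT_SUBCOMMANDS = {"status", "log"}

def run_safely(command: str, args: list[str]) -> str:
    """Refuse anything outside the allow-list instead of trusting model output."""
    if command not in ALLOWED_COMMANDS:
        raise PermissionError(f"command not allowed: {command}")
    if command == "git" and (not args or args[0] not in ALLOWED_GIT_SUBCOMMANDS):
        raise PermissionError(f"git subcommand not allowed: {args[:1]}")
    # Pass an argument list (no shell=True), so nothing is shell-interpreted.
    out = subprocess.run([command, *args], capture_output=True, text=True, check=True)
    return out.stdout
```

Passing an argument list rather than a shell string means even an allow-listed command cannot smuggle in pipes, redirects, or substitutions.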

TL;DR (mapping to Claude Code "Hooks")

Claude "PreToolUse/PostToolUse" → subscribe to Agents SDK event stream and run code on response.* (closest supported analogue).

"Deploy hooks" → use Git hooks/GitHub Actions as your deterministic pre/post steps.

Primary sources:

  • OpenAI Agents SDK docs: Streaming events & Lifecycle (how to subscribe and what events exist).
  • Git hooks (official Git docs) and GitHub Actions triggers (official GitHub docs).