GPT-5.2 Codex is available through AnyFast via the OpenAI Responses API (/v1/responses).

Key capabilities

  • Responses API — Uses the newer /v1/responses endpoint, which takes an input array instead of the Chat Completions messages array
  • Reasoning control — Configure reasoning effort (minimal / low / medium / high) and summary
  • Verbosity control — Set response verbosity to low, medium, or high
  • Streaming — Supports real-time token streaming via SSE
  • Tool use — Supports function calling via the tools parameter

Quick example

curl https://www.anyfast.ai/v1/responses \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5.2-codex",
    "input": [
      { "role": "user", "content": "Explain quantum entanglement in simple terms." }
    ],
    "reasoning": {
      "effort": "high",
      "summary": "auto"
    },
    "text": {
      "format": { "type": "text" },
      "verbosity": "medium"
    },
    "store": true
  }'
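The same request can be assembled in Python using only the standard library. This is a minimal sketch of the curl example above; it builds the request without sending it, and omits error handling and retries:

```python
import json
import urllib.request

# Same payload as the curl example above.
payload = {
    "model": "gpt-5.2-codex",
    "input": [
        {"role": "user", "content": "Explain quantum entanglement in simple terms."}
    ],
    "reasoning": {"effort": "high", "summary": "auto"},
    "text": {"format": {"type": "text"}, "verbosity": "medium"},
    "store": True,
}

def build_request(api_key: str) -> urllib.request.Request:
    """Build (but do not send) the POST request for /v1/responses."""
    return urllib.request.Request(
        "https://www.anyfast.ai/v1/responses",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To actually send the request:
# with urllib.request.urlopen(build_request("YOUR_API_KEY")) as resp:
#     print(json.load(resp))
```

The send step is left commented out so the snippet can be run without an API key.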

Parameters

Parameter          Type            Required  Description
model              string          Yes       Must be gpt-5.2-codex
input              array           Yes       List of { role, content } objects
stream             boolean         No        Enable SSE streaming. Default: false
temperature        float           No        0–2. Controls randomness. Default: 1
top_p              float           No        Nucleus sampling threshold. Default: 1
max_output_tokens  integer         No        Maximum number of output tokens to generate
stop               string / array  No        Sequences that stop generation
reasoning          object          No        Controls reasoning depth via { effort, summary }
text               object          No        Controls output format and verbosity via { format, verbosity }
tools              array           No        List of tools the model may call
store              boolean         No        Store the response for later retrieval. Default: true
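With stream set to true, the response arrives as server-sent events. The page does not list the event schema, so the sketch below only shows generic SSE framing: extracting each data: line and decoding its JSON payload. The event shape and the [DONE] sentinel are assumptions borrowed from OpenAI-style streaming APIs, not values confirmed here:

```python
import json

def parse_sse(raw: str):
    """Yield the JSON payload of each `data:` line in an SSE stream.

    Stops at a `data: [DONE]` sentinel. Whether AnyFast emits this
    sentinel is an assumption, not confirmed by this page.
    """
    for line in raw.splitlines():
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines and other SSE fields
        data = line[len("data:"):].strip()
        if data == "[DONE]":
            return
        yield json.loads(data)

# Hypothetical stream fragment for illustration:
sample = (
    'data: {"type": "response.output_text.delta", "delta": "Hello"}\n'
    "\n"
    "data: [DONE]\n"
)
for event in parse_sse(sample):
    print(event["delta"])
```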

API Reference

View the interactive API playground for GPT-5.2 Codex.