Create Response
POST /v1/responses
curl --request POST \
  --url https://www.anyfast.ai/v1/responses \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "model": "gpt-5.2",
  "input": [
    {
      "role": "user",
      "content": "Hello!"
    }
  ],
  "stream": false,
  "temperature": 1,
  "top_p": 0.5,
  "frequency_penalty": 0,
  "presence_penalty": 0,
  "max_output_tokens": 2,
  "stop": "<string>",
  "n": 1,
  "text": {
    "format": {
      "type": "text"
    },
    "verbosity": "low"
  },
  "reasoning": {
    "effort": "minimal",
    "summary": "auto"
  },
  "tools": [
    {}
  ],
  "tool_choice": "<string>",
  "store": true,
  "user": "<string>"
}
'
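The curl request above can be reproduced in Python. This is a sketch using only the standard library; the endpoint URL and model name are taken from this page, the token is a placeholder, and the exact body fields sent here are an illustrative subset of the documented parameters.

```python
# Hypothetical Python equivalent of the curl request above.
# The token is a placeholder; max_output_tokens is an arbitrary choice.
import json
import urllib.request

API_URL = "https://www.anyfast.ai/v1/responses"

def build_request(token: str, user_text: str) -> urllib.request.Request:
    """Builds a POST request matching the documented body schema."""
    payload = {
        "model": "gpt-5.2",
        "input": [{"role": "user", "content": user_text}],
        "stream": False,
        "temperature": 1,
        "max_output_tokens": 256,
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To actually send it: urllib.request.urlopen(build_request("<token>", "Hello!"))
req = build_request("<token>", "Hello!")
print(req.get_full_url())             # https://www.anyfast.ai/v1/responses
print(json.loads(req.data)["model"])  # gpt-5.2
```

A successful call returns a response object like the following: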
{
  "id": "resp_0ed94fe7ec73cf560169afd97bba648194b8661d445cfa4244",
  "object": "response",
  "created_at": 1773132155,
  "model": "gpt-5.2",
  "status": "completed",
  "output": [
    {
      "id": "msg_0ed94fe7ec73cf56",
      "type": "message",
      "role": "assistant",
      "status": "completed",
      "content": [
        {
          "type": "output_text",
          "text": "Hi! How can I help you today?",
          "annotations": [
            {}
          ]
        }
      ]
    }
  ],
  "usage": {
    "input_tokens": 8,
    "input_tokens_details": {
      "cached_tokens": 0
    },
    "output_tokens": 15,
    "output_tokens_details": {
      "reasoning_tokens": 0
    },
    "total_tokens": 23
  },
  "completed_at": 1773132156,
  "reasoning": {
    "effort": "high",
    "summary": "detailed"
  },
  "text": {
    "format": {
      "type": "text"
    },
    "verbosity": "medium"
  }
}
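The response object above can be consumed as shown in this sketch, which uses the sample payload from the example. Note that total_tokens in usage is the sum of input_tokens and output_tokens, as the sample data reflects.

```python
import json

# Abridged copy of the sample response payload shown above.
resp = json.loads("""
{
  "id": "resp_0ed94fe7ec73cf560169afd97bba648194b8661d445cfa4244",
  "status": "completed",
  "output": [
    {"type": "message", "role": "assistant",
     "content": [{"type": "output_text", "text": "Hi! How can I help you today?"}]}
  ],
  "usage": {"input_tokens": 8, "output_tokens": 15, "total_tokens": 23}
}
""")

# Concatenate all output_text parts from message items in the output array.
text = "".join(
    part["text"]
    for item in resp["output"] if item["type"] == "message"
    for part in item["content"] if part["type"] == "output_text"
)
print(text)  # Hi! How can I help you today?

# total_tokens equals input_tokens + output_tokens.
usage = resp["usage"]
assert usage["total_tokens"] == usage["input_tokens"] + usage["output_tokens"]
```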

Authorizations

Authorization
string
header
required

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.

Body

application/json
model
enum<string>
required

Model ID

Available options:
gpt-5.2
Example:

"gpt-5.2"

input
object[]
required

Text or image inputs from which the model generates a response.

Minimum array length: 1
Example:
[{ "role": "user", "content": "Hello!" }]
stream
boolean
default:false

If true, stream partial response deltas using server-sent events (SSE).
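When streaming is enabled, the response arrives as SSE lines. The chunk payload shape is not documented on this page, so the sketch below assumes each "data:" line carries a JSON chunk and the stream ends with a [DONE] sentinel, a common convention for APIs of this style; the "delta" field in the example chunks is hypothetical.

```python
# Sketch of consuming an SSE stream when "stream": true.
# Assumes JSON chunks on "data:" lines terminated by [DONE] (unverified here).
import json

def iter_sse_data(lines):
    """Yields parsed JSON payloads from raw SSE lines."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip comments, event/id fields, and keep-alive blanks
        data = line[len("data:"):].strip()
        if data == "[DONE]":
            break
        yield json.loads(data)

# Example with hypothetical delta chunks:
raw = [
    'data: {"delta": "Hi"}',
    'data: {"delta": "!"}',
    "data: [DONE]",
]
print("".join(chunk["delta"] for chunk in iter_sse_data(raw)))  # Hi!
```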

temperature
number
default:1

Sampling temperature. Higher values make output more random.

Required range: 0 <= x <= 2
Example:

1

top_p
number

Nucleus sampling threshold. The model considers only the tokens comprising the top_p cumulative probability mass.

Required range: 0 <= x <= 1
frequency_penalty
number
default:0

Penalizes repeated tokens based on their frequency in the text so far.

Required range: -2 <= x <= 2
presence_penalty
number
default:0

Penalizes tokens that have already appeared in the text.

Required range: -2 <= x <= 2
max_output_tokens
integer

The maximum number of output tokens to generate.

Required range: x >= 1
stop
string

A sequence at which the model will stop generating further tokens.

n
integer
default:1

How many response choices to generate.

Required range: x >= 1
text
object

Configuration options for the model's text response.

reasoning
object

Configuration for the model's reasoning (thinking) behavior.

tools
object[]

A list of tools the model may call.

tool_choice
string

Controls which (if any) tool is called by the model.

store
boolean
default:true

Whether to store the generated model response for later retrieval via API.

user
string

A unique identifier representing your end-user.
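The documented ranges above can be checked client-side before sending a request. This is an illustrative sketch of the constraints as listed on this page (the server enforces them regardless; the helper name is hypothetical):

```python
# Client-side check of the documented parameter ranges (illustrative only).
RANGES = {
    "temperature": (0, 2),
    "top_p": (0, 1),
    "frequency_penalty": (-2, 2),
    "presence_penalty": (-2, 2),
}

def validate_body(body: dict) -> list[str]:
    """Returns a list of range violations in the request body."""
    errors = []
    for name, (lo, hi) in RANGES.items():
        if name in body and not (lo <= body[name] <= hi):
            errors.append(f"{name} must be in [{lo}, {hi}]")
    for name in ("max_output_tokens", "n"):
        if name in body and body[name] < 1:
            errors.append(f"{name} must be >= 1")
    if not body.get("input"):
        errors.append("input must contain at least 1 item")
    return errors

print(validate_body({"input": [], "temperature": 3}))
# ['temperature must be in [0, 2]', 'input must contain at least 1 item']
```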

Response

Response generated successfully

id
string
required
Example:

"resp_0ed94fe7ec73cf560169afd97bba648194b8661d445cfa4244"

object
string
required
Example:

"response"

created_at
integer
required

Unix timestamp when the response was created.

Example:

1773132155

model
string
required
Example:

"gpt-5.2"

status
enum<string>
required
Available options:
completed,
failed,
in_progress,
incomplete
Example:

"completed"

output
object[]
required
usage
object
required
completed_at
integer

Unix timestamp when the response was completed.

Example:

1773132156

reasoning
object
text
object