Sora 2 is OpenAI’s video generation model, available through the AnyFast API. It supports text-to-video, image-to-video (first-frame reference), and video remix.

Key capabilities

  • Text-to-Video — Generate videos from natural language descriptions
  • Image-to-Video — Use a reference image as the first frame
  • Video Remix — Reuse structure, motion, and framing from a previous video
  • Flexible Duration — 4, 8, or 12 seconds
  • Multiple Resolutions — Portrait (720x1280), Landscape (1280x720), and more

Workflow

The Sora 2 API is asynchronous. Follow these steps:
  1. Create task: POST /v1/videos
  2. Query status: GET /v1/videos/{id} (poll until status is completed)
  3. Download video: GET /v1/videos/{id}/content
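The three steps above can be sketched in Python. This is a minimal illustration, not an official client: the session argument is any object with requests-style post/get methods (for example requests.Session()), and the response field names id and status, as well as the failed terminal status, are assumptions not confirmed by this page.

```python
import time

API_BASE = "https://www.anyfast.ai/v1"

def generate_video(prompt, api_key, session, seconds="4", size="1280x720",
                   poll_interval=5, sleep=time.sleep):
    """Create a Sora 2 task, poll until it finishes, and return the video bytes."""
    headers = {"Authorization": f"Bearer {api_key}"}

    # Step 1: create the task (POST /v1/videos)
    task = session.post(f"{API_BASE}/videos", headers=headers,
                        data={"model": "sora-2", "prompt": prompt,
                              "seconds": seconds, "size": size}).json()
    video_id = task["id"]  # assumed field name

    # Step 2: poll until the task reaches a terminal status (GET /v1/videos/{id})
    while True:
        info = session.get(f"{API_BASE}/videos/{video_id}", headers=headers).json()
        if info["status"] == "completed":  # "status"/"failed" values are assumed
            break
        if info["status"] == "failed":
            raise RuntimeError(f"video {video_id} failed")
        sleep(poll_interval)

    # Step 3: download the finished video (GET /v1/videos/{id}/content)
    return session.get(f"{API_BASE}/videos/{video_id}/content",
                       headers=headers).content
```

In real use you would call generate_video(prompt, key, requests.Session()) and write the returned bytes to an .mp4 file; injecting the session and sleep functions keeps the flow easy to test without network access.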

Quick example

Step 1: Create task

curl https://www.anyfast.ai/v1/videos \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -F "model=sora-2" \
  -F "prompt=A cat walking through a sunflower field at golden sunset" \
  -F "seconds=12" \
  -F "size=1280x720"

Step 2: Query status

curl https://www.anyfast.ai/v1/videos/video_69b131ea03548190925a6a06febf993b \
  -H "Authorization: Bearer YOUR_API_KEY"

Step 3: Download video

curl https://www.anyfast.ai/v1/videos/video_69b131ea03548190925a6a06febf993b/content \
  -H "Authorization: Bearer YOUR_API_KEY"

Parameters

  • model (string, required): must be sora-2
  • prompt (string, required): natural language description of the video. Include shot type, subject, action, setting, and lighting. Keep prompts single-purpose for best results.
  • seconds (string, optional): 4, 8, or 12. Default: 4
  • size (string, optional): 720x1280, 1280x720, 1024x1792, or 1792x1024. Default: 720x1280
  • input_reference (file, optional): reference image for the first frame. Accepts image/jpeg, image/png, image/webp.
  • remix_video_id (string, optional): ID of a completed video whose structure, motion, and framing should be reused.
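The allowed values above can be validated client-side before submitting a task, which gives faster feedback than a rejected API call. A small sketch; the helper name is illustrative, and the value sets come directly from the parameter list:

```python
VALID_SECONDS = {"4", "8", "12"}
VALID_SIZES = {"720x1280", "1280x720", "1024x1792", "1792x1024"}

def build_video_request(prompt, seconds="4", size="720x1280"):
    """Validate inputs and assemble the form fields for POST /v1/videos."""
    if not prompt:
        raise ValueError("prompt is required")
    if seconds not in VALID_SECONDS:
        raise ValueError(f"seconds must be one of {sorted(VALID_SECONDS)}")
    if size not in VALID_SIZES:
        raise ValueError(f"size must be one of {sorted(VALID_SIZES)}")
    return {"model": "sora-2", "prompt": prompt,
            "seconds": seconds, "size": size}
```

The returned dict matches the -F form fields shown in the curl example and can be passed as the data argument of an HTTP client.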

API Reference

View the interactive API playground for Sora 2.