POST /api/v1/chat/completions
curl https://aibackend.net/api/v1/chat/completions \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "google/gemini-2.5-flash-lite",
    "messages": [{"role": "user", "content": "Write a haiku about Lagos rains"}],
    "max_tokens": 1000
  }'
{
  "id": "chatcmpl_01HTK9abcdef",
  "object": "chat.completion",
  "created": 1735661400,
  "model": "google/gemini-2.5-flash-lite",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Clouds drum on warm roofs\nStreetlights blur in silver mist—\nPalm fronds whisper rain."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 14,
    "completion_tokens": 22,
    "total_tokens": 36
  }
}
Quick SDK integration for all platforms

Integrate AI 10x faster with a single backend

Integrate 255+ AI models for text, image, video, and audio generation through one stable API. Save time, cut costs, and scale without limits.

5-minute integration

Install the SDK and ship. Our interface stays stable while we add providers behind the scenes.
// TypeScript / JavaScript
const response = await fetch('https://aibackend.net/api/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer YOUR_API_KEY'
  },
  body: JSON.stringify({
    model: 'google/gemini-2.5-flash-lite',
    messages: [{ role: 'user', content: 'Write a haiku about Lagos rains' }],
    max_tokens: 1000
  })
});

const json = await response.json();
console.log(json.choices?.[0]?.message?.content);

Parameters

  • model (string, required): Model identifier (provider/model-name). Example: google/gemini-2.5-flash-lite
  • messages (array, required): Conversation messages in order; an array of message objects with role and content.
  • max_tokens (integer): Max tokens to generate (1-8192). Default: 1000
  • temperature (number): Sampling temperature (0-2). Default: 1
  • top_p (number): Nucleus sampling (0-1); an alternative to temperature.
  • stream (boolean): If true, returns an event stream (SSE). Default: false
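When stream is true, the endpoint returns a Server-Sent Events body rather than a single JSON object. The sketch below collects the streamed text; the per-chunk shape (choices[].delta.content and a final data: [DONE] sentinel) follows the common OpenAI-style convention and is an assumption, since this page does not document the chunk format.

```typescript
// Sketch: accumulate assistant text from an SSE response body.
// Chunk shape (`delta.content`, `[DONE]` sentinel) is assumed, not
// confirmed by this reference.
type StreamChunk = { choices?: { delta?: { content?: string } }[] };

function collectStreamText(sseBody: string): string {
  let text = "";
  for (const line of sseBody.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed.startsWith("data:")) continue;        // skip blanks and comments
    const payload = trimmed.slice("data:".length).trim();
    if (payload === "[DONE]") break;                   // assumed end-of-stream marker
    const chunk: StreamChunk = JSON.parse(payload);
    text += chunk.choices?.[0]?.delta?.content ?? "";  // append incremental tokens
  }
  return text;
}
```

In a real client you would feed this from the fetch response's ReadableStream chunk by chunk; the parsing logic stays the same.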

What you can build

  • Text Generation — chatbots, assistants, RAG apps, automations
  • Image Generation — create & edit with leading diffusion models
  • Video Generation — short clips, ads, explainers from text or images
  • Audio Generation — music, voice, SFX, plus processing (separate/enhance/convert)

Why one backend?

  • Unified API: One schema across 255+ providers & models
  • Version-proof: We track provider changes so you don’t have to
  • Scale & savings: Smart routing, retries, and cost controls built-in
  • First-class docs: OpenAPI + MDX examples for every endpoint

Next steps

  • Read the Getting Started guide
  • Grab your API Key and run the 5-minute example
  • Explore Text, Image, Video, and Audio endpoints
Need custom pricing, SLAs, or on-prem? Contact us — we’ll help you ship fast.

Authorizations

Authorization (string, header, required)

🔑 API Authentication: all endpoints require Bearer Token authentication.

Get API Key: visit API Key Management

Usage: Authorization: Bearer YOUR_API_KEY
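A minimal helper for assembling the headers above; YOUR_API_KEY is a placeholder for a key from API Key Management.

```typescript
// Build the required headers for every request to this API.
function authHeaders(apiKey: string): Record<string, string> {
  return {
    "Content-Type": "application/json",
    "Authorization": `Bearer ${apiKey}`, // Bearer scheme per this reference
  };
}
```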

Body

application/json

  • model (string, required; default: openai/chatgpt-4o-latest): Model identifier (provider/model-name). Example: "openai/chatgpt-4o-latest"
  • messages (object[], required): Conversation messages in order. Minimum array length: 1
  • max_tokens (integer; default: 1000): Max tokens to generate. Required range: 1 <= x <= 8192
  • temperature (number; default: 1): Sampling temperature. Required range: 0 <= x <= 2
  • top_p (number): Nucleus sampling (an alternative to temperature). Required range: 0 <= x <= 1
  • stream (boolean; default: false): If true, returns an event stream (SSE).
  • metadata (object): Arbitrary key/value metadata for your app.
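The body fields above can be wrapped in a small builder. The range and length checks mirror this reference (messages >= 1, max_tokens 1-8192, temperature 0-2, top_p 0-1); performing them client-side is our own convenience, not documented API behavior.

```typescript
// Sketch: typed request body over the documented fields, with
// client-side validation matching the documented ranges.
interface ChatRequest {
  model: string;
  messages: { role: string; content: string }[];
  max_tokens?: number;
  temperature?: number;
  top_p?: number;
  stream?: boolean;
  metadata?: Record<string, unknown>;
}

function buildChatRequest(req: ChatRequest): string {
  if (req.messages.length < 1) throw new Error("messages: minimum array length is 1");
  if (req.max_tokens !== undefined && (req.max_tokens < 1 || req.max_tokens > 8192))
    throw new Error("max_tokens: required range is 1 <= x <= 8192");
  if (req.temperature !== undefined && (req.temperature < 0 || req.temperature > 2))
    throw new Error("temperature: required range is 0 <= x <= 2");
  if (req.top_p !== undefined && (req.top_p < 0 || req.top_p > 1))
    throw new Error("top_p: required range is 0 <= x <= 1");
  return JSON.stringify(req); // ready to use as a fetch body
}
```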

Response

OK

  • id (string, required): Unique identifier for the completion.
  • object (string, required): Object type, always "chat.completion". Example: "chat.completion"
  • created (integer, required): Unix timestamp of when the completion was created.
  • model (string, required): The model used for the completion.
  • choices (object[], required): List of completion choices.
  • usage (object): Token usage information.
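Reading a response with the fields above, using this page's example payload; usage is optional in the spec, so it is accessed defensively.

```typescript
// Sketch: typed view of the documented response shape.
interface Usage { prompt_tokens: number; completion_tokens: number; total_tokens: number; }
interface Choice { index: number; message: { role: string; content: string }; finish_reason: string; }
interface ChatCompletion {
  id: string;
  object: string;   // always "chat.completion"
  created: number;  // Unix timestamp
  model: string;
  choices: Choice[];
  usage?: Usage;    // optional per the spec, so read defensively
}

// Example payload taken from this page's sample response.
const completion: ChatCompletion = {
  id: "chatcmpl_01HTK9abcdef",
  object: "chat.completion",
  created: 1735661400,
  model: "google/gemini-2.5-flash-lite",
  choices: [{
    index: 0,
    message: { role: "assistant", content: "Clouds drum on warm roofs..." },
    finish_reason: "stop",
  }],
  usage: { prompt_tokens: 14, completion_tokens: 22, total_tokens: 36 },
};

const text = completion.choices[0]?.message.content ?? "";
const totalTokens = completion.usage?.total_tokens ?? 0;
```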