Please note that this API is not affiliated with OpenAI and is not endorsed by OpenAI in any way.
Create Chat Completions by calling an available model in a format that is compatible with the OpenAI API
model (required) | string | ID of the model to use
messages (required) | Array of objects | The messages comprising the conversation so far
temperature | number | The sampling temperature to be used
top_p | number | An alternative to sampling with temperature
n | integer | The number of chat completion choices to generate for each input message
stream | boolean | If set to true, partial message deltas are sent
stop | Array of strings | Up to 4 sequences where the API will stop generating further tokens
max_tokens | integer | The maximum number of tokens to generate in the chat completion
presence_penalty | number | Penalizes new tokens based on whether they appear in the text so far
frequency_penalty | number | Penalizes new tokens based on their frequency in the text so far
logit_bias | object | Modifies the probability of specific tokens appearing in the completion
user | string | A unique identifier representing your end-user
{
  "model": "meta-llama/Meta-Llama-3-70B-Instruct",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant."
    },
    {
      "role": "user",
      "content": "Please say hello."
    }
  ],
  "temperature": 0.7,
  "top_p": 0.9,
  "n": 1,
  "stream": false,
  "stop": [
    "\n"
  ],
  "max_tokens": 1000,
  "presence_penalty": 0,
  "frequency_penalty": 0,
  "logit_bias": {},
  "user": "user-123"
}
{
  "id": "string",
  "choices": [
    {
      "finish_reason": "string",
      "index": 0,
      "message": {
        "role": "string",
        "content": "string",
        "tool_calls": [
          "string"
        ]
      }
    }
  ],
  "created": 0,
  "object": "string",
  "model": "string",
  "system_fingerprint": "string",
  "usage": {
    "prompt_tokens": 0,
    "completion_tokens": 0,
    "total_tokens": 0
  }
}
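Because the endpoint follows the OpenAI wire format, a plain HTTP POST is enough to call it. The sketch below uses only the Python standard library and a hypothetical base URL and API key (substitute your provider's values); the request mirrors the sample body above, and the actual send is left commented out.

```python
import json
import urllib.request

# Hypothetical values -- replace with your provider's base URL and key.
BASE_URL = "https://api.example.com/v1"
API_KEY = "YOUR_API_KEY"

body = {
    "model": "meta-llama/Meta-Llama-3-70B-Instruct",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Please say hello."},
    ],
    "temperature": 0.7,
    "max_tokens": 1000,
}

# Build the POST request with a Bearer token, as in the OpenAI API.
req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(body).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# Uncomment to actually send the request and read the reply:
# with urllib.request.urlopen(req) as resp:
#     reply = json.load(resp)
#     print(reply["choices"][0]["message"]["content"])
```

Any HTTP client (or the official `openai` Python package pointed at a custom base URL) works the same way; only the host and credentials differ.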
Create Completions by calling an available model in a format that is compatible with the OpenAI API
model (required) | string | ID of the model to use
prompt (required) | string | The prompt to generate completions from
temperature | number | The sampling temperature to be used
top_p | number | An alternative to sampling with temperature
n | integer | The number of completion choices to generate for each prompt
stream | boolean | If set to true, partial message deltas are sent
stop | Array of strings | Up to 4 sequences where the API will stop generating further tokens
max_tokens | integer | The maximum number of tokens to generate in the completion
presence_penalty | number | Penalizes new tokens based on whether they appear in the text so far
frequency_penalty | number | Penalizes new tokens based on their frequency in the text so far
logit_bias | object | Modifies the probability of specific tokens appearing in the completion
user | string | A unique identifier representing your end-user
{
  "model": "meta-llama/Meta-Llama-3-70B-Instruct",
  "prompt": "Say this is a test",
  "temperature": 0.01,
  "top_p": 0.9,
  "n": 1,
  "stream": false,
  "stop": [
    "\n"
  ],
  "max_tokens": 1000,
  "presence_penalty": 0,
  "frequency_penalty": 0,
  "logit_bias": {},
  "user": "user-123"
}
{
  "id": "string",
  "choices": [
    {
      "finish_reason": "string",
      "index": 0,
      "text": "string"
    }
  ],
  "created": 0,
  "object": "string",
  "model": "string",
  "usage": {
    "prompt_tokens": 0,
    "completion_tokens": 0,
    "total_tokens": 0
  }
}
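Unlike the chat endpoint, each choice here carries its text directly in a `text` field rather than inside a `message` object. A short sketch of reading that shape — the `response` dict below is a hand-written stand-in matching the schema above, not real model output:

```python
# Stand-in response following the completions schema above.
response = {
    "id": "cmpl-123",
    "choices": [
        {"finish_reason": "stop", "index": 0, "text": "This is a test"},
    ],
    "created": 0,
    "object": "text_completion",
    "model": "meta-llama/Meta-Llama-3-70B-Instruct",
    "usage": {"prompt_tokens": 5, "completion_tokens": 4, "total_tokens": 9},
}

# When n > 1, choices may arrive in any order; sort by index to be safe.
texts = [c["text"] for c in sorted(response["choices"], key=lambda c: c["index"])]
print(texts[0])  # prints: This is a test
```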
Generate one or more images using a model in a format that is compatible with the OpenAI API
model (required) | string | ID of the model to use. Please check /v1/models for available models
prompt (required) | string | The prompt to generate images from
n | integer | The number of images to generate. Default: 1
size | string | The size of the image to generate. Default: "1024*1024"
response_format | string | The format of the response. Value: "b64_json". Default: "b64_json"
user | string | A unique identifier representing your end-user
{
  "model": "stabilityai/stable-diffusion-xl-base-1.0",
  "prompt": "A beautiful sunset over the ocean",
  "n": 1,
  "size": "1024*1024",
  "response_format": "b64_json"
}
{
  "created": 0,
  "data": [
    {
      "url": null,
      "b64_json": "string"
    }
  ]
}
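With `response_format` set to "b64_json", each entry in `data` carries the image as a base64 string that must be decoded before use. A minimal sketch, where the `response` dict is a stand-in with a tiny fake payload rather than a real generated image:

```python
import base64

# Stand-in response following the images schema above; a real call
# returns actual base64-encoded image bytes in "b64_json".
response = {
    "created": 0,
    "data": [
        {"url": None, "b64_json": base64.b64encode(b"PNGDATA").decode("ascii")},
    ],
}

# Decode the first image back into raw bytes.
image_bytes = base64.b64decode(response["data"][0]["b64_json"])

# To save it to disk:
# with open("image_0.png", "wb") as f:
#     f.write(image_bytes)
print(len(image_bytes), "bytes decoded")
```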
Creates an embedding vector representing the input text.
model | string | ID of the model to use
input | Array of strings | The input text to create embeddings for
{
  "input": [
    "The food was delicious and the waiter."
  ],
  "model": "intfloat/e5-large-v2"
}
{
  "model": "string",
  "object": "string",
  "data": [
    {
      "index": 0,
      "object": "string",
      "embedding": [
        0
      ]
    }
  ],
  "usage": {
    "prompt_tokens": 0,
    "total_tokens": 0
  }
}
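A common use of the returned vectors is comparing two texts by cosine similarity. The sketch below uses only the standard library; the two vectors are toy stand-ins, not real model output (e5-large-v2 actually returns 1024-dimensional vectors):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy stand-ins for the "embedding" arrays in the response above.
v1 = [0.1, 0.2, 0.3]
v2 = [0.1, 0.2, 0.25]

print(f"similarity: {cosine(v1, v2):.4f}")  # close to 1.0 for similar vectors
```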