IONOS Cloud - OpenAI compatible AI Model Hub API (1.0.0)

IONOS Cloud AI Model Hub OpenAI compatible API

Please note that this API is not affiliated with OpenAI and is not endorsed by OpenAI in any way.

OpenAI Compatible Endpoints

Endpoints compatible with OpenAI's API specification
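
Client configuration sketch (Python)

Because these endpoints follow OpenAI's API specification, an existing OpenAI SDK can usually be pointed at them unchanged. The sketch below assumes the official openai Python package; the base URL and the environment variable names are placeholders, not values defined by this specification.

import os

from openai import OpenAI

# Reuse the official OpenAI client against the OpenAI-compatible endpoints.
# The base URL and environment variable names are placeholders.
client = OpenAI(
    base_url=os.environ["IONOS_AI_MODEL_HUB_BASE_URL"],  # e.g. "https://<host>/v1"
    api_key=os.environ["IONOS_API_TOKEN"],  # token used for tokenAuth
)

# Any OpenAI-compatible call is then routed through the hub, e.g. listing models:
for model in client.models.list():
    print(model.id)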

Create Chat Completions

Create Chat Completions by calling an available model in a format that is compatible with the OpenAI API

Authorizations:
tokenAuth
Request Body schema: application/json
model
required
string

ID of the model to use

messages
required
Array of objects
temperature
number

The sampling temperature to be used

top_p
number

An alternative to sampling with temperature

n
integer

The number of chat completion choices to generate for each input message

stream
boolean

If set to true, partial message deltas are streamed as they are generated

stop
Array of strings

Up to 4 sequences where the API will stop generating further tokens

max_tokens
integer

The maximum number of tokens to generate in the chat completion

presence_penalty
number

Penalizes new tokens based on whether they already appear in the text so far

frequency_penalty
number

Penalizes new tokens based on how frequently they appear in the text so far

logit_bias
object

Used to modify the probability of specific tokens appearing in the completion

user
string

A unique identifier representing your end-user

Responses

Request samples

Content type
application/json
{
  "model": "meta-llama/Meta-Llama-3-70B-Instruct",
  "messages": [],
  "temperature": 0.7,
  "top_p": 0.9,
  "n": 1,
  "stream": false,
  "stop": [],
  "max_tokens": 1000,
  "presence_penalty": 0,
  "frequency_penalty": 0,
  "logit_bias": {},
  "user": "user-123"
}

Response samples

Content type
application/json
{
  "id": "string",
  "choices": [],
  "created": 0,
  "object": "string",
  "model": "string",
  "system_fingerprint": "string",
  "usage": {}
}
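
Usage sketch (Python)

For reference, a direct HTTP call against this endpoint could look as follows. This is a sketch, not part of the specification: the base URL is a placeholder, and the /v1/chat/completions path, the role/content message structure, and the shape of choices in the response are assumed from the OpenAI convention the endpoint mirrors.

import os

import requests

BASE_URL = os.environ["IONOS_AI_MODEL_HUB_BASE_URL"]  # placeholder, assumed to end in /v1
headers = {"Authorization": f"Bearer {os.environ['IONOS_API_TOKEN']}"}  # tokenAuth

body = {
    "model": "meta-llama/Meta-Llama-3-70B-Instruct",
    "messages": [{"role": "user", "content": "Say this is a test"}],
    "temperature": 0.7,
    "max_tokens": 1000,
}

resp = requests.post(f"{BASE_URL}/chat/completions", json=body, headers=headers, timeout=60)
resp.raise_for_status()
# The choices[].message.content shape is assumed from the OpenAI response format.
print(resp.json()["choices"][0]["message"]["content"])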

Create Completions

Create Completions by calling an available model in a format that is compatible with the OpenAI API

Authorizations:
tokenAuth
Request Body schema: application/json
model
required
string

ID of the model to use

prompt
required
string

The prompt to generate completions from

temperature
number

The sampling temperature to be used

top_p
number

An alternative to sampling with temperature

n
integer

The number of completion choices to generate for each prompt

stream
boolean

If set to true, partial completion deltas are streamed as they are generated

stop
Array of strings

Up to 4 sequences where the API will stop generating further tokens

max_tokens
integer

The maximum number of tokens to generate in the completion

presence_penalty
number

Penalizes new tokens based on whether they already appear in the text so far

frequency_penalty
number

Penalizes new tokens based on how frequently they appear in the text so far

logit_bias
object

Used to modify the probability of specific tokens appearing in the completion

user
string

A unique identifier representing your end-user

Responses

Request samples

Content type
application/json
{
  "model": "meta-llama/Meta-Llama-3-70B-Instruct",
  "prompt": "Say this is a test",
  "temperature": 0.01,
  "top_p": 0.9,
  "n": 1,
  "stream": false,
  "stop": [],
  "max_tokens": 1000,
  "presence_penalty": 0,
  "frequency_penalty": 0,
  "logit_bias": {},
  "user": "user-123"
}

Response samples

Content type
application/json
{
  "id": "string",
  "choices": [],
  "created": 0,
  "object": "string",
  "model": "string",
  "usage": {}
}
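
Usage sketch (Python)

A sketch of a direct HTTP call for a text completion. The base URL is a placeholder, the /v1/completions path and the choices[].text response field are assumed from the OpenAI convention, and the body mirrors the request sample above.

import os

import requests

BASE_URL = os.environ["IONOS_AI_MODEL_HUB_BASE_URL"]  # placeholder, assumed to end in /v1
headers = {"Authorization": f"Bearer {os.environ['IONOS_API_TOKEN']}"}  # tokenAuth

body = {
    "model": "meta-llama/Meta-Llama-3-70B-Instruct",
    "prompt": "Say this is a test",
    "temperature": 0.01,
    "max_tokens": 1000,
}

resp = requests.post(f"{BASE_URL}/completions", json=body, headers=headers, timeout=60)
resp.raise_for_status()
# The choices[].text field is assumed from the OpenAI response format.
print(resp.json()["choices"][0]["text"])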

Get the entire list of available models

Get the entire list of available models in a format that is compatible with the OpenAI API

Authorizations:
tokenAuth

Responses
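
Usage sketch (Python)

A sketch of listing the available model IDs over plain HTTP. The /v1/models path is referenced elsewhere in this specification; the base URL is a placeholder, and the data[].id response shape is assumed from the OpenAI model-list format.

import os

import requests

BASE_URL = os.environ["IONOS_AI_MODEL_HUB_BASE_URL"]  # placeholder, assumed to end in /v1
headers = {"Authorization": f"Bearer {os.environ['IONOS_API_TOKEN']}"}  # tokenAuth

resp = requests.get(f"{BASE_URL}/models", headers=headers, timeout=30)
resp.raise_for_status()
# The data[].id shape is assumed from the OpenAI model-list format.
for model in resp.json().get("data", []):
    print(model["id"])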

Generate one or more images using a model

Generate one or more images using a model in a format that is compatible with the OpenAI API

Authorizations:
tokenAuth
Request Body schema: application/json
model
required
string

ID of the model to use. Please check /v1/models for available models

prompt
required
string

The prompt to generate images from

n
integer
Default: 1

The number of images to generate. Defaults to 1.

size
string
Default: "1024*1024"

The size of the image to generate. Defaults to "1024*1024". Must be one of "1024*1024", "1792*1024", or "1024*1792". The maximum supported resolution is "1792*1024".

response_format
string
Default: "b64_json"
Value: "b64_json"

The format of the response.

user
string

A unique identifier representing your end-user

Responses

Request samples

Content type
application/json
{
  "model": "stabilityai/stable-diffusion-xl-base-1.0",
  "prompt": "A beautiful sunset over the ocean",
  "n": 1,
  "size": "1024*1024",
  "response_format": "b64_json"
}

Response samples

Content type
application/json
{
  "created": 0,
  "data": []
}
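
Usage sketch (Python)

A sketch that generates one image and decodes the base64 payload. The base URL is a placeholder, and the /v1/images/generations path and the data[].b64_json response field are assumed from the OpenAI convention; note the "1024*1024" size format used by this API.

import base64
import os

import requests

BASE_URL = os.environ["IONOS_AI_MODEL_HUB_BASE_URL"]  # placeholder, assumed to end in /v1
headers = {"Authorization": f"Bearer {os.environ['IONOS_API_TOKEN']}"}  # tokenAuth

body = {
    "model": "stabilityai/stable-diffusion-xl-base-1.0",
    "prompt": "A beautiful sunset over the ocean",
    "n": 1,
    "size": "1024*1024",  # this API uses "*" rather than OpenAI's "x" separator
    "response_format": "b64_json",
}

resp = requests.post(f"{BASE_URL}/images/generations", json=body, headers=headers, timeout=120)
resp.raise_for_status()
# The data[].b64_json field is assumed from the response format.
image_b64 = resp.json()["data"][0]["b64_json"]
with open("sunset.png", "wb") as f:
    f.write(base64.b64decode(image_b64))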

Creates an embedding vector

Creates an embedding vector representing the input text.

Authorizations:
tokenAuth
Request Body schema: application/json
model
string

ID of the model to use

input
Array of strings

The input text to create embeddings for

Responses

Request samples

Content type
application/json
{
  "input": [],
  "model": "intfloat/e5-large-v2"
}

Response samples

Content type
application/json
{
  "model": "string",
  "object": "string",
  "data": [],
  "usage": {}
}
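
Usage sketch (Python)

A sketch of an embeddings request for a small list of input strings. The base URL is a placeholder, the /v1/embeddings path and the data[].embedding response field are assumed from the OpenAI convention, and the input strings are illustrative only.

import os

import requests

BASE_URL = os.environ["IONOS_AI_MODEL_HUB_BASE_URL"]  # placeholder, assumed to end in /v1
headers = {"Authorization": f"Bearer {os.environ['IONOS_API_TOKEN']}"}  # tokenAuth

body = {
    "model": "intfloat/e5-large-v2",
    "input": ["first example sentence", "second example sentence"],  # illustrative inputs
}

resp = requests.post(f"{BASE_URL}/embeddings", json=body, headers=headers, timeout=60)
resp.raise_for_status()
# The data[].embedding field is assumed from the OpenAI response format.
for item in resp.json().get("data", []):
    print(len(item["embedding"]))  # dimensionality of each returned vector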