
Text Generation: Chat Completions #

All of the offered LLMs / text generation models follow the “Chat Completions” standard popularized by OpenAI.

Given a list of messages comprising a conversation, the model will return a response.

There are multiple ways you can use the text generation models: you can call the API endpoints directly over HTTP, or use the OpenAI-compatible client libraries.

API Endpoints #

With the HTTP POST method, the endpoint for text models is https://api.ektos.ai/v1/chat/completions.

When making a request to the endpoint, select the model you want to use with the model body parameter.
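For illustration, here is a minimal sketch of such a request using Python’s requests library (the model string is the one used in the examples below, and the response shape assumes the standard Chat Completions format):

import os
import requests

# The model is selected with the "model" field of the JSON request body.
response = requests.post(
    "https://api.ektos.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['EKTOS_API_KEY']}"},
    json={
        "model": "llama-3.1-8b-instruct",
        "messages": [{"role": "user", "content": "Hello, how can you help me?"}]
    }
)
print(response.json()["choices"][0]["message"]["content"])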

Available models and corresponding API strings #

In order to get a streaming response, set the stream parameter to true.
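As an illustration, a minimal sketch of a streaming request with the OpenAI Python client shown in the examples below (the chunk handling assumes the standard Chat Completions delta format):

import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.ektos.ai/v1/",
    api_key=os.environ.get("EKTOS_API_KEY")
)

# With stream=True, the response is an iterator of chunks instead of a single completion.
stream = client.chat.completions.create(
    model="llama-3.1-8b-instruct",
    messages=[{"role": "user", "content": "Hello, how can you help me?"}],
    stream=True
)

for chunk in stream:
    # Each chunk carries an incremental piece of the assistant message in choices[0].delta.
    if chunk.choices and chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="", flush=True)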

A detailed description of the available parameters can be found in the API Reference specification.

Below are basic examples using cURL and the official OpenAI client libraries for Python and Node.js.

Examples in other programming languages are available in the API Reference specification (use the dropdown menu at the top right of the code block example).

cURL #

curl --request POST \
  --url https://api.ektos.ai/v1/chat/completions \
  --header "Authorization: Bearer $EKTOS_API_KEY" \
  --header "Content-Type: application/json" \
  --data '{
    "model": "llama-3.1-8b-instruct",
    "messages": [
      {
        "role": "user",
        "content": "Hello, how can you help me?"
      }
    ]
  }'

Python: OpenAI client library #

import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.ektos.ai/v1/",
    api_key="YOUR_EKTOS_API_KEY_HERE"
    # api_key=os.environ.get("EKTOS_API_KEY")
)

completion = client.chat.completions.create(
    model="llama-3.1-8b-instruct",
    messages=[
        {"role": "user", "content": "Hello, how can you help me?"}
    ]
)

print(completion.choices[0].message)
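The message object printed above contains both the role and the generated text; to print only the text, read its content attribute:

print(completion.choices[0].message.content)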

JavaScript: Node.js OpenAI client library #

import OpenAI from "openai";

const openai = new OpenAI({
  apiKey: process.env.EKTOS_API_KEY,
  baseURL: "https://api.ektos.ai/v1/",
});

async function main() {
  const completion = await openai.chat.completions.create({
    messages: [
      { role: "user", content: "How can you help me?" }
    ],
    model: "llama-3.1-8b-instruct"
  });

  console.log(completion.choices[0].message);
}

main();