    The Mistral-7B-Instruct-v0.3 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-7B-v0.3.

    Model size: 7.24B parameters
    Availability: On demand
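
    As a rough sketch, an on-demand instruct model like this could be called as shown below. The example assumes an OpenAI-compatible chat-completions endpoint; the base URL, model identifier, and EKTOS_API_KEY environment variable are illustrative assumptions, not documented values. The same pattern would apply to the other instruct models listed here.

    # Hedged sketch: assumes an OpenAI-compatible chat-completions API.
    # The endpoint URL, model name, and EKTOS_API_KEY are hypothetical.
    import os
    import requests

    API_URL = "https://api.ektos.ai/v1/chat/completions"  # hypothetical endpoint
    HEADERS = {"Authorization": f"Bearer {os.environ['EKTOS_API_KEY']}"}

    payload = {
        "model": "mistral-7b-instruct-v0.3",  # illustrative model identifier
        "messages": [
            {"role": "user", "content": "Explain what an instruct fine-tune is in one sentence."},
        ],
        "max_tokens": 128,
    }

    response = requests.post(API_URL, headers=HEADERS, json=payload, timeout=60)
    response.raise_for_status()
    print(response.json()["choices"][0]["message"]["content"])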

    The Mixtral-8x7B-Instruct Large Language Model (LLM) is an instruct fine-tuned version of the Mixtral-8x7B MoE (Mixture of Experts) model.

    Model size: 46.7B parameters
    Availability: On demand

    The Mixtral-8x22B-Instruct Large Language Model (LLM) is an instruct fine-tuned version of the Mixtral-8x22B MoE (Mixture of Experts) model.

    Model size: 141B parameters
    Availability: On demand

    The Meta-Llama-3.1-8B-Instruct Large Language Model (LLM) is an instruct fine-tuned version of the Meta-Llama-3.1-8B model.

    Model size: 8.03B parameters
    Availability: On demand

    The Meta-Llama-3.1-70B-Instruct Large Language Model (LLM) is an instruct fine-tuned version of the Meta-Llama-3.1-70B model.

    Model size: 70.6B parameters
    Availability: On demand

    The Phi-3-Mini-4K-Instruct Large Language Model (LLM) is an instruct fine-tuned version of the Phi-3-Mini-4K model.

    Model size: 3.92B parameters
    Availability: Always available

    The Qwen2-7B-Instruct Large Language Model (LLM) is an instruct fine-tuned version of the Qwen2-7B model.

    Model size: 7.62B parameters
    Availability: On demand

    Whisper is an automatic speech recognition (ASR) system trained on 680,000 hours of multilingual and multitask supervised data collected from the web.

    Model size: 0.809B parameters
    Availability: Always available
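
    A minimal sketch of transcribing an audio file with Whisper follows. It assumes an OpenAI-style multipart transcriptions endpoint; the URL, model identifier, and EKTOS_API_KEY environment variable are assumptions, not documented values.

    # Hedged sketch: assumes an OpenAI-style /v1/audio/transcriptions endpoint
    # accepting multipart uploads. URL and model name are hypothetical.
    import os
    import requests

    API_URL = "https://api.ektos.ai/v1/audio/transcriptions"  # hypothetical endpoint
    HEADERS = {"Authorization": f"Bearer {os.environ['EKTOS_API_KEY']}"}

    with open("meeting.wav", "rb") as audio:
        response = requests.post(
            API_URL,
            headers=HEADERS,
            files={"file": ("meeting.wav", audio, "audio/wav")},
            data={"model": "whisper"},  # illustrative model identifier
            timeout=300,
        )
    response.raise_for_status()
    print(response.json()["text"])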

    GTE Multilingual Base is part of the GTE (General Text Embedding) family of models. It achieves state-of-the-art (SOTA) results and supports over 70 languages.

    Model size: 0.305B parameters
    Availability: Always available

    GTE Large English is part of the GTE (General Text Embedding) family of models. It achieves state-of-the-art (SOTA) results on English text, offering the best overall performance relative to its number of parameters.

    Model size: 0.434B parameters
    Availability: Always available
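
    A minimal sketch of embedding two texts with a GTE model and comparing them by cosine similarity is shown below. The endpoint and model identifier follow the OpenAI-compatible convention and are assumptions, as is the EKTOS_API_KEY environment variable.

    # Hedged sketch: assumes an OpenAI-compatible /v1/embeddings endpoint.
    # The endpoint URL and model name are hypothetical.
    import math
    import os
    import requests

    API_URL = "https://api.ektos.ai/v1/embeddings"  # hypothetical endpoint
    HEADERS = {"Authorization": f"Bearer {os.environ['EKTOS_API_KEY']}"}

    payload = {
        "model": "gte-multilingual-base",  # illustrative model identifier
        "input": ["The cat sat on the mat.", "Le chat est assis sur le tapis."],
    }
    response = requests.post(API_URL, headers=HEADERS, json=payload, timeout=60)
    response.raise_for_status()
    a, b = (item["embedding"] for item in response.json()["data"])

    # Cosine similarity: values near 1.0 mean the texts are semantically close.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    print(f"cosine similarity: {dot / norm:.3f}")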