
<Tip>
**Want to run Mistral's models locally? Check out our [Ollama integration](/oss/javascript/integrations/chat/ollama).**
</Tip>

:::caution
You are currently on a page documenting the use of Mistral models as text completion models. Many popular models available on Mistral are [chat completion models](/oss/javascript/langchain/models).

You may be looking for [this page instead](/oss/javascript/integrations/chat/mistral/).
:::

Mistral AI is a platform that offers hosting for its powerful open-source models. This page will help you get started with MistralAI completion models (LLMs) using LangChain. For detailed documentation on MistralAI features and configuration options, please refer to the API reference.

## Overview

### Integration details

| Class | Package | Local | Serializable | PY support | Downloads | Version |
| :--- | :--- | :---: | :---: | :---: | :---: | :---: |
| MistralAI | @langchain/mistralai | ❌ | ✅ | ✅ | NPM - Downloads | NPM - Version |

## Setup

To access MistralAI models you'll need to create a MistralAI account, get an API key, and install the @langchain/mistralai integration package.

### Credentials

Head to [console.mistral.ai](https://console.mistral.ai) to sign up for MistralAI and generate an API key. Once you've done this, set the MISTRAL_API_KEY environment variable:

```bash
export MISTRAL_API_KEY="your-api-key"
```

If you want automated tracing of your model calls, you can also set your LangSmith API key by uncommenting the lines below:

```bash
# export LANGSMITH_TRACING="true"
# export LANGSMITH_API_KEY="your-api-key"
```

### Installation

The LangChain MistralAI integration lives in the @langchain/mistralai package:
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>

<Npm2Yarn>
  @langchain/mistralai @langchain/core
</Npm2Yarn>

## Instantiation

Now we can instantiate our model object and generate text completions:
```typescript
import { MistralAI } from "@langchain/mistralai";

const llm = new MistralAI({
  model: "codestral-latest",
  temperature: 0,
  maxTokens: undefined,
  maxRetries: 2,
  // other params...
});
```
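If you prefer not to rely on the MISTRAL_API_KEY environment variable, the constructor also accepts the key directly through its apiKey field. A minimal sketch, with a placeholder key value:

```typescript
import { MistralAI } from "@langchain/mistralai";

// Passing the key explicitly instead of reading MISTRAL_API_KEY from the
// environment. "your-api-key" is a placeholder, not a real credential.
const llmWithKey = new MistralAI({
  apiKey: "your-api-key",
  model: "codestral-latest",
});
```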

## Invocation

```typescript
const inputText = "MistralAI is an AI company that ";

const completion = await llm.invoke(inputText);
console.log(completion);
```

```text
 has developed Mistral 7B, a large language model (LLM) that is open-source and available for commercial use. Mistral 7B is a 7 billion parameter model that is trained on a diverse and high-quality dataset, and it has been fine-tuned to perform well on a variety of tasks, including text generation, question answering, and code interpretation.

MistralAI has made Mistral 7B available under a permissive license, allowing anyone to use the model for commercial purposes without having to pay any fees. This has made Mistral 7B a popular choice for businesses and organizations that want to leverage the power of large language models without incurring high costs.

Mistral 7B has been trained on a diverse and high-quality dataset, which has enabled it to perform well on a variety of tasks. It has been fine-tuned to generate coherent and contextually relevant text, and it has been shown to be capable of answering complex questions and interpreting code.

Mistral 7B is also a highly efficient model, capable of processing text at a fast pace. This makes it well-suited for applications that require real-time responses, such as chatbots and virtual assistants.

Overall, Mistral 7B is a powerful and versatile large language model that is open-source and available for commercial use. Its ability to perform well on a variety of tasks, its efficiency, and its permissive license make it a popular choice for businesses and organizations that want to leverage the power of large language models.
```
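Like other LangChain LLMs, the model also supports streaming the completion chunk by chunk via the standard .stream() method. A minimal sketch:

```typescript
// stream() yields the completion incrementally; for LLMs each chunk is a string.
const stream = await llm.stream(inputText);

for await (const chunk of stream) {
  process.stdout.write(chunk);
}
```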

## Hooks

Mistral AI supports custom hooks for three events: beforeRequest, requestError, and response. Examples of the function signature for each hook type can be seen below:
```typescript
const beforeRequestHook = (
  req: Request
): Request | void | Promise<Request | void> => {
  // Code to run before a request is processed by Mistral
};

const requestErrorHook = (err: unknown, req: Request): void | Promise<void> => {
  // Code to run when an error occurs as Mistral is processing a request
};

const responseHook = (res: Response, req: Request): void | Promise<void> => {
  // Code to run when Mistral returns a successful response
};
```
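As a concrete illustration, hooks with these shapes could log outgoing requests and response statuses. The logging behavior below is purely hypothetical, shown only to demonstrate the signatures:

```typescript
// Hypothetical hooks for illustration; the library only requires that they
// match the signatures shown above.
const logRequestHook = (req: Request): void => {
  console.log(`Sending request to ${req.url}`);
};

const logResponseHook = (res: Response, req: Request): void => {
  console.log(`Received status ${res.status} from ${req.url}`);
};
```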
To add these hooks to the chat model, either pass them as arguments at instantiation, in which case they are added automatically:
```typescript
import { ChatMistralAI } from "@langchain/mistralai";

const modelWithHooks = new ChatMistralAI({
  model: "mistral-large-latest",
  temperature: 0,
  maxRetries: 2,
  beforeRequestHooks: [beforeRequestHook],
  requestErrorHooks: [requestErrorHook],
  responseHooks: [responseHook],
  // other params...
});
```
Or assign and add them manually after instantiation:
```typescript
import { ChatMistralAI } from "@langchain/mistralai";

const model = new ChatMistralAI({
  model: "mistral-large-latest",
  temperature: 0,
  maxRetries: 2,
  // other params...
});

model.beforeRequestHooks = [...model.beforeRequestHooks, beforeRequestHook];
model.requestErrorHooks = [...model.requestErrorHooks, requestErrorHook];
model.responseHooks = [...model.responseHooks, responseHook];

model.addAllHooksToHttpClient();
```
The addAllHooksToHttpClient method clears all currently attached hooks before assigning the entire updated hook lists, which avoids hook duplication. Hooks can be removed one at a time, or all hooks can be cleared from the model at once:
```typescript
model.removeHookFromHttpClient(beforeRequestHook);

model.removeAllHooksFromHttpClient();
```

## API reference

For detailed documentation of all MistralAI features and configurations, head to the API reference.