Mistral AI Configuration
The Mistral AI configuration screen in PipesHub where you’ll enter your API Key and Model Name
PipesHub lets you integrate with Mistral AI to bring advanced AI features to your workspace, giving you access to Mistral's large language models and their reasoning capabilities.
Required Fields
API Key *
The API Key is required to authenticate your requests to Mistral AI services.
How to obtain an API Key:
- Log in to the Mistral AI Console
- Navigate to the API Keys section
- Click on “Create new key” or use an existing key
- Copy the generated API key
Security Note: Your API key should be kept secure and never shared publicly. PipesHub securely stores your API key and uses it only for authenticating requests to Mistral AI services.
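If you want to confirm that a newly created key works before entering it in PipesHub, you can call Mistral's REST API directly. The sketch below is a minimal check, assuming the public https://api.mistral.ai/v1/models endpoint and a MISTRAL_API_KEY environment variable (both are illustrative conventions, not part of PipesHub itself).

```python
import os
import requests

# Minimal sanity check for a Mistral AI API key.
# Assumes the public https://api.mistral.ai/v1/models endpoint and that the
# key is exported as MISTRAL_API_KEY (a hypothetical environment variable).
api_key = os.environ["MISTRAL_API_KEY"]

response = requests.get(
    "https://api.mistral.ai/v1/models",
    headers={"Authorization": f"Bearer {api_key}"},
    timeout=30,
)

if response.status_code == 200:
    # A 200 response means the key authenticated successfully.
    models = [m["id"] for m in response.json().get("data", [])]
    print("Key is valid. Available models:", models)
elif response.status_code == 401:
    print("Authentication failed: check that the key is correct and not expired.")
else:
    print(f"Unexpected response: {response.status_code} {response.text}")
```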
Model Name *
The Model Name field defines which Mistral AI model you want to use with PipesHub.
Popular Mistral AI models include:
- mistral-large-2512 - Mistral Large 3: State-of-the-art, open-weight, general-purpose multimodal model with a granular Mixture-of-Experts architecture, featuring 41B active parameters and 675B total parameters.
- mistral-medium-2508 - Mistral Medium 3.1: Frontier-class multimodal model released in August 2025, with improved tone and performance.
- mistral-small-2506 - Mistral Small 3.2: An update to the previous small model, released in June 2025.
How to choose a model:
- For complex reasoning and maximum capabilities, select mistral-large-2512
- For balanced performance with improved tone, select mistral-medium-2508
- For efficient, cost-effective tasks, select mistral-small-2506
- Check Mistral's model documentation for the most up-to-date options
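Once you have chosen a model name, a quick way to confirm that your key has access to it is to send a small test request. The sketch below targets Mistral's chat completions endpoint (https://api.mistral.ai/v1/chat/completions); the model name and MISTRAL_API_KEY environment variable are placeholders you would swap for your own values.

```python
import os
import requests

# Smoke-test a specific model name before configuring it in PipesHub.
# Assumes Mistral's chat completions endpoint; the model below is a placeholder.
api_key = os.environ["MISTRAL_API_KEY"]
model_name = "mistral-small-2506"  # replace with the model you plan to configure

response = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {api_key}"},
    json={
        "model": model_name,
        "messages": [{"role": "user", "content": "Reply with the single word: ok"}],
        "max_tokens": 10,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

A successful reply confirms both that the model name is spelled correctly and that your key is entitled to use it.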
Optional Fields
Context Length
Specify the maximum context window for your model. For example, 128000 tokens (128K) for most Mistral models.
Context length examples:
- Mistral Large 3: 128000 tokens (128K)
- Mistral Medium 3.1: 128000 tokens (128K)
- Mistral Small 3.2: 128000 tokens (128K)
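The context length caps how many tokens (your prompt, retrieved documents, chat history, and the model's reply) fit in a single request. If you manage long conversations yourself, a rough budget check like the sketch below can help; the four-characters-per-token estimate is a heuristic, not Mistral's tokenizer, and the budget values are illustrative.

```python
# Rough context-budget check for a 128K-token model.
# The characters/4 estimate is a heuristic; use Mistral's tokenizer for exact counts.
CONTEXT_LENGTH = 128_000      # tokens, e.g. Mistral Large 3 / Medium 3.1 / Small 3.2
RESERVED_FOR_REPLY = 4_000    # leave headroom for the model's answer


def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)


def trim_history(messages: list[dict],
                 budget: int = CONTEXT_LENGTH - RESERVED_FOR_REPLY) -> list[dict]:
    """Keep the most recent messages that fit within the token budget."""
    kept, used = [], 0
    for message in reversed(messages):
        cost = estimate_tokens(message["content"])
        if used + cost > budget:
            break
        kept.append(message)
        used += cost
    return list(reversed(kept))
```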
Multimodal Support
Many Mistral models support multimodal input, handling both text and images. Enable this option if you want to use image understanding features in your application.
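When multimodal support is enabled and your model accepts images, a request includes an image alongside the text. The sketch below assumes an image_url content-part format; treat the exact payload shape as an assumption and verify it against Mistral's current vision API reference before relying on it.

```python
import os
import requests

# Example multimodal (text + image) request. The content-part structure below
# is an assumption based on common vision-API conventions; verify against
# Mistral's current API reference.
api_key = os.environ["MISTRAL_API_KEY"]

payload = {
    "model": "mistral-medium-2508",  # placeholder: use a model with image support
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is shown in this image?"},
                {"type": "image_url", "image_url": "https://example.com/sample.png"},
            ],
        }
    ],
}

response = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {api_key}"},
    json=payload,
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```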
Reasoning
Enable this option if your selected model supports advanced reasoning capabilities. This allows for more complex problem-solving and logical inference.
Configuration Steps
As shown in the image above:
- Select “Mistral” as your Provider from the dropdown
- Enter your Mistral AI API Key in the designated field (marked with *)
- Specify your desired Model Name (marked with *)
- (Optional) Configure the Context Length for your use case
- (Optional) Enable Multimodal support if needed
- (Optional) Enable Reasoning capabilities if supported by your model
- Click “Add Model” to complete setup
Both the API Key and Model Name are required fields to successfully configure the Mistral AI integration. You must complete these fields to proceed with the setup.
Usage Considerations
- API usage will count against your Mistral AI API quota and billing
- Different models have different pricing - check Mistral AI’s pricing page for details
- Model capabilities vary - more powerful models may provide better results but at higher cost
- Mistral AI provides:
- Advanced reasoning capabilities with Mixture-of-Experts architecture
- Multimodal understanding (text and images)
- Support for long contexts (up to 128K tokens)
- Open-weight models for transparency and flexibility
- State-of-the-art performance across various tasks
Troubleshooting
- If you encounter authentication errors, verify your API key is correct and has not expired
- Ensure your Mistral AI account has billing set up if you’re using paid service tiers
- Check that the model name is spelled correctly and available
- Verify that your API key has access to the specific model you’ve selected
- If you’re experiencing rate limits, check your API usage in the Mistral AI Console dashboard; a simple client-side backoff (sketched after this list) can also help smooth request bursts
- For multimodal features, ensure your model supports image inputs
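For rate-limit errors in your own scripts, retrying with exponential backoff is a common mitigation. The sketch below is a generic pattern, not a PipesHub or Mistral-specific API; the function name and retry parameters are illustrative.

```python
import random
import time

import requests


# Generic retry-with-backoff wrapper for HTTP 429 (rate limit) responses.
# Honors a Retry-After header when present, otherwise backs off exponentially.
def post_with_backoff(url: str, *, headers: dict, json: dict,
                      max_retries: int = 5) -> requests.Response:
    delay = 1.0
    for _ in range(max_retries):
        response = requests.post(url, headers=headers, json=json, timeout=60)
        if response.status_code != 429:
            return response
        retry_after = response.headers.get("Retry-After")
        wait = float(retry_after) if retry_after else delay + random.uniform(0, 0.5)
        time.sleep(wait)
        delay *= 2
    # Give the caller the last (rate-limited) response after exhausting retries.
    return response
```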
For additional support, refer to the Mistral AI documentation or contact PipesHub support.