OpenRouter Configuration

What is OpenRouter?
OpenRouter is an API aggregator that provides access to multiple LLM providers through a unified interface. It offers:
- Access to 100+ models from various providers (OpenAI, Anthropic, Google, Meta, and more)
- OpenAI-compatible API format
- Simple pricing and billing across all providers
- Automatic fallback and load balancing
- No need to manage multiple API keys for different providers
Prerequisites
Before configuring OpenRouter in PipesHub, ensure you have:
- An OpenRouter account (sign up at openrouter.ai)
- Your OpenRouter API key (available in your OpenRouter dashboard)
- Selected a model from OpenRouter’s model list
Getting Your API Key
To obtain your OpenRouter API key:
- Visit openrouter.ai and create an account
- Navigate to your dashboard
- Go to the API Keys section
- Generate a new API key
- Add credits to your account for model usage
Your API key will have the format: sk-or-v1-...
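Before saving a key, a quick format check can catch copy-paste mistakes. A minimal sketch, assuming only that OpenRouter keys carry the sk-or-v1- prefix described above:

```python
def looks_like_openrouter_key(key: str) -> bool:
    """Rough sanity check: OpenRouter keys start with 'sk-or-v1-'."""
    return key.startswith("sk-or-v1-") and len(key) > len("sk-or-v1-")

print(looks_like_openrouter_key("sk-or-v1-abc123"))  # True
print(looks_like_openrouter_key("sk-proj-abc123"))   # False (OpenAI-style key)
```

This only validates the shape of the key, not whether it is active or funded; that is confirmed by the dashboard and the first real request.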
Required Fields
Endpoint URL *
The Endpoint URL is OpenRouter’s API endpoint. Format: https://openrouter.ai/api/v1/
Standard Configuration:
For most use cases, use: https://openrouter.ai/api/v1/
Important:
- Always use the HTTPS protocol for secure communication
- The endpoint URL must include the /v1/ suffix
- OpenRouter’s API is cloud-based, so ensure your PipesHub instance has internet access
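Because the endpoint is OpenAI-compatible, any client that lets you override the base URL should work against it. A minimal sketch using only the Python standard library; the model name and API key below are placeholders, and the request is built but not sent:

```python
import json
import urllib.request

BASE_URL = "https://openrouter.ai/api/v1"  # note the /v1 suffix
API_KEY = "sk-or-v1-REPLACE_ME"            # placeholder key

payload = {
    "model": "openai/gpt-5-mini",  # provider/model-name format
    "messages": [{"role": "user", "content": "Hello!"}],
}
req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would send the request (requires internet access)
print(req.full_url)
```

The same base-URL override works with OpenAI-compatible SDKs; only the host and key differ from a direct OpenAI setup.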
API Key *
The API Key field is your OpenRouter API key used to authenticate requests. Format: sk-or-v1-...
Where to find it:
- Log in to your OpenRouter account at openrouter.ai
- Navigate to the API Keys section in your dashboard
- Copy your existing key or generate a new one
Model Name *
The Model Name specifies which model from OpenRouter’s catalog you want to use. Format: provider/model-name
Popular Examples:
- openai/gpt-5
- openai/gpt-5-mini
- meta-llama/llama-3.3-70b-instruct
- deepseek/deepseek-chat
- qwen/qwen-2.5-72b-instruct
Important: Always include the provider prefix (e.g., openai/gpt-5-mini, not just gpt-5-mini).
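The provider-prefix requirement can be checked mechanically before saving the configuration. A minimal sketch; the pattern below is an assumption based on the provider/model-name convention, not an official grammar:

```python
import re

# provider/model-name: a provider segment, one slash, then a model segment
MODEL_NAME_RE = re.compile(r"^[a-z0-9-]+/[A-Za-z0-9._-]+$")

def is_valid_model_name(name: str) -> bool:
    """True if the name follows the provider/model-name convention."""
    return bool(MODEL_NAME_RE.match(name))

print(is_valid_model_name("openai/gpt-5-mini"))  # True
print(is_valid_model_name("gpt-5-mini"))         # False: provider prefix missing
```

A passing check does not guarantee the model exists; openrouter.ai/models remains the source of truth for available names.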
Optional Features
Multimodal
Enable this checkbox if you’re using a model that supports multimodal input (text + images). When to enable:
- You’re using a vision-language model
- The model can process both text and images
- You need to analyze documents with visual content
Example models:
- openai/gpt-5-mini
- anthropic/claude-4.5-sonnet
- google/gemini-pro-2.5
Reasoning
Enable this checkbox if you’re using a model with enhanced reasoning capabilities. When to enable:
- You’re using a reasoning-focused model
- The model is designed for complex problem-solving tasks
- Your use case involves mathematical, logical, or multi-step reasoning
Example models:
- deepseek/deepseek-r1
- openai/gpt-5
Configuration Steps
As shown in the image above:
- Select “OpenAI API Compatible” as your Provider Type from the dropdown
- Enter the OpenRouter Endpoint URL: https://openrouter.ai/api/v1/
- Enter your OpenRouter API Key (starts with sk-or-v1-)
- Specify the Model Name in provider/model-name format (e.g., openai/gpt-5-mini)
- (Optional) Check “Multimodal” if using a vision-language model
- (Optional) Check “Reasoning” if using a reasoning-focused model
- Click “Add Model” to complete the setup
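The steps above boil down to four required values plus two optional flags. A minimal sketch of the resulting configuration; the field names here are illustrative, not PipesHub’s internal schema:

```python
# Illustrative field names -- not PipesHub's internal schema.
config = {
    "provider_type": "OpenAI API Compatible",
    "endpoint_url": "https://openrouter.ai/api/v1/",
    "api_key": "sk-or-v1-REPLACE_ME",   # placeholder
    "model_name": "openai/gpt-5-mini",  # provider/model-name format
    "multimodal": False,                # enable for vision-language models
    "reasoning": False,                 # enable for reasoning-focused models
}

# Basic completeness check before clicking "Add Model"
required = ["provider_type", "endpoint_url", "api_key", "model_name"]
missing = [k for k in required if not config.get(k)]
print("missing:", missing)  # []
```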
Supported Models
OpenRouter provides access to 100+ models from various providers. For the complete and up-to-date list of supported models, pricing, and capabilities, visit openrouter.ai/models.
Cost and Usage Considerations
Managing OpenRouter costs:
- Model Selection: Different models have different pricing. Check openrouter.ai/models for current rates
- Credits: Add credits to your OpenRouter account to ensure uninterrupted service
- Usage Monitoring: Monitor your usage through the OpenRouter dashboard
- Cost Optimization: Consider using smaller or more cost-effective models for routine tasks
- Rate Limits: Be aware of rate limits for each model (varies by model and provider)
Example pricing tiers:
- GPT-5: Higher cost, best performance
- GPT-5-mini: Lower cost, good performance
- Claude 4.5 Sonnet: Mid-tier pricing, excellent quality
- Open-source models (GLM, Qwen): Often lower-cost alternatives
Cost-saving tips:
- Start with smaller models for testing
- Use appropriate models for your use case (don’t use gpt-5 when gpt-5-mini suffices)
- Monitor your spending through the OpenRouter dashboard
- Set up billing alerts to avoid unexpected charges
Troubleshooting
Connection Issues:
- Verify the endpoint URL is exactly: https://openrouter.ai/api/v1/
- Check that your PipesHub instance has internet access
- Verify firewall rules allow outbound HTTPS connections
- Test the connection manually:
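For example, a minimal sketch that builds a request against OpenRouter’s public model-catalog endpoint (GET /api/v1/models) using only the standard library; the actual call is commented out because it needs live network access:

```python
import urllib.request

ENDPOINT = "https://openrouter.ai/api/v1"
req = urllib.request.Request(f"{ENDPOINT}/models")  # public model catalog

# Uncomment to actually send the request (requires internet access):
# with urllib.request.urlopen(req, timeout=10) as resp:
#     print("connection OK, HTTP", resp.status)
print("would request:", req.full_url)
```

An HTTP 200 response here confirms DNS, firewall, and TLS are all working; authentication and credit problems surface separately on authenticated endpoints.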
Authentication Issues:
- Verify your API key is correct and starts with sk-or-v1-
- Check that your API key hasn’t been revoked in the OpenRouter dashboard
- Ensure your OpenRouter account has sufficient credits
- Regenerate your API key if issues persist
Model Name Issues:
- Confirm the model name is in the correct format: provider/model-name
- Verify the model is available on openrouter.ai/models
- Check for typos in the model name
- Some models may have availability restrictions or require special access
Rate Limit Issues:
- OpenRouter enforces rate limits per model
- Check your OpenRouter dashboard for rate limit details
- Consider spreading requests across multiple models
- Wait before retrying if you hit rate limits
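“Wait before retrying” is usually implemented as exponential backoff. A minimal sketch; HTTP 429 is the standard rate-limit status, and the delay values below are illustrative, not OpenRouter’s documented limits:

```python
def backoff_delays(base: float = 1.0, factor: float = 2.0, retries: int = 4):
    """Yield increasing wait times (seconds) to sleep between retries
    after a rate-limited (HTTP 429) response."""
    delay = base
    for _ in range(retries):
        yield delay
        delay *= factor

print(list(backoff_delays()))  # [1.0, 2.0, 4.0, 8.0]
```

In practice, add random jitter to avoid synchronized retries, and honor a Retry-After header if the response includes one.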
Credit Issues:
- Check your OpenRouter account balance
- Add credits through the OpenRouter dashboard
- Set up auto-reload to prevent service interruptions
- Monitor your usage to avoid unexpected credit depletion
Model Availability Issues:
- Some models may be temporarily unavailable
- Check OpenRouter’s status page for service disruptions
- Try an alternative model as a fallback
- Contact OpenRouter support if a model is consistently unavailable