Model Types
Large Language Models (LLMs)
Powerful AI models that understand and generate human language for conversations, content creation, and complex reasoning
Embedding Models
Specialized models that convert text into numerical vectors for semantic search, similarity matching, and vector database operations
Large Language Models (LLMs) Overview
LLMs provide natural language understanding and generation capabilities, enabling sophisticated AI interactions throughout your workflow.
Supported LLM Providers
Anthropic Claude
Advanced language models with strong reasoning capabilities and nuanced understanding
OpenAI
Powerful language models with robust capabilities across various tasks
Azure OpenAI
Enterprise-grade OpenAI models with Azure’s security and compliance features
Google Gemini
Google’s multimodal AI models with advanced reasoning and comprehension
Key LLM Features
Natural Language Understanding
Process and comprehend human language with remarkable accuracy
Content Generation
Create high-quality written content for various purposes
Reasoning & Problem Solving
Tackle complex problems with sophisticated logical reasoning
Contextual Awareness
Maintain coherent understanding throughout conversations
Multimodal Capabilities
Some models can process and understand both text and visual information
Code Understanding
Assist with programming tasks and code generation
LLM Configuration Requirements
Each LLM provider requires specific credentials and configuration details:
- API Keys
- Model Selection
- Endpoint Information
Every provider requires an API key for authentication. These can be obtained from your provider’s developer console or dashboard.
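Before saving credentials in PipesHub, it can help to confirm that your API key and chosen model work outside the platform. The sketch below is one minimal way to do that with the official openai Python package; the model name gpt-4o-mini and the OPENAI_API_KEY environment variable are illustrative placeholders rather than PipesHub requirements, and other providers (Anthropic, Azure OpenAI, Google Gemini) have their own SDKs.

```python
import os
from openai import OpenAI  # official OpenAI Python SDK (pip install openai)

# Assumes your key is exported as OPENAI_API_KEY; the model name is illustrative.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

response = client.chat.completions.create(
    model="gpt-4o-mini",  # replace with the model you plan to select in PipesHub
    messages=[{"role": "user", "content": "Reply with a single word: ready"}],
)
print(response.choices[0].message.content)
```

If the call prints a reply, the key and model name are valid and can be entered in the PipesHub configuration screen.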
Embedding Models Overview
Embedding models transform text into numerical vector representations, enabling semantic search, document retrieval, and similarity matching capabilities.
Supported Embedding Providers
OpenAI Embeddings
High-performance embedding models with excellent semantic understanding
Azure OpenAI Embeddings
Enterprise-grade embedding models with Azure security and compliance
Sentence Transformer
Self-contained embedding models with multilingual support and no API requirements
BAAI/bge Models
State-of-the-art embedding models optimized for various languages and tasks
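The Sentence Transformer and BAAI/bge options above can run locally with no API key. The following is a minimal sketch of how such a model turns text into vectors you can compare, assuming the sentence-transformers package and the BAAI/bge-small-en-v1.5 checkpoint as one example of a bge model; the sample sentences are illustrative only.

```python
from sentence_transformers import SentenceTransformer, util

# Assumes the sentence-transformers package; the model name is one example of a BAAI/bge model.
model = SentenceTransformer("BAAI/bge-small-en-v1.5")

docs = ["Quarterly revenue grew 12%", "The cafeteria menu changes on Mondays"]
query = "How did sales perform last quarter?"

doc_vecs = model.encode(docs)    # one fixed-length vector per document
query_vec = model.encode(query)  # same representation for the query

# Cosine similarity: a higher score means the texts are more semantically related.
scores = util.cos_sim(query_vec, doc_vecs)
print(scores)  # the revenue sentence should score higher than the cafeteria one
```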
Key Embedding Features
Semantic Search
Find conceptually similar content beyond keyword matching
Vector Databases
Power efficient similarity-based retrieval systems
Document Clustering
Group similar documents automatically based on content
Cross-lingual Capabilities
Some models support similarity matching across multiple languages
Dimensionality Control
Adjust embedding size to balance performance and storage needs (see the sketch after this list)
Self-hosted Options
Run embedding models locally for privacy and cost control
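As one illustration of dimensionality control, OpenAI's text-embedding-3 family accepts a dimensions parameter that returns a shorter vector. The sketch below assumes the openai Python package and the text-embedding-3-small model; the provider and model you configure in PipesHub may differ, and not every model supports this option.

```python
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# Request a shorter vector than the model's default to save storage in your
# vector database, at some cost in semantic accuracy.
short = client.embeddings.create(
    model="text-embedding-3-small",
    input="semantic search example",
    dimensions=256,  # smaller than the model's default dimensionality
)
print(len(short.data[0].embedding))  # 256
```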
Embedding Model Configuration Requirements
Each embedding provider has specific configuration requirements:
- API-based Models
- Self-hosted Models
- Model Selection
OpenAI and Azure OpenAI require API keys and endpoint information for authentication.
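For the API-based case, Azure OpenAI is the most configuration-heavy: it typically needs an API key, your resource endpoint, an API version, and the name of your embedding deployment. The sketch below uses the AzureOpenAI client from the openai Python package; the endpoint, API version, and deployment name are placeholders to replace with values from your own Azure resource.

```python
import os
from openai import AzureOpenAI  # same openai package; Azure-specific client

# The endpoint, API version, and deployment name below are placeholders.
client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
    azure_endpoint="https://your-resource-name.openai.azure.com",
)

result = client.embeddings.create(
    model="your-embedding-deployment",  # Azure uses the deployment name here
    input="PipesHub configuration test",
)
print(len(result.data[0].embedding))  # vector dimensionality
```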
Getting Started
Setting up AI models in your PipesHub workspace is a straightforward process:
1. Select Model Type
Decide whether you need a Language Model (LLM) or an Embedding Model based on your use case
2. Choose Provider
Select your preferred AI provider from the dropdown menu in the AI configuration section
3. Enter Credentials
Add your API key and any other required provider-specific information
4. Select Specific Model
Choose the specific AI model that best suits your needs and use case
5. Apply Configuration
Save your settings to enable AI features across your PipesHub workspace
Choosing the Right Models
Consider these key factors when selecting AI models for your needs:
Task Complexity
More powerful models excel at complex reasoning, while lighter models handle routine tasks efficiently
Response Speed
Smaller models typically offer lower latency, making them ideal for real-time interactions
Cost Efficiency
Model pricing varies significantly; match your model choice to your budget and usage patterns
Context Length
For LLMs, longer context support enables understanding throughout extended conversations
Vector Dimensions
For embedding models, higher dimensions often provide better semantic accuracy but require more storage
Language Support
Ensure your chosen models support all languages needed for your application
Keep your API keys secure. PipesHub stores these credentials securely, but you should never share them publicly.
Usage Considerations
Start with smaller, more cost-effective models for routine tasks, and use more powerful models selectively for complex requirements.