PipesHub seamlessly integrates with leading AI providers to enhance your workspace with advanced artificial intelligence capabilities. Configure your preferred models to power intelligent features throughout your workflow.

Model Types

Large Language Models (LLMs) Overview

LLMs provide natural language understanding and generation capabilities, enabling sophisticated AI interactions throughout your workflow.

Supported LLM Providers

Key LLM Features

Natural Language Understanding

Process and comprehend human language with remarkable accuracy

Content Generation

Create high-quality written content for various purposes

Reasoning & Problem Solving

Tackle complex problems with sophisticated logical reasoning

Contextual Awareness

Maintain coherent understanding throughout conversations

Multimodal Capabilities

Some models can process and understand both text and visual information

Code Understanding

Assist with programming tasks and code generation

LLM Configuration Requirements

Each LLM provider requires specific credentials and configuration details. Every provider requires an API key for authentication, which can be obtained from the provider's developer console or dashboard; some providers also require an endpoint URL or deployment name.
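As an illustration, configuring a provider usually comes down to supplying a small set of required fields. The sketch below is hypothetical (the field names and provider identifiers are assumptions, not PipesHub's actual settings schema) and simply shows the kind of validation such credentials need:

```python
import os

# Fields each provider typically needs. This mapping is illustrative;
# consult your provider's developer console for the exact requirements.
REQUIRED_FIELDS = {
    "openai": ["api_key"],
    "azure-openai": ["api_key", "endpoint", "deployment"],
    "anthropic": ["api_key"],
}

def validate_llm_config(provider: str, config: dict) -> list:
    """Return the required fields missing from `config` for this provider."""
    required = REQUIRED_FIELDS.get(provider, ["api_key"])
    return [field for field in required if not config.get(field)]

# Read the key from the environment rather than hard-coding it.
config = {"api_key": os.environ.get("OPENAI_API_KEY", "")}
missing = validate_llm_config("openai", config)
```

If `missing` is non-empty, the configuration should be rejected before any request is sent to the provider.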

Embedding Models Overview

Embedding models transform text into numerical vector representations, enabling semantic search, document retrieval, and similarity matching capabilities.

Supported Embedding Providers

Key Embedding Features

Semantic Search

Find conceptually similar content beyond keyword matching

Vector Databases

Power efficient similarity-based retrieval systems

Document Clustering

Group similar documents automatically based on content

Cross-lingual Capabilities

Some models support similarity matching across multiple languages

Dimensionality Control

Adjust embedding size to balance performance and storage needs

Self-hosted Options

Run embedding models locally for privacy and cost control
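To make the features above concrete, here is a minimal sketch of how embedding vectors enable semantic matching. The vectors are made-up toy values standing in for real model output, which would have hundreds or thousands of dimensions:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional embeddings for three documents.
doc_invoice = [0.9, 0.1, 0.0, 0.2]
doc_receipt = [0.8, 0.2, 0.1, 0.3]
doc_poem    = [0.0, 0.9, 0.8, 0.1]

# Semantically related documents score higher than unrelated ones,
# even when they share no keywords.
assert cosine_similarity(doc_invoice, doc_receipt) > cosine_similarity(doc_invoice, doc_poem)
```

Semantic search, clustering, and similarity-based retrieval all reduce to comparisons like this one, typically executed at scale inside a vector database.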

Embedding Model Configuration Requirements

Each embedding provider has specific configuration requirements. For example, OpenAI and Azure OpenAI both require API keys for authentication, and Azure OpenAI additionally requires endpoint information.
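In practice, the difference between the two usually comes down to an extra endpoint (and, for Azure, an API version and deployment name). The snippet below is a hypothetical configuration sketch with placeholder values, not PipesHub's actual settings format:

```python
# Hypothetical settings for two embedding providers; the field names
# are illustrative and the angle-bracket values are placeholders.
openai_embedding = {
    "provider": "openai",
    "model": "text-embedding-3-small",
    "api_key": "<your-openai-api-key>",
}

azure_embedding = {
    "provider": "azure-openai",
    "model": "<your-deployment-name>",
    "api_key": "<your-azure-api-key>",
    "endpoint": "https://<resource>.openai.azure.com",
    "api_version": "<api-version>",
}
```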

Getting Started

Setting up AI models in your PipesHub workspace is a straightforward process:

1. Select Model Type: Decide whether you need a Language Model (LLM) or an Embedding Model based on your use case.
2. Choose Provider: Select your preferred AI provider from the dropdown menu in the AI configuration section.
3. Enter Credentials: Add your API key and any other required provider-specific information.
4. Select Specific Model: Choose the specific AI model that best suits your needs and use case.
5. Apply Configuration: Save your settings to enable AI features across your PipesHub workspace.
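The steps above can be sketched as the configuration record they produce. All field names here are assumptions for illustration; the actual PipesHub settings screen may use different ones:

```python
def build_model_config(model_type, provider, api_key, model):
    """Assemble a configuration record from the setup inputs (steps 1-4)."""
    if model_type not in ("llm", "embedding"):
        raise ValueError("model_type must be 'llm' or 'embedding'")
    if not api_key:
        raise ValueError("an API key is required")
    return {
        "type": model_type,
        "provider": provider,
        "model": model,
        "api_key": api_key,
    }

# Step 5 would persist a record like this to the workspace settings.
config = build_model_config("llm", "openai", "<your-api-key>", "gpt-4o")
```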

Choosing the Right Models

Consider these key factors when selecting AI models for your needs:

Task Complexity

More powerful models excel at complex reasoning, while lighter models handle routine tasks efficiently

Response Speed

Smaller models typically offer lower latency, making them ideal for real-time interactions

Cost Efficiency

Model pricing varies significantly; match your model choice to your budget and usage patterns

Context Length

For LLMs, longer context support enables understanding throughout extended conversations

Vector Dimensions

For embedding models, higher dimensions often provide better semantic accuracy but require more storage

Language Support

Ensure your chosen models support all languages needed for your application
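One of the trade-offs above, vector dimensions versus storage, is easy to quantify. Assuming embeddings are stored as 32-bit floats (4 bytes per dimension), a rough sizing sketch:

```python
def embedding_storage_bytes(num_docs, dimensions, bytes_per_value=4):
    """Raw storage for one float32 embedding per document (excludes index overhead)."""
    return num_docs * dimensions * bytes_per_value

# 1 million documents at 1536 dimensions is about 6.1 GB of raw vectors...
large = embedding_storage_bytes(1_000_000, 1536)

# ...versus about 1.5 GB at 384 dimensions.
small = embedding_storage_bytes(1_000_000, 384)
```

Real vector databases add index structures on top of this, so treat the figure as a lower bound when estimating storage needs.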
Keep your API keys secure. PipesHub stores these credentials securely, but you should never share them publicly.

Usage Considerations

Start with smaller, more cost-effective models for routine tasks, and use more powerful models selectively for complex requirements.
API usage counts toward your quota and billing with each respective provider. Monitor your usage to manage costs effectively.

Learn More