Azure OpenAI
Configure PipesHub Workplace AI to use Azure OpenAI’s embedding models
Azure OpenAI Embeddings Configuration
The Azure OpenAI embeddings configuration screen in PipesHub where you’ll enter your endpoint, API Key, and Embedding Model
PipesHub allows you to integrate with Azure OpenAI’s powerful embedding models to enable vector search, semantic similarity, and other AI features in your workspace.
Required Fields
Endpoint *
The Azure OpenAI Endpoint is required to connect to your specific Azure OpenAI resource.
How to obtain your Endpoint:
- Log in to the Azure Portal
- Navigate to your Azure OpenAI resource
- In the Overview section, find and copy the Endpoint URL
- The endpoint typically follows this format:
https://{your-resource-name}.openai.azure.com/
Note: The endpoint is specific to your Azure deployment region and resource.
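If you want to double-check the endpoint value outside of PipesHub, the short sketch below shows where it goes when constructing a client with the openai Python SDK. The resource name, key placeholder, and API version here are assumptions for illustration only, not values supplied by PipesHub.

```python
from openai import AzureOpenAI

# Minimal sketch: substitute your own resource name, key, and a supported API version.
client = AzureOpenAI(
    azure_endpoint="https://{your-resource-name}.openai.azure.com/",  # the Endpoint value
    api_key="<your-api-key>",
    api_version="2024-02-01",  # any data-plane API version your resource supports
)
```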
API Key *
The API Key is required to authenticate your requests to your Azure OpenAI resource.
How to obtain an API Key:
- In the Azure Portal, go to your Azure OpenAI resource
- Navigate to “Keys and Endpoint” under Resource Management
- Copy either Key 1 or Key 2 (both will work)
Security Note: Your API key should be kept secure and never shared publicly. PipesHub securely stores your API key and uses it only for authenticating requests to Azure OpenAI.
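If you test the key yourself, a common practice is to read it from an environment variable rather than hardcoding it. A minimal sketch, assuming you export AZURE_OPENAI_ENDPOINT and AZURE_OPENAI_API_KEY yourself (these variable names are a convention for this example, not something PipesHub requires):

```python
import os
from openai import AzureOpenAI

# Read credentials from the environment instead of embedding them in source code.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)
```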
Embedding Model *
The Embedding Model field defines which Azure OpenAI embedding model you want to use with PipesHub.
Available Azure OpenAI embedding models:
- text-embedding-3-small - Newer model with excellent performance for most use cases
- text-embedding-3-large - Higher-dimensional embeddings for tasks requiring maximum accuracy
Note: Availability depends on which models you have deployed in your Azure OpenAI resource.
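To confirm a deployed embedding model responds as expected, you can make a single test request with the openai Python SDK. This is an independent sanity check, not part of the PipesHub setup flow; the deployment name below is a placeholder for whatever you deployed in your Azure OpenAI resource.

```python
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

# On Azure, "model" is the deployment name, not necessarily the base model name.
resp = client.embeddings.create(
    model="text-embedding-3-small",  # placeholder: use your deployment name
    input="PipesHub test sentence",
)
print(len(resp.data[0].embedding))  # 1536 for text-embedding-3-small
```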
Configuration Steps
As shown in the image above:
- Select “Azure OpenAI” as your Provider from the dropdown
- Enter your Azure OpenAI Endpoint in the designated field (marked with *)
- Enter your Azure OpenAI API Key in the designated field (marked with *)
- Specify your desired Embedding Model (marked with *)
- Click “Continue” to proceed with setup
The configuration interface marks required fields with an asterisk (*). All three fields (Endpoint, API Key, and Embedding Model) must be filled in to successfully configure the Azure OpenAI embedding integration.
Model Deployment
Before you can use a specific embedding model in Azure OpenAI, you must first deploy it:
- Go to your Azure OpenAI resource in the Azure Portal
- Select “Model Deployments” from the left menu
- Click “Create New Deployment”
- Select your desired embedding model (e.g., text-embedding-3-small)
- Provide a deployment name
- Set your desired capacity
The deployment name is what you’ll use as your Embedding Model value in the PipesHub configuration.
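As a concrete illustration of that point, the value you enter in PipesHub is the deployment name, which may differ from the underlying base model. A hedged sketch, assuming a deployment you happened to name my-embeddings (a hypothetical name) backed by text-embedding-3-small:

```python
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

# The "model" argument must be the deployment name you chose in Azure,
# not necessarily the base model name (e.g., text-embedding-3-small).
resp = client.embeddings.create(
    model="my-embeddings",  # hypothetical deployment name; use yours
    input="deployment name check",
)
print(len(resp.data[0].embedding))  # 1536 if the deployment is text-embedding-3-small
```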
Model Specifications
| Model | Dimensions | Max Input Tokens | Performance | Cost |
|---|---|---|---|---|
| text-embedding-3-small | 1536 | 8191 | Very High | Medium |
| text-embedding-3-large | 3072 | 8191 | Highest | Higher |
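The Dimensions column is the length of the vector each model returns. A quick way to see this in practice, and to exercise the semantic-similarity use case mentioned earlier, is to embed two strings and compare them. A sketch assuming the same environment variables as above and a deployment of text-embedding-3-small (adjust the deployment name to yours):

```python
import math
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

resp = client.embeddings.create(
    model="text-embedding-3-small",  # placeholder: your deployment name
    input=["How do I reset my password?", "Steps to change a forgotten password"],
)
a, b = resp.data[0].embedding, resp.data[1].embedding
print(len(a))  # should match the Dimensions column above (1536 here)

# Cosine similarity: higher values mean the two texts are semantically closer.
dot = sum(x * y for x, y in zip(a, b))
norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
print(dot / norm)
```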
Usage Considerations
- API usage will count against your Azure OpenAI resource’s quota and billing
- Different models have different pricing; check the Azure OpenAI pricing page for details
- Consider your Azure resource capacity when configuring embedding models
- Regional availability may vary for different embedding models
- Azure OpenAI provides additional security, compliance, and data residency options
Troubleshooting
- If you encounter authentication errors, verify that your API key and endpoint are correct (see the diagnostic sketch after this list)
- Ensure your embedding model is properly deployed in your Azure OpenAI resource
- Check that your Azure subscription is active and has sufficient quota
- Verify network connectivity and firewall settings if using private endpoints
- Ensure the specified model name exactly matches your deployment name
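A small diagnostic sketch that maps the most common failures onto the checks above. The exception classes come from the openai Python SDK, and the deployment name is a placeholder; this is an illustrative check run outside PipesHub, not part of its setup.

```python
import os
import openai
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

try:
    client.embeddings.create(model="text-embedding-3-small", input="ping")
    print("Endpoint, API key, and deployment name all look correct.")
except openai.AuthenticationError:
    print("Authentication failed: re-check the API key and that it belongs to this resource.")
except openai.NotFoundError:
    print("Deployment not found: the model value must exactly match a deployment name.")
except openai.APIConnectionError:
    print("Could not reach the endpoint: check the URL, network, and firewall or private endpoint settings.")
```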
For additional support, refer to the Azure OpenAI documentation or contact PipesHub support.