VectorAI DB integrates with popular AI frameworks and embedding providers so you can focus on application logic rather than infrastructure. Use any supported integration to generate embeddings, store vectors, and run similarity searches with minimal setup. Choose a framework integration like LangChain or LlamaIndex when you want built-in abstractions for RAG pipelines, retriever chains, and document management. Choose an embedding provider directly when you need full control over how vectors are generated and stored using the VectorAI DB client.

Frameworks

Build AI applications using VectorAI DB as the vector store in your preferred framework.

LangChain

Use VectorAI DB as a vector store in LangChain for RAG pipelines, similarity search, and retriever-based chains. The integration supports both synchronous and asynchronous operations.

LlamaIndex

Build RAG applications and query engines with VectorAI DB as the storage backend in LlamaIndex.

Embedding providers

Generate vector embeddings from text, images, or other content using these providers, then store and search them in VectorAI DB.

OpenAI

Generate embeddings with OpenAI models like text-embedding-3-small and text-embedding-3-large for semantic search and retrieval.
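OpenAI's text-embedding-3 models return vectors normalized to unit length, so cosine similarity between them reduces to a plain dot product. The sketch below shows the scoring math in plain Python; the two vectors are toy stand-ins, not real model output.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two vectors; equals the dot product
    when both vectors are already unit-normalized."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy stand-ins for embedding vectors (not actual model output).
doc_vector = [0.6, 0.8]
query_vector = [0.8, 0.6]
print(round(cosine_similarity(doc_vector, query_vector), 3))  # → 0.96
```

A similarity of 1.0 means identical direction; values near 0 mean the texts are unrelated.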

How integrations work

All integrations follow the same pattern:
  1. Generate embeddings — Use an embedding provider (such as OpenAI or Cohere) to convert your data into vectors.
  2. Store in VectorAI DB — Insert vectors into a collection with optional metadata payloads.
  3. Search — Query with a vector to find semantically similar results, with optional metadata filtering.
You can use embedding providers directly with the VectorAI DB client, or use a framework like LangChain that handles embedding generation and storage automatically.
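The three steps above can be sketched end to end with an in-memory stand-in. Everything here is illustrative: `fake_embed`, `Collection`, and the `insert`/`search` method names are hypothetical, not the VectorAI DB client API.

```python
import math

def fake_embed(text):
    """Stand-in for a real embedding provider: hashes characters into a
    small unit-normalized vector so the example runs offline."""
    vec = [0.0] * 8
    for i, ch in enumerate(text.lower()):
        vec[i % 8] += ord(ch)
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

class Collection:
    """Toy in-memory vector collection (not the VectorAI DB client)."""
    def __init__(self):
        self.points = []  # list of (vector, payload) pairs

    def insert(self, vector, payload=None):
        self.points.append((vector, payload or {}))

    def search(self, query_vector, top_k=3, filter=None):
        # Vectors are unit-normalized, so the dot product is cosine similarity.
        def score(vec):
            return sum(q * v for q, v in zip(query_vector, vec))
        candidates = [
            (score(vec), payload)
            for vec, payload in self.points
            if filter is None
            or all(payload.get(k) == v for k, v in filter.items())
        ]
        return sorted(candidates, key=lambda t: t[0], reverse=True)[:top_k]

# 1. Generate embeddings  2. Store with metadata  3. Search with a filter
col = Collection()
col.insert(fake_embed("vector databases"), {"topic": "db"})
col.insert(fake_embed("cooking pasta"), {"topic": "food"})
results = col.search(fake_embed("vector database"), top_k=1, filter={"topic": "db"})
```

In a real application, a provider SDK replaces `fake_embed` and the VectorAI DB client replaces `Collection`; the flow of data stays the same.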

Quick reference

The following table summarizes each integration and when to use it.
| Integration | Type | Use case |
| --- | --- | --- |
| LangChain | Framework | RAG pipelines, retriever chains, similarity search with document abstractions |
| OpenAI | Embedding provider | Generate embeddings for semantic search and clustering |
| LlamaIndex | Framework | Query engines, data agents, and RAG applications |