Frameworks
Build AI applications using VectorAI DB as the vector store in your preferred framework.

LangChain
Use VectorAI DB as a vector store in LangChain for RAG pipelines, similarity search, and retriever-based chains. Supports sync and async operations.
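LangChain vector stores share a common shape (`from_texts` to index documents, `similarity_search` to query). This page does not show the VectorAI DB integration's actual import path or class name, so the sketch below uses a minimal in-memory stand-in with that shape; the class name `VectorAIDB`, its constructor, and the toy embedding function are all illustrative assumptions, not a published API.

```python
from typing import List


class VectorAIDB:
    """In-memory stand-in for a hypothetical LangChain vector-store class.

    Illustrates the `from_texts` / `similarity_search` interface that
    LangChain vector stores share; the real integration may differ.
    """

    def __init__(self, texts: List[str], embed):
        self._embed = embed
        self._docs = [(embed(t), t) for t in texts]

    @classmethod
    def from_texts(cls, texts: List[str], embedding) -> "VectorAIDB":
        return cls(texts, embedding)

    def similarity_search(self, query: str, k: int = 4) -> List[str]:
        # Rank stored documents by dot-product similarity to the query vector.
        qv = self._embed(query)
        scored = sorted(
            self._docs,
            key=lambda item: sum(a * b for a, b in zip(item[0], qv)),
            reverse=True,
        )
        return [text for _, text in scored[:k]]


# Toy keyword-count embedding; a real pipeline would use an embedding
# provider such as OpenAIEmbeddings here.
def toy_embed(text: str) -> List[float]:
    return [float(text.count(w)) for w in ("vector", "database", "search")]


store = VectorAIDB.from_texts(
    ["vector database basics", "cooking pasta", "semantic search with vectors"],
    toy_embed,
)
print(store.similarity_search("vector search", k=1))
```

In a real chain you would pass the store's retriever into a retrieval step rather than calling `similarity_search` directly.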
LlamaIndex
Build RAG applications and query engines with VectorAI DB as the storage backend in LlamaIndex.
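A LlamaIndex query engine follows a retrieve-then-synthesize flow: fetch the top-k matching documents, then generate an answer from them. The sketch below mirrors that flow with in-memory stand-ins; in a real application the storage would be a VectorAI DB collection and the synthesis step an LLM call, and every name here is illustrative rather than the integration's actual API.

```python
from typing import Callable, List, Tuple


def build_query_engine(
    docs: List[str],
    embed: Callable[[str], List[float]],
    k: int = 2,
) -> Callable[[str], str]:
    """Return a toy query engine: retrieve top-k docs, then 'synthesize'."""
    indexed: List[Tuple[List[float], str]] = [(embed(d), d) for d in docs]

    def query(question: str) -> str:
        qv = embed(question)
        # Retrieve: rank stored docs by dot-product similarity.
        ranked = sorted(
            indexed,
            key=lambda item: sum(a * b for a, b in zip(item[0], qv)),
            reverse=True,
        )
        context = " | ".join(text for _, text in ranked[:k])
        # Synthesize: a real engine would prompt an LLM with `context`;
        # here we just return it.
        return f"Answer based on: {context}"

    return query


# Toy keyword-count embedding standing in for a real embedding provider.
def embed(text: str) -> List[float]:
    return [float(text.lower().count(w)) for w in ("pricing", "limits", "auth")]


engine = build_query_engine(
    ["pricing tiers and limits", "auth with API keys", "team management"],
    embed,
)
print(engine("how does auth work"))
```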
Embedding providers
Generate vector embeddings from text, images, or other content using these providers, then store and search them in VectorAI DB.

OpenAI
Generate embeddings with OpenAI models like text-embedding-3-small and text-embedding-3-large for semantic search and retrieval.

How integrations work
All integrations follow the same pattern:

- Generate embeddings — Use an embedding provider (such as OpenAI or Cohere) to convert your data into vectors.
- Store in VectorAI DB — Insert vectors into a collection with optional metadata payloads.
- Search — Query with a vector to find semantically similar results, with optional metadata filtering.
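The three steps above can be sketched end to end. Since this page shows no client code, the `Collection` class and its `insert`/`search` methods are an in-memory stand-in with illustrative names, not the VectorAI DB client API; step 1 is represented by fixed toy vectors where a real pipeline would call an embedding provider.

```python
import math
from typing import Dict, List, Optional, Sequence


class Collection:
    """In-memory stand-in for a VectorAI DB collection (illustrative only)."""

    def __init__(self) -> None:
        self._points: List[Dict] = []

    def insert(self, vector: Sequence[float], payload: Dict) -> None:
        # Step 2: store the vector with an optional metadata payload.
        self._points.append({"vector": list(vector), "payload": payload})

    def search(
        self,
        query: Sequence[float],
        top_k: int = 3,
        where: Optional[Dict] = None,
    ) -> List[Dict]:
        # Step 3: rank by cosine similarity, with optional metadata filtering.
        def cosine(a: Sequence[float], b: Sequence[float]) -> float:
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(x * x for x in b))
            return dot / (na * nb) if na and nb else 0.0

        candidates = [
            p for p in self._points
            if not where
            or all(p["payload"].get(k) == v for k, v in where.items())
        ]
        return sorted(
            candidates,
            key=lambda p: cosine(p["vector"], query),
            reverse=True,
        )[:top_k]


# Step 1 would call an embedding provider (e.g. OpenAI's
# text-embedding-3-small); fixed toy vectors stand in for it here.
col = Collection()
col.insert([1.0, 0.0], {"lang": "en", "text": "hello"})
col.insert([0.9, 0.1], {"lang": "de", "text": "hallo"})
col.insert([0.0, 1.0], {"lang": "en", "text": "goodbye"})

hits = col.search([1.0, 0.0], top_k=1, where={"lang": "en"})
print(hits[0]["payload"]["text"])
```

The metadata filter narrows candidates before ranking, which is how payload filtering typically combines with vector search.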
Quick reference
The following table summarizes each integration and when to use it.

| Integration | Type | Use case |
|---|---|---|
| LangChain | Framework | RAG pipelines, retriever chains, similarity search with document abstractions |
| LlamaIndex | Framework | Query engines, data agents, and RAG applications |
| OpenAI | Embedding provider | Generate embeddings for semantic search and clustering |