llama-index-vector-stores-actian-vectorai package. This integration supports all standard LlamaIndex vector store operations, including adding nodes, similarity search, metadata filtering, and both synchronous and asynchronous workflows.
Installation
Install the VectorAI DB vector store integration for LlamaIndex.
Requirements
Before using this integration, make sure your environment meets the following prerequisites:
- Python 3.10–3.12
- A running Actian VectorAI DB instance (default endpoint: localhost:50051)
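With these prerequisites in place, the package installs with pip under the name given in the introduction above:

```shell
pip install llama-index-vector-stores-actian-vectorai
```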
Quickstart
The ActianVectorAIVectorStore uses a context manager to handle the connection lifecycle automatically. Vector configuration is inferred from the first inserted embedding if not specified.
The following example creates text nodes with embeddings, stores them in VectorAI DB, and performs a similarity search:
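A minimal sketch of that flow. The import path for ActianVectorAIVectorStore is an assumption based on the package name, and the toy 3-dimensional embeddings stand in for vectors you would normally get from an embedding model:

```python
from llama_index.core.schema import TextNode
from llama_index.core.vector_stores.types import VectorStoreQuery

# Import path is an assumption based on the package name.
from llama_index.vector_stores.actian_vectorai import ActianVectorAIVectorStore

# Toy nodes with precomputed 3-dimensional embeddings.
nodes = [
    TextNode(text="VectorAI DB stores dense vectors.", embedding=[0.1, 0.2, 0.3]),
    TextNode(text="LlamaIndex integrates many vector stores.", embedding=[0.3, 0.1, 0.2]),
]

query = VectorStoreQuery(query_embedding=[0.1, 0.2, 0.3], similarity_top_k=2)

# The context manager connects on entry and closes the connection on exit.
# Vector size and distance metric are inferred from the first inserted embedding.
with ActianVectorAIVectorStore(url="localhost:50051") as store:
    store.add(nodes)
    result = store.query(query)
    print(result.ids, result.similarities)
```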
Connection management
The integration supports several connection patterns for managing the client lifecycle. The examples in this section assume you have the nodes and query objects shown in the Quickstart above.
Context manager (recommended)
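A condensed sketch of the pattern, reusing the Quickstart objects (import path assumed as before):

```python
from llama_index.vector_stores.actian_vectorai import ActianVectorAIVectorStore  # assumed path

# __enter__ opens the connection; __exit__ closes it, even if an operation raises.
with ActianVectorAIVectorStore(url="localhost:50051") as store:
    store.add(nodes)
    result = store.query(query)
```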
Use a context manager for automatic connection handling.
Manual connection
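A sketch of explicit connection handling with the connect() and close() methods listed in the API reference, again assuming the Quickstart objects and import path:

```python
from llama_index.vector_stores.actian_vectorai import ActianVectorAIVectorStore  # assumed path

store = ActianVectorAIVectorStore(url="localhost:50051")
store.connect()
try:
    store.add(nodes)
    result = store.query(query)
finally:
    # Always release the connection, even if an operation fails.
    store.close()
```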
For fine-grained control over the connection lifecycle, call connect() and close() explicitly.
External client
Pass a pre-configured VectorAIClient when you need to share a connection or supply custom client configuration:
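A sketch of the pattern. The import path and constructor arguments for VectorAIClient are assumptions about the Actian client package, not a confirmed API:

```python
from llama_index.vector_stores.actian_vectorai import ActianVectorAIVectorStore  # assumed path
from actian_vectorai import VectorAIClient  # assumed client package/path

# A caller-managed client; its settings take precedence over url/client_kwargs.
client = VectorAIClient("localhost:50051")

store = ActianVectorAIVectorStore(client=client)
store.add(nodes)
result = store.query(query)

# The caller owns the client's lifecycle and must shut it down when done
# (assuming the client exposes a close() method).
client.close()
```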
When an external client is provided, url and client_kwargs are ignored. The caller is responsible for managing the client's lifecycle.
Async operations
All operations have async counterparts for non-blocking workflows. Async methods use AsyncVectorAIClient under the hood. The examples in this section use the same nodes setup as the Quickstart.
Async context manager
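A sketch of the async pattern, reusing the Quickstart nodes and query (import path assumed as before):

```python
import asyncio

from llama_index.vector_stores.actian_vectorai import ActianVectorAIVectorStore  # assumed path

async def main() -> None:
    # __aenter__ opens the async connection; __aexit__ closes it.
    async with ActianVectorAIVectorStore(url="localhost:50051") as store:
        await store.async_add(nodes)
        result = await store.aquery(query)
        print(result.ids)

asyncio.run(main())
```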
Use an async context manager for automatic connection handling.
Async manual connection
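A sketch using the aconnect() and aclose() methods from the API reference, assuming the Quickstart objects:

```python
import asyncio

from llama_index.vector_stores.actian_vectorai import ActianVectorAIVectorStore  # assumed path

async def main() -> None:
    store = ActianVectorAIVectorStore(url="localhost:50051")
    await store.aconnect()
    try:
        await store.async_add(nodes)
        result = await store.aquery(query)
    finally:
        # Always release the async connection.
        await store.aclose()

asyncio.run(main())
```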
For fine-grained control over the async connection lifecycle, call aconnect() and aclose() explicitly.
Async external client
Pass a pre-configured AsyncVectorAIClient when you need to share an async connection:
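A sketch of the pattern. As with the sync external client, the import path and constructor arguments for AsyncVectorAIClient are assumptions:

```python
import asyncio

from llama_index.vector_stores.actian_vectorai import ActianVectorAIVectorStore  # assumed path
from actian_vectorai import AsyncVectorAIClient  # assumed client package/path

async def main() -> None:
    # A caller-managed async client, shared across stores if desired.
    async_client = AsyncVectorAIClient("localhost:50051")
    store = ActianVectorAIVectorStore(async_client=async_client)
    await store.async_add(nodes)
    result = await store.aquery(query)
    # The caller owns the client's lifecycle (close it when done).

asyncio.run(main())
```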
async_client must be a different instance from the internal async client of a provided sync client.
Deleting data
Remove nodes from the vector store using document IDs, metadata filters, or by clearing the entire collection.
Delete by source document ID
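A one-line sketch, assuming a connected store as in the Quickstart; the ref_doc_id keyword follows the standard LlamaIndex delete() signature, and "doc-123" is a placeholder:

```python
# Removes every node whose source document has the given ID;
# adelete() is the async variant.
store.delete(ref_doc_id="doc-123")
```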
Calling delete() with a source document ID removes all nodes associated with that document.
Delete with metadata filters
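A sketch using LlamaIndex's standard filter classes, assuming a connected store as in the Quickstart; the "category" field and its value are placeholders:

```python
from llama_index.core.vector_stores.types import (
    FilterOperator,
    MetadataFilter,
    MetadataFilters,
)

# Delete all nodes whose "category" payload field equals "draft".
filters = MetadataFilters(
    filters=[MetadataFilter(key="category", value="draft", operator=FilterOperator.EQ)]
)
store.delete_nodes(filters=filters)
```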
delete_nodes() removes nodes matching specific metadata conditions.
Clear collection
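A one-line sketch, assuming a connected store as in the Quickstart:

```python
# Drops the whole collection, vectors and payload metadata alike;
# aclear() is the async variant.
store.clear()
```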
clear() deletes the entire collection.
Custom vector configuration
Pass dense_vector_params to specify explicit vector parameters instead of relying on auto-detection. If dense_vector_params is omitted, vector configuration is inferred from the first inserted embedding and defaults to cosine distance.
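A sketch of explicit configuration. The import path for VectorParams and the exact spelling of the distance value are assumptions; the size/distance fields come from the parameter table below:

```python
from llama_index.vector_stores.actian_vectorai import ActianVectorAIVectorStore  # assumed path
from actian_vectorai import VectorParams  # assumed import path

store = ActianVectorAIVectorStore(
    url="localhost:50051",
    collection_name="my_collection",
    # Explicit size and distance metric; skips inference from the first embedding.
    dense_vector_params=VectorParams(size=384, distance="cosine"),  # distance spelling assumed
)
```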
Metadata filtering
Metadata filters can be used with query, delete_nodes, and adelete_nodes to narrow results based on payload fields.
Supported filter operators
The following operators are supported for metadata filtering:

| Operator | Description |
|---|---|
| EQ | Exact match (string or numeric). |
| NE | Not equal. |
| GT / LT | Numeric greater/less than. |
| GTE / LTE | Numeric greater/less than or equal. |
| IN | Match any value in a list (or comma-separated string). |
| NIN | Match none of the values in a list (or comma-separated string). |
| TEXT_MATCH | Case-sensitive substring/token match. |
| IS_EMPTY | Field is absent or null. |
Unsupported operators (ANY, ALL, TEXT_MATCH_INSENSITIVE, CONTAINS) raise NotImplementedError.
Filter conditions
AND, OR, and NOT conditions are supported through FilterCondition:
Configuration
The following table lists the parameters you can pass when creating an ActianVectorAIVectorStore instance.
| Parameter | Type | Default | Description |
|---|---|---|---|
| url | str | "localhost:50051" | Actian VectorAI DB endpoint (host:port). Ignored when explicit clients are provided. |
| collection_name | str | "llama_index_collection" | Collection to use for storing vectors and metadata. |
| dense_vector_name | str | "llama_index_dense_vector" | Name of the dense vector field inside the collection. |
| dense_vector_params | VectorParams \| None | None | Vector configuration (size, distance metric). Inferred from the first inserted embedding if omitted (defaults to cosine distance). |
| stores_text | bool | False | Store node text in the point payload in addition to metadata. |
| clear_existing_collection | bool | False | Delete any existing collection with the same name before the first operation. |
| client_kwargs | dict \| None | None | Extra keyword arguments forwarded to internally created sync/async clients. |
| collection_kwargs | dict \| None | None | Extra keyword arguments passed to collection creation. Do not include vectors_config; it is derived from dense_vector_name and dense_vector_params. |
| client | VectorAIClient \| None | None | Pre-configured synchronous client. When provided, url and client_kwargs are ignored. |
| async_client | AsyncVectorAIClient \| None | None | Pre-configured asynchronous client. Must be a different instance from the internal async client of a provided client. |
API reference
The following table lists all available methods and their async counterparts.

| Method | Async variant | Description |
|---|---|---|
| add() | async_add() | Add nodes to the vector store. |
| query() | aquery() | Query the vector store with an embedding. |
| delete() | adelete() | Delete nodes by source document ID. |
| delete_nodes() | adelete_nodes() | Delete nodes matching metadata filters. |
| clear() | aclear() | Delete the entire collection. |
| connect() | aconnect() | Manually open a connection. |
| close() | aclose() | Manually close a connection. |
Limitations
The following limitations apply to the current version of this integration:
- get_nodes() and aget_nodes() are not implemented (pending scroll API support in the Actian VectorAI client).
- Only VectorStoreQueryMode.DEFAULT (dense vector search) is supported.
Next steps
Explore related topics to get the most out of VectorAI DB with LlamaIndex:
- OpenAI embeddings — Configure OpenAI as your embedding provider.
- LangChain — Use VectorAI DB with the LangChain framework.
- Search — Understand the underlying vector search operations.
- Filtering — Apply metadata conditions to narrow search results.