Quick Start Tutorial
This tutorial walks through a minimal but realistic Altor Vec workflow. The goal is not just to show a single API call, but to demonstrate how records should be shaped, how vectors are inserted, and what a search response looks like when you are building a documentation or knowledge-base experience. In a real application, you would generate embeddings with a model such as all-MiniLM-L6-v2, text-embedding-3-small, or an in-house encoder. For the example below, assume you already have numeric vectors available.
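If you want to run the examples without wiring up a real embedding model, a deterministic stand-in keeps the tutorial self-contained. The `toyEmbed` function below is purely illustrative and not part of altor-vec; it hashes characters into a fixed-size vector, which is useless for semantic quality but fine for exercising the API shape:

```typescript
// Stand-in for a real embedding model: maps text to a fixed-size numeric
// vector deterministically. Illustrative only; in production you would
// call an actual encoder such as all-MiniLM-L6-v2.
function toyEmbed(text: string, dimensions = 384): number[] {
  const vector = new Array(dimensions).fill(0);
  for (let i = 0; i < text.length; i++) {
    vector[text.charCodeAt(i) % dimensions] += 1;
  }
  // L2-normalize so inner-product scores behave like cosine similarity.
  const norm = Math.sqrt(vector.reduce((sum, v) => sum + v * v, 0)) || 1;
  return vector.map((v) => v / norm);
}

const installationVector = toyEmbed('how to install the library');
```

Swap in your real encoder before measuring result quality; the only property the rest of the tutorial relies on is a consistent vector length.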
The most important rule is consistency. Every vector inserted into an engine must have the same dimensionality, and query vectors must be produced by the same embedding model or a fully compatible one. Mixing dimensions or encoder families will degrade results, even if the API accepts the data shape.
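One cheap way to enforce the consistency rule is to validate dimensionality before every insert. This is a minimal sketch; the record shape mirrors the tutorial's examples, and the helper name is our own, not an altor-vec API:

```typescript
// Record shape matching the tutorial's addMany payload.
interface VectorRecord {
  id: string;
  vector: number[];
  metadata?: Record<string, string>;
}

// Reject any batch containing a vector whose length does not match the
// engine's configured dimensionality, before it reaches the index.
function assertDimensions(records: VectorRecord[], dimensions: number): void {
  for (const record of records) {
    if (record.vector.length !== dimensions) {
      throw new Error(
        `Record ${record.id} has ${record.vector.length} dimensions, expected ${dimensions}`
      );
    }
  }
}
```

Failing fast here turns a silent relevance problem into an immediate, debuggable error.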
Build an Index
Start by creating an engine and inserting a few chunks. Each record should carry an id and can optionally include metadata that helps you render results later.
import {WasmSearchEngine} from 'altor-vec';

const engine = await WasmSearchEngine.create({
  dimensions: 384,
  m: 16,
  efConstruction: 128,
});

await engine.addMany([
  {
    id: 'guide-installation',
    vector: installationVector,
    metadata: {title: 'Installation Guide', type: 'guide'},
  },
  {
    id: 'guide-performance',
    vector: performanceVector,
    metadata: {title: 'Performance Benchmarks', type: 'reference'},
  },
]);
After insertion, the index is ready for queries. Some teams build the index at startup, while others precompute and persist it during deployment so the application can serve traffic immediately.
Run a Search Query
Once the index exists, convert the user query into a vector and request the nearest matches. Keep the initial limit small, then re-rank or expand if you need a broader candidate set.
const results = await engine.search(queryVector, {
  limit: 3,
  efSearch: 64,
});

for (const result of results) {
  console.log(result.id, result.score, result.metadata?.title);
}
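When you do want a broader candidate set, a common pattern is to over-fetch and then re-rank locally before trimming to the final limit. The sketch below assumes the result shape shown above ({id, score, metadata}); the boost heuristic and function name are illustrative, not part of altor-vec:

```typescript
// Result shape as returned by engine.search in the tutorial.
interface SearchResult {
  id: string;
  score: number;
  metadata?: {title?: string; type?: string};
}

// Illustrative re-ranking pass: boost candidates of a preferred metadata
// type, re-sort by the adjusted score, then trim to the final limit.
function rerankByType(
  results: SearchResult[],
  preferredType: string,
  limit: number,
  boost = 0.1
): SearchResult[] {
  return results
    .map((r) => ({
      ...r,
      score: r.score + (r.metadata?.type === preferredType ? boost : 0),
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, limit);
}
```

In this pattern you would call engine.search with a larger limit (say, 20) and let the re-ranker produce the final top three.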
In practice, you should log query latency, top-score distribution, and click feedback so you can tell whether poor results come from embeddings, chunking strategy, or index tuning. A good quick-start project usually adds a small evaluation set with known query-to-document pairs. That gives you a baseline before you experiment with efSearch, metadata filters, or hybrid ranking.
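A minimal version of that evaluation set is a recall@k check: for each known query, does the expected document appear in the top k results? The data shapes below are our own assumptions for the sketch, with retrievedIds holding the ids returned by engine.search in rank order:

```typescript
// One labeled evaluation case: a query paired with the document id it
// should retrieve, plus the ids the engine actually returned, in order.
interface EvalCase {
  queryId: string;
  expectedId: string;
  retrievedIds: string[];
}

// Fraction of cases whose expected document appears in the top k results.
function recallAtK(cases: EvalCase[], k: number): number {
  if (cases.length === 0) return 0;
  const hits = cases.filter((c) =>
    c.retrievedIds.slice(0, k).includes(c.expectedId)
  ).length;
  return hits / cases.length;
}
```

Tracking this number across changes to chunking, efSearch, or the embedding model tells you quickly whether a tweak helped or hurt.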