Getting Started with Altor Vec
Altor Vec is a vector search library built for teams that want fast semantic retrieval without depending on a hosted search service. The library focuses on a small API surface, predictable performance, and deployment flexibility. You can embed documents, product descriptions, support articles, or code snippets into vectors, store those vectors in a local index, and run nearest-neighbor queries directly in your application. That makes it useful for offline search, privacy-sensitive workloads, edge deployments, and test environments where deterministic behavior matters more than shared infrastructure.
Unlike a full search platform, Altor Vec is intentionally narrow in scope. It does not try to manage your embedding pipeline, user analytics, dashboards, or crawl scheduling. Instead, it gives you a performant search engine that accepts vectors and metadata, exposes HNSW-based indexing controls, and lets you decide how to model documents, segment content, and rank results. This separation is valuable when you need tight integration with an existing stack or when you want to benchmark search quality independent of data ingestion concerns.
Core Concepts
The central object in Altor Vec is the search engine instance. You create an engine with a fixed dimensionality, insert records that contain an identifier, a vector, and optional metadata, then query with a vector representing the user intent. Each query returns the closest matches along with similarity scores that you can combine with your own business rules. For example, a documentation site might boost results tagged as guide, while a support assistant may prefer fresher content when scores are close.
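A rule like the guide boost can live entirely outside the engine, as plain post-processing of the returned matches. The sketch below assumes nothing about Altor Vec itself: the Result shape, the tags field, and the 1.1 multiplier are all illustrative.

```typescript
// Illustrative post-ranking: multiply the score of guide-tagged results
// by a small boost, then re-sort. The Result shape and the boost weight
// are hypothetical, not part of the Altor Vec API.
interface Result {
  id: string;
  score: number;
  metadata: {tags?: string[]};
}

function boostGuides(results: Result[], weight = 1.1): Result[] {
  return results
    .map((r) => ({
      ...r,
      score: r.metadata.tags?.includes('guide') ? r.score * weight : r.score,
    }))
    .sort((a, b) => b.score - a.score);
}
```

With a weight of 1.1, a guide scoring 0.78 rises to roughly 0.858 and outranks a reference page at 0.80, which is the kind of close-call adjustment this pattern is for.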
Altor Vec uses approximate nearest-neighbor search to balance index size, build speed, and lookup latency. In practice, that means you can search thousands to millions of vectors quickly, but index tuning matters. The m, ef_construction, and ef_search parameters control how densely the HNSW graph is connected and how much work the engine performs during indexing and querying. Lower values reduce memory use and build time, while higher values generally improve recall at the cost of index size and query latency.
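As a sketch, these tuning knobs might be supplied when the engine is created. The parameter names come from the HNSW discussion above, but passing them as options to create() is an assumption; check the configuration reference for the exact shape.

```typescript
import {WasmSearchEngine} from 'altor-vec';

// Hypothetical tuning configuration: everything beyond dimensions is an
// assumption about the option shape, not confirmed API.
const engine = await WasmSearchEngine.create({
  dimensions: 384,
  m: 16,                // links per node: denser graph, more memory
  ef_construction: 200, // candidate list size while building the index
  ef_search: 64,        // candidates examined per query; raise for recall
});
```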
Typical Workflow
Most projects follow the same flow. First, generate embeddings for every document chunk using the model of your choice. Second, initialize Altor Vec with the correct vector dimension. Third, add all records to the index and persist the result if your environment supports it. Finally, embed each user query and ask the engine for the top matches.
import {WasmSearchEngine} from 'altor-vec';

// Create an engine whose dimension matches your embedding model.
const engine = await WasmSearchEngine.create({dimensions: 384});

// Insert one record; call add() once per document chunk.
await engine.add({
  id: 'docs:intro:1',
  vector: embedding,
  metadata: {title: 'Getting Started', section: 'overview'},
});

// Embed the user query with the same model, then fetch the top matches.
const results = await engine.search(queryEmbedding, {limit: 5});
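The freshness preference mentioned under Core Concepts can likewise be applied after the search call. This sketch is plain TypeScript with no Altor Vec dependency; the Hit shape, the updatedAt field, and the epsilon width are illustrative.

```typescript
// Illustrative tiebreak: treat scores within the same epsilon-wide band
// as equal and rank newer content first inside a band. Quantizing into
// bands keeps the comparator consistent, which Array.prototype.sort
// requires.
interface Hit {
  id: string;
  score: number;
  updatedAt: number; // e.g. a Unix timestamp carried in your metadata
}

function preferFresh(hits: Hit[], epsilon = 0.02): Hit[] {
  const band = (s: number) => Math.round(s / epsilon);
  return [...hits].sort(
    (a, b) => band(b.score) - band(a.score) || b.updatedAt - a.updatedAt,
  );
}
```

Here a result at 0.795 that is newer can outrank one at 0.801, because both fall in the same band, while a clearly weaker 0.5 result stays last regardless of age.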
The rest of this site covers installation, configuration, performance tuning, and migration guidance. If you are building a prototype, continue to the quick start tutorial. If you already know HNSW well, jump straight to the configuration and API reference sections.