Performance Benchmarks
Performance measurements for vector search are only useful when they reflect realistic workloads. Altor Vec is designed for low-latency local retrieval, so benchmark results should capture not only raw nearest-neighbor speed but also the cost of index construction, memory pressure, and the behavior of the JavaScript-to-Wasm boundary. Comparing a local Wasm engine to a hosted vector service can be misleading if network latency, serialization overhead, and infrastructure variance are ignored.
In internal tests, Altor Vec performs best when the dataset fits comfortably in memory and queries can be executed close to the application logic. That makes it a strong fit for browser-based search, edge functions, CLI tooling, and deterministic regression benchmarks. Once dataset size or write concurrency grows beyond the limits of a local index, a dedicated vector database may still be the better operational choice.
Latency and Recall
Query latency depends on vector dimensionality, HNSW tuning, and hardware characteristics. On a modern laptop, small to medium corpora often return top-10 results in a few milliseconds once the Wasm module is warm. Cold starts are slower because the runtime must initialize the module and allocate its initial data structures. Measuring p50, p95, and p99 latency is more informative than a single average, because tail latency under bursty load is what users actually experience.
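The warm-up and percentile reporting described above can be sketched as a small harness. This is a minimal sketch, not part of the Altor Vec API: the `runQuery` callback is a hypothetical stand-in for whatever search call is being measured, and `percentile` uses the nearest-rank method.

```typescript
// Nearest-rank percentile over raw latency samples (milliseconds):
// sort ascending, then take the sample at ceil(p/100 * n) - 1.
function percentile(samplesMs: number[], p: number): number {
  const sorted = [...samplesMs].sort((a, b) => a - b);
  return sorted[Math.max(0, Math.ceil((p / 100) * sorted.length) - 1)];
}

// High-resolution timer; the cast avoids needing DOM or Node type
// libraries at compile time. performance.now() exists globally in
// browsers and in Node.js v16+.
const now = (): number => (globalThis as any).performance.now();

// Benchmark harness: warm up first so cold-start cost (module
// initialization, allocation) is excluded, then record per-query
// timings and report tail percentiles rather than a single average.
// `runQuery` is a placeholder for the actual search call under test.
function measure(runQuery: () => void, warmup = 50, iterations = 500) {
  for (let i = 0; i < warmup; i++) runQuery();
  const samples: number[] = [];
  for (let i = 0; i < iterations; i++) {
    const t0 = now();
    runQuery();
    samples.push(now() - t0);
  }
  return {
    p50: percentile(samples, 50),
    p95: percentile(samples, 95),
    p99: percentile(samples, 99),
  };
}
```

Running the same harness against a cold module (no warm-up loop) and a warm one makes the cold-start penalty visible as a separate number instead of letting it pollute the steady-state percentiles.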
When benchmarking recall, compare approximate results against an exact baseline on a representative evaluation set. A system that is slightly faster but misses the relevant document too often will feel worse than one with marginally higher median latency.
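Recall against an exact baseline can be computed directly from the returned ID lists. A minimal sketch, assuming each query yields an ordered list of document IDs from both the approximate index and an exact brute-force scan:

```typescript
// Recall@k: for each query, the fraction of the exact top-k results
// that also appear in the approximate top-k, averaged over all queries.
// `approx[i]` and `exact[i]` are ranked ID lists for query i.
function recallAtK(approx: number[][], exact: number[][], k: number): number {
  let total = 0;
  for (let i = 0; i < approx.length; i++) {
    const truth = new Set(exact[i].slice(0, k));
    const hits = approx[i].slice(0, k).filter((id) => truth.has(id)).length;
    total += hits / k;
  }
  return total / approx.length;
}
```

Reporting recall@k alongside the latency percentiles makes the speed/quality trade-off explicit when tuning the index.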
Memory and System Comparisons
Memory usage grows with vector count, dimension size, metadata payload, and the HNSW connectivity parameter M. That means two indexes with the same number of records can behave very differently depending on how aggressively the graph is tuned. For fixture testing, track resident set size before and after ingestion so regression reports include both speed and footprint.
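The before/after RSS tracking can be sketched as follows. This assumes a Node.js runtime, and `ingest` is a hypothetical stand-in for whatever step loads vectors into the index:

```typescript
// Record resident set size (bytes) around an ingestion step so
// regression reports capture memory footprint as well as speed.
// Accessing process via globalThis avoids needing Node type
// definitions at compile time.
function measureIngestFootprint(ingest: () => void) {
  const proc = (globalThis as any).process;
  // When run with --expose-gc, collect first to stabilize the baseline.
  const gc = (globalThis as any).gc;
  if (typeof gc === "function") gc();
  const beforeBytes: number = proc.memoryUsage().rss;
  ingest();
  const afterBytes: number = proc.memoryUsage().rss;
  return { beforeBytes, afterBytes, deltaBytes: afterBytes - beforeBytes };
}
```

Note that RSS deltas are noisy: allocator behavior and garbage collection can shift the numbers between runs, so compare medians across several ingestion runs rather than single measurements.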
Compared with Algolia or Pinecone, Altor Vec removes network round trips and external service costs, but it also shifts operational responsibility into your application. You own embedding generation, persistence strategy, and index lifecycle management. That trade-off is often worth it for local-first search, test fixtures, and privacy-sensitive deployments where data should not leave the runtime boundary.