VelesDB: Vector + Graph + Column Store

The Agentic Memory for Autonomous AI

The unified database combining vectors, knowledge graphs, and structured data. The cognitive backbone your AI agents need. Microsecond retrieval, VelesQL, zero cloud dependency.

  • 57µs Search Latency (Balanced mode, 10K/128D)
  • 66ns SIMD Distance (1536D dot product, 4x vs naive)
  • 15MB Binary Size (zero dependencies)
  • 100% Recall@10 (Perfect mode, 10K/128D)
VelesQL
-- Vector + Graph + SQL in one query
SELECT memory.*, similarity()
FROM agent_memory
WHERE vector NEAR $embedding
  AND MATCH (ctx)-[:RELATES_TO]->(fact)
  AND session_id = $current_session
ORDER BY similarity() DESC
LIMIT 10;

Why Agents Need More Than Vector Search

AI agents need three kinds of memory: semantic (what feels similar), episodic (what is factually connected), and structured (explicit knowledge). Traditional databases don't provide all three in a single, unified store.

Traditional Vector DBs

  • Vectors only, no relationships

    Can't model factual connections

  • 50-100ms latency per query

    Network round-trips add up fast

  • No offline, no data sovereignty

    Your data on someone else's servers

VelesDB Agentic Memory

  • Vector + Graph + Columns unified

    Complete memory for AI agents

  • 57µs search latency (10K)

    1000x faster than cloud alternatives

  • Runs anywhere, works offline

    Server, Browser, Mobile — your data stays local

Metric               | VelesDB     | Cloud Vector DBs
Search Latency (10K) | 57µs        | 50-100ms
10 Retrievals        | 1.3ms total | 500ms-1s total
Time to First Token  | Instant     | Noticeable delay

Features

The complete memory system for autonomous AI agents

Semantic Memory (Vectors)

HNSW index with SIMD acceleration. What your agent perceives as similar. Multi-Query Fusion with RRF, Average, Maximum strategies.
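
For illustration, here is a minimal Rust sketch of the RRF strategy named above: results from several sub-queries are merged by summing 1/(k + rank) per document. The fuse_rrf helper and its types are hypothetical, not the VelesDB API.

use std::collections::HashMap;

/// Reciprocal Rank Fusion: each ranked list contributes 1 / (k + rank) per document ID.
/// Hypothetical helper for illustration only; not the VelesDB API.
fn fuse_rrf(ranked_lists: &[Vec<u64>], k: f32) -> Vec<(u64, f32)> {
    let mut scores: HashMap<u64, f32> = HashMap::new();
    for list in ranked_lists {
        for (rank, id) in list.iter().enumerate() {
            // rank is 0-based; k is commonly set around 60
            *scores.entry(*id).or_insert(0.0) += 1.0 / (k + rank as f32 + 1.0);
        }
    }
    let mut fused: Vec<(u64, f32)> = scores.into_iter().collect();
    fused.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    fused
}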

Episodic Memory (Graph)

Native Knowledge Graph with nodes, edges, and the MATCH clause. What your agent knows is factually connected.

VelesQL - SQL + NEAR + MATCH

Unified query language for vectors (NEAR), graphs (MATCH), and structured data. No JSON DSL to learn.

Hybrid Search

Combine BM25 full-text, vector similarity, and graph traversal in a single query. Trigram Index 22-128x faster.
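
As a rough sketch of one common way such scores get blended (a generic hybrid-search pattern, not VelesDB internals; the hybrid_scores name and alpha weighting are assumptions), lexical scores can be normalized and mixed with vector similarity:

/// Min-max normalize BM25 scores to [0, 1], then blend with cosine similarity.
/// Generic hybrid-search pattern for illustration; not VelesDB internals.
fn hybrid_scores(bm25: &[f32], cosine: &[f32], alpha: f32) -> Vec<f32> {
    let (min, max) = bm25
        .iter()
        .fold((f32::MAX, f32::MIN), |(lo, hi), &s| (lo.min(s), hi.max(s)));
    let range = (max - min).max(f32::EPSILON);
    bm25.iter()
        .zip(cosine)
        // alpha = 1.0 -> pure full-text, alpha = 0.0 -> pure vector
        .map(|(&b, &c)| alpha * ((b - min) / range) + (1.0 - alpha) * c)
        .collect()
}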

SIMD + GPU Ready

AVX-512/AVX2/NEON auto-detection. 35ns dot product for 768D. GPU acceleration via wgpu.
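
The operation being accelerated is just this inner product; the AVX-512/AVX2/NEON paths compute the same sum 4 to 16 f32 lanes at a time. A plain-Rust reference version for comparison (illustrative, not VelesDB's kernel):

/// Scalar reference dot product; SIMD backends compute the same sum in wide lanes.
fn dot(a: &[f32], b: &[f32]) -> f32 {
    debug_assert_eq!(a.len(), b.len());
    a.iter().zip(b).map(|(x, y)| x * y).sum()
}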

Run Anywhere

Server, CLI, Python, TypeScript, WASM, iOS, Android, Tauri. Same core, same performance.

Metadata-Only Collections

Lightweight collections without vectors for catalogs, configs, or text-only search. Memory efficient.

Advanced Quantization

SQ8 (4x compression, <2% recall loss) + Binary (32x compression). Dictionary Encoder for strings.
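
A minimal sketch of what SQ8-style scalar quantization does, assuming a simple per-vector min/max scale (illustrative only, not VelesDB's encoder): each f32 dimension becomes one byte, hence the 4x compression.

/// Scalar quantization (SQ8 sketch): map each f32 to a u8 over the vector's own range.
/// 4 bytes -> 1 byte per dimension, i.e. 4x compression. Illustrative only.
fn quantize_sq8(v: &[f32]) -> (Vec<u8>, f32, f32) {
    let min = v.iter().cloned().fold(f32::MAX, f32::min);
    let max = v.iter().cloned().fold(f32::MIN, f32::max);
    let scale = (max - min).max(f32::EPSILON) / 255.0;
    let codes = v.iter().map(|&x| ((x - min) / scale).round() as u8).collect();
    (codes, min, scale)
}

/// Approximate reconstruction for distance computation.
fn dequantize_sq8(codes: &[u8], min: f32, scale: f32) -> Vec<f32> {
    codes.iter().map(|&c| min + c as f32 * scale).collect()
}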

SIMD Performance (1536D Vectors)

  • 66ns Dot Product (15M ops/sec, 4x vs naive)
  • 70ns Euclidean (14M ops/sec, 4x vs naive)
  • 100ns Cosine (10M ops/sec, 3x vs naive)
  • 6ns Hamming, 64-bit (164M ops/sec, 34x vs naive)
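
The 6ns Hamming figure reflects how cheap the operation is at the hardware level: XOR the two 64-bit chunks and count the differing bits. A one-function illustration (not VelesDB code):

/// Hamming distance between two 64-bit binary-quantized chunks:
/// XOR marks differing bits; count_ones typically lowers to a hardware popcount.
fn hamming64(a: u64, b: u64) -> u32 {
    (a ^ b).count_ones()
}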

HNSW Recall Profiles (10K/128D)

Mode     | ef_search | Recall@10 | Latency P50 | vs v1.0
Fast     | 64        | 92.2%     | 36µs        | NEW
Balanced | 128       | 98.8%     | 57µs        | -80%
Accurate | 256       | 100%      | 130µs       | -72%
Perfect  | 2048      | 100%      | 200µs       | -92%

Native Rust benchmarks (no HTTP overhead). Run your own: cargo bench

Use Cases

The cognitive backbone for autonomous AI

Agentic Memory

Complete memory system for autonomous agents: semantic (vectors), episodic (graph), and structured data in one unified store.

Vector + Graph + Columns • Microsecond retrieval • MCP-compatible
-- Agentic Memory: Vector + Graph unified
SELECT * FROM memories
WHERE vector NEAR $embedding
  AND MATCH (a)-[:KNOWS]->(b)
LIMIT 10;

GraphRAG

Combine knowledge graph traversal with vector similarity for superior context retrieval. MATCH + NEAR in one query.

Knowledge Graph native • MATCH clause • Hybrid retrieval
-- GraphRAG: MATCH + NEAR in one query
SELECT doc.*, similarity()
FROM documents doc
WHERE vector NEAR $query
  AND MATCH (doc)-[:CITES]->(ref)
ORDER BY similarity() DESC;

AI Desktop Apps

Build offline-capable AI applications with Tauri or Electron. Single binary, no server needed.

15MB footprint • Works offline • Tauri v2 plugin
const results = await invoke('plugin:velesdb|search', {
  collection: 'memories',
  vector: embedding,
  topK: 10
});

Browser Vector Search

Run vector search directly in the browser with WASM. Privacy-first, no backend required.

WASM-native • SIMD128 optimized • Data stays local
import { VectorStore } from 'velesdb-wasm';

const store = new VectorStore(768, 'cosine');
const results = store.search(query_vector, 10);

Mobile AI (iOS/Android)

Native SDKs for mobile with 32x memory compression via Binary Quantization.

UniFFI bindings • ARM NEON SIMD • 32x compression
let db = VelesDatabase.open("./agent_memory")
let results = collection.search(embedding, topK: 10)

Robotics & Autonomous Systems

Microsecond decision-making for real-time autonomous systems. Knowledge graph for world modeling.

<100µs latency • World model (Graph) • Offline mandatory
// <100µs latency for real-time decisions
let context = memory.search(sensor_embedding, 5);
let world_model = graph.traverse(current_node);

On-Premises / Air-Gapped

Full data sovereignty for regulated industries. GDPR, HIPAA, PCI-DSS ready.

100% local data • No internet required • Audit-ready
./velesdb-server --data-dir /secure/vectors --bind 127.0.0.1:8080

Multi-Agent Collaboration

CRDT-based memory synchronization for collaborative AI systems. Local-first, conflict-free merge.

CRDT sync (Premium) • Local-first • Conflict-free
// CRDT sync for collaborative agent memory (Premium)
await sync.connect_peer("agent_b_address");
agent_a.memory.sync();

Comparison

The only database with Vector + Graph + Columns

Looking for Agentic Memory?

  • vs Pinecone: Vector + Graph unified, no API keys, 100x faster locally
  • vs Qdrant: Native Knowledge Graph, single binary (15MB), WASM/Mobile
  • vs Neo4j: Vectors + Graph in one engine, microsecond latency, embedded
  • vs pgvector: Purpose-built agentic memory, 400x faster, graph native
  • vs ChromaDB: Production Rust, Knowledge Graph, VelesQL language
Feature              | 🐺 VelesDB    | Qdrant     | Milvus     | Pinecone | pgvector
Architecture         | Single Binary | Container  | Cluster    | SaaS     | Postgres Ext
Search Latency (10K) | 57µs          | ~30ms      | ~20ms      | ~50ms    | ~50ms
Knowledge Graph      | Native MATCH  | None       | None       | None     | None
Setup Time           | < 1 min       | 5-10 min   | 30+ min    | 5 min    | 15+ min
Binary Size          | 15 MB         | 100+ MB    | GBs        | N/A      | Extension
Query Language       | SQL (VelesQL) | JSON DSL   | SDK        | SDK      | SQL
WASM/Browser         |               |            |            |          |
Mobile SDK           |               |            |            |          |
License              | ELv2          | Apache 2.0 | Apache 2.0 | Closed   | PostgreSQL

Why VelesDB for Agentic Memory

Unified Vector + Graph + Columns
One database for semantic, episodic, and structured memory. No data silos.
VelesQL: SQL + NEAR + MATCH
Query vectors with NEAR, traverse graphs with MATCH, filter with SQL. All in one language.
Native GraphRAG
Knowledge graph traversal combined with vector similarity for superior retrieval.
SLM-Optimized
Designed for Small Language Models (3B-8B) running on consumer hardware.
Runs Everywhere
Server, WASM, iOS, Android, Tauri. Same agentic memory, same performance.
Multi-Agent Sync (Premium)
CRDT-based synchronization for collaborative agent memory. Local-first, conflict-free.

Get Started in 60 Seconds

Download, install, and run. No complex setup, no dependencies, no cloud accounts required.

Rust (crates.io)
cargo add velesdb-core

Quick Example

1. Start the server
velesdb-server --data-dir ./my_data
2. Create a collection
curl -X POST localhost:8080/collections \
  -d '{"name":"agent_memory","dimension":768,"metric":"cosine","graph":true}'
3. Search with VelesQL
curl -X POST localhost:8080/query \
  -d '{"query":"SELECT * FROM agent_memory WHERE vector NEAR $v AND MATCH (a)-[:KNOWS]->(b) LIMIT 10"}'
Ready to build agentic memory?

Join developers building autonomous AI with VelesDB's unified Vector + Graph + Column store

No credit card required • Source-available (ELv2) • Production-ready