April 10, 2026

MentisDB vs the Field — A Competitive Analysis of Agentic Memory Systems

AI agents need memory. A growing number of open-source projects tackle this problem, but they make very different architectural tradeoffs. We studied the five most prominent systems — Mem0, Graphiti/Zep, Letta, Neo4j LLM Graph Builder, and Cognee — to understand where MentisDB fits, what we're missing, and what to build next.

The Landscape

The agentic memory space has exploded in 2025–2026. At a high level, the projects fall into three categories:

  1. Memory layers that bolt onto an existing agent stack (Mem0).
  2. Knowledge-graph engines that extract structured facts from data (Graphiti/Zep, Neo4j LLM Graph Builder, Cognee).
  3. Agent frameworks with memory management built in (Letta).

MentisDB sits in the first category but borrows heavily from the second: we have a built-in knowledge graph with hybrid BM25+vector+graph retrieval, and we do it all without requiring an LLM or an external database.

Feature Comparison

| Feature | MentisDB | Mem0 | Graphiti | Letta | Neo4j KB | Cognee |
|---|---|---|---|---|---|---|
| Language | Rust | Python | Python | Python/TS | Python | Python |
| Storage | Embedded (sled) | External DB | External DB | External DB | Neo4j | External DB |
| LLM Required for Core | No (opt-in only) | Yes | Yes | Yes | Yes | Yes |
| Local-First | Yes | Self-host option | Self-host only | Self-host option | No | Partial |
| Cryptographic Integrity | Hash chain | No | No | No | No | No |
| Hybrid Retrieval | BM25+vec+graph | vec+keyword | semantic+kw+graph | No | Multi-mode | vec+graph |
| Temporal Facts | valid_at/invalid_at (0.8.2) | Updates | valid_at/invalid_at | No | No | No |
| Memory Dedup | Jaccard threshold (0.8.2) | Yes | Merge | No | No | Partial |
| Custom Ontology | entity_type field (0.8.7) | No | Pydantic models | No | Schema | Yes |
| MCP Server | Built-in | No | Yes | No | No | Partial |
| Agent Registry | Yes | No | No | Yes | No | No |
| CLI Tool | No | Yes | No | Yes | No | Yes |
| Browser Extension | No | Yes | No | No | No | No |
| Episode Provenance | Thought refs | No | Episodes | No | Partial | Partial |
| Token Tracking | No | No | No | No | Yes | No |

What Makes MentisDB Unique

Five properties that no competitor combines:

  1. Append-only hash chain — every thought links to the previous one via a cryptographic hash. Tampering with any memory breaks the chain. This is critical for audit trails, compliance, and any scenario where memory integrity matters. No other system in this comparison has it.
  2. Embedded storage — cargo add mentisdb and it works. No Neo4j, no Qdrant, no Postgres. Graphiti requires Neo4j or FalkorDB. Mem0 requires a vector store. The Neo4j Graph Builder is literally built around Neo4j. MentisDB stores everything in sled, a Rust-native embedded database.
  3. No LLM dependency — Mem0, Graphiti, Cognee, and the Neo4j KB all require an LLM API key to function. Their core ingestion pipelines call an LLM to extract entities, facts, or summaries. MentisDB ingests, indexes, and retrieves without any API keys. This matters for offline use, air-gapped environments, and cost-sensitive deployments.
  4. Rust — every competitor in this comparison is Python-based (Letta also ships TypeScript). Rust gives us 10–100× throughput, memory safety, and a single static binary. No runtime, no GIL, no virtualenv.
  5. Agent identity — first-class agent registry with aliases, public keys, and lifecycle status. Thoughts carry producing agent IDs. This enables multi-agent audit trails that no other system supports natively.

MentisDB is the only local-first, zero-dependency semantic memory that combines cryptographic integrity verification with built-in hybrid retrieval, and it is written in Rust.
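The hash-chain property can be sketched in a few lines of Rust. This is an illustration of the idea, not MentisDB's actual data model: the struct and function names are hypothetical, and std's DefaultHasher stands in for a real cryptographic hash such as SHA-256.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// One entry in an append-only chain: each thought stores the hash of its
// predecessor, so every later link commits to all earlier content.
#[derive(Debug, Clone)]
struct Thought {
    content: String,
    prev_hash: u64,
}

// Hash a thought's content together with its back-link.
fn hash_thought(t: &Thought) -> u64 {
    let mut h = DefaultHasher::new();
    t.content.hash(&mut h);
    t.prev_hash.hash(&mut h);
    h.finish()
}

// Append a new thought linked to the current chain head.
fn append(chain: &mut Vec<Thought>, content: &str) {
    let prev_hash = chain.last().map(hash_thought).unwrap_or(0);
    chain.push(Thought { content: content.to_string(), prev_hash });
}

// Verify every link; returns false if any earlier entry was altered.
fn verify(chain: &[Thought]) -> bool {
    chain.windows(2).all(|w| w[1].prev_hash == hash_thought(&w[0]))
}

fn main() {
    let mut chain = Vec::new();
    append(&mut chain, "agent-1: user prefers dark mode");
    append(&mut chain, "agent-1: user switched to light mode");
    assert!(verify(&chain));

    // Tampering with an earlier entry breaks every subsequent link.
    chain[0].content = "agent-1: user prefers light mode".into();
    assert!(!verify(&chain));
}
```

Because each entry commits to the hash of its predecessor, rewriting history requires rewriting every later link, which is what makes the log audit-friendly.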

What We're Missing

Honest gaps matter more than marketing. Here are the features competitors have that MentisDB doesn't — ranked by how much they'd move the needle for real users.

1. Temporal Fact Management

Who has it: Graphiti (valid_at / invalid_at on edges)

This is Graphiti's killer feature. In production, facts change: "Kendra works at Acme" becomes false when she switches jobs. Graphiti marks the old edge as invalid_at = now and adds a new one with valid_at = now. You can query "what was true on March 1?" — something no other system handles.

How we'd implement it: Add valid_at and invalid_at fields to ThoughtRelation. When a new Supersedes or Corrects relation is created, automatically set invalid_at on the old edge. Add a query parameter as_of=<timestamp> that filters to currently-valid edges. Purely structural — no LLM needed.
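The mechanics are simple enough to sketch. The field names (valid_at, invalid_at, as_of) and the ThoughtRelation type follow the description above, but the rest is a hypothetical illustration, not the shipped implementation:

```rust
// Sketch of temporal edge validity; timestamps are plain u64s here.
#[derive(Debug, Clone)]
struct ThoughtRelation {
    fact: String,
    valid_at: u64,           // when the fact became true
    invalid_at: Option<u64>, // None = still valid
}

// Called when a Supersedes/Corrects relation replaces this edge.
fn invalidate(edge: &mut ThoughtRelation, now: u64) {
    edge.invalid_at = Some(now);
}

// The as_of query: keep only edges that were valid at instant t.
fn as_of(edges: &[ThoughtRelation], t: u64) -> Vec<&ThoughtRelation> {
    edges
        .iter()
        .filter(|e| e.valid_at <= t && e.invalid_at.map_or(true, |inv| t < inv))
        .collect()
}

fn main() {
    let mut old = ThoughtRelation {
        fact: "Kendra works at Acme".into(),
        valid_at: 100,
        invalid_at: None,
    };
    invalidate(&mut old, 200); // she switched jobs at t=200
    let new = ThoughtRelation {
        fact: "Kendra works at Globex".into(),
        valid_at: 200,
        invalid_at: None,
    };
    let edges = vec![old, new];

    // "What was true at t=150?" returns only the Acme fact.
    let then = as_of(&edges, 150);
    assert_eq!(then.len(), 1);
    assert_eq!(then[0].fact, "Kendra works at Acme");
}
```

Note that nothing here calls a model: invalidation is triggered by the relation kind, and the as_of filter is a pure range check.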

2. Memory Deduplication

Who has it: Mem0 (LLM-based dedup)

Without dedup, contradictory or near-duplicate thoughts pollute search results. "Caroline likes hiking" stored three times shouldn't produce three separate high-scoring hits.

How we'd implement it: On append(), run a lexical overlap check against recent thoughts in the same chain. If similarity exceeds 0.85, auto-create a Supersedes relation instead of inserting a duplicate. An optional LLM pass could handle semantic dedup for harder cases.
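The lexical overlap check can be a plain Jaccard similarity over word sets, which is what the feature table's "Jaccard threshold" refers to. A minimal sketch with the 0.85 threshold from above; the real tokenizer would likely be more careful than split_whitespace:

```rust
use std::collections::HashSet;

// Jaccard similarity over lowercased word sets: |A ∩ B| / |A ∪ B|.
fn jaccard(a: &str, b: &str) -> f64 {
    let set_a: HashSet<String> = a.split_whitespace().map(|w| w.to_lowercase()).collect();
    let set_b: HashSet<String> = b.split_whitespace().map(|w| w.to_lowercase()).collect();
    if set_a.is_empty() && set_b.is_empty() {
        return 1.0;
    }
    let inter = set_a.intersection(&set_b).count() as f64;
    let union = set_a.union(&set_b).count() as f64;
    inter / union
}

// On append, check the new thought against recent ones in the same chain;
// a hit means "create a Supersedes relation" instead of inserting a copy.
fn is_duplicate(new_thought: &str, recent: &[&str], threshold: f64) -> bool {
    recent.iter().any(|r| jaccard(new_thought, r) >= threshold)
}

fn main() {
    let recent = ["Caroline likes hiking"];
    assert!(is_duplicate("caroline likes hiking", &recent, 0.85));
    assert!(!is_duplicate("Caroline bought new boots", &recent, 0.85));
}
```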

3. Multi-Level Memory Scopes

Who has it: Mem0 (User / Session / Agent levels), Letta (memory blocks)

Real applications need per-user, per-session, and per-agent memory isolation. MentisDB's chain keys provide physical isolation, but there's no semantic way to say "this is a user-level preference" vs. "this is a session-specific context."

How we'd implement it: Add semantic scope tags (user, session, agent) to thoughts. Chain keys already provide the physical isolation. Add a convenience API memory.add(scope="user", ...) and scoped search filters.
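As a sketch, the scope tag is just an enum on each thought plus a filter at search time. The names here are hypothetical; the eventual API shape is not settled:

```rust
// Semantic memory levels, layered on top of per-chain physical isolation.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Scope {
    User,    // long-lived preferences
    Session, // context for one conversation
    Agent,   // agent-private working memory
}

struct Thought {
    content: String,
    scope: Scope,
}

// Scoped search filter: restrict retrieval to one memory level.
fn in_scope(thoughts: &[Thought], scope: Scope) -> Vec<&Thought> {
    thoughts.iter().filter(|t| t.scope == scope).collect()
}

fn main() {
    let thoughts = vec![
        Thought { content: "prefers metric units".into(), scope: Scope::User },
        Thought { content: "currently editing report.md".into(), scope: Scope::Session },
    ];
    // Only user-level preferences survive the filter.
    assert_eq!(in_scope(&thoughts, Scope::User).len(), 1);
}
```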

4. Custom Ontology / Entity Types

Who has it: Graphiti (Pydantic models), Neo4j KB (schema), Cognee

Domain-specific applications want typed entities: Person, Product, Policy — not just generic "thoughts." Graphiti lets you define these upfront via Pydantic models; the Neo4j builder supports custom schemas.

How we'd implement it: Add entity_type and relation_type fields to thoughts. Allow a user-defined type registry per chain. Graph expansion respects types. Schema validation at the REST/MCP layer.
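A per-chain type registry with validation at the API boundary might look like this. Names and shapes are illustrative, not a committed interface:

```rust
use std::collections::HashSet;

// User-defined entity types for one chain, checked before a thought
// with an entity_type field is accepted at the REST/MCP layer.
struct TypeRegistry {
    entity_types: HashSet<String>,
}

impl TypeRegistry {
    fn new(types: &[&str]) -> Self {
        Self { entity_types: types.iter().map(|t| t.to_string()).collect() }
    }

    // Reject thoughts whose entity_type is not declared for this chain.
    fn validate(&self, entity_type: &str) -> Result<(), String> {
        if self.entity_types.contains(entity_type) {
            Ok(())
        } else {
            Err(format!("unknown entity type: {entity_type}"))
        }
    }
}

fn main() {
    let registry = TypeRegistry::new(&["Person", "Product", "Policy"]);
    assert!(registry.validate("Person").is_ok());
    assert!(registry.validate("Invoice").is_err());
}
```

Because the registry is per-chain data rather than code, different applications can declare different ontologies without recompiling anything.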

5. Episode Provenance

Who has it: Graphiti (episodes — every fact traces back to source data)

When a search result says "Kendra loves Adidas shoes," you want to know which conversation produced that fact. Graphiti's episode system gives full lineage from derived fact to source.

How we'd implement it: Thoughts already have a refs field. Add a provenance chain: source_episode field pointing to the original thought, and a DerivedFrom relation kind. The dashboard would show provenance graphs.
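Provenance resolution is then just a walk along DerivedFrom links back to the root episode. A sketch with hypothetical string IDs, assuming the link graph is acyclic:

```rust
use std::collections::HashMap;

// Each derived thought points at its source via a DerivedFrom link.
// Following the links yields the original episode (assumes no cycles).
fn trace_to_source<'a>(derived_from: &HashMap<&'a str, &'a str>, mut id: &'a str) -> &'a str {
    while let Some(&parent) = derived_from.get(id) {
        id = parent;
    }
    id
}

fn main() {
    let mut derived_from = HashMap::new();
    derived_from.insert("fact:kendra-likes-adidas", "summary:conv-42");
    derived_from.insert("summary:conv-42", "episode:conv-42-raw");

    // The search hit traces back to the raw conversation that produced it.
    assert_eq!(
        trace_to_source(&derived_from, "fact:kendra-likes-adidas"),
        "episode:conv-42-raw"
    );
}
```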

Positioning vs. Each Competitor

vs. Mem0

Mem0 is the most popular memory layer (52k+ GitHub stars), but it requires an LLM to function and strongly pushes users to its hosted platform. MentisDB works offline with no API keys, and the hash chain makes your memory tamper-evident. Mem0's strength is its ecosystem — browser extension, LangChain/CrewAI integrations, and a polished CLI — all of which we plan to add over time.

vs. Graphiti / Zep

Graphiti has the best temporal knowledge graph in the space. But it requires Neo4j (or FalkorDB, Kuzu, Neptune). MentisDB is embedded — one binary, one crate. No Docker, no database server, no configuration. We're adding temporal edges in 0.8.2. Zep (the managed version) offers sub-200ms latency at scale with governance — a compelling enterprise offering, but one that comes with vendor lock-in.

vs. Letta / MemGPT

Letta is an agent framework, not a memory layer. It manages the entire agent loop: memory blocks, tool calls, self-improvement. MentisDB is the memory layer that any agent framework can use — Letta included. If you want memory without buying into a specific agent framework, MentisDB is the right choice.

vs. Neo4j LLM Graph Builder

The Neo4j builder is a document-to-graph pipeline. You upload PDFs and it extracts a knowledge graph using an LLM. MentisDB is a runtime memory layer — agents write to it and query it in real time. Different use cases: the Neo4j builder is for unstructured document analysis; MentisDB is for live agent memory.

vs. Cognee

Cognee is a knowledge engine that combines vector search, graph databases, and cognitive science approaches. It requires external databases and an LLM for its cognify pipeline. MentisDB is self-contained and works without any external services.

Roadmap

Based on this analysis, here's our feature roadmap through 1.0:

| Version | Focus | Key Features |
|---|---|---|
| 0.8.2 | Temporal + Dedup + Ergonomics | Temporal edge validity, memory dedup/merge, multi-level memory scopes, CLI tool |
| 0.8.3 | Retrieval Quality | Lightweight reranking, irregular verb lemma expansion, per-field BM25 DF cutoff |
| 0.8.4 | Ontology + Provenance | Custom entity/relation types, episode provenance tracking |
| 0.9.0 | Ecosystem | Cross-chain queries, optional LLM-extracted memories, LangChain integration, webhooks |
| 1.0.0 | Production Stable | Browser extension, self-improving agent primitives, token tracking, API stability |

The Bigger Picture

The agentic memory space is still early. Most systems are Python-based, cloud-dependent, and require LLM calls for basic operations. MentisDB's architectural choices — Rust, embedded storage, no-LLM core, cryptographic integrity — are contrarian bets. They trade ecosystem convenience for reliability, performance, and trust.

If you're building agents that need memory you can audit, deploy offline, or run at scale without external infrastructure, MentisDB is built for you. If you need an LLM to extract facts from conversations today, Mem0 or Graphiti is the pragmatic choice. Our goal is to close that gap without compromising on what makes MentisDB different.


MentisDB is an open-source durable memory layer for AI agents. It stores memories in an append-only hash-chained log, retrieves them with hybrid lexical+semantic+graph search, and runs entirely locally with no cloud dependencies. GitHub · Docs · Website