April 13, 2026

MentisDB in Perspective — April 2026 Competitive Review

Three days ago we published a competitive analysis comparing MentisDB to Mem0, Graphiti/Zep, Letta, Neo4j LLM Graph Builder, and Cognee. That analysis was accurate as of April 10. This is a refresh: same competitors, but updated to reflect what shipped in 0.8.7 and 0.8.8, and what we've learned since.

Where We Stand Today

MentisDB 0.8.8 is the current release, and the comparison table has changed significantly since the prior analysis. Here's what the competitive landscape looks like as of April 13, 2026.

Feature Comparison (April 2026)

| Feature | MentisDB | Mem0 | Graphiti/Zep | Letta | Cognee |
|---|---|---|---|---|---|
| Language / Runtime | Rust (static binary) | Python | Python | Python/TS | Python |
| Storage | Embedded (sled) | External DB | External DB | External DB | External DB |
| LLM Required for Core | No (opt-in reranking only) | Yes | Yes | Yes | Yes |
| Cryptographic Integrity | Hash chain (SHA-256) | No | No | No | No |
| Hybrid Retrieval | BM25+vec+graph+RRF | vec+keyword | semantic+kw+graph | No | vec+graph |
| Memory Branching | BranchesFrom (0.8.6) | No | No | No | No |
| Temporal Facts | valid_at/invalid_at (0.8.2) | Updates | valid_at / invalid_at | No | No |
| Memory Dedup | Jaccard threshold (0.8.2) | LLM-based | Merge | No | Partial |
| Entity Types (Custom Ontology) | entity_type field (0.8.7) | No | Pydantic models | No | Yes |
| Episode Provenance | source_episode (0.8.8) | No | Episodes | No | Partial |
| LLM Reranking | Opt-in (0.8.8) | No | No | No | No |
| RRF Reranking | RRF k=60 (0.8.6) | No | No | No | No |
| MCP Server | Built-in (0.8.0) | No | Yes | No | Partial |
| Versioned Skill Registry | Ed25519-signed (0.8.4) | No | No | No | No |
| Agent Registry + Aliases | Ed25519 keys + status | No | No | Yes | No |
| Cross-Chain Graph Queries | BranchesFrom traversal (0.8.6) | No | No | No | No |
| Webhook Notifications | Planned | No | No | No | No |
| LangChain/LlamaIndex | Planned | Yes | Partial | Yes | Yes |
| Token Tracking | Planned (1.0) | No | No | Yes | No |
| Browser Extension | Planned (1.0) | Yes | No | No | No |

*new* = shipped since April 10 · *updated* = was partial or incorrect in the prior analysis

What Changed Since April 10

Even three days is a long time in a fast-moving space. Here's what was wrong or incomplete in our prior analysis:

Temporal Facts — We Already Had It

The April 10 analysis listed "Temporal Fact Management" as our #1 gap and explained how we'd implement valid_at/invalid_at on relations. The feature was already on the prior roadmap and implemented before the analysis went out; it shipped in 0.8.2 (April 11). We simply didn't update the doc in time. Temporal facts are a solved problem for MentisDB.
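The mechanics are simple to sketch. The snippet below is a minimal, self-contained illustration of interval-based fact validity; the `Relation` struct and `holds_at` method are hypothetical names for illustration, not MentisDB's actual API.

```rust
// Sketch of temporal fact validity (hypothetical types, not MentisDB's API).
// A fact holds at time t if it became valid at or before t and has not yet
// been invalidated as of t.

#[derive(Debug, Clone)]
struct Relation {
    subject: String,
    predicate: String,
    object: String,
    valid_at: u64,           // unix seconds when the fact became true
    invalid_at: Option<u64>, // unix seconds when it stopped being true, if ever
}

impl Relation {
    /// Point-in-time check: valid_at <= t, and t precedes invalid_at (if set).
    fn holds_at(&self, t: u64) -> bool {
        self.valid_at <= t && self.invalid_at.map_or(true, |end| t < end)
    }
}

fn main() {
    let job = Relation {
        subject: "alice".into(),
        predicate: "works_at".into(),
        object: "acme".into(),
        valid_at: 100,
        invalid_at: Some(200),
    };
    assert!(!job.holds_at(50));  // not yet valid
    assert!(job.holds_at(150));  // currently valid
    assert!(!job.holds_at(250)); // invalidated
    println!("temporal checks passed");
}
```

A fact with no `invalid_at` is open-ended and holds for all times at or after `valid_at`, which is why invalidation is an `Option` rather than a sentinel timestamp.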

Memory Dedup — Also Already Shipped

Same situation. Jaccard-threshold dedup with auto-Supersedes relations was implemented in 0.8.2. The competitive analysis called this a gap; it wasn't.
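For readers unfamiliar with the technique: Jaccard similarity compares two texts as token sets, |A ∩ B| / |A ∪ B|, and a new memory scoring above a threshold against an existing one is treated as a near-duplicate. The sketch below is an illustrative stand-in (whitespace tokenization, a made-up threshold parameter), not MentisDB's actual tokenizer or API.

```rust
use std::collections::HashSet;

/// Jaccard similarity over whitespace tokens: |A ∩ B| / |A ∪ B|.
fn jaccard(a: &str, b: &str) -> f64 {
    let sa: HashSet<&str> = a.split_whitespace().collect();
    let sb: HashSet<&str> = b.split_whitespace().collect();
    if sa.is_empty() && sb.is_empty() {
        return 1.0; // two empty texts are trivially identical
    }
    let inter = sa.intersection(&sb).count() as f64;
    let union = sa.union(&sb).count() as f64;
    inter / union
}

/// Hypothetical threshold check: a new memory at or above the threshold
/// would supersede the near-duplicate rather than be stored twice.
fn is_duplicate(new_text: &str, existing: &str, threshold: f64) -> bool {
    jaccard(new_text, existing) >= threshold
}

fn main() {
    assert_eq!(jaccard("a b c", "a b c"), 1.0); // identical token sets
    assert_eq!(jaccard("a b", "c d"), 0.0);     // disjoint token sets
    // 4 shared tokens of 5 total -> 0.8, above a 0.5 threshold
    assert!(is_duplicate("alice works at acme", "alice works at acme corp", 0.5));
    println!("dedup checks passed");
}
```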

Custom Ontology — Implemented in 0.8.7

The analysis said we'd need to add entity_type and relation_type fields. We shipped the entity_type field plus a full per-chain registry with auto-observation and persistence in 0.8.7.
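The shape of such a registry is easy to sketch: a per-chain map that records each entity_type the first time it is observed. Everything below (names, structure) is an illustrative assumption, not MentisDB's actual implementation.

```rust
use std::collections::{BTreeSet, HashMap};

/// Hypothetical per-chain ontology registry that auto-observes entity types
/// as entities are written (illustrative names, not MentisDB's API).
#[derive(Default)]
struct OntologyRegistry {
    // chain id -> set of entity_type values seen on that chain
    types_by_chain: HashMap<String, BTreeSet<String>>,
}

impl OntologyRegistry {
    /// Record that an entity of this type appeared on this chain.
    fn observe(&mut self, chain: &str, entity_type: &str) {
        self.types_by_chain
            .entry(chain.to_string())
            .or_default()
            .insert(entity_type.to_string());
    }

    /// All entity types observed on a chain, sorted (BTreeSet ordering).
    fn types(&self, chain: &str) -> Vec<&str> {
        self.types_by_chain
            .get(chain)
            .map(|s| s.iter().map(String::as_str).collect())
            .unwrap_or_default()
    }
}

fn main() {
    let mut reg = OntologyRegistry::default();
    reg.observe("main", "Person");
    reg.observe("main", "Company");
    reg.observe("main", "Person"); // duplicate observations collapse
    assert_eq!(reg.types("main"), vec!["Company", "Person"]);
    assert!(reg.types("other-chain").is_empty());
    println!("registry checks passed");
}
```

Auto-observation like this means the ontology grows from the data rather than requiring an up-front schema, which is consistent with the no-mandatory-LLM, schema-light posture described above.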

Episode Provenance — Implemented in 0.8.8

The analysis recommended adding a source_episode field. We implemented it in 0.8.8, complete with ThoughtQuery filter, dashboard display, and JSON serialization.
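Conceptually, a provenance filter of this kind is just a predicate over a stored field. The sketch below uses hypothetical `Thought` and `from_episode` names to illustrate the idea; it is not MentisDB's ThoughtQuery API.

```rust
/// Hypothetical thought record with episode provenance (illustrative only).
#[derive(Debug, Clone)]
struct Thought {
    content: String,
    source_episode: Option<String>, // which ingestion episode produced this
}

/// Return only the thoughts traceable to the given episode.
fn from_episode<'a>(thoughts: &'a [Thought], episode: &str) -> Vec<&'a Thought> {
    thoughts
        .iter()
        .filter(|t| t.source_episode.as_deref() == Some(episode))
        .collect()
}

fn main() {
    let thoughts = vec![
        Thought { content: "met alice".into(), source_episode: Some("ep-1".into()) },
        Thought { content: "met bob".into(), source_episode: Some("ep-2".into()) },
        Thought { content: "untraced note".into(), source_episode: None },
    ];
    let ep1 = from_episode(&thoughts, "ep-1");
    assert_eq!(ep1.len(), 1);
    assert_eq!(ep1[0].content, "met alice");
    println!("provenance checks passed");
}
```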

LLM Reranking — New Since Analysis

This was not on our radar three days ago. The LongMemEval benchmark showed our lexical+vector+graph pipeline leaves room for improvement on complex multi-hop queries. LLM reranking (0.8.8) is our first explicit step toward closing that gap: opt-in, with graceful fallback if the LLM API is unavailable.

What Actually Remains on Our Roadmap

After shipping everything above, the honest gaps still on the 0.9.0 list are:

- Webhook notifications
- Python bindings
- Opt-in LLM extraction

Token tracking and the browser extension are 1.0 items — we're not rushing those.

Benchmark Progress

Retrieval quality metrics as of April 13:

| Benchmark | Metric | Score | Notes |
|---|---|---|---|
| LoCoMo 10-persona | R@10 | 73.0% | Fresh chain, rebuilt vector sidecar |
| LoCoMo 10-persona w/ RRF | R@10 | 73.0% | Multi-type +0.5%; RRF neutral on simple queries |
| LongMemEval | R@5 | 57.6% | First baseline established (0.8.6) |
| LongMemEval | R@10 | 62.6% | Room to improve; LLM reranking is the path |
| Write latency | vs pre-0.8.0 | -13.8% | After 0.8.0 write performance improvements |

The LongMemEval numbers (57.6% R@5 / 62.6% R@10) are the focus area. Our hybrid BM25+vector+graph pipeline works well for single-hop factual recall (LoCoMo 73%) but gaps appear on multi-entity, multi-hop reasoning tasks. LLM reranking is the current approach — we're tuning the prompt and model selection.
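For context on the "RRF neutral on simple queries" note: Reciprocal Rank Fusion scores each document as the sum of 1/(k + rank) across the ranked lists it appears in, with k = 60 by convention. When one signal already dominates, fusion changes little, which matches the neutral LoCoMo result. A minimal, self-contained sketch (not MentisDB's implementation):

```rust
use std::collections::HashMap;

/// Reciprocal Rank Fusion: score(d) = sum over lists of 1 / (k + rank(d)),
/// with 1-based ranks and the conventional k = 60.
fn rrf(rankings: &[Vec<&str>], k: f64) -> Vec<(String, f64)> {
    let mut scores: HashMap<&str, f64> = HashMap::new();
    for list in rankings {
        for (i, doc) in list.iter().enumerate() {
            // rank is 1-based: first place contributes 1 / (k + 1)
            *scores.entry(*doc).or_insert(0.0) += 1.0 / (k + (i as f64 + 1.0));
        }
    }
    let mut fused: Vec<(String, f64)> = scores
        .into_iter()
        .map(|(d, s)| (d.to_string(), s))
        .collect();
    // Highest fused score first.
    fused.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    fused
}

fn main() {
    // Hypothetical per-signal rankings: BM25, vector, graph.
    let bm25 = vec!["d1", "d2", "d3"];
    let vector = vec!["d2", "d1", "d4"];
    let graph = vec!["d2", "d3"];
    let fused = rrf(&[bm25, vector, graph], 60.0);
    assert_eq!(fused[0].0, "d2"); // d2 ranks highly in all three lists
    println!("top result: {} ({:.4})", fused[0].0, fused[0].1);
}
```

Because only ranks (not raw scores) enter the formula, RRF needs no score normalization across BM25, cosine similarity, and graph distance, which is why it is a common default for fusing heterogeneous retrieval signals.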

Bottom Line

Three days ago we called out five gaps. Two had already shipped (temporal facts, dedup). Two are now shipped (entity_type, source_episode). LLM reranking is new. The remaining roadmap is narrower: webhooks, Python bindings, and opt-in LLM extraction.

The architectural advantages we lead with — Rust, embedded storage, no-LLM core, hash chain integrity, versioned skill registry, MCP server, agent identity — remain intact and differentiated. What we've added in retrieval quality, temporal reasoning, and semantic organization closes most of the feature gap that made other systems look compelling.

MentisDB is now the only local-first, cryptographically verified, hybrid-retrieval memory system with a built-in MCP server, a versioned skill registry, and zero mandatory external dependencies. In Rust.