← Blog
April 14, 2026 · Follow-up to the April 10 analysis

MentisDB vs the Field — April 2026 Update

Four days after publishing our original competitive analysis, the landscape has shifted significantly. Hindsight emerged as a credible SOTA benchmark contender, Cognee crossed 15k stars with v1.0, and LangMem became the default in LangGraph Platform. This post is the updated analysis — see the original April 10 analysis for where we started.

Executive Summary


New Entrants

Hindsight — The SOTA Benchmark Threat

9.2k GitHub stars · Python + TypeScript + Rust · From Vectorize

Hindsight is the most credible new entrant in the agentic memory space. Its claim to fame is state-of-the-art performance on LongMemEval with independently verified scores from Virginia Tech and The Washington Post — a level of benchmark credibility no other system has achieved.

Architecture:

Memory model:

Weaknesses:

Cognee v1.0 — Hermes Integration

15.3k GitHub stars · Python · Apache 2.0 · v1.0.0 shipped April 11, 2026

Cognee crossed 15k stars and released v1.0 with three notable additions: a cognify-mcp package for MCP server integration, the Cognee Cloud managed service, and native Hermes Agent integration as a memory provider. It remains the most full-featured knowledge engine in the field, combining vector search, graph databases, and cognitive-science approaches.

Still requires external databases and an LLM for the cognify pipeline.

LangMem — The Ecosystem Advantage

1.4k GitHub stars · Python · MIT · From LangChain

LangMem is LangChain's memory primitives library, integrated natively into LangGraph Platform. This gives it massive distribution: every LangGraph Platform deployment defaults to LangMem for agent memory. Functional memory primitives (create_manage_memory_tool(), create_search_memory_tool()) are storage-backend agnostic but default to Postgres in production.

Weaknesses: LLM required, no graph traversal or temporal facts, no cryptographic integrity, vec-only retrieval.


Updated Feature Comparison (April 14)

| Feature | MentisDB | Hindsight | Cognee | LangMem | Mem0 | Graphiti |
|---|---|---|---|---|---|---|
| Language | Rust | Python | Python | Python | Python | Python |
| Storage | Embedded (sled) | External (PG) | External | External | External DB | External DB |
| LLM Required | No (opt-in) | Yes | Yes | Yes | Yes | Yes |
| Local-First | Yes | No | No | No | Partial | No |
| Crypto Integrity | Hash chain | No | No | No | No | No |
| Hybrid Retrieval | BM25+vec+graph | 4-signal RRF | vec+graph | vec only | vec+keyword | sem+kw+graph |
| MCP Support | Built-in | No | MCP client | No | No | Yes |
| Agent Registry | Yes | No | No | No | No | No |
| Federated Search | Cross-chain | No | No | No | No | No |
| Skills/Extensions | Skill registry | No | No | No | No | No |
| Webhooks | Yes | No | No | No | No | No |
| Temporal Facts | valid_at/invalid_at | Via metadata | No | No | Updates | valid_at |
| Memory Dedup | Jaccard threshold | No | Merge | No | Yes | Merge |
| Custom Ontology | entity_type registry | Via metadata | Schema | No | No | Pydantic |
| Memory Branching | BranchesFrom | No | No | No | No | No |
| Benchmark R@10 | 74.0% (self) | SOTA (indep. verified) | N/A | N/A | N/A | N/A |
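Two systems in the table fuse multiple retrieval signals with Reciprocal Rank Fusion. Here is a minimal, self-contained sketch of standard RRF merging BM25, vector, and graph rankings; the k=60 constant comes from the original RRF formulation, and the memory IDs and ranked lists are illustrative, not real API output.

```python
def rrf_fuse(rankings, k=60):
    """Merge several ranked lists of IDs into a single ordering.

    Each item contributes 1 / (k + rank) for every list it appears in,
    so items ranked highly by multiple signals rise to the top.
    """
    scores = {}
    for ranked in rankings:
        for rank, doc_id in enumerate(ranked, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25 = ["m3", "m1", "m7"]   # lexical hits
vec = ["m1", "m3", "m9"]    # semantic hits
graph = ["m7", "m1"]        # graph-expansion hits

fused = rrf_fuse([bm25, vec, graph])
```

Note that "m1" wins despite never being ranked first by BM25 or graph: appearing near the top of all three lists beats topping only one, which is the point of rank fusion.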

What MentisDB Closed Since April 10

In 11 releases, we shipped everything on the original roadmap and several unplanned additions:

| Feature | Version | Status |
|---|---|---|
| Temporal Facts | 0.8.2 | Shipped |
| Memory Dedup | 0.8.2 | Shipped (Jaccard) |
| Multi-Level Scopes | 0.8.2 | Shipped (tag-based) |
| CLI Tool | 0.8.2 | Shipped |
| RRF Reranking | 0.8.6 | Shipped |
| Memory Branching | 0.8.6 | Shipped (BranchesFrom) |
| Per-Field BM25 Cutoffs | 0.8.6 | Shipped |
| Custom Ontology | 0.8.7 | Shipped (entity_type + registry) |
| Episode Provenance | 0.8.8 | Shipped (source_episode field) |
| LLM Reranking | 0.8.8 | Shipped (opt-in) |
| Federated Cross-Chain Search | 0.9.1 | Shipped |
| Webhooks | 0.9.1 | Shipped |
| Opt-in LLM Extraction | 0.9.1 | Shipped |
| Python Client | 0.9.1 | Shipped (pymentisdb on PyPI) |
| Wizard Brew-First Setup | 0.9.1 | Shipped |
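The Jaccard dedup from 0.8.2 is simple enough to sketch in a few lines. This is a hedged illustration of the general technique, not MentisDB's implementation: the 0.8 threshold and whitespace token-set tokenization are assumptions.

```python
def jaccard(a, b):
    """Jaccard similarity of two texts as token sets: |A ∩ B| / |A ∪ B|."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def dedup(memories, threshold=0.8):
    """Keep a memory only if no already-kept memory is a near-duplicate."""
    kept = []
    for text in memories:
        if all(jaccard(text, seen) < threshold for seen in kept):
            kept.append(text)
    return kept

unique = dedup([
    "user prefers dark mode",
    "user prefers dark mode",   # exact duplicate, dropped
    "user lives in Berlin",
])
```

Threshold choice is the whole trade-off here: too low and distinct-but-related facts collapse together, too high and paraphrased duplicates slip through.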

What MentisDB Is Still Missing

1. Academic Benchmark Verification

Hindsight's scores are independently verified by Virginia Tech. Ours are self-reported. Fix: Partner with an academic group to independently verify our scores.

2. Native LangChain Store

LangMem is the default in LangGraph Platform — massive structural distribution advantage. Fix: Build langchain-mentisdb pip package with BaseStore implementation.

3. Multi-Hop Recall (59.1% R@10 vs 78.0% single-hop)

The 19pp gap is our biggest retrieval quality problem. Miss analysis shows near-zero vector scores on the failed queries, meaning the semantic layer isn't contributing in multi-hop cases. Fix: entity coreference resolution, deeper graph traversal, and query expansion.
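One of the proposed fixes, deeper graph traversal, amounts to expanding a query's seed entities over the knowledge graph before retrieval so that facts two hops away become reachable. A sketch of the idea as a bounded breadth-first expansion; the graph, entity names, and two-hop limit are all illustrative, not MentisDB internals.

```python
from collections import deque

def multihop_neighbors(graph, seeds, max_hops=2):
    """Collect entities reachable from the seed entities within max_hops edges."""
    seen = set(seeds)
    frontier = deque((s, 0) for s in seeds)
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue  # don't expand past the hop budget
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return seen

# Toy entity graph: alice -> acme corp -> berlin office -> germany
graph = {
    "alice": ["acme corp"],
    "acme corp": ["berlin office"],
    "berlin office": ["germany"],
}
expanded = multihop_neighbors(graph, ["alice"], max_hops=2)
```

With a two-hop budget a query about "alice" can now pull memories mentioning the Berlin office, which single-hop retrieval would miss; the budget caps the blast radius so expansion doesn't flood the candidate set.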

4. Managed Cloud Service

Mem0, Cognee, Hindsight, and Fast.io all offer hosted versions. Fix: MentisDB Cloud.

5. Memory Consolidation / Lifecycle

agentmemory uses Ebbinghaus decay curves; Hindsight uses reflect(). MentisDB has dedup and temporal validity but no automatic memory evolution. Fix: Implement automatic memory consolidation tiers.


Competitive Position Today

MentisDB is still the only local-first, zero-dependency semantic memory with cryptographic integrity verification and built-in hybrid retrieval, written in Rust. That combination remains unique.
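The cryptographic integrity property comes from the append-only hash chain: each record's hash covers its predecessor's, so tampering with any record invalidates every link after it. A toy sketch of the mechanism (not MentisDB's on-disk format; the JSON encoding and field names are illustrative):

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first record's predecessor

def append(chain, payload):
    """Append a record whose hash covers both its payload and the previous hash."""
    prev = chain[-1]["hash"] if chain else GENESIS
    body = json.dumps({"payload": payload, "prev": prev}, sort_keys=True)
    chain.append({
        "payload": payload,
        "prev": prev,
        "hash": hashlib.sha256(body.encode()).hexdigest(),
    })

def verify(chain):
    """Recompute every link; any tampering breaks the chain from that point on."""
    prev = GENESIS
    for rec in chain:
        body = json.dumps({"payload": rec["payload"], "prev": prev}, sort_keys=True)
        if rec["prev"] != prev or rec["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = rec["hash"]
    return True
```

Because each hash chains to the previous one, an auditor who trusts only the final hash can detect any historical edit, which is what makes the log usable in audit-critical deployments.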

What we've added since April 10 that no competitor has:

The competitive threats are real: Hindsight's independently verified benchmarks and managed service are credible; LangMem's LangGraph Platform distribution is a structural advantage. But MentisDB's architectural properties — Rust performance, cryptographic integrity, embedded storage — matter more in enterprise, audit-critical, and air-gapped deployments.

The next battle is ecosystem and distribution, not features. Native LangChain store, academic benchmark verification, and MentisDB Cloud are the three moves that would make 1.0 a genuinely competitive release.


See the original April 10 analysis for where we started, and the 0.9.1 announcement for the full benchmark results and feature breakdown.


MentisDB is an open-source durable memory layer for AI agents. It stores memories in an append-only hash-chained log, retrieves them with hybrid BM25+semantic+graph search, and runs entirely locally with no cloud dependencies. GitHub · Docs · Website