April 17, 2026
The stdio MCP process now auto-detects a running daemon, proxies to it if found,
or launches one in the background. Claude Desktop users get shared live state
with zero configuration — no Node.js, no mcp-remote, no TLS config.
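The detect-or-launch flow described above can be sketched roughly as follows; the port number, binary name, and `serve` subcommand are illustrative assumptions, not MentisDB's documented values:

```python
import socket
import subprocess

DAEMON_PORT = 8437  # hypothetical default port; the real value may differ


def daemon_running(host="127.0.0.1", port=DAEMON_PORT, timeout=0.25):
    """Return True if something is already listening on the daemon port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def ensure_daemon():
    """Proxy to a running daemon, or launch one in the background."""
    if daemon_running():
        return "proxy"  # forward stdio MCP traffic to the live daemon
    subprocess.Popen(   # detach so the daemon outlives this stdio process
        ["mentisdbd", "serve"],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
        start_new_session=True,
    )
    return "launched"
```

The key design point is that the probe is a plain TCP connect with a short timeout, so a cold start only pays a fraction of a second before falling back to spawning.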
April 16, 2026
Complete tutorial for MentisDB's LLM-extracted memories pipeline. Covers setup,
REST, MCP, and Rust usage, live-tested behavior, review-before-append workflow,
and real use cases for everyday users, finance teams, defense workflows, and developers.
April 16, 2026
We audited the live MentisDB project chain and found a heavy overconcentration
in Summary, LessonLearned, and generic References links. This post covers the
histogram, which guidance was wrong, and what we changed in MENTISDB_SKILL.md
for the next release.
April 16, 2026
A fair, detailed comparison of MentisDB and Cognee across architecture,
ingestion, retrieval, provenance, operational complexity, enterprise fit,
and multi-agent workflows, ending with a comprehensive side-by-side table.
April 14, 2026
Complete guide to integrating MentisDB with any MCP client. Covers Claude Desktop
(mcp-remote with Homebrew), OpenCode, custom MCP clients, and the REST API for
any HTTP-capable language. Includes the one-command wizard setup.
April 14, 2026
Full guide to pymentisdb — the official Python client for MentisDB. Covers the
MentisDbClient for appending thoughts and ranked search, LangChain integration
with MentisDbMemory, typed relations, and a complete working example.
April 14, 2026
Four days after our April 10 competitive analysis and 11 releases later: 74.0% R@10
on the full 10-persona LoCoMo benchmark (1977 queries), every feature gap closed,
federated cross-chain search, pymentisdb on PyPI, and a complete competitive update
covering Hindsight, Cognee, LangMem, and more.
April 13, 2026
Webhook delivery for thought append events (async HTTP callbacks with retry),
irregular verb lemma expansion in lexical search, and LoCoMo stable at 72.0% R@10
with LongMemEval R@5 at 57.6%.
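The post doesn't publish the retry schedule, but the at-least-once pattern behind "async HTTP callbacks with retry" can be sketched with exponential backoff; here `send` stands in for the actual HTTP POST, and the attempt count and delays are assumptions:

```python
import time


def deliver_with_retry(send, max_attempts=5, base_delay=0.5):
    """Call `send()` until it succeeds, backing off exponentially.

    `send` should raise on failure (e.g. a non-2xx HTTP status).
    Returns the number of attempts used; re-raises after the last one.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            send()
            return attempt
        except Exception:
            if attempt == max_attempts:
                raise
            # 0.5s, 1s, 2s, 4s ... between attempts
            time.sleep(base_delay * 2 ** (attempt - 1))
```

In a real delivery pipeline the loop would run on a background task queue so that appends never block on a slow webhook endpoint.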
April 13, 2026
Optional LLM-based reranking for ranked search, source_episode field for
grouping thoughts by conversational context, cross-chain relation traversal
fix, and 14 new search regression tests.
April 13, 2026
Updated feature comparison against Mem0, Graphiti/Zep, Letta, and Cognee.
Temporal facts and dedup were already shipped before the original analysis.
entity_type, source_episode, and LLM reranking are all new since April 10.
April 13, 2026
Structured memory categories via the new entity_type field, per-chain type
registry with persistence, and a full dashboard modal UX overhaul. Also fixes
the wizard setup crash with older Node versions.
April 12, 2026
Complete guide to installing and running the mentisdbd daemon as a proper
background service using systemd. Includes a dedicated user, environment
configuration, security hardening, and useful management commands.
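A unit file for this kind of setup typically looks like the sketch below; the paths, user name, and hardening directives here are generic assumptions, not the guide's exact configuration:

```ini
# /etc/systemd/system/mentisdbd.service -- illustrative only
[Unit]
Description=MentisDB daemon
After=network.target

[Service]
Type=simple
User=mentisdb
ExecStart=/usr/local/bin/mentisdbd serve
Restart=on-failure
# hardening examples; the guide's actual set may differ
NoNewPrivileges=true
ProtectSystem=strict
ReadWritePaths=/var/lib/mentisdb

[Install]
WantedBy=multi-user.target
```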
April 12, 2026
Reciprocal Rank Fusion (RRF) reranking, memory chain branching with
BranchesFrom relations, irregular verb lemma expansion, and a BM25 DF
cutoff fix. LoCoMo stable at 73.0% R@10. First LongMemEval baseline: 57.6% R@5.
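Reciprocal Rank Fusion itself is a standard, compact algorithm: each result list contributes 1/(k + rank) to a document's fused score. A self-contained sketch, using the conventional k = 60 (which may differ from MentisDB's setting):

```python
def rrf_fuse(rankings, k=60):
    """Fuse multiple ranked lists with Reciprocal Rank Fusion.

    Each ranking is a list of doc ids, best first. A document's fused
    score is the sum over lists of 1 / (k + rank), with rank starting at 1.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```

Because RRF only looks at ranks, not raw scores, it fuses vector and lexical lists without any score normalization, which is exactly why it is a popular reranking baseline.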
April 11, 2026
Tuned session cohesion (radius 8→12, boost 0.8→1.2) and doubled graph relation
scores. LoCoMo 10-persona R@10 jumps from 72.8% to 74.6%, clearing the 74.2%
baseline. Single-hop recall hits 79.0%.
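The post doesn't give the exact scoring function, but one way to read the radius/boost parameters is as a fixed bonus for candidates that sit near another hit in the same session; this is purely illustrative, not MentisDB's actual formula:

```python
def session_cohesion_boost(candidate_idx, hit_indices, radius=12, boost=1.2):
    """Hypothetical cohesion bonus: award `boost` once if any other hit
    lies within `radius` positions of the candidate in the same session.

    Defaults mirror the tuned values above (radius 12, boost 1.2)."""
    for idx in hit_indices:
        if idx != candidate_idx and abs(idx - candidate_idx) <= radius:
            return boost
    return 0.0
```

Widening the radius and raising the boost both favor clusters of hits from one conversation over isolated matches, which is consistent with the single-hop recall gain reported above.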
April 11, 2026
Three releases in 72 hours. A migration crash, a catch-22 where the auto-updater
couldn't run because the bug blocked startup, and a runtime-in-runtime panic in the
fix. The full story of what broke, why, and how we recovered.
April 10, 2026
Users upgrading from 0.8.1 hit a hard crash on chain open. The V2→V3 migration code
read only the first thought's schema version and applied that format to the entire
chain. Fixed with per-thought schema detection. 11 new migration tests. WHITEPAPER.md
rewritten.
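The fix amounts to checking the schema version per thought rather than once per chain. In hypothetical field names (the real Rust structs will differ), the idea is:

```python
def migrate_chain(thoughts):
    """Migrate a chain thought by thought instead of assuming one schema.

    The buggy version read the first thought's version and applied that
    format chain-wide, so mixed-version chains crashed on open.
    """
    migrated = []
    for t in thoughts:
        version = t.get("schema", 2)  # hypothetical field name and default
        if version < 3:
            # illustrative V3 upgrade: bump version, add a new V3 field
            t = {**t, "schema": 3, "entity_type": None}
        migrated.append(t)
    return migrated
```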
April 10, 2026
Four features closing the competitive gap: temporal edge validity for point-in-time
queries, Jaccard-based auto-deduplication, User/Session/Agent memory scopes, and
add/search/agents CLI subcommands on the existing mentisdbd binary. No LLM calls,
no cloud dependencies, no schema-breaking surprises.
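Jaccard-based deduplication compares token sets: intersection over union. A minimal sketch, with an assumed 0.85 threshold (the post doesn't publish the actual cutoff):

```python
def jaccard(a, b):
    """Jaccard similarity between the token sets of two strings."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta and not tb:
        return 1.0
    return len(ta & tb) / len(ta | tb)


def dedup(thoughts, threshold=0.85):
    """Keep a thought only if it is not near-identical to one already kept."""
    kept = []
    for t in thoughts:
        if all(jaccard(t, k) < threshold for k in kept):
            kept.append(t)
    return kept
```

Because Jaccard needs no embeddings, this fits the "no LLM calls, no cloud dependencies" constraint stated above.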
April 14, 2026 · Follow-up to April 10 analysis
Updated competitive analysis covering Hindsight (SOTA, independently verified), Cognee v1.0
with Hermes integration, LangMem ecosystem advantage, and the updated feature comparison
across 6 systems. MentisDB: 74.0% R@10 LoCoMo, 15 gaps closed since April 10.
April 10, 2026
Session cohesion scoring, smooth exponential vector-lexical fusion, and a tighter
BM25 DF cutoff push LongMemEval R@5 to 67.6% and LoCoMo R@10 to 88.7% —
within 0.2% of MemPalace's published hybrid score.
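The exact fusion formula isn't given; one plausible "smooth exponential" form replaces hard tier boundaries with a continuous lexical weight, where both alpha and the functional shape are assumptions for illustration:

```python
import math


def fuse(vec_score, lex_score, alpha=0.5):
    """Hypothetical smooth fusion of vector and lexical signals.

    The lexical score contributes a bounded multiplicative lift that
    saturates as lex_score grows, instead of jumping at tier edges.
    """
    return vec_score * (1.0 + alpha * (1.0 - math.exp(-lex_score)))
```

The appeal of a smooth curve over tiers is that two candidates with nearly equal lexical scores can no longer land on opposite sides of a threshold and receive very different boosts.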
April 8, 2026
The biggest search quality release ever. LongMemEval R@5 went from 57.2% to 65.0%
with Porter stemming, tiered vector-lexical fusion, and importance weighting.
Plus 13.8% faster writes, security hardening, and a skill file agents actually read.
April 8, 2026
We benchmarked MentisDB on LongMemEval — the standard benchmark for long-term
memory retrieval. Starting at 57.2% R@5, we reached 65.0% through Porter stemming,
tiered vector-lexical fusion, and importance-weighted scoring. Here's what worked,
what didn't, and how we compare to MemX and Mem0.
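Porter stemming helps recall because query and document inflections collapse to one index term. A toy suffix stripper illustrates the effect; the real Porter algorithm has far more rules (NLTK's PorterStemmer implements it):

```python
def crude_stem(word):
    """Toy suffix stripper standing in for Porter stemming (illustration only)."""
    for suffix in ("ational", "ations", "ation", "ings", "ing", "ed", "es", "s"):
        # only strip when a reasonable stem remains
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word
```

After stemming, a query containing "configuring" matches a memory containing "configurations", since both index to the same stem; that is the mechanism behind the recall gains described above.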