0.8.2 closes the four biggest feature gaps we identified in our competitive analysis against Mem0, Graphiti/Zep, Letta, and Cognee. Every major agentic memory system supports temporal facts, deduplication, scoped visibility, or some form of CLI — and now MentisDB does too, without adding any cloud dependencies, LLM calls, or schema-breaking migrations.
| Feature | What it does | Competitive match |
|---|---|---|
| Temporal edge validity | Point-in-time queries, time-bounded relations | Graphiti, Mem0 |
| Memory dedup/merge | Auto-Supersedes on near-duplicate append | Mem0, Cognee |
| Multi-level scopes | User / Session / Agent visibility tags | Mem0, Letta |
| CLI subcommands | add, search, agents via daemon REST | Zep, Cognee |
Agents accumulate facts that go stale. "Alice's office is on floor 3" is true until she moves to floor 7. Competitors handle this with mutable graph properties or LLM-extracted temporal triples. MentisDB does it with append-only temporal relations.
Every ThoughtRelation now carries valid_at and
invalid_at timestamps — set explicitly when you know the time bounds, or
auto-set to the append time when a relation is created. The real power comes from the
as_of parameter on ranked search:
```rust
let query = RankedSearchQuery::new()
    .with_text("Alice's office")
    .with_as_of("2026-03-01T00:00:00Z".parse().unwrap());
```
An as_of query excludes thoughts appended after the timestamp and thoughts
that have been superseded, corrected, or invalidated by thoughts appended at or before it.
The invalidated set is built once at chain open time and updated incrementally on each
append — O(1) lookup at query time, no LLM needed.
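To make the mechanics concrete, here is a minimal self-contained sketch of query-time temporal invalidation. The types, fields, and relation kinds (`Thought`, `Relation`, `appended_at`, `RelatesTo`) are illustrative stand-ins, not MentisDB's actual API, and the set is rebuilt from scratch here rather than maintained incrementally:

```rust
use std::collections::HashSet;

// Illustrative stand-ins for MentisDB's types; names and fields are assumptions.
struct Thought { id: u64, appended_at: i64 }
struct Relation { kind: Kind, to: u64, appended_at: i64 }
enum Kind { Supersedes, Corrects, Invalidates, RelatesTo }

/// Collect the ids invalidated as of `as_of`: any thought targeted by a
/// Supersedes/Corrects/Invalidates relation appended at or before that instant.
/// (MentisDB builds this set once at chain open and updates it per append;
/// rebuilding from scratch keeps the sketch short.)
fn invalidated_as_of(relations: &[Relation], as_of: i64) -> HashSet<u64> {
    relations.iter()
        .filter(|r| r.appended_at <= as_of)
        .filter(|r| matches!(r.kind, Kind::Supersedes | Kind::Corrects | Kind::Invalidates))
        .map(|r| r.to)
        .collect()
}

/// An `as_of` query: keep thoughts appended at or before the timestamp that
/// are not in the invalidated set -- an O(1) hash lookup per candidate.
fn visible_as_of<'a>(thoughts: &'a [Thought], relations: &[Relation], as_of: i64) -> Vec<&'a Thought> {
    let dead = invalidated_as_of(relations, as_of);
    thoughts.iter()
        .filter(|t| t.appended_at <= as_of && !dead.contains(&t.id))
        .collect()
}
```

Note that no stored field is ever mutated: the "floor 3" thought stays in the chain and remains visible to `as_of` queries dated before the "floor 7" append.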
This is a schema V3 change: ThoughtRelation expanded from
3 fields to 5. Existing V0, V1, and V2 chains are migrated automatically on open, with
full hash-chain rebuild. The migration is persisted so future opens skip it.
Design choice: we considered auto-mutating the invalid_at
field on prior thoughts when a Supersedes/Corrects relation is appended. That would
break the append-only hash chain. Instead, temporal invalidation is purely
query-time: the invalidated set is computed from the relation graph,
never from mutated fields. This keeps every historical hash verifiable.
Agents in long sessions tend to re-assert facts they've already recorded. "The sky is blue" appears three times across different turns. Without dedup, all three compete for retrieval slots, wasting bandwidth and diluting relevance.
Set MENTISDB_DEDUP_THRESHOLD=0.85 (or call
db.with_dedup_threshold(Some(0.85))) and every new thought is checked
against the last 64 thoughts in the chain using Jaccard similarity on
normalized lexical tokens. If the best match exceeds the threshold, a
Supersedes relation is automatically added pointing to the prior thought.
```sh
# Enable auto-dedup
MENTISDB_DEDUP_THRESHOLD=0.85
MENTISDB_DEDUP_SCAN_WINDOW=64   # default
```
The superseded thought is then filtered from search results — no LLM call, no embedding comparison, no cloud dependency. Pure token-set math in microseconds.
Why Jaccard instead of embeddings? Two reasons: (1) it's deterministic and fast — no model loading, no GPU, no variance between runs; (2) for near-exact duplicates (the common case), Jaccard is more precise than cosine similarity on short texts. A threshold of 0.85 catches re-statements while allowing genuine elaborations ("The sky is blue" vs "The sky is blue on clear days" — Jaccard ≈ 0.57, below threshold).
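The dedup check above fits in a few lines of plain Rust. This sketch guesses at the normalization ("lowercase, alphanumeric word splitting" is an assumption about what "normalized lexical tokens" means), and the function names are illustrative:

```rust
use std::collections::HashSet;

/// Jaccard similarity on normalized lexical tokens. The normalization here
/// (lowercase, split on non-alphanumerics) is a plausible simplification,
/// not necessarily what MentisDB does.
fn jaccard(a: &str, b: &str) -> f64 {
    let toks = |s: &str| -> HashSet<String> {
        s.split(|c: char| !c.is_alphanumeric())
            .filter(|t| !t.is_empty())
            .map(|t| t.to_lowercase())
            .collect()
    };
    let (sa, sb) = (toks(a), toks(b));
    let union = sa.union(&sb).count() as f64;
    if union == 0.0 { return 0.0; }
    sa.intersection(&sb).count() as f64 / union
}

/// Scan the recent window for the best match above `threshold`; the returned
/// index is the prior thought that would receive the auto-Supersedes relation.
fn find_duplicate(recent: &[&str], candidate: &str, threshold: f64) -> Option<usize> {
    recent.iter()
        .enumerate()
        .map(|(i, prior)| (i, jaccard(prior, candidate)))
        .filter(|&(_, score)| score > threshold)
        .max_by(|a, b| a.1.partial_cmp(&b.1).unwrap())
        .map(|(i, _)| i)
}
```

The worked example from above falls out directly: "The sky is blue" vs "The sky is blue on clear days" shares 4 tokens out of a union of 7, giving 4/7 ≈ 0.57, safely below the 0.85 threshold.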
In multi-agent systems, not every memory should be visible to every agent. A session-scoped scratchpad shouldn't pollute a user's long-term recall. An agent's private strategy notes shouldn't leak to other agents on the same chain.
MentisDB 0.8.2 introduces MemoryScope — three visibility levels stored as
tags on thoughts:
| Scope | Tag | Visible to |
|---|---|---|
| User (default) | scope:user | All agents sharing the same user identity |
| Session | scope:session | Only the session that created it |
| Agent | scope:agent | Only the agent that created it |
Scopes are stored as tags, not as a new struct field. This was a deliberate choice: adding
a field to Thought would require schema V4 and another migration round. Tags
already exist, already participate in search filtering, and the scope: prefix
convention is easy to audit. A helper method makes it seamless:
```rust
let input = ThoughtInput::new(ThoughtType::FactLearned, "Draft hypothesis")
    .with_scope(MemoryScope::Session);

let query = RankedSearchQuery::new()
    .with_text("hypothesis")
    .with_scope(MemoryScope::Session);
```
The REST API accepts "scope": "session" on both append and search. The MCP
surface passes it through tags as well.
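As a sketch of the tag convention itself: the hypothetical helpers below show how a scope:-prefixed tag might map to and from a scope value, with unscoped thoughts defaulting to user scope (an assumption consistent with "User (default)" in the table above). None of this is MentisDB's actual implementation:

```rust
/// The three visibility levels introduced in 0.8.2.
#[derive(Clone, Copy, PartialEq, Debug)]
enum MemoryScope { User, Session, Agent }

impl MemoryScope {
    /// Map a scope to its `scope:`-prefixed tag -- the storage convention,
    /// chosen so no new struct field (and no schema V4) is needed.
    fn tag(self) -> &'static str {
        match self {
            MemoryScope::User => "scope:user",
            MemoryScope::Session => "scope:session",
            MemoryScope::Agent => "scope:agent",
        }
    }
}

/// Query-side filtering is plain tag matching; a thought with no scope tag
/// is treated as user-scoped (assumed default).
fn matches_scope(tags: &[&str], wanted: MemoryScope) -> bool {
    let found = tags.iter().find_map(|t| match *t {
        "scope:user" => Some(MemoryScope::User),
        "scope:session" => Some(MemoryScope::Session),
        "scope:agent" => Some(MemoryScope::Agent),
        _ => None,
    });
    found.unwrap_or(MemoryScope::User) == wanted
}
```

Because the filter is ordinary tag matching, scoped search composes with every existing tag-based query path for free.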
The mentisdbd binary already had setup and wizard
subcommands for integration configuration. 0.8.2 adds three more that talk to a running
daemon via REST, giving operators a quick way to add, search, and inspect memories
without opening the dashboard or writing an MCP client:
```sh
# Add a thought
mentisdbd add "The database uses schema V3" --type fact-learned --tag architecture

# Search
mentisdbd search "schema version" --limit 5 --scope user

# List agents
mentisdbd agents --chain my-project
```
The CLI uses ureq for synchronous HTTP — no async runtime, no Tokio
dependency, no startup latency. It connects to the daemon's REST port (default
http://127.0.0.1:9472) and returns JSON output.
This is especially useful for quick inspection, debugging, and scripting. Combined with
jq, you can pipe search results into shell pipelines:
```sh
mentisdbd search "deployment" --limit 20 | jq '.hits[].thought.content'
```
0.8.2 includes the third schema migration in MentisDB's history (V0 → V1 → V2 → V3). Each migration has taught us something:
- An earlier migration added chain_key to ThoughtRelation for cross-chain references.
- V3 adds valid_at/invalid_at to ThoughtRelation.
The key lesson from V3: bincode's empty-Vec fast path is a trap. When
a Rust struct has Vec<T> and the serialized data has 0 items, bincode
can successfully deserialize even if T's layout changed between schema versions. The
deserializer reads length=0, skips the element loop, and continues at the same byte
offsets — producing silently wrong data. The fix: always check
schema_version == MENTISDB_CURRENT_VERSION after a successful fast-path
deserialization.
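The trap is easy to reproduce with a toy length-prefixed decoder. This is hand-rolled for illustration, not bincode itself, but it exercises the same fast path: a zero count skips the element loop entirely, so a layout change in the element type is never exercised.

```rust
/// Toy length-prefixed decoder: read a u64 element count, then `count`
/// fixed-size elements, then a trailing schema-version byte. With count = 0
/// the element loop never runs, so a size change between schema versions
/// goes unnoticed -- the bytes still "parse".
fn decode(bytes: &[u8], element_size: usize) -> Option<(Vec<Vec<u8>>, u8)> {
    if bytes.len() < 8 { return None; }
    let (len_bytes, mut rest) = bytes.split_at(8);
    let len = u64::from_le_bytes(len_bytes.try_into().ok()?) as usize;
    let mut items = Vec::new();
    for _ in 0..len {
        if rest.len() < element_size { return None; } // mismatch caught here -- but only if len > 0
        let (item, tail) = rest.split_at(element_size);
        items.push(item.to_vec());
        rest = tail;
    }
    let version = *rest.first()?;
    Some((items, version))
}
```

Feeding this decoder old-format bytes with a zero count succeeds under the new element size, and only the trailing version byte reveals that the data is stale; the same bytes with even one element fail to decode. That asymmetry is why the version check after a successful fast-path deserialization is mandatory.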
All four features follow the same principles that distinguish MentisDB from the field: no LLM calls, no cloud dependencies, and no mutation of the append-only hash chain. The only new dependency is ureq, and it is needed only for the CLI subcommands, not the library.

To upgrade:

```sh
cargo install mentisdb
```

Or from source:

```sh
git pull
cargo install --path . --locked
```
Existing chains (V0, V1, V2) are migrated to V3 automatically on first open. The migration persists so subsequent opens are instant. Vector sidecars and skill registries are unaffected.
To enable dedup, set the environment variable before starting the daemon:
```sh
MENTISDB_DEDUP_THRESHOLD=0.85 mentisdbd
```
0.8.3 will focus on retrieval quality: irregular verb lemma expansion ("went" → "go"), lightweight reranking, and per-field BM25 DF cutoffs. These address the stemming gap that accounts for ~38% of remaining LoCoMo misses. Beyond that: custom ontology support and episode provenance, the two remaining gaps from the competitive analysis.
MentisDB is an open-source durable memory layer for AI agents. It stores memories in an append-only hash-chained log, retrieves them with hybrid lexical+semantic+graph search, and runs entirely locally with no cloud dependencies. GitHub · Docs · Website