MentisDB ships a first-class Python client called pymentisdb. Whether you're
building a LangChain agent, a custom chatbot, or any Python application that needs durable
semantic memory, pymentisdb gives you a clean interface to store and retrieve thoughts
from MentisDB's append-only hash-chained store.
Install from source or via pip:
# Core client only (no LangChain dependency)
pip install pymentisdb
# With LangChain integration
pip install "pymentisdb[langchain]"
Requires Python 3.10+ and a running mentisdbd instance. Start one with:
mentisdbd
Or for production with TLS:
MENTISDB_DIR=/path/to/data mentisdbd --https --port 9473
The MentisDbClient wraps the MentisDB REST API. It handles connection pooling,
authentication headers, and type conversion automatically.
from pymentisdb import MentisDbClient
# Connect to local mentisdbd (default)
client = MentisDbClient()
# Connect to a remote instance
client = MentisDbClient(base_url="https://my.mentisdb.com:9473")
Thoughts are the atomic memory records in MentisDB. Every append is cryptographically chained to the previous thought — you can't rewrite history, only extend it.
from pymentisdb import ThoughtType, ThoughtRole
# Record an insight
thought = client.append_thought(
thought_type=ThoughtType.INSIGHT,
content="Rate limiting is the real bottleneck for our API.",
agent_name="assistant",
importance=0.8,
tags=["performance", "api"],
)
print(f"Appended: {thought.id}")
# Record a decision
decision = client.append_thought(
thought_type=ThoughtType.DECISION,
content="We will implement a sliding window rate limiter.",
agent_name="assistant",
importance=0.9,
tags=["architecture", "api"],
concepts=["rate-limiting", "sliding-window"],
)
# Record a lesson learned
lesson = client.append_thought(
thought_type=ThoughtType.LESSON_LEARNED,
content="Never deploy on a Friday afternoon.",
agent_name="assistant",
importance=1.0,
confidence=0.95,
)
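The append-only chaining above can be illustrated with a toy model. MentisDB's actual hash scheme is internal to the server, so this is only a sketch assuming each record's hash covers the previous hash plus the content:

```python
import hashlib

def link(prev_hash: str, content: str) -> str:
    """Toy chain link: hash the previous hash together with the content."""
    return hashlib.sha256((prev_hash + content).encode()).hexdigest()

# Build a two-thought chain from a genesis hash.
genesis = "0" * 64
h1 = link(genesis, "Rate limiting is the real bottleneck.")
h2 = link(h1, "We will implement a sliding window rate limiter.")

def verify(entries: list[tuple[str, str]], genesis: str) -> bool:
    """Recompute every link; editing any content breaks all later hashes."""
    prev = genesis
    for content, expected in entries:
        prev = link(prev, content)
        if prev != expected:
            return False
    return True

chain = [
    ("Rate limiting is the real bottleneck.", h1),
    ("We will implement a sliding window rate limiter.", h2),
]
tampered = [("Rate limiting is fine, actually.", h1)] + chain[1:]
print(verify(chain, genesis), verify(tampered, genesis))  # True False
```

This is why you can only extend history: rewriting an old thought would invalidate every hash after it.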
Ranked search combines lexical matching, vector similarity, and graph traversal into a single scored result set:
# Search by text
results = client.ranked_search(
text="rate limiting",
limit=5,
)
print(f"Found {results.total} results")
for hit in results.results:
print(f" [{hit.score.total:.3f}] {hit.thought.content}")
# Filter by importance
results = client.ranked_search(
text="performance",
min_importance=0.7,
)
# Filter by tags and thought type
results = client.ranked_search(
text="api",
thought_types=[ThoughtType.DECISION, ThoughtType.INSIGHT],
tags_any=["architecture", "performance"],
)
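The exact way the server fuses the three signals is internal, but the idea of collapsing lexical, vector, and graph scores into the single `hit.score.total` can be sketched as a weighted sum. The weights below are made up for illustration:

```python
def fuse_score(lexical: float, vector: float, graph: float,
               weights: tuple[float, float, float] = (0.5, 0.35, 0.15)) -> float:
    """Combine three per-signal scores (each in [0, 1]) into one total."""
    wl, wv, wg = weights
    return wl * lexical + wv * vector + wg * graph

# A hit with a strong lexical match but little graph support:
total = fuse_score(lexical=0.9, vector=0.6, graph=0.1)
print(f"{total:.3f}")  # 0.675
```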
Context bundles go beyond flat search results. They group each top-scoring "seed" match with the supporting memories reachable through graph relations — giving you not just the answer but the trail of evidence that led to it:
response = client.context_bundles(
text="why did we choose PostgreSQL",
limit=3,
)
for bundle in response.bundles:
seed = bundle.seed
print(f"Seed: {seed.thought.content} (score: {seed.lexical_score:.3f})")
for support_hit in bundle.support:
print(f" Supporting: {support_hit.thought.content}")
print(f" Depth: {support_hit.depth}, via: {support_hit.relation_kinds}")
pymentisdb includes a first-class MentisDbMemory class that implements
LangChain's BaseMemory interface. Drop it into any LangChain agent to give
it persistent, retrievable conversation history.
from pymentisdb import MentisDbClient, MentisDbMemory, ThoughtType, ThoughtRole
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableWithMessageHistory
from langchain_community.chat_message_histories import ChatMessageHistory
# 1. Set up memory
memory = MentisDbMemory(
base_url="http://127.0.0.1:9472",
chain_key="my-agent", # persists across sessions
agent_name="assistant",
thought_type=ThoughtType.SUMMARY,
role=ThoughtRole.MEMORY,
)
# 2. Build the chain
llm = ChatOpenAI(model="gpt-4o")
prompt = ChatPromptTemplate.from_messages([
("system", "You are a helpful assistant with persistent memory."),
("placeholder", "{chat_history}"),
("human", "{question}"),
])
chain = prompt | llm
# 3. Add message history (in-memory for demo, use Redis/SQL in prod)
store = {}
chain_with_history = RunnableWithMessageHistory(
    chain,
    lambda session_id: store.setdefault(session_id, ChatMessageHistory()),
    input_messages_key="question",
    history_messages_key="chat_history",
)
# 4. Run — memory is automatically loaded and saved
response = chain_with_history.invoke(
{"question": "What did I tell you about my project?"},
config={"configurable": {"session_id": "user-123"}},
)
print(response.content)
MentisDbMemory implements four LangChain lifecycle methods:
- load_memory_variables(inputs) — retrieves recent thoughts from MentisDB via ranked_search() and formats them as a chat history string for the prompt.
- add_messages(messages) — converts each LangChain message to a ThoughtType.SUMMARY thought and appends it via append_thought().
- get_messages() — retrieves thoughts and reconstructs them as HumanMessage/AIMessage objects.
- clear() — a no-op (MentisDB is append-only; use a different chain_key to isolate sessions).
Tip: Use different chain_key values for different users or conversation threads. MentisDB's cross-chain search lets you query across all of them when needed.
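As a rough sketch of the first lifecycle method: the snippet below shows how retrieved thoughts might be flattened into a chat-history string for the prompt. The dict shape and the `format_history` name are illustrative, not part of pymentisdb:

```python
# Illustrative only: turn retrieved thoughts into a chat-history string,
# roughly what a load_memory_variables() implementation would produce.
def format_history(thoughts: list[dict]) -> str:
    lines = []
    for t in thoughts:
        speaker = "Human" if t.get("role") == "human" else "AI"
        lines.append(f"{speaker}: {t['content']}")
    return "\n".join(lines)

history = format_history([
    {"role": "human", "content": "My project uses PostgreSQL."},
    {"role": "ai", "content": "Noted: PostgreSQL as the primary database."},
])
print(history)
```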
MentisDB uses typed thoughts instead of generic key-value pairs. This gives retrieval
semantic meaning — a Decision is scored differently from a
Mistake, and you can filter by type:
| ThoughtType | When to Use |
|---|---|
| INSIGHT | A non-obvious realization or lesson |
| DECISION | A committed choice affecting future behavior |
| PREFERENCE_UPDATE | A stable user preference discovered or changed |
| MISTAKE | A wrong action taken (distinct from Correction) |
| CORRECTION | A prior assumption was wrong — this replaces it |
| LESSON_LEARNED | A rule distilled from failure or expensive fix |
| FINDING | A fact or data point discovered during work |
| QUESTION | An unresolved issue worth preserving |
| SUMMARY | Compressed state (pair with Checkpoint role) |
| LLM_EXTRACTED | Memories auto-extracted from text via the LLM pipeline |
Thoughts can link to each other with typed graph relations. This enables multi-hop reasoning — "show me what led to this decision":
from pymentisdb import ThoughtRelation, ThoughtRelationKind
# Link a correction to the original mistake
client.append_thought(
thought_type=ThoughtType.CORRECTION,
content="The old assumption about retry logic was flawed.",
relations=[
ThoughtRelation(
kind=ThoughtRelationKind.CORRECTS,
target_id=mistake_thought.id,
)
],
)
# Link evidence to a hypothesis
client.append_thought(
thought_type=ThoughtType.FINDING,
content="Cache invalidation is the real issue.",
relations=[
ThoughtRelation(
kind=ThoughtRelationKind.SUPPORTS,
target_id=hypothesis_thought.id,
)
],
)
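Multi-hop retrieval over these edges amounts to a bounded graph walk. The sketch below is an in-memory toy, not the server's implementation, and the relation-kind strings and thought ids are invented for illustration:

```python
from collections import deque

# Toy relation graph: thought id -> list of (relation_kind, target_id)
relations = {
    "decision-1": [("SUPPORTED_BY", "finding-1")],
    "finding-1": [("SUPPORTED_BY", "finding-2")],
    "finding-2": [],
}

def evidence_trail(seed: str, max_depth: int = 2) -> list[tuple[str, int, str]]:
    """BFS from a seed thought, collecting (id, depth, relation_kind)."""
    out, seen = [], {seed}
    queue = deque([(seed, 0)])
    while queue:
        node, depth = queue.popleft()
        if depth == max_depth:
            continue
        for kind, target in relations.get(node, []):
            if target not in seen:
                seen.add(target)
                out.append((target, depth + 1, kind))
                queue.append((target, depth + 1))
    return out

print(evidence_trail("decision-1"))
# [('finding-1', 1, 'SUPPORTED_BY'), ('finding-2', 2, 'SUPPORTED_BY')]
```

This is the shape of the "trail of evidence" that context bundles return: each supporting hit carries its depth and the relation kinds that connect it back to the seed.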
MentisDbClient exposes all MentisDB server operations. Methods currently implemented in the
Python client are marked with ✓. Methods not yet wrapped (but available via
the REST API directly) are marked with server-only and can be called using
client._post() or client._get() helpers.
from pymentisdb import (
MentisDbClient,
ThoughtType,
ThoughtRole,
ThoughtRelation,
ThoughtRelationKind,
MemoryScope,
Thought,
AgentRecord,
RankedSearchHit,
RankedSearchResponse,
ContextBundle,
ContextBundlesResponse,
ChainSummary,
ListChainsResponse,
)
# Local instance (default)
client = MentisDbClient()
# Remote instance
client = MentisDbClient(base_url="https://my.mentisdb.com:9473")
# Custom timeout (seconds, default 30)
client = MentisDbClient()
client._session.timeout = 60  # set timeout on underlying requests session
# Add auth headers for remote instances
client._session.headers["Authorization"] = "Bearer YOUR_API_KEY"
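Because the client rides on a standard requests session, you can also bolt on automatic retries with stock requests/urllib3 machinery. This is a general requests pattern rather than a pymentisdb feature; the plain `session` below stands in for `client._session`:

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()  # client._session is a session like this one
retry = Retry(
    total=3,                      # up to 3 retries per request
    backoff_factor=0.5,           # 0.5s, 1s, 2s between attempts
    status_forcelist=[502, 503, 504],
    allowed_methods=["GET", "POST"],
)
session.mount("https://", HTTPAdapter(max_retries=retry))
session.mount("http://", HTTPAdapter(max_retries=retry))
# Every request through this session now retries transient 5xx errors.
```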
thought = client.append_thought(
thought_type=ThoughtType.INSIGHT,
content="Rate limiting is the real bottleneck.",
chain_key="my-agent", # uses default if omitted
agent_id="agent-001",
agent_name="assistant",
agent_owner="cloudllm",
role=ThoughtRole.MEMORY, # Memory | WorkingMemory | Summary | Checkpoint | Handoff | Audit | Retrospective
importance=0.8, # 0.0–1.0, default 0.5
confidence=0.95, # optional, 0.0–1.0
tags=["performance", "api"],
concepts=["rate-limiting"],
refs=[5, 12], # indices of referenced prior thoughts
relations=[ # typed graph edges
ThoughtRelation(
kind=ThoughtRelationKind.SUPPORTS,
target_id="thought-uuid-abc",
)
],
scope=MemoryScope.USER, # User | Session | Agent
)
print(thought.id, thought.hash)
from datetime import datetime

results = client.ranked_search(
text="rate limiting",
chain_key="my-agent",
limit=10,
offset=0,
thought_types=[ThoughtType.DECISION, ThoughtType.INSIGHT],
roles=[ThoughtRole.MEMORY],
tags_any=["performance", "api"],
concepts_any=["rate-limiting"],
agent_ids=["agent-001"],
agent_names=["assistant"],
agent_owners=["cloudllm"],
min_importance=0.5,
min_confidence=0.6,
since=datetime(2026, 1, 1),
until=datetime(2026, 4, 14),
scope="user",
enable_reranking=True,
rerank_k=20,
entity_type="decision",
)
print(results.total)
for hit in results.results:
print(f"[{hit.score.total:.3f}] {hit.thought.content}")
print(f" lexical={hit.score.lexical:.3f} vector={hit.score.vector:.3f} graph={hit.score.graph:.3f}")
print(f" matched_terms={hit.matched_terms}")
resp = client.context_bundles(
text="why PostgreSQL",
limit=5,
thought_types=[ThoughtType.DECISION, ThoughtType.FINDING],
)
print(f"Total bundles: {resp.total_bundles}")
for bundle in resp.bundles:
seed = bundle.seed
print(f"Seed: {seed.thought.content} (score={seed.lexical_score:.3f})")
for support in bundle.support:
print(f" [{support.depth}] {support.thought.content}")
print(f" via: {support.relation_kinds}")
# Use the raw REST helpers until this is wrapped
result = client._post("/v1/lexical-search", {
"text": "rate limiting",
"chain_key": "my-agent",
"limit": 10,
"offset": 0,
})
for hit in result["results"]:
thought = Thought.from_dict(hit["thought"])
print(f"[{hit['score']:.3f}] {thought.content}")
# Alias for lexical_search on the server
result = client._post("/v1/search", {
"text": "performance optimization",
"chain_key": "my-agent",
"limit": 10,
})
# Retrieve a single thought by ID, hash, or index
result = client._get("/v1/thoughts/abc123") # by thought ID
result = client._get("/v1/thoughts/by-hash/xyz789") # by content hash
result = client._get("/v1/thoughts/by-index/42") # by append-order index
result = client._get("/v1/thoughts/head") # latest thought on chain
thought = Thought.from_dict(result["thought"])
print(f"Thought {thought.id} at index {thought.index}")
# Traverse forwards or backwards from an anchor point
result = client._post("/v1/traverse-thoughts", {
"chain_key": "my-agent",
"anchor_id": "abc123", # start from this thought ID
"direction": "forward", # forward | backward
"limit": 20,
"include_anchor": True,
"thought_types": ["Insight", "Decision"],
"roles": ["Memory"],
"tags_any": ["performance"],
"since": "2026-01-01T00:00:00Z",
"until": "2026-04-14T00:00:00Z",
})
for thought_data in result["thoughts"]:
t = Thought.from_dict(thought_data)
print(f"[{t.index}] {t.content[:60]}...")
# Render recent context as a prompt snippet for agent handoff
result = client._get("/v1/recent-context?chain_key=my-agent&last_n=10")
print(result["content"])
# Get chain tip metadata
result = client._get("/v1/chains/head?chain_key=my-agent")
print(f"Chain length: {result['length']}")
print(f"Latest thought: {result['latest_thought_id']}")
print(f"Head hash: {result['head_hash']}")
resp = client.list_chains()
print(f"Default chain: {resp.default_chain_key}")
print(f"All chains: {resp.chain_keys}")
for chain in resp.chains:
print(f" {chain.chain_key}: {chain.thought_count} thoughts, {chain.agent_count} agents")
print(f" adapter={chain.storage_adapter} location={chain.storage_location}")
# Create or update an agent identity
agent = client.upsert_agent(
agent_id="agent-001",
chain_key="my-agent",
display_name="Assistant v3",
agent_owner="cloudllm",
description="Primary assistant agent for user tasks",
status="active",
)
print(f"Agent: {agent.display_name} ({agent.status})")
print(f" first seen: index {agent.first_seen_index}")
print(f" thought count: {agent.thought_count}")
print(f" aliases: {agent.aliases}")
# Get a specific agent's full registry record
agent_result = client._get("/v1/agents/agent-001?chain_key=my-agent")
# List all agents on a chain
agents_result = client._get("/v1/agents?chain_key=my-agent")
for agent_data in agents_result["agents"]:
print(agent_data["display_name"])
result = client._post("/v1/skills/upload", {
"agent_id": "agent-001",
"chain_key": "my-agent",
"skill_id": "my-skill",
"content": "# My Skill\n\nThis skill helps with...",
"format": "markdown",
"signing_key_id": "key-2026",
"skill_signature": [1, 2, 3, 4], # placeholder; real signature is 64 bytes as an int list
})
print(f"Uploaded skill ID: {result['skill_id']}")
print(f"Version: {result['version_id']}")
result = client._post("/v1/skills/read", {
"skill_id": "my-skill",
"version_id": "v1.0", # optional, latest if omitted
})
print(result["content"])
print(f"Format: {result['format']}")
result = client._post("/v1/skills/search", {
"text": "debugging workflow",
"limit": 5,
"names": ["my-skill"],
"tags_any": ["debug"],
"statuses": ["active"],
"since": "2026-01-01T00:00:00Z",
})
for skill in result["skills"]:
print(f"{skill['skill_id']}: {skill['latest_version']['description']}")
result = client._get("/v1/skills?chain_key=my-agent")
for skill in result["skills"]:
print(f"{skill['skill_id']} ({skill['status']}) — latest version: {skill['latest_version_id']}")
result = client._post("/v1/webhooks/register", {
"chain_key": "my-agent",
"url": "https://my-app.com/mentisdb-hook",
"event_types": ["thought.appended", "chain.branched"],
"description": "Notify my app on new thoughts",
"secret": "my-webhook-secret",
})
print(f"Webhook registered: {result['webhook_id']}")
result = client._get("/v1/webhooks?chain_key=my-agent")
for wh in result["webhooks"]:
print(f"{wh['webhook_id']}: {wh['url']} [{', '.join(wh['event_types'])}]")
result = client._post("/v1/webhooks/delete", {
"webhook_id": "wh-abc123",
})
# Export chain as MEMORY.md formatted string
result = client._post("/v1/memory-markdown", {
"chain_key": "my-agent",
"limit": 50,
"thought_types": ["Insight", "Decision"],
"since": "2026-01-01T00:00:00Z",
})
print(result["markdown"])
# Output format:
# ## [Insight] 2026-04-14T10:30:00
# content here
# Tags: performance, api
# Agent: assistant
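The export format is simple enough to post-process with a few lines of Python. The parser below is inferred from the sample output above and is not an official pymentisdb utility:

```python
import re

def parse_memory_md(markdown: str) -> list[dict]:
    """Split a MEMORY.md export into {type, timestamp, content, tags} records."""
    entries = []
    for block in re.split(r"^## ", markdown, flags=re.M)[1:]:
        header, _, body = block.partition("\n")
        m = re.match(r"\[(\w+)\]\s+(\S+)", header)
        if not m:
            continue
        tags, content_lines = [], []
        for line in body.strip().splitlines():
            if line.startswith("Tags: "):
                tags = [t.strip() for t in line[6:].split(",")]
            elif not line.startswith("Agent: "):
                content_lines.append(line)
        entries.append({
            "type": m.group(1),
            "timestamp": m.group(2),
            "content": "\n".join(content_lines),
            "tags": tags,
        })
    return entries

sample = """## [Insight] 2026-04-14T10:30:00
content here
Tags: performance, api
Agent: assistant
"""
parsed = parse_memory_md(sample)
print(parsed[0]["type"], parsed[0]["tags"])  # Insight ['performance', 'api']
```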
# Import a MEMORY.md formatted string back into a chain
result = client._post("/v1/import-memory-markdown", {
"chain_key": "my-agent",
"markdown": """## [Insight] 2026-04-10
Some insight here
Tags: test
## [Decision] 2026-04-11
A decision made
""",
"default_agent_id": "agent-001",
})
print(f"Imported {result['imported_count']} thoughts")
print(f"Imported indices: {result['imported_indices']}")
Here's a complete example showing the full loop — appending thoughts, searching, and using context bundles — against a running mentisdbd:
#!/usr/bin/env python3
"""Complete pymentisdb example — append, search, and retrieve."""
from pymentisdb import MentisDbClient, ThoughtType, ThoughtRole
client = MentisDbClient(base_url="http://127.0.0.1:9472")
# 1. Seed some memories
client.append_thought(
thought_type=ThoughtType.DECISION,
content="We chose PostgreSQL for the primary database.",
agent_name="architect",
importance=0.9,
tags=["database", "architecture"],
)
client.append_thought(
thought_type=ThoughtType.FINDING,
content="PostgreSQL handles 10k TPS with our current schema.",
agent_name="engineer",
importance=0.7,
tags=["database", "performance"],
)
# 2. Semantic search
results = client.ranked_search(text="database performance", limit=5)
print(f"Search results: {results.total} found")
for hit in results.results:
print(f" [{hit.score.total:.3f}] {hit.thought.thought_type.value}: "
f"{hit.thought.content}")
# 3. Get context with supporting evidence
bundles = client.context_bundles(text="why PostgreSQL", limit=2)
for bundle in bundles.bundles:
print(f"\nSeed: {bundle.seed.thought.content}")
for support in bundle.support:
print(f" Evidence ({support.relation_kinds}): {support.thought.content}")
# 4. List all chains
chains = client.list_chains()
print(f"Default chain: {chains.default_chain_key}")
for chain in chains.chains:
print(f" {chain.chain_key}: {chain.thought_count} thoughts")
# 5. Upsert an agent
agent = client.upsert_agent(
agent_id="agent-001",
display_name="My Assistant",
description="Primary user assistant",
)
print(f"Agent: {agent.display_name} — {agent.thought_count} thoughts recorded")