April 14, 2026

pymentisdb — Python Client for MentisDB

MentisDB ships a first-class Python client called pymentisdb. Whether you're building a LangChain agent, a custom chatbot, or any Python application that needs durable semantic memory, pymentisdb gives you a clean interface for storing and retrieving thoughts from MentisDB's append-only, hash-chained store.

Installation

Install from source or via pip:

# Core client only (no LangChain dependency)
pip install pymentisdb

# With LangChain integration
pip install "pymentisdb[langchain]"   # quoted so shells like zsh don't expand the brackets

Requires Python 3.10+ and a running mentisdbd instance. Start one with:

mentisdbd

Or for production with TLS:

MENTISDB_DIR=/path/to/data mentisdbd --https --port 9473

Basic Client Usage

The MentisDbClient wraps the MentisDB REST API. It handles connection pooling, authentication headers, and type conversion automatically.

Connecting

from pymentisdb import MentisDbClient

# Connect to local mentisdbd (default)
client = MentisDbClient()

# Connect to a remote instance
client = MentisDbClient(base_url="https://my.mentisdb.com:9473")

Appending Thoughts

Thoughts are the atomic memory records in MentisDB. Every append is cryptographically chained to the previous thought — you can't rewrite history, only extend it.

from pymentisdb import ThoughtType, ThoughtRole

# Record an insight
thought = client.append_thought(
    thought_type=ThoughtType.INSIGHT,
    content="Rate limiting is the real bottleneck for our API.",
    agent_name="assistant",
    importance=0.8,
    tags=["performance", "api"],
)
print(f"Appended: {thought.id}")

# Record a decision
decision = client.append_thought(
    thought_type=ThoughtType.DECISION,
    content="We will implement a sliding window rate limiter.",
    agent_name="assistant",
    importance=0.9,
    tags=["architecture", "api"],
    concepts=["rate-limiting", "sliding-window"],
)

# Record a lesson learned
lesson = client.append_thought(
    thought_type=ThoughtType.LESSON_LEARNED,
    content="Never deploy on a Friday afternoon.",
    agent_name="assistant",
    importance=1.0,
    confidence=0.95,
)
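The chaining itself is conceptually simple: each record's hash covers both its content and the previous record's hash, so editing any historical thought invalidates every hash after it. Here is a toy illustration in plain Python of that general idea (this is not MentisDB's actual hash construction):

```python
import hashlib
import json

def chain_append(chain: list[dict], content: str) -> dict:
    """Append a record whose hash covers the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"content": content, "prev": prev_hash}, sort_keys=True)
    record = {
        "content": content,
        "prev": prev_hash,
        "hash": hashlib.sha256(payload.encode()).hexdigest(),
    }
    chain.append(record)
    return record

def verify(chain: list[dict]) -> bool:
    """Recompute every hash from the start; any historical edit breaks the chain."""
    prev_hash = "0" * 64
    for record in chain:
        payload = json.dumps(
            {"content": record["content"], "prev": prev_hash}, sort_keys=True
        )
        if record["prev"] != prev_hash:
            return False
        if record["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev_hash = record["hash"]
    return True

chain: list[dict] = []
chain_append(chain, "Rate limiting is the real bottleneck.")
chain_append(chain, "We will implement a sliding window rate limiter.")
print(verify(chain))   # True
chain[0]["content"] = "edited history"
print(verify(chain))   # False: every later hash is now invalid
```

This is why MentisDB can only be extended, never rewritten: the server stores the equivalent of `record["hash"]` with each thought (exposed as `thought.hash`).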

Ranked Search

Ranked search combines lexical matching, vector similarity, and graph traversal into a single scored result set:

# Search by text
results = client.ranked_search(
    text="rate limiting",
    limit=5,
)
print(f"Found {results.total} results")
for hit in results.results:
    print(f"  [{hit.score.total:.3f}] {hit.thought.content}")

# Filter by importance
results = client.ranked_search(
    text="performance",
    min_importance=0.7,
)

# Filter by tags and thought type
results = client.ranked_search(
    text="api",
    thought_types=[ThoughtType.DECISION, ThoughtType.INSIGHT],
    tags_any=["architecture", "performance"],
)
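Each hit exposes its component scores (hit.score.lexical, hit.score.vector, hit.score.graph) alongside the combined total. One plausible way such signals combine is a weighted sum; the sketch below uses invented weights and is purely illustrative, not MentisDB's actual ranking formula:

```python
from dataclasses import dataclass

@dataclass
class Score:
    lexical: float   # keyword match strength
    vector: float    # embedding similarity
    graph: float     # relation-graph proximity boost

def combine(score: Score, w_lex=0.4, w_vec=0.4, w_graph=0.2) -> float:
    """Weighted sum of the three signals (weights are made up here)."""
    return w_lex * score.lexical + w_vec * score.vector + w_graph * score.graph

hits = [Score(0.9, 0.2, 0.0), Score(0.3, 0.8, 0.5)]
ranked = sorted(hits, key=combine, reverse=True)
print([round(combine(s), 3) for s in ranked])  # [0.54, 0.44]
```

The takeaway: a hit with a weak lexical match can still outrank an exact keyword hit when its vector and graph signals are strong, which is why ranked_search often surfaces paraphrased memories that plain text search would miss.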

Context Bundles — Retrieval with Supporting Memories

Context bundles go beyond flat search results. They group each top-scoring "seed" match with the supporting memories reachable through graph relations — giving you not just the answer but the trail of evidence that led to it:

response = client.context_bundles(
    text="why did we choose PostgreSQL",
    limit=3,
)
for bundle in response.bundles:
    seed = bundle.seed
    print(f"Seed: {seed.thought.content} (score: {seed.lexical_score:.3f})")
    for support_hit in bundle.support:
        print(f"  Supporting: {support_hit.thought.content}")
        print(f"    Depth: {support_hit.depth}, via: {support_hit.relation_kinds}")

LangChain Integration

pymentisdb includes a first-class MentisDbMemory class that implements LangChain's BaseMemory interface. Drop it into any LangChain agent to give it persistent, retrievable conversation history.

Full LangChain Example

from pymentisdb import MentisDbClient, MentisDbMemory, ThoughtType, ThoughtRole
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableWithMessageHistory
from langchain_community.chat_message_histories import ChatMessageHistory

# 1. Set up memory
memory = MentisDbMemory(
    base_url="http://127.0.0.1:9472",
    chain_key="my-agent",        # persists across sessions
    agent_name="assistant",
    thought_type=ThoughtType.SUMMARY,
    role=ThoughtRole.MEMORY,
)

# 2. Build the chain
llm = ChatOpenAI(model="gpt-4o")
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant with persistent memory."),
    ("placeholder", "{chat_history}"),
    ("human", "{question}"),
])
chain = prompt | llm

# 3. Wire in message history (in-memory store for demo, use Redis/SQL in prod)
store = {}
chain_with_history = RunnableWithMessageHistory(
    chain,
    lambda session_id: store.setdefault(session_id, ChatMessageHistory()),
    input_messages_key="question",
    history_messages_key="chat_history",
)

# 4. Run — memory is automatically loaded and saved
response = chain_with_history.invoke(
    {"question": "What did I tell you about my project?"},
    config={"configurable": {"session_id": "user-123"}},
)
print(response.content)

How MentisDbMemory Works

MentisDbMemory implements the four methods of LangChain's BaseMemory interface:

memory_variables: the prompt variables this memory supplies (e.g. chat_history)
load_memory_variables: retrieves relevant thoughts from MentisDB before each model call
save_context: appends the latest exchange to the chain after each call
clear: resets local state (the underlying chain itself is append-only)
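For reference, that interface has the following shape. The sketch below is a dependency-free stand-in that mimics the four methods; it is not MentisDbMemory's actual implementation:

```python
class EphemeralMemory:
    """Duck-typed stand-in with the same four methods as LangChain's BaseMemory."""

    def __init__(self):
        self._turns: list[str] = []

    @property
    def memory_variables(self) -> list[str]:
        # Names of the variables this memory injects into the prompt
        return ["chat_history"]

    def load_memory_variables(self, inputs: dict) -> dict:
        # Called before the model runs; returns the prompt variables
        return {"chat_history": "\n".join(self._turns)}

    def save_context(self, inputs: dict, outputs: dict) -> None:
        # Called after the model runs; persists the latest exchange
        self._turns.append(f"Human: {inputs['question']}")
        self._turns.append(f"AI: {outputs['answer']}")

    def clear(self) -> None:
        self._turns.clear()

mem = EphemeralMemory()
mem.save_context({"question": "What's our DB?"}, {"answer": "PostgreSQL."})
print(mem.load_memory_variables({})["chat_history"])
```

MentisDbMemory swaps the in-process list for append_thought and ranked_search calls against the chain named by chain_key, which is what makes the history survive process restarts.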

Tip: Use different chain_key values for different users or conversation threads. MentisDB's cross-chain search lets you query across all of them when needed.

Thought Types Reference

MentisDB uses typed thoughts instead of generic key-value pairs. This gives retrieval semantic meaning — a Decision is scored differently from a Mistake, and you can filter by type:

ThoughtType         When to Use
INSIGHT             A non-obvious realization or lesson
DECISION            A committed choice affecting future behavior
PREFERENCE_UPDATE   A stable user preference discovered or changed
MISTAKE             A wrong action taken (distinct from Correction)
CORRECTION          A prior assumption was wrong — this replaces it
LESSON_LEARNED      A rule distilled from failure or expensive fix
FINDING             A fact or data point discovered during work
QUESTION            An unresolved issue worth preserving
SUMMARY             Compressed state (pair with Checkpoint role)
LLM_EXTRACTED       Memories auto-extracted from text via the LLM pipeline

Advanced: Typed Relations

Thoughts can link to each other with typed graph relations. This enables multi-hop reasoning — "show me what led to this decision":

from pymentisdb import ThoughtRelation, ThoughtRelationKind

# Link a correction to the original mistake
client.append_thought(
    thought_type=ThoughtType.CORRECTION,
    content="The old assumption about retry logic was flawed.",
    relations=[
        ThoughtRelation(
            kind=ThoughtRelationKind.CORRECTS,
            target_id=mistake_thought.id,
        )
    ],
)

# Link evidence to a hypothesis
client.append_thought(
    thought_type=ThoughtType.FINDING,
    content="Cache invalidation is the real issue.",
    relations=[
        ThoughtRelation(
            kind=ThoughtRelationKind.SUPPORTS,
            target_id=hypothesis_thought.id,
        )
    ],
)

Complete API Reference

MentisDbClient exposes all MentisDB server operations. Endpoints with first-class Python wrappers are shown below as regular client method calls. Endpoints not yet wrapped are marked server-only; they are still reachable through the client's raw client._post() and client._get() helpers.

Imports

from pymentisdb import (
    MentisDbClient,
    ThoughtType,
    ThoughtRole,
    ThoughtRelation,
    ThoughtRelationKind,
    MemoryScope,
    Thought,
    AgentRecord,
    RankedSearchHit,
    RankedSearchResponse,
    ContextBundle,
    ContextBundlesResponse,
    ChainSummary,
    ListChainsResponse,
)

Connection Configuration

# Local instance (default)
client = MentisDbClient()

# Remote instance
client = MentisDbClient(base_url="https://my.mentisdb.com:9473")

# Timeouts: requests are sent with a 30-second default. Beware that
# requests.Session objects ignore a `timeout` attribute, so setting
# client._session.timeout has no effect; pass timeouts per request instead.

# Add auth headers for remote instances
client._session.headers["Authorization"] = "Bearer YOUR_API_KEY"

Memory Operations

append_thought

thought = client.append_thought(
    thought_type=ThoughtType.INSIGHT,
    content="Rate limiting is the real bottleneck.",
    chain_key="my-agent",        # uses default if omitted
    agent_id="agent-001",
    agent_name="assistant",
    agent_owner="cloudllm",
    role=ThoughtRole.MEMORY,    # Memory | WorkingMemory | Summary | Checkpoint | Handoff | Audit | Retrospective
    importance=0.8,              # 0.0–1.0, default 0.5
    confidence=0.95,            # optional, 0.0–1.0
    tags=["performance", "api"],
    concepts=["rate-limiting"],
    refs=[5, 12],               # indices of referenced prior thoughts
    relations=[                 # typed graph edges
        ThoughtRelation(
            kind=ThoughtRelationKind.SUPPORTS,
            target_id="thought-uuid-abc",
        )
    ],
    scope=MemoryScope.USER,     # User | Session | Agent
)
print(thought.id, thought.hash)

ranked_search

from datetime import datetime

results = client.ranked_search(
    text="rate limiting",
    chain_key="my-agent",
    limit=10,
    offset=0,
    thought_types=[ThoughtType.DECISION, ThoughtType.INSIGHT],
    roles=[ThoughtRole.MEMORY],
    tags_any=["performance", "api"],
    concepts_any=["rate-limiting"],
    agent_ids=["agent-001"],
    agent_names=["assistant"],
    agent_owners=["cloudllm"],
    min_importance=0.5,
    min_confidence=0.6,
    since=datetime(2026, 1, 1),
    until=datetime(2026, 4, 14),
    scope="user",
    enable_reranking=True,
    rerank_k=20,
    entity_type="decision",
)
print(results.total)
for hit in results.results:
    print(f"[{hit.score.total:.3f}] {hit.thought.content}")
    print(f"  lexical={hit.score.lexical:.3f} vector={hit.score.vector:.3f} graph={hit.score.graph:.3f}")
    print(f"  matched_terms={hit.matched_terms}")

context_bundles

resp = client.context_bundles(
    text="why PostgreSQL",
    limit=5,
    thought_types=[ThoughtType.DECISION, ThoughtType.FINDING],
)
print(f"Total bundles: {resp.total_bundles}")
for bundle in resp.bundles:
    seed = bundle.seed
    print(f"Seed: {seed.thought.content} (score={seed.lexical_score:.3f})")
    for support in bundle.support:
        print(f"  [{support.depth}] {support.thought.content}")
        print(f"    via: {support.relation_kinds}")

lexical_search server-only

# Use the raw REST helpers until this is wrapped
result = client._post("/v1/lexical-search", {
    "text": "rate limiting",
    "chain_key": "my-agent",
    "limit": 10,
    "offset": 0,
})
for hit in result["results"]:
    thought = Thought.from_dict(hit["thought"])
    print(f"[{hit['score']:.3f}] {thought.content}")

search / query server-only

# Alias for lexical_search on the server
result = client._post("/v1/search", {
    "text": "performance optimization",
    "chain_key": "my-agent",
    "limit": 10,
})

get_thought server-only

# Retrieve a single thought by ID, hash, or index
result = client._get("/v1/thoughts/abc123")           # by thought ID
result = client._get("/v1/thoughts/by-hash/xyz789")   # by content hash
result = client._get("/v1/thoughts/by-index/42")      # by append-order index
result = client._get("/v1/thoughts/head")             # latest thought on chain

thought = Thought.from_dict(result["thought"])
print(f"Thought {thought.id} at index {thought.index}")

traverse_thoughts server-only

# Traverse forwards or backwards from an anchor point
result = client._post("/v1/traverse-thoughts", {
    "chain_key": "my-agent",
    "anchor_id": "abc123",           # start from this thought ID
    "direction": "forward",          # forward | backward
    "limit": 20,
    "include_anchor": True,
    "thought_types": ["Insight", "Decision"],
    "roles": ["Memory"],
    "tags_any": ["performance"],
    "since": "2026-01-01T00:00:00Z",
    "until": "2026-04-14T00:00:00Z",
})
for thought_data in result["thoughts"]:
    t = Thought.from_dict(thought_data)
    print(f"[{t.index}] {t.content[:60]}...")

recent_context server-only

# Render recent context as a prompt snippet for agent handoff
result = client._get("/v1/recent-context?chain_key=my-agent&last_n=10")
print(result["content"])

head server-only

# Get chain tip metadata
result = client._get("/v1/chains/head?chain_key=my-agent")
print(f"Chain length: {result['length']}")
print(f"Latest thought: {result['latest_thought_id']}")
print(f"Head hash: {result['head_hash']}")

Chain & Agent Management

list_chains

resp = client.list_chains()
print(f"Default chain: {resp.default_chain_key}")
print(f"All chains: {resp.chain_keys}")
for chain in resp.chains:
    print(f"  {chain.chain_key}: {chain.thought_count} thoughts, {chain.agent_count} agents")
    print(f"    adapter={chain.storage_adapter} location={chain.storage_location}")

upsert_agent (also: get_agent, list_agents)

# Create or update an agent identity
agent = client.upsert_agent(
    agent_id="agent-001",
    chain_key="my-agent",
    display_name="Assistant v3",
    agent_owner="cloudllm",
    description="Primary assistant agent for user tasks",
    status="active",
)
print(f"Agent: {agent.display_name} ({agent.status})")
print(f"  first seen: index {agent.first_seen_index}")
print(f"  thought count: {agent.thought_count}")
print(f"  aliases: {agent.aliases}")

# Get a specific agent's full registry record
agent_result = client._get("/v1/agents/agent-001?chain_key=my-agent")

# List all agents on a chain
agents_result = client._get("/v1/agents?chain_key=my-agent")
for agent_data in agents_result["agents"]:
    print(agent_data["display_name"])

Skills

upload_skill server-only

result = client._post("/v1/skills/upload", {
    "agent_id": "agent-001",
    "chain_key": "my-agent",
    "skill_id": "my-skill",
    "content": "# My Skill\n\nThis skill helps with...",
    "format": "markdown",
    "signing_key_id": "key-2026",
    "skill_signature": [1, 2, 3, 4],  # 64 bytes as int list
})
print(f"Uploaded skill ID: {result['skill_id']}")
print(f"Version: {result['version_id']}")

read_skill server-only

result = client._post("/v1/skills/read", {
    "skill_id": "my-skill",
    "version_id": "v1.0",    # optional, latest if omitted
})
print(result["content"])
print(f"Format: {result['format']}")

search_skill server-only

result = client._post("/v1/skills/search", {
    "text": "debugging workflow",
    "limit": 5,
    "names": ["my-skill"],
    "tags_any": ["debug"],
    "statuses": ["active"],
    "since": "2026-01-01T00:00:00Z",
})
for skill in result["skills"]:
    print(f"{skill['skill_id']}: {skill['latest_version']['description']}")

list_skills server-only

result = client._get("/v1/skills?chain_key=my-agent")
for skill in result["skills"]:
    print(f"{skill['skill_id']} ({skill['status']}) — latest version: {skill['latest_version_id']}")

Webhooks

register_webhook server-only

result = client._post("/v1/webhooks/register", {
    "chain_key": "my-agent",
    "url": "https://my-app.com/mentisdb-hook",
    "event_types": ["thought.appended", "chain.branched"],
    "description": "Notify my app on new thoughts",
    "secret": "my-webhook-secret",
})
print(f"Webhook registered: {result['webhook_id']}")

list_webhooks server-only

result = client._get("/v1/webhooks?chain_key=my-agent")
for wh in result["webhooks"]:
    print(f"{wh['webhook_id']}: {wh['url']} [{', '.join(wh['event_types'])}]")

delete_webhook server-only

result = client._post("/v1/webhooks/delete", {
    "webhook_id": "wh-abc123",
})

Memory Markdown Export / Import

memory_markdown server-only

# Export chain as MEMORY.md formatted string
result = client._post("/v1/memory-markdown", {
    "chain_key": "my-agent",
    "limit": 50,
    "thought_types": ["Insight", "Decision"],
    "since": "2026-01-01T00:00:00Z",
})
print(result["markdown"])
# Output format:
# ## [Insight] 2026-04-14T10:30:00
# content here
# Tags: performance, api
# Agent: assistant
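That format splits back into records with a few lines of plain Python. This parser is purely illustrative (the import endpoint below handles this for you) and assumes the layout shown above:

```python
import re

MEMORY_MD = """## [Insight] 2026-04-14T10:30:00
Rate limiting is the real bottleneck.
Tags: performance, api
Agent: assistant

## [Decision] 2026-04-11
A decision made
"""

def parse_memory_md(markdown: str) -> list[dict]:
    """Split a MEMORY.md export into {type, timestamp, content, tags} records."""
    records = []
    for block in markdown.split("## ")[1:]:
        lines = block.strip().splitlines()
        m = re.match(r"\[(\w+)\]\s+(\S+)", lines[0])
        record = {"type": m.group(1), "timestamp": m.group(2),
                  "content": [], "tags": []}
        for line in lines[1:]:
            if line.startswith("Tags: "):
                record["tags"] = [t.strip() for t in line[6:].split(",")]
            elif not line.startswith("Agent: "):
                record["content"].append(line)
        record["content"] = "\n".join(record["content"])
        records.append(record)
    return records

for rec in parse_memory_md(MEMORY_MD):
    print(rec["type"], rec["timestamp"], rec["tags"])
```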

import_memory_markdown server-only

# Import a MEMORY.md formatted string back into a chain
result = client._post("/v1/import-memory-markdown", {
    "chain_key": "my-agent",
    "markdown": """## [Insight] 2026-04-10
Some insight here
Tags: test

## [Decision] 2026-04-11
A decision made
""",
    "default_agent_id": "agent-001",
})
print(f"Imported {result['imported_count']} thoughts")
print(f"Imported indices: {result['imported_indices']}")

Complete Working Example

Here's an end-to-end example covering the whole loop — appending thoughts, searching, and using context bundles — against a running mentisdbd:

#!/usr/bin/env python3
"""Complete pymentisdb example — append, search, and retrieve."""

from pymentisdb import MentisDbClient, ThoughtType, ThoughtRole

client = MentisDbClient(base_url="http://127.0.0.1:9472")

# 1. Seed some memories
client.append_thought(
    thought_type=ThoughtType.DECISION,
    content="We chose PostgreSQL for the primary database.",
    agent_name="architect",
    importance=0.9,
    tags=["database", "architecture"],
)

client.append_thought(
    thought_type=ThoughtType.FINDING,
    content="PostgreSQL handles 10k TPS with our current schema.",
    agent_name="engineer",
    importance=0.7,
    tags=["database", "performance"],
)

# 2. Semantic search
results = client.ranked_search(text="database performance", limit=5)
print(f"Search results: {results.total} found")
for hit in results.results:
    print(f"  [{hit.score.total:.3f}] {hit.thought.thought_type.value}: "
          f"{hit.thought.content}")

# 3. Get context with supporting evidence
bundles = client.context_bundles(text="why PostgreSQL", limit=2)
for bundle in bundles.bundles:
    print(f"\nSeed: {bundle.seed.thought.content}")
    for support in bundle.support:
        print(f"  Evidence ({support.relation_kinds}): {support.thought.content}")

# 4. List all chains
chains = client.list_chains()
print(f"Default chain: {chains.default_chain_key}")
for chain in chains.chains:
    print(f"  {chain.chain_key}: {chain.thought_count} thoughts")

# 5. Upsert an agent
agent = client.upsert_agent(
    agent_id="agent-001",
    display_name="My Assistant",
    description="Primary user assistant",
)
print(f"Agent: {agent.display_name} — {agent.thought_count} thoughts recorded")