Give Your Agents
Context They Can Trust
Your agents just ask and tell. We handle everything underneath.
Feed NocturnusAI plain text, get back structured facts. Ask it questions in natural language, get verified answers with proof. Rules, inference, memory lifecycle, and consistency all happen automatically — your agent doesn't need to know how.
The logic engine at the foundation of every serious AI agent.
Not a Plugin. A Foundation.
Other tools sit on top of your LLM and hope for the best. Nocturnus sits beneath your agents and provides the substrate that makes correct reasoning possible, no matter which LLM or framework sits above it.
from langchain_anthropic import ChatAnthropic
from langchain_core.prompts import ChatPromptTemplate
from langchain.agents import AgentExecutor, create_tool_calling_agent
from nocturnusai.langchain import get_nocturnusai_tools

# Point your agent at the logic engine
tools = get_nocturnusai_tools("http://localhost:9300")
# tells, asks, teaches, forgets, recalls, context
# — all backed by the Hexastore + inference engine

llm = ChatAnthropic(model="claude-sonnet-4-20250514")
prompt = ChatPromptTemplate.from_messages([
    ("system", "Answer using the NocturnusAI tools."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])
agent = create_tool_calling_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools)

result = executor.invoke({
    "input": "Is Acme Corp eligible for premium SLA?"
})
# Agent reasons over verified facts, not LLM memory.
# Answer is provable. Traceable. Consistent.

9 MCP tools, all backed by the logic engine
Your Agent Asks. NocturnusAI Knows.
Start with natural language — feed it text, ask it questions, connect via MCP. The logic engine, inference, and memory management all happen beneath the surface.
Plain English In, Verified Facts Out
POST /extract with any text. NocturnusAI calls your LLM to pull out structured facts and stores them automatically. No schema design, no parsing code, no mapping logic — your agent just feeds it context.
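Under the hood it's one HTTP POST. A minimal stdlib sketch, using the default port from the quickstart and the payload shape from the example at the bottom of this page; the helper name is ours:

```python
import json
import urllib.request

def extract_facts(text: str, assert_facts: bool = True,
                  base_url: str = "http://localhost:9300") -> dict:
    """POST /extract: NocturnusAI pulls structured facts from plain text
    and, with "assert": true, stores them automatically."""
    req = urllib.request.Request(
        f"{base_url}/extract",
        data=json.dumps({"text": text, "assert": assert_facts}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# With a server running on port 9300:
#   extract_facts("Acme Corp is on the starter plan")
#   → extracted: subscription_tier(acme_corp, starter)
```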
Ask Questions, Get Grounded Answers
POST /synthesize with a natural language question. NocturnusAI queries its fact store, runs inference, and returns a sourced answer with a derivation trail — not a hallucinated guess from token probabilities.
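Same pattern for questions. A sketch, with the response fields (answer, derivation, confidence) taken from the example at the bottom of this page; the helper name is ours:

```python
import json
import urllib.request

def synthesize(question: str,
               base_url: str = "http://localhost:9300") -> dict:
    """POST /synthesize: ask in natural language, get a sourced answer."""
    req = urllib.request.Request(
        f"{base_url}/synthesize",
        data=json.dumps({"question": question}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Response shape, from the example at the bottom of this page:
sample = {
    "answer": "Acme Corp is on the starter plan.",
    "derivation": ["subscription_tier(acme_corp, starter)"],
    "confidence": 0.95,
}
assert sample["derivation"]  # every answer carries its proof trail
```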
9 MCP Tools, Zero Integration Work
Connect any MCP-compatible agent, IDE, or framework with a two-line config. tell, ask, teach, forget, recall, context — your agent gets a complete reasoning toolkit without writing any integration code.
Salience-Ranked Memory
Composite scoring keeps the most relevant facts surfaced for your agent's context window. Episodic patterns consolidate into semantic summaries. Low-relevance facts decay automatically.
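The exact scoring formula is internal to NocturnusAI; this toy sketch only illustrates the shape of composite salience: relevance weighted by recency decay, plus a mild frequency bonus.

```python
import math
import time

def salience(relevance: float, last_access: float, access_count: int,
             now: float, half_life: float = 3600.0) -> float:
    """Illustrative composite score (not NocturnusAI's real formula):
    exponential recency decay times relevance, nudged by access frequency."""
    recency = math.exp(-(now - last_access) * math.log(2) / half_life)
    frequency = math.log1p(access_count)
    return relevance * recency * (1.0 + 0.1 * frequency)

now = time.time()
fresh = salience(0.8, now - 60, 5, now)      # touched a minute ago
stale = salience(0.8, now - 86400, 5, now)   # touched yesterday
assert fresh > stale  # at equal relevance, recent facts surface first
```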
Truth Maintenance System
Retract a fact and every conclusion that depended on it disappears automatically. No stale inferences, no manual cleanup — the knowledge base stays consistent by design.
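What truth maintenance means, in miniature (a toy sketch, not the NocturnusAI internals): derived facts record their premises, and retracting a premise cascades to everything built on it.

```python
class TinyTMS:
    """Toy truth-maintenance sketch: each derived fact remembers its
    premises; retracting a premise transitively removes dependents."""

    def __init__(self):
        self.support = {}  # fact -> set of premises (empty = asserted)

    def assert_fact(self, fact):
        self.support[fact] = set()

    def derive(self, fact, premises):
        self.support[fact] = set(premises)

    def retract(self, fact):
        self.support.pop(fact, None)
        # cascade: drop every fact whose support mentions the retracted one
        for f in [f for f, deps in self.support.items() if fact in deps]:
            self.retract(f)

tms = TinyTMS()
tms.assert_fact("plan(acme, premium)")
tms.derive("eligible_sla(acme)", ["plan(acme, premium)"])
tms.retract("plan(acme, premium)")
assert "eligible_sla(acme)" not in tms.support  # stale inference gone
```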
Temporal Atoms
Every fact carries validFrom, validUntil, and TTL fields. Facts auto-expire. Query what was true at any point in time. Agents reason over history, not just the present snapshot.
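In sketch form (the field names validFrom / validUntil come from above; the query helper and snake_case names are ours):

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional

@dataclass
class TemporalAtom:
    fact: str
    valid_from: datetime             # validFrom on the wire
    valid_until: Optional[datetime]  # validUntil; None = still valid

def true_at(atoms: List[TemporalAtom], when: datetime):
    """Point-in-time query: which facts held at `when`?"""
    return [a.fact for a in atoms
            if a.valid_from <= when
            and (a.valid_until is None or when < a.valid_until)]

history = [
    TemporalAtom("plan(acme, starter)",
                 datetime(2024, 1, 1), datetime(2024, 6, 1)),
    TemporalAtom("plan(acme, premium)",
                 datetime(2024, 6, 1), None),
]
assert true_at(history, datetime(2024, 3, 1)) == ["plan(acme, starter)"]
assert true_at(history, datetime(2024, 7, 1)) == ["plan(acme, premium)"]
```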
ACID Transactions
Multi-agent systems write concurrently. Transactions ensure atomic commits with contradiction detection — agents can explore hypotheticals without polluting shared state.
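The idea in miniature (a toy sketch, not the actual NocturnusAI transaction API): writes are staged, checked for contradictions at commit, and applied all or nothing.

```python
class FactStore:
    def __init__(self):
        self.facts = {}  # (predicate, subject) -> value (functional facts)

    def begin(self):
        return Txn(self)

class Txn:
    """Toy transaction: stage writes, reject contradictory commits."""

    def __init__(self, store):
        self.store, self.staged = store, {}

    def tell(self, predicate, subject, value):
        self.staged[(predicate, subject)] = value

    def commit(self):
        # contradiction check: staged values may not conflict with the store
        for key, value in self.staged.items():
            if key in self.store.facts and self.store.facts[key] != value:
                raise ValueError(f"contradiction on {key}")
        self.store.facts.update(self.staged)  # atomic: all or nothing

store = FactStore()
t1 = store.begin()
t1.tell("plan", "acme", "starter")
t1.commit()

t2 = store.begin()                  # hypothetical exploration
t2.tell("plan", "acme", "premium")
try:
    t2.commit()
except ValueError:
    pass                            # rejected; shared state untouched
assert store.facts[("plan", "acme")] == "starter"
```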
Production Durability
WAL + snapshots for crash recovery. Leader/follower replication for read scaling. Prometheus metrics. Kubernetes-ready health probes. Self-hosted, your data, your infrastructure.
Universal Protocol Support
MCP, REST, Python SDK, TypeScript SDK, A2A agent discovery. Whatever your stack, NocturnusAI plugs in. New protocols don't require rewriting your knowledge layer.
Up and Running in 60 Seconds
No signup. No cloud dependency. No schemas to design. Production-grade infrastructure, self-hosted, on your terms.
Deploy the Logic Engine
One curl command. The installer checks Docker, pulls the image, starts the server, waits for healthy, and installs the native CLI binary. Nocturnus is live on port 9300 in under 30 seconds.
Load Your World
Assert facts about your domain: customers, products, rules, state, relationships. Everything is structured, typed, and time-aware. Rules you define teach the engine what to derive. The KB grows as your world grows.
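What a derivation rule buys you, in sketch form (the rule below is hypothetical, written in plain Python rather than NocturnusAI's rule syntax, which lives behind the teach tool):

```python
# Toy forward chaining over (predicate, subject, object) triples.
facts = {("subscription_tier", "acme_corp", "premium")}

def apply_rules(facts):
    """One pass of a single illustrative rule:
    premium subscribers are eligible for the premium SLA."""
    derived = set(facts)
    for pred, subj, obj in facts:
        if pred == "subscription_tier" and obj == "premium":
            derived.add(("eligible_for", subj, "premium_sla"))
    return derived

kb = apply_rules(facts)
# The agent's question "Is Acme Corp eligible for premium SLA?" is now
# answerable from a derived fact, with the premise as its proof.
assert ("eligible_for", "acme_corp", "premium_sla") in kb
```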
Connect Your Agents
Point any MCP-compatible framework, the Python SDK, TypeScript SDK, or direct HTTP at the running server. Your agents get 9 tools backed by the full reasoning stack — ask questions, get provable answers.
The Difference
What happens when your agent needs to know a customer's subscription tier?
// Agent prompt stuffing...
"Based on the conversation, I believe
the customer is on the premium plan.
I'm not entirely sure, but they
mentioned something about enterprise
features in a previous message..."
// Wrong. The customer is on "starter".
// Your agent just offered a 50% discount
// to the wrong tier. 💸

// 1. Ingest plain English → facts extracted
POST /extract
{ "text": "Acme Corp is on the starter plan",
"assert": true }
// ✓ Extracted: subscription_tier(acme_corp, starter)
// 2. Agent asks in natural language
POST /synthesize
{ "question": "What plan is Acme on?" }
{ "answer": "Acme Corp is on the starter plan.",
"derivation": ["subscription_tier(acme_corp, starter)"],
"confidence": 0.95 }
// Correct. Sourced. Provable. ✓

Built for production from day one