Graphiti MCP Server

Graphiti MCP Server is a CLI-driven, multi-project knowledge graph backend for AI agents, built on a fork of getzep/graphiti. It enables rapid deployment and management of temporal knowledge graphs, project isolation, and seamless IDE/agent integration, all powered by the Model Context Protocol (MCP) and backed by a single Neo4j instance. Each project gets its own isolated MCP server and graph namespace, with flexible entity extraction, hot-reloading, and robust crash containment, making it well suited to scalable AI workflows.

Author: rawr-ai


What is Graphiti MCP Server?

Graphiti MCP Server is an open-source management layer for building, running, and isolating knowledge graph backends across multiple projects, using a unified Neo4j database. It extends the upstream Graphiti MCP server by enabling multiple project-scoped MCP servers, each with its own configuration, group namespace, entity extraction rules, and LLM model, but sharing common infrastructure. Its automated CLI generates Docker Compose and IDE configuration files, ensuring easy multi-project setup and seamless integration with popular AI development tools and editors supporting MCP.

How to Configure

  1. Clone the repository:
    git clone https://github.com/rawr-ai/mcp-graphiti.git && cd mcp-graphiti
  2. Set up environment:
    Copy .env.example to .env, then provide secure Neo4j password and OpenAI key.
    Note: The server refuses to boot with the default password unless running in explicit development mode (GRAPHITI_ENV=dev).
  3. Install the CLI:
    Install globally via pipx install . --include-deps (for regular use), or set up a local virtualenv (for contributors).
  4. Configure projects:
    Edit or create mcp-projects.yaml at the repository root. Define project entries with their root directories and any project-specific settings.
  5. (Optional) Manage IDE integration:
    The CLI auto-generates .cursor/mcp.json for each project and port mapping, ensuring your editor can auto-discover MCP endpoints.
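
A minimal mcp-projects.yaml might look like the sketch below. The exact field names here are illustrative assumptions, not the authoritative schema; consult the example file shipped in the repository for the real format.

```yaml
# Hypothetical sketch of mcp-projects.yaml -- field names are assumptions.
projects:
  my-app:
    root_dir: /path/to/my-app
  support-bot:
    root_dir: /path/to/support-bot
```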

How to Use

  1. Generate Docker Compose and MCP config:
    From the repo root, run graphiti compose to scan mcp-projects.yaml and output docker-compose.yml plus editor configs.
  2. Launch all services:
    Run graphiti up -d to start Neo4j and one MCP server per project (on ports 8000, 8001, ...).
  3. Initialize new projects:
    In the root of your project repo, run graphiti init [your-project] to scaffold config and entity extraction YAMLs.
  4. Reload or update a running project:
    After editing entity definitions or model settings, run graphiti reload mcp-[project] to apply changes without affecting other projects.
  5. Access endpoints (substitute each project's assigned port for 800X):
    • MCP status: http://localhost:800X/status
    • SSE endpoint for agents: http://localhost:800X/sse
    • Neo4j browser: http://localhost:7474
  6. Maintain project isolation:
    Each server is namespaced via group_id, keeping data and LLM contexts separate and resilient to crashes.
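
The port convention above (one MCP server per project, counting up from 8000) can be sketched as a small helper. This is an illustration of the convention only, not code from the graphiti CLI itself:

```python
def assign_endpoints(projects, base_port=8000):
    """Map each project to its MCP server endpoints, following the
    sequential-port convention (8000, 8001, ...) described above."""
    endpoints = {}
    for i, name in enumerate(projects):
        port = base_port + i
        endpoints[name] = {
            "status": f"http://localhost:{port}/status",
            "sse": f"http://localhost:{port}/sse",
        }
    return endpoints

eps = assign_endpoints(["my-app", "support-bot"])
print(eps["support-bot"]["sse"])  # http://localhost:8001/sse
```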

Key Features

  • Multi-project support: Launch and isolate several MCP servers (one per project) sharing a single Neo4j instance for cost and maintenance efficiency.
  • Temporal knowledge graphs: Automatically builds and versions entity/relation graphs over time, letting agents access historical, versioned data and context.
  • Automated CLI workflows: Scans project configs, generates all compose/IDE configs, and supports hot-reloading with minimal friction.
  • Crash and dependency isolation: Each project gets its own container and group_id—rogue prompts or misconfigurations are contained.
  • Hot-swappable extraction: Reload entity YAMLs or LLM models per project without affecting others or requiring full stack restarts.
  • Editor & agent auto-discovery: MCP endpoints are auto-added to .cursor/mcp.json so tools like VS Code, Cursor, LangGraph, Autogen, or Claude Desktop can immediately access project tools.
  • Production-grade safety: Strict password checks for Neo4j; destructive actions require explicit, environment-based confirmation.

Use Cases

  • Enterprise multi-agent systems: Safely support multiple teams or AI agents collaborating on isolated but co-hosted knowledge graphs (e.g., product knowledge, support bots, compliance audits).
  • LLM-powered research tools: Enable researchers to continuously ingest literature or resource dumps into versioned, queryable knowledge graphs—per topic or grant.
  • Pluggable IDE assistants: Provide always-synced, context-aware project intelligence to AI coding assistants, without risking data leaks or project collisions.
  • Experiment tracking: Maintain separate, timestamped graphs for each ML/RAG experiment, with roll-back and auditing for every change.
  • Dev/test parallelization: Safely test new entity extraction models and updated logic in parallel, without downtime or risk to production knowledge graphs.

FAQ

Q: Can I run only one MCP server (single-project)?
A: Yes. Delete all other projects from mcp-projects.yaml or set MCP_SINGLE_SERVER=true in your environment, then re-run graphiti compose. All requests will go through a single container and port.

Q: How is project isolation enforced—ports or something else?
A: Both. Each MCP server runs on its unique port, but data-level isolation is enforced by a required group_id parameter in every read/write, so queries and graphs never cross boundaries.
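
Conceptually, the required group_id behaves like a namespace key on a shared store: every read and write is scoped to one group, so one project's data is invisible to another's queries. A toy sketch of that idea (not the actual server API):

```python
class NamespacedGraphStore:
    """Toy model of group_id isolation: every operation is scoped to
    one namespace, so projects cannot read each other's data."""

    def __init__(self):
        self._graphs = {}  # group_id -> list of stored facts

    def add_fact(self, group_id, fact):
        self._graphs.setdefault(group_id, []).append(fact)

    def query(self, group_id, predicate):
        # Only the caller's own namespace is ever searched.
        return [f for f in self._graphs.get(group_id, []) if predicate(f)]

store = NamespacedGraphStore()
store.add_fact("project-a", "alice works on billing")
store.add_fact("project-b", "bob works on search")

# project-b's query never sees project-a's facts
print(store.query("project-b", lambda f: "works" in f))  # ['bob works on search']
```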

Q: Can I add a reverse proxy or API gateway in front?
A: Absolutely. You can route all agent traffic through a single endpoint, injecting group_id as a claim or header, and have the root server fan out as needed.

Q: What happens if I need to change an entity extraction rule or swap LLM models?
A: Just update the relevant YAML or config in your project folder, then run graphiti reload mcp-[project]; no restart or downtime for other projects is needed.

Q: How do I completely wipe the Neo4j graph?
A: Set NEO4J_DESTROY_ENTIRE_GRAPH=true in your .env, then run graphiti up; this will irreversibly delete ALL project data. Use only when you are certain.
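
For reference, the environment flags mentioned throughout this document collect into a .env fragment like the one below. Values are illustrative, and the OPENAI_API_KEY variable name is an assumption (the docs only say an OpenAI key is required); the destroy flag should stay unset in normal operation.

```dotenv
# Example .env fragment -- values are illustrative.
NEO4J_PASSWORD=choose-a-strong-password  # default password is rejected outside dev mode
OPENAI_API_KEY=sk-...                    # your OpenAI key
GRAPHITI_ENV=dev                         # permits the default Neo4j password, dev only
MCP_SINGLE_SERVER=true                   # optional: collapse to a single MCP server
NEO4J_DESTROY_ENTIRE_GRAPH=true          # DANGER: wipes ALL project data on next graphiti up
```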