What is Consult7 MCP Server?
Consult7 MCP Server is a bridge that allows language model agents to process and analyze data sets, codebases, or document repositories that are too large to fit within their usual context window. It leverages external LLMs (from OpenRouter, OpenAI, or Google) capable of handling much larger contexts, by extracting all relevant files matching user-specified patterns, sending them to a large-window model for analysis, and returning the results directly to the agent. The server is integrated as a tool under the Model Context Protocol for easy discovery and invocation.
How to Configure
Claude Code:
Add Consult7 via the command line:
```shell
claude mcp add -s user consult7 uvx -- consult7 <provider> <api-key>
```
Replace `<provider>` with `openrouter`, `google`, or `openai`, and `<api-key>` with your actual API key.
Claude Desktop:
Edit your `mcpServers` configuration:
```json
{
  "mcpServers": {
    "consult7": {
      "type": "stdio",
      "command": "uvx",
      "args": ["consult7", "openrouter", "your-api-key"]
    }
  }
}
```
Substitute the provider and API key as needed. No manual server installation is required; `uvx` downloads and runs everything in an isolated environment.
Command Line Option:
You can also test connectivity directly:
```shell
uvx consult7 <provider> <api-key> [--test]
```
The model itself is chosen at each tool invocation, not during configuration.
How to Use
- Configure Consult7 in your Claude Code or Desktop environment as described above.
- From your AI agent, use the Consult7 tool to submit queries along with:
- The directory path to analyze
- A regex pattern to select files (e.g., `".*\\.py$"` for Python files)
- Optional: a particular model (with or without `|thinking` mode)
- Consult7 will recursively gather all matching files, assemble them into a large context, send your query plus data to the selected provider's large-context LLM, and return the analysis to your agent.
- Example command-line query to test connectivity: `uvx consult7 openai <api-key> --test`
- To remove Consult7 from Claude Code: `claude mcp remove consult7 -s user`
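The collection step described above (recursively gathering files whose paths match a regex, then assembling them into one context) can be sketched in Python. This is an illustrative sketch only; `gather_files` and its exact matching behavior are assumptions, not Consult7's actual internals:

```python
import os
import re
import tempfile

def gather_files(root: str, pattern: str) -> dict[str, str]:
    """Recursively collect files under `root` whose relative paths
    match `pattern` (an illustrative stand-in for Consult7's
    collection step)."""
    rx = re.compile(pattern)
    collected = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, root)
            if rx.match(rel):
                with open(path, encoding="utf-8", errors="replace") as f:
                    collected[rel] = f.read()
    return collected

# Demo: two Python files and one text file in a temporary tree.
with tempfile.TemporaryDirectory() as root:
    os.makedirs(os.path.join(root, "pkg"))
    for rel, body in [("main.py", "print('hi')"),
                      ("pkg/util.py", "X = 1"),
                      ("notes.txt", "todo")]:
        with open(os.path.join(root, rel), "w") as f:
            f.write(body)
    files = gather_files(root, r".*\.py$")

print(sorted(files))  # ['main.py', 'pkg/util.py']
```

Only the two `.py` files are collected; `notes.txt` is filtered out by the pattern, which is why narrowing the regex is the main lever for controlling context size.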
Key Features
- Seamless bridge from AI agents to models with extremely large context windows (1M+ tokens)
- Supports multiple leading providers: OpenAI, Google AI (Gemini), OpenRouter
- Recursively collects and filters files from any directory using regex patterns
- Returns model responses directly to your agent for further workflow automation
- Can run in "thinking" or reasoning mode for deeper analysis where supported
- Easily installed and managed through Claude Code or Desktop, no manual setup needed
- Discovery and invocation fully compatible with MCP client tooling
Use Cases
- Summarize Large Codebases: "Summarize the architecture and main components of this Python project" (analyzes all .py files)
- Locate Specific Implementations: "Find the implementation of the authenticate_user method and explain how it handles password verification" (searches Python, JavaScript, and TypeScript files)
- Test Coverage Analysis: "List all the test files and identify which components lack test coverage" (search and cross-reference test-related files)
- Security Review: "Analyze authentication flow and think step by step about vulnerabilities" (asks for deep analysis with |thinking mode)
- Documentation Extraction: Extract high-level summaries, TODOs, or API docs from diverse and massive project folders
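Because Consult7 is exposed as a standard MCP tool, a client invokes it with an ordinary JSON-RPC `tools/call` request. The tool name (`consultation`) and argument names below are assumptions for illustration; the wire shape (`method`, `params.name`, `params.arguments`) follows the MCP specification:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "consultation",
    "arguments": {
      "path": "/home/user/myproject",
      "pattern": ".*\\.py$",
      "query": "Summarize the architecture and main components of this Python project",
      "model": "gemini-2.5-flash|thinking"
    }
  }
}
```

In practice your agent constructs this request for you; you only supply the path, pattern, query, and optional model.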
FAQ
Q: Which models does Consult7 support?
A: Consult7 supports major large context LLMs from OpenAI, Google (Gemini), and OpenRouter, including models with 1 million+ tokens context length. Model choice is flexible per query.
Q: Does Consult7 read and send my data to external providers?
A: Yes. Files matching your path and pattern are sent to the cloud provider behind your selected model for processing, so that provider's data-handling and retention policies apply.
Q: What if my codebase is larger than even the big model's context window?
A: Consult7 will attempt to assemble and send as much as can fit in the selected model's context. For extremely large codebases, consider narrowing patterns or splitting analysis.
Q: Is there any installation required on my machine?
A: No manual installation is needed; `uvx` downloads and runs Consult7 in an isolated environment automatically when you configure it via Claude Code or Desktop.
Q: How do I pass special modes like "thinking"?
A: Append `|thinking` to the model name in your tool invocation (e.g., `gemini-2.5-flash|thinking`). Some models also accept custom reasoning token budgets, though this is rarely required.
Q: What providers and API keys can I use?
A: You can use OpenAI, Google AI (Gemini), or OpenRouter, but you must supply your own valid API key for each.