# Adaptive Graph of Thoughts

An intelligent scientific reasoning framework that uses graph structures and Neo4j to perform advanced reasoning via the Model Context Protocol (MCP).

**Next-Generation AI Reasoning Framework for Scientific Research**

*Leveraging graph structures to transform how AI systems approach scientific reasoning*
## Overview
Adaptive Graph of Thoughts (AGoT) is a high-performance MCP server that implements the Advanced Scientific Reasoning Graph-of-Thoughts (ASR-GoT) framework. It uses a Neo4j graph database as a dynamic knowledge store and exposes reasoning capabilities through the Model Context Protocol (MCP), enabling seamless integration with AI assistants like Claude Desktop.
### Key Highlights

| Feature | Description |
|---|---|
| Graph-Based Reasoning | Multi-stage pipeline with 8 specialized reasoning stages |
| Dynamic Confidence Scoring | Multi-dimensional evaluation with uncertainty quantification |
| Evidence Integration | Real-time connection to PubMed, Google Scholar & Exa Search |
| High Performance | Async FastAPI with Neo4j graph operations |
| MCP Protocol | Native Claude Desktop & VS Code integration |
| Cloud-Ready | Full Docker + Kubernetes (Helm) support |
## System Architecture
```mermaid
%%{init: {'theme': 'base', 'themeVariables': {'primaryColor': '#4A90D9', 'primaryTextColor': '#fff', 'primaryBorderColor': '#2C5F8A', 'lineColor': '#666', 'secondaryColor': '#52B788', 'tertiaryColor': '#F8F9FA'}}}%%
graph TB
    subgraph Clients["Client Layer"]
        CD["Claude Desktop"]
        VS["VS Code / Cursor"]
        CC["Custom MCP Clients"]
    end
    subgraph Gateway["API Gateway Layer"]
        MCP_EP["MCP Endpoint\n/mcp"]
        NLQ_EP["NLQ Endpoint\n/nlq"]
        GE_EP["Graph Explorer\n/graph"]
        HE["Health Check\n/health"]
    end
    subgraph Core["Core Application Layer"]
        direction TB
        GTP["GoT Processor\nOrchestrator"]
        subgraph Pipeline["ASR-GoT 8-Stage Pipeline"]
            S1["1. Init &\nContext Setup"]
            S2["2. Query\nDecomposition"]
            S3["3. Hypothesis\nGeneration"]
            S4["4. Evidence\nIntegration"]
            S5["5. Pruning &\nMerging"]
            S6["6. Subgraph\nExtraction"]
            S7["7. Synthesis &\nComposition"]
            S8["8. Reflection &\nAudit"]
            S1 --> S2 --> S3 --> S4 --> S5 --> S6 --> S7 --> S8
        end
        GTP --> Pipeline
    end
    subgraph Services["Service Layer"]
        LLM["LLM Service\nOpenAI / Claude"]
        EDB["Evidence DB\nPubMed · Scholar · Exa"]
    end
    subgraph Storage["Storage Layer"]
        NEO4J["Neo4j\nGraph Database"]
        CONFIG["Config\n(YAML + ENV)"]
    end
    Clients -->|"MCP JSON-RPC\nBearer Auth"| Gateway
    MCP_EP --> GTP
    NLQ_EP --> LLM
    GE_EP --> NEO4J
    GTP --> Services
    GTP --> NEO4J
    LLM --> EDB
    style Clients fill:#E3F2FD,stroke:#1565C0
    style Gateway fill:#F3E5F5,stroke:#6A1B9A
    style Core fill:#E8F5E9,stroke:#1B5E20
    style Services fill:#FFF8E1,stroke:#F57F17
    style Storage fill:#FCE4EC,stroke:#880E4F
```
## ASR-GoT Reasoning Pipeline
The 8-stage reasoning pipeline transforms a raw question into a comprehensive, evidence-backed answer stored in the knowledge graph:
```mermaid
%%{init: {'theme': 'base', 'themeVariables': {'primaryColor': '#7B68EE', 'edgeLabelBackground': '#fff'}}}%%
flowchart LR
    Q([Scientific\nQuestion]) --> S1
    subgraph S1["Stage 1: Initialization"]
        I1["Set context\n& parameters"]
        I2["Create root\ngraph node"]
        I1 --> I2
    end
    subgraph S2["Stage 2: Decomposition"]
        D1["Identify\nsub-questions"]
        D2["Map knowledge\ndomains"]
        D1 --> D2
    end
    subgraph S3["Stage 3: Hypothesis"]
        H1["Generate\nhypotheses"]
        H2["Score initial\nconfidence"]
        H1 --> H2
    end
    subgraph S4["Stage 4: Evidence"]
        E1["Query PubMed\nScholar · Exa"]
        E2["Integrate\nevidence nodes"]
        E1 --> E2
    end
    subgraph S5["Stage 5: Pruning"]
        P1["Remove weak\nhypotheses"]
        P2["Merge related\nnodes"]
        P1 --> P2
    end
    subgraph S6["Stage 6: Subgraph"]
        SG1["Extract key\nsubgraphs"]
        SG2["Score relevance\n& centrality"]
        SG1 --> SG2
    end
    subgraph S7["Stage 7: Synthesis"]
        C1["Compose final\nnarrative"]
        C2["Build\nconclusions"]
        C1 --> C2
    end
    subgraph S8["Stage 8: Reflection"]
        R1["Audit\nconsistency"]
        R2["Return final\nresult"]
        R1 --> R2
    end
    S1 --> S2 --> S3 --> S4 --> S5 --> S6 --> S7 --> S8
    S8 --> A([Reasoned\nAnswer])
    style Q fill:#FFD700,stroke:#DAA520,color:#000
    style A fill:#90EE90,stroke:#228B22,color:#000
```
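The strictly sequential flow above can be sketched as a simple orchestrator loop. This is an illustration only: the stage names follow the diagram, the state dict is a placeholder, and the real `GoTProcessor` does far more work per stage.

```python
# Illustrative sketch of the 8-stage sequential pipeline: each stage runs in
# order and records its completion in a shared state dict. Placeholder logic,
# not the project's actual GoTProcessor.

STAGES = ["initialization", "decomposition", "hypothesis", "evidence",
          "pruning", "subgraph", "synthesis", "reflection"]

def run_pipeline(question: str) -> dict:
    state = {"question": question, "completed": []}
    for stage in STAGES:  # stages execute strictly in sequence
        state["completed"].append(stage)
    return state

result = run_pipeline("What drives microbiome-cancer interactions?")
print(result["completed"][-1])  # -> reflection
```

In the real server, each iteration reads from and writes to the Neo4j graph, so later stages (e.g., pruning) operate on nodes created by earlier ones (e.g., hypothesis generation).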
## Knowledge Graph Connectome

The Neo4j knowledge graph captures the reasoning structure as a rich connectome: nodes represent concepts, hypotheses, and evidence, while edges represent semantic and logical relationships.
```mermaid
%%{init: {'theme': 'base', 'themeVariables': {'primaryColor': '#6C63FF', 'primaryTextColor': '#fff', 'edgeLabelBackground': '#f0f0ff'}}}%%
graph TD
    RootQuery["Root Query\n(Session Node)"]
    subgraph Decomp["Decomposition Layer"]
        D1["Sub-question A\n[domain: biology]"]
        D2["Sub-question B\n[domain: chemistry]"]
        D3["Sub-question C\n[domain: physics]"]
    end
    subgraph Hypo["Hypothesis Layer"]
        H1["Hypothesis 1\nconf: 0.85"]
        H2["Hypothesis 2\nconf: 0.72"]
        H3["Hypothesis 3\nconf: 0.61"]
        H4["Hypothesis 4\nconf: 0.90"]
    end
    subgraph Evid["Evidence Layer"]
        E1["PubMed Paper\nPMID: 38492"]
        E2["Scholar Article\nDOI: 10.1038/..."]
        E3["Exa Result\nexpert consensus"]
        E4["Statistical\nMeta-analysis"]
    end
    subgraph Synth["Synthesis Layer"]
        C1["Merged\nConclusion A"]
        C2["Merged\nConclusion B"]
        FINAL["Final Answer\n[confidence: 0.88]"]
    end
    RootQuery -->|"DECOMPOSES_TO"| D1
    RootQuery -->|"DECOMPOSES_TO"| D2
    RootQuery -->|"DECOMPOSES_TO"| D3
    D1 -->|"GENERATES"| H1
    D1 -->|"GENERATES"| H2
    D2 -->|"GENERATES"| H3
    D3 -->|"GENERATES"| H4
    H1 -->|"SUPPORTED_BY"| E1
    H2 -->|"SUPPORTED_BY"| E2
    H3 -->|"CONTRADICTED_BY"| E3
    H4 -->|"SUPPORTED_BY"| E4
    H1 -->|"MERGES_WITH"| C1
    H4 -->|"MERGES_WITH"| C1
    H2 -->|"MERGES_WITH"| C2
    C1 -->|"SYNTHESIZES_TO"| FINAL
    C2 -->|"SYNTHESIZES_TO"| FINAL
    style RootQuery fill:#4A90D9,color:#fff,stroke:#2C5F8A
    style FINAL fill:#27AE60,color:#fff,stroke:#1E8449
    style H3 fill:#E74C3C,color:#fff,stroke:#C0392B
```
## Request Flow
```mermaid
%%{init: {'theme': 'base'}}%%
sequenceDiagram
    actor User as Claude / MCP Client
    participant API as FastAPI Server
    participant Auth as Auth Middleware
    participant GTP as GoT Processor
    participant NEO as Neo4j DB
    participant LLM as LLM Service
    User->>API: POST /mcp {"method": "asr_got.query"}
    API->>Auth: Verify Bearer Token
    Auth-->>API: Authorized
    API->>GTP: Process query
    GTP->>NEO: Create session + root node
    loop 8 Pipeline Stages
        GTP->>LLM: Generate hypotheses / summaries
        LLM-->>GTP: LLM response
        GTP->>NEO: Write nodes & relationships
        NEO-->>GTP: Confirmed
    end
    GTP->>NEO: Extract final subgraph
    NEO-->>GTP: Final answer graph
    GTP-->>API: Structured result
    API-->>User: JSON-RPC response\n{result, confidence, graph_state}
```
## Documentation

Full documentation, including the API reference, configuration guide, and contribution guidelines, is available on the Adaptive Graph of Thoughts Documentation Site.
## Project Structure

```text
Adaptive-Graph-of-Thoughts-MCP-server/
├── .github/                      # CI/CD workflows (CodeQL, Dependabot)
├── agt_setup/                    # Interactive setup wizard CLI
├── config/                       # settings.yaml configuration
├── docs_src/                     # MkDocs documentation source
├── helm/                         # Kubernetes Helm chart
├── src/
│   └── adaptive_graph_of_thoughts/
│       ├── api/                  # FastAPI routes & schemas
│       ├── application/          # GoTProcessor orchestrator
│       ├── domain/               # 8-stage pipeline & models
│       ├── infrastructure/      # Neo4j utilities
│       └── services/             # LLM & external API clients
├── tests/                        # Comprehensive test suite
├── Dockerfile
├── docker-compose.yml
├── pyproject.toml
└── README.md
```
## Quick Start

```bash
git clone https://github.com/SaptaDey/Adaptive-Graph-of-Thoughts-MCP-server.git
cd Adaptive-Graph-of-Thoughts-MCP-server
poetry install
poetry run python -m agt_setup   # Interactive credential setup wizard
poetry run uvicorn adaptive_graph_of_thoughts.main:app --reload
```

Visit `http://localhost:8000/docs` for the interactive API documentation.
## Getting Started

### Deployment Prerequisites

Before running Adaptive Graph of Thoughts (locally, or via Docker if you are not using the provided `docker-compose.prod.yml`, which includes Neo4j), ensure you have:

- **A running Neo4j instance:** Adaptive Graph of Thoughts requires a connection to a Neo4j graph database.
- **APOC library:** Crucially, the Neo4j instance must have the APOC (Awesome Procedures On Cypher) library installed. Several Cypher queries within the application's reasoning stages use APOC procedures (e.g., `apoc.create.addLabels`, `apoc.merge.node`). Without APOC, the application will not function correctly. Installation instructions are available on the official APOC website.
- **Configuration:** Ensure that your `config/settings.yaml` (or the corresponding environment variables) correctly points to your Neo4j instance URI, username, and password.
- **Indexing:** For optimal performance, ensure appropriate Neo4j indexes are created. Run `python scripts/run_cypher_migrations.py` to apply the provided Cypher migrations automatically. See the Neo4j Indexing Strategy documentation for details.

> **Note:** The provided `docker-compose.yml` (for development) and `docker-compose.prod.yml` (for production) already include a Neo4j service with the APOC library pre-configured, satisfying this requirement when using Docker Compose.
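As a rough illustration of what the index migrations do, the sketch below builds idempotent `CREATE INDEX` Cypher statements. The label and property names here are assumptions for demonstration, not the project's actual schema; use `scripts/run_cypher_migrations.py` for the real migrations.

```python
# Sketch: build idempotent CREATE INDEX Cypher statements of the kind the
# migrations apply. Labels/properties below are illustrative assumptions.

def index_statement(label: str, prop: str) -> str:
    """Build an idempotent CREATE INDEX statement for one label/property."""
    name = f"{label.lower()}_{prop}"
    return f"CREATE INDEX {name} IF NOT EXISTS FOR (n:{label}) ON (n.{prop})"

statements = [
    index_statement("Session", "session_id"),
    index_statement("Hypothesis", "confidence"),
]
# Each statement could then be executed with the official neo4j driver:
#   with GraphDatabase.driver(uri, auth=(user, pwd)) as drv, drv.session() as s:
#       for stmt in statements:
#           s.run(stmt)
print(statements[0])
```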
### Prerequisites

- **Python 3.11+** (as specified in `pyproject.toml`; the Docker image uses Python 3.11.x, 3.12.x, or 3.13.x)
- **Poetry**: for dependency management
- **Docker and Docker Compose**: for containerized deployment
### Installation and Setup (Local Development)

1. **Clone the repository:**

   ```bash
   git clone https://github.com/SaptaDey/Adaptive-Graph-of-Thoughts-MCP-server.git
   cd Adaptive-Graph-of-Thoughts-MCP-server
   ```

2. **Install dependencies using Poetry:**

   ```bash
   poetry install
   ```

   This creates a virtual environment and installs all necessary packages specified in `pyproject.toml`.

3. **Activate the virtual environment:**

   ```bash
   poetry shell
   ```

4. **Configure the application:**

   ```bash
   # Copy the example configuration
   cp config/settings.example.yaml config/settings.yaml
   # Edit the configuration as needed
   vim config/settings.yaml
   ```

5. **Set up environment variables (optional):**

   ```bash
   # Create a .env file for sensitive configuration
   echo "LOG_LEVEL=DEBUG" > .env
   echo "API_HOST=0.0.0.0" >> .env
   echo "API_PORT=8000" >> .env
   ```
### Secret Management

In production environments, set the `SECRETS_PROVIDER` environment variable to `aws`, `gcp`, or `vault` to fetch sensitive values from a supported secrets manager. Optionally provide `<VAR>_SECRET_NAME` variables (for example, `OPENAI_API_KEY_SECRET_NAME`) to control the name of each secret. When a secrets provider is configured, values for `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, and `NEO4J_PASSWORD` are loaded automatically at startup.
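The naming convention described above can be sketched as follows. This is an illustration of the convention only, not the project's actual secret loader: for a variable such as `OPENAI_API_KEY`, an optional `OPENAI_API_KEY_SECRET_NAME` override chooses which secret name is fetched from the external manager.

```python
# Sketch of the <VAR>_SECRET_NAME convention: the override, if present,
# names the secret in the external manager; otherwise the variable's own
# name is used. Illustrative only, not the project's implementation.

def resolve_secret_name(var: str, env: dict) -> str:
    """Return the secret name to fetch for `var` from the secrets provider."""
    return env.get(f"{var}_SECRET_NAME", var)

env = {
    "SECRETS_PROVIDER": "aws",
    "OPENAI_API_KEY_SECRET_NAME": "prod/openai-key",  # hypothetical secret name
}
print(resolve_secret_name("OPENAI_API_KEY", env))  # -> prod/openai-key
print(resolve_secret_name("NEO4J_PASSWORD", env))  # -> NEO4J_PASSWORD
```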
6. **Run the development server:**

   ```bash
   python src/adaptive_graph_of_thoughts/main.py
   ```

   Alternatively, for more control:

   ```bash
   uvicorn adaptive_graph_of_thoughts.main:app --reload --host 0.0.0.0 --port 8000
   ```

   The API will be available at `http://localhost:8000`.
## Setup Wizard

An interactive wizard is available to streamline initial configuration:

```bash
poetry run python -m agt_setup
```

Then visit `http://localhost:8000/setup` to complete the web-based steps.

*(Setup wizard demo GIF will appear here in the full documentation.)*
### Docker Deployment

```mermaid
graph TB
    subgraph "Development Environment"
        A[Developer] --> B[Docker Compose]
    end
    subgraph "Container Orchestration"
        B --> C[Adaptive Graph of Thoughts Container]
        B --> D[Monitoring Container]
        B --> E[Database Container]
    end
    subgraph "Adaptive Graph of Thoughts Application"
        C --> F[FastAPI Server]
        F --> G[ASR-GoT Engine]
        F --> H[MCP Protocol]
    end
    subgraph "External Integrations"
        H --> I[Claude Desktop]
        H --> J[Other AI Clients]
    end
    style A fill:#e1f5fe
    style B fill:#f3e5f5
    style C fill:#e8f5e8
    style F fill:#fff3e0
    style G fill:#ffebee
    style H fill:#f1f8e9
```
1. **Quick start with Docker Compose:**

   ```bash
   # Build and run all services
   docker-compose up --build
   # Detached mode (background)
   docker-compose up --build -d
   # View logs
   docker-compose logs -f adaptive-graph-of-thoughts
   ```

2. **Individual Docker container:**

   ```bash
   # Build the image
   docker build -t adaptive-graph-of-thoughts:latest .
   # Run the container
   docker run -p 8000:8000 -v $(pwd)/config:/app/config adaptive-graph-of-thoughts:latest
   ```

3. **Production deployment:**

   ```bash
   # Use the production compose file
   docker-compose -f docker-compose.prod.yml up --build -d
   ```
### Kubernetes Deployment (Helm)

A minimal Helm chart is provided under `helm/agot-server` for running Adaptive Graph of Thoughts on a Kubernetes cluster:

```bash
helm install agot helm/agot-server
```

Customize values in `helm/agot-server/values.yaml` to set the image repository, resource limits, and other options.
### Notes on Specific Deployment Platforms

- **Smithery.ai**: Deploy using the included `smithery.yaml`.
  - Connect your GitHub repository on Smithery and click Deploy.
  - The container listens on the `PORT` environment variable (default `8000`).
  - Health checks rely on the `/health` endpoint.
  - The `Dockerfile` and `docker-compose.prod.yml` illustrate the container setup.
- **Access the services:**
  - API documentation: `http://localhost:8000/docs`
  - Health check: `http://localhost:8000/health`
  - MCP endpoint: `http://localhost:8000/mcp`
## MCP Client Integration

### Supported MCP Clients

Adaptive Graph of Thoughts supports integration with various MCP clients:

- **Claude Desktop**: full STDIO and HTTP support
- **VS Code**: via MCP extensions
- **Custom MCP clients**: generic configuration available

### Quick Client Setup

Claude Desktop / VS Code settings:
```json
{
  "mcpServers": {
    "adaptive-graph-of-thoughts": {
      "command": "python",
      "args": ["-m", "adaptive_graph_of_thoughts.main"],
      "cwd": "/path/to/Adaptive-Graph-of-Thoughts-MCP-server",
      "env": {
        "NEO4J_URI": "bolt://localhost:7687",
        "NEO4J_USER": "neo4j",
        "NEO4J_PASSWORD": "your_password",
        "MCP_TRANSPORT_TYPE": "stdio"
      }
    }
  }
}
```
### Available MCP Tools

| Tool | Description |
|---|---|
| `scientific_reasoning_query` | Advanced scientific reasoning with graph analysis |
| `analyze_research_hypothesis` | Hypothesis evaluation with confidence scoring |
| `explore_scientific_relationships` | Concept relationship mapping |
| `validate_scientific_claims` | Evidence-based claim validation |
## API Endpoints

The primary API endpoints exposed by Adaptive Graph of Thoughts are:

- **MCP Protocol Endpoint:** `POST /mcp`
  - Used for communication with MCP clients such as Claude Desktop.
  - Example request for the `asr_got.query` method:

    ```json
    {
      "jsonrpc": "2.0",
      "method": "asr_got.query",
      "params": {
        "query": "Analyze the relationship between microbiome diversity and cancer progression.",
        "parameters": {
          "include_reasoning_trace": true,
          "include_graph_state": false
        }
      },
      "id": "123"
    }
    ```

  - Other supported MCP methods include `initialize` and `shutdown`.
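A request like the one above can be assembled and sent from Python with only the standard library. This is a minimal client sketch under stated assumptions: the server URL and bearer token are placeholders, and `send_query` is a hypothetical helper, not part of the project's API.

```python
# Sketch of a minimal MCP client for the /mcp endpoint, mirroring the example
# JSON-RPC request above. URL and token are placeholders.
import json
import urllib.request

def build_query(query: str, request_id: str = "123") -> dict:
    """Build a JSON-RPC 2.0 payload for the asr_got.query method."""
    return {
        "jsonrpc": "2.0",
        "method": "asr_got.query",
        "params": {
            "query": query,
            "parameters": {
                "include_reasoning_trace": True,
                "include_graph_state": False,
            },
        },
        "id": request_id,
    }

def send_query(payload: dict, url: str = "http://localhost:8000/mcp",
               token: str = "<your-token>") -> dict:
    """POST the payload; requires a running server (not executed here)."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},  # placeholder token
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_query("Analyze the relationship between microbiome diversity "
                      "and cancer progression.")
print(payload["method"])  # -> asr_got.query
```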
- **Health Check Endpoint:** `GET /health`
  - Provides a simple health status of the application.
  - Example response:

    ```json
    { "status": "healthy", "version": "0.1.0" }
    ```

The advanced API endpoints previously listed (e.g., `/api/v1/graph/query`) are not implemented in the current version and are reserved for potential future development.
### Session Handling (`session_id`)

Currently, the `session_id` parameter available in API requests (e.g., for `asr_got.query`) and present in responses serves primarily to identify and track a single, complete query-response cycle. It is also used to correlate progress notifications (such as `got/queryProgress`) with the originating query.

While the system generates and uses `session_id`s, Adaptive Graph of Thoughts does not currently support true multi-turn conversational continuity, where the detailed graph state or reasoning context from a previous query is automatically loaded and reused for a follow-up query with the same `session_id`. Each query is processed independently at this time.
#### Future Enhancement: Persistent Sessions

A potential future enhancement for Adaptive Graph of Thoughts is the implementation of persistent sessions. This would enable more interactive and evolving reasoning processes by allowing users to:

- **Persist state:** store the generated graph state and relevant reasoning context from a query, associated with its `session_id`, likely within the Neo4j database.
- **Reload state:** when a new query is submitted with an existing `session_id`, the system could reload this saved state as the starting point for further processing.
- **Refine and extend:** allow the new query to interact with the loaded graph, for example by refining previous hypotheses, adding new evidence to existing structures, or exploring alternative reasoning paths based on the established context.
Implementing persistent sessions would involve developing robust strategies for:
- Efficiently storing and retrieving session-specific graph data in Neo4j.
- Managing the lifecycle (e.g., creation, update, expiration) of session data.
- Designing sophisticated logic for how new queries merge with, modify, or extend pre-existing session contexts and graphs.
This is a significant feature that could greatly enhance the interactive capabilities of Adaptive Graph of Thoughts. Contributions from the community in designing and implementing persistent session functionality are welcome.
#### Future Enhancement: Asynchronous and Parallel Stage Execution

Currently, the 8 stages of the Adaptive Graph of Thoughts reasoning pipeline are executed sequentially. For complex queries, or to further optimize performance, exploring asynchronous or parallel execution for parts of the pipeline is a potential future enhancement.

**Potential areas for parallelism:**

- **Hypothesis generation:** the `HypothesisStage` generates hypotheses for each dimension identified by the `DecompositionStage`. Generating hypotheses for different, independent dimensions could potentially be parallelized; for instance, if three dimensions are decomposed, three parallel tasks could each generate hypotheses for one dimension.
- **Evidence integration (partial):** within the `EvidenceStage`, if multiple hypotheses are selected for evaluation, the "plan execution" phase (simulated evidence gathering) for the different hypotheses might be performed concurrently.
**Challenges and considerations:**

Implementing parallel stage execution would introduce complexities that need careful management:

- **Data consistency:** concurrent operations, especially writes to the Neo4j database (e.g., creating multiple hypothesis or evidence nodes simultaneously), must be handled carefully to ensure data integrity and avoid race conditions. Unique ID generation schemes would need to be robust under parallel execution.
- **Transaction management:** Neo4j transactions for concurrent writes would need to be managed appropriately.
- **Dependency management:** stages (or parts of stages) that truly depend on the output of others must remain correctly sequenced.
- **Resource utilization:** parallel execution could increase resource demands (CPU, memory, database connections).
- **Complexity:** the overall control flow of the `GoTProcessor` would become more complex.
While the current sequential execution ensures a clear and manageable data flow, targeted parallelism in areas like hypothesis generation for independent dimensions could offer performance benefits for future versions of Adaptive Graph of Thoughts. This remains an open area for research and development.
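The per-dimension parallelism described above can be sketched with `asyncio.gather`. This is an illustration under stated assumptions, not the project's implementation: the stage functions are stand-ins for async LLM calls, and the stages themselves still run in sequence.

```python
# Illustrative sketch: parallelize hypothesis generation across independent
# decomposition dimensions with asyncio.gather. Stand-in logic only.
import asyncio

async def generate_hypotheses(dimension: str) -> list:
    """Stand-in for a per-dimension async LLM hypothesis-generation call."""
    await asyncio.sleep(0)  # placeholder for real async I/O
    return [f"{dimension}: hypothesis 1", f"{dimension}: hypothesis 2"]

async def hypothesis_stage(dimensions: list) -> list:
    # One concurrent task per independent dimension; gather preserves order.
    results = await asyncio.gather(*(generate_hypotheses(d) for d in dimensions))
    return [h for per_dim in results for h in per_dim]

hypotheses = asyncio.run(hypothesis_stage(["biology", "chemistry", "physics"]))
print(len(hypotheses))  # -> 6
```

Note that in the real system each task would also write nodes to Neo4j, which is where the data-consistency and transaction-management concerns above come into play.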
## Testing & Quality Assurance

### Development Commands

Continuous integration pipelines on GitHub Actions run tests, CodeQL analysis, and Microsoft Defender for DevOps security scans.

```bash
# Run the full test suite with coverage using Poetry
poetry run pytest --cov=src --cov-report=html --cov-report=term

# Or use the Makefile for the default test run
make test

# Run specific test categories (using Poetry)
poetry run pytest tests/unit/stages/   # Stage-specific tests
poetry run pytest tests/integration/   # Integration tests
poetry run pytest -k "test_confidence" # Tests matching a pattern

# Type checking and linting (also available via Makefile: make lint, make check-types)
poetry run mypy src/ --strict          # Strict type checking
poetry run ruff check . --fix          # Auto-fix linting issues
poetry run ruff format .               # Format code

# Pre-commit hooks (recommended)
poetry run pre-commit install          # Install hooks
poetry run pre-commit run --all-files  # Run all hooks (runs Ruff and MyPy)

# See the Makefile for other useful targets such as 'make all-checks'.
```
## Dashboard Tour

*(Dashboard demo GIF coming soon.)*
## IDE Integration

Use the `vscode-agot` extension to query the server from VS Code. Run the extension and execute **AGoT: Ask Graph…** from the Command Palette.
## Troubleshooting

If the server fails to start or setup reports errors, ensure your Neo4j instance is running and the credentials in `.env` are correct. Consult the console output for details.
## Roadmap and Future Directions

We have an exciting vision for the future of Adaptive Graph of Thoughts. Our roadmap includes enhanced graph visualization, integration with more data sources such as arXiv, and further refinements to the core reasoning engine.

For more details on planned features and long-term goals, see our Roadmap (also available on the documentation site).
## Contributing

We welcome contributions! Please see our Contributing Guidelines (also available on the documentation site) for details on how to get started, our branching strategy, code style, and more.
## License

This project is licensed under the Apache License 2.0. See the LICENSE file for details.
## Security

Please see our Security Policy for reporting vulnerabilities and details on supported versions.
## Acknowledgments

- The NetworkX community for graph analysis capabilities
- The FastAPI team for the excellent web framework
- Pydantic for robust data validation
- The scientific research community for inspiration and feedback