Predict anything with Chronulus AI forecasting and prediction agents.
Claude for Desktop is currently available on macOS and Windows.
Install Claude for Desktop here.
Follow the general instructions here to configure the Claude desktop client.
You can find your Claude config at one of the following locations:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json
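If the file does not exist yet, create it; otherwise you can open it in any text editor. For example, assuming the default locations above:

On macOS: open -e "$HOME/Library/Application Support/Claude/claude_desktop_config.json"
On Windows: notepad %APPDATA%\Claude\claude_desktop_config.json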
Then choose whichever of the following methods best suits your needs and add the corresponding entry to your claude_desktop_config.json.
(Option 1) Install release from PyPI
pip install chronulus-mcp
(Option 2) Install from GitHub
git clone https://github.com/ChronulusAI/chronulus-mcp.git
cd chronulus-mcp
pip install .
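After installing, you can optionally confirm that the package is importable by the same Python interpreter Claude will launch (see the config below). A minimal check:

python -c "import chronulus_mcp"

If this command exits without an ImportError, the server module is available.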
{
"mcpServers": {
"chronulus-agents": {
"command": "python",
"args": ["-m", "chronulus_mcp"],
"env": {
"CHRONULUS_API_KEY": "<YOUR_CHRONULUS_API_KEY>"
}
}
}
}
Note: if you get an error like "MCP chronulus-agents: spawn python ENOENT", then you most likely need to provide the absolute path to python. For example, use /Library/Frameworks/Python.framework/Versions/3.11/bin/python3 instead of just python.
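To find the absolute path of your interpreter, you can run, for example:

which python3 (macOS/Linux)
where python (Windows)

and then use that path as the "command" value in the config above.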
Here we will build a Docker image called 'chronulus-mcp' that we can reuse in our Claude config.
git clone https://github.com/ChronulusAI/chronulus-mcp.git
cd chronulus-mcp
docker build . -t 'chronulus-mcp'
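Optionally, you can verify that the image starts before adding it to your config. The server communicates over stdin/stdout, so it should simply wait for input; exit with Ctrl+C. For example:

docker run -i --rm -e CHRONULUS_API_KEY=<YOUR_CHRONULUS_API_KEY> chronulus-mcp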
In your Claude config, be sure that the final argument matches the name you gave the Docker image in the build command.
{
"mcpServers": {
"chronulus-agents": {
"command": "docker",
"args": ["run", "-i", "--rm", "-e", "CHRONULUS_API_KEY", "chronulus-mcp"],
"env": {
"CHRONULUS_API_KEY": "<YOUR_CHRONULUS_API_KEY>"
}
}
}
}
uvx will pull the latest version of chronulus-mcp from the PyPI registry, install it, and then run it.
{
"mcpServers": {
"chronulus-agents": {
"command": "uvx",
"args": ["chronulus-mcp"],
"env": {
"CHRONULUS_API_KEY": "<YOUR_CHRONULUS_API_KEY>"
}
}
}
}
Note: if you get an error like "MCP chronulus-agents: spawn uvx ENOENT", then you most likely need to either install uv or provide the absolute path to uvx. For example, use /Users/username/.local/bin/uvx instead of just uvx.
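If uv is not installed, one common way to get it is the standalone installer (check the uv documentation for the current command), after which you can locate uvx and use that path as the "command" value. For example:

curl -LsSf https://astral.sh/uv/install.sh | sh
which uvx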
In our demo, we use third-party servers like fetch and filesystem.
For details on installing and configuring a third-party server, please refer to the documentation provided by the server maintainer.
Below is an example of how to configure filesystem and fetch alongside Chronulus in your claude_desktop_config.json:
{
"mcpServers": {
"chronulus-agents": {
"command": "uvx",
"args": ["chronulus-mcp"],
"env": {
"CHRONULUS_API_KEY": "<YOUR_CHRONULUS_API_KEY>"
}
},
"filesystem": {
"command": "npx",
"args": [
"-y",
"@modelcontextprotocol/server-filesystem",
"/path/to/AIWorkspace"
]
},
"fetch": {
"command": "uvx",
"args": ["mcp-server-fetch"]
}
}
}
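Before restarting Claude for Desktop, make sure the workspace path you passed to the filesystem server exists and that npx and uvx are available on your PATH. For example:

mkdir -p /path/to/AIWorkspace
npx --version
uvx --version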
To streamline your experience using Claude across multiple sets of tools, it is best to add your preferences under Claude Settings.
You can update your Claude preferences in a couple of ways, for example from Claude for Desktop via Settings -> General -> Claude Settings -> Profile (tab).
Preferences are shared across both Claude for Desktop and Claude.ai (the web interface), so your instructions need to work across both experiences.
Below are the preferences we used to achieve the results shown in our demos:
## Tools-Dependent Protocols
The following instructions apply only when tools/MCP Servers are accessible.
### Filesystem - Tool Instructions
- Do not use 'read_file' or 'read_multiple_files' on binary files (e.g., images, pdfs, docx).
- When working with binary files (e.g., images, pdfs, docx), use 'get_info' instead of 'read_*' tools to inspect a file.
### Chronulus Agents - Tool Instructions
- When using Chronulus, prefer to use input field types like TextFromFile, PdfFromFile, and ImageFromFile over scanning the files directly.
- When plotting forecasts from Chronulus, always include the Chronulus-provided forecast explanation below the plot and label it as Chronulus Explanation.