A read-only MCP server for querying live data from various APIs using the CData JDBC Driver for API Driver.
CData's Model Context Protocol (MCP) Server for API Driver
:heavy_exclamation_mark: This project builds a read-only MCP server. For full read, write, update, delete, and action capabilities and a simplified setup, check out our free CData MCP Server for API Driver (beta).
We created this read-only MCP Server to allow LLMs (like Claude Desktop) to query live data from APIs supported by the CData JDBC Driver for API Driver.
The CData JDBC Driver connects to APIs by exposing them as relational SQL models.
This server wraps that driver and makes API data available through a simple MCP interface, so LLMs can retrieve live information by asking natural language questions, with no SQL required.
```
git clone https://github.com/cdatasoftware/api-driver-mcp-server-by-cdata.git
cd api-driver-mcp-server-by-cdata
mvn clean install
```
This creates the JAR file: `CDataMCP-jar-with-dependencies.jar`

License the driver from the `lib` folder in the installation directory, typically:

- Windows: `C:\Program Files\CData\CData JDBC Driver for API Driver\`
- Linux/Mac: `/Applications/CData JDBC Driver for API Driver/`
Run the following command to license the driver:

```
java -jar cdata.jdbc.apidriver.jar --license
```

Then run the command below to open the Connection String utility:

```
java -jar cdata.jdbc.apidriver.jar
```

Configure the connection string and click "Test Connection".

Note: If the data source uses OAuth, you will need to authenticate in your browser.

Once successful, copy the connection string for use later.
Create a `.prp` file for your JDBC connection (e.g. `api-driver.prp`) using the following properties and format:
```
Prefix=apidriver
ServerName=CDataAPIDriver
ServerVersion=1.0
DriverPath=PATH\TO\cdata.jdbc.apidriver.jar
DriverClass=cdata.jdbc.apidriver.APIDriverDriver
JdbcUrl=jdbc:apidriver:InitiateOAuth=GETANDREFRESH;
Tables=
```
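Before launching the server, it can help to confirm that the `.prp` file contains every required property. The helper below is a hypothetical illustration (not part of this project) that reports which of the properties listed above are missing:

```python
# Required keys mirror the .prp property list above.
REQUIRED = ["Prefix", "ServerName", "ServerVersion",
            "DriverPath", "DriverClass", "JdbcUrl", "Tables"]

def check_prp(text):
    """Return the required properties missing from a .prp file's text."""
    present = {line.split("=", 1)[0].strip()
               for line in text.splitlines() if "=" in line}
    return [key for key in REQUIRED if key not in present]

# An intentionally incomplete sample .prp to demonstrate the check:
sample = ("Prefix=apidriver\n"
          "ServerName=CDataAPIDriver\n"
          "JdbcUrl=jdbc:apidriver:InitiateOAuth=GETANDREFRESH;\n")
print(check_prp(sample))  # lists the properties still to be filled in
```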
Create the config file for Claude Desktop (`claude_desktop_config.json`) to add the new MCP server, using the format below. If the file already exists, add the entry to the `mcpServers` section in the config file.
Windows
```json
{
  "mcpServers": {
    "{classname_dash}": {
      "command": "PATH\\TO\\java.exe",
      "args": [
        "-jar",
        "PATH\\TO\\CDataMCP-jar-with-dependencies.jar",
        "PATH\\TO\\api-driver.prp"
      ]
    },
    ...
  }
}
```
Linux/Mac
```json
{
  "mcpServers": {
    "{classname_dash}": {
      "command": "/PATH/TO/java",
      "args": [
        "-jar",
        "/PATH/TO/CDataMCP-jar-with-dependencies.jar",
        "/PATH/TO/api-driver.prp"
      ]
    },
    ...
  }
}
```
If needed, copy the config file to the appropriate directory (Claude Desktop as the example).

Windows

```
cp C:\PATH\TO\claude_desktop_config.json %APPDATA%\Claude\claude_desktop_config.json
```

Linux/Mac

```
cp /PATH/TO/claude_desktop_config.json /Users/{user}/Library/Application\ Support/Claude/claude_desktop_config.json
```
Run or refresh your client (Claude Desktop).
Note: You may need to fully exit or quit your Claude Desktop client and re-open it for the MCP Servers to appear.
```
java -jar /PATH/TO/CDataMCP-jar-with-dependencies.jar /PATH/TO/api-driver.prp
```

Note: The server uses `stdio`, so it can only be used with clients that run on the same machine as the server.
Once the MCP Server is configured, the AI client will be able to use the built-in tools to read the underlying data. In general, you do not need to call the tools explicitly. Simply ask the client to answer questions about the underlying data system.
The list of tools available and their descriptions follows. In the definitions below, `{servername}` refers to the name of the MCP Server in the config file (e.g. `{classname_dash}` above).

- `{servername}_get_tables` - Retrieves a list of tables available in the data source. Use the `{servername}_get_columns` tool to list available columns on a table. The output of the tool will be returned in CSV format, with the first line containing column headers.
- `{servername}_get_columns` - Retrieves a list of columns for a table. Use the `{servername}_get_tables` tool to get a list of available tables. The output of the tool will be returned in CSV format, with the first line containing column headers.
- `{servername}_run_query` - Executes a SQL SELECT query.

If you are scripting the requests sent to the MCP Server instead of using an AI client (e.g. Claude), you can refer to the JSON payload examples below, which follow the JSON-RPC 2.0 specification, when calling the available tools.
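Since `get_tables` and `get_columns` return CSV with a header row, client code can parse their output with a standard CSV reader. A minimal sketch, using hypothetical sample output rather than a live result:

```python
import csv
import io

# Hypothetical CSV output from a get_columns call;
# the first line contains the column headers.
raw = "ColumnName,DataType\nId,VARCHAR(255)\nName,VARCHAR(255)\n"

# DictReader maps each data row to the headers from the first line.
rows = list(csv.DictReader(io.StringIO(raw)))
print([row["ColumnName"] for row in rows])  # -> ['Id', 'Name']
```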
```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "source_get_tables",
    "arguments": {}
  }
}
```

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "source_get_columns",
    "arguments": {
      "table": "Account"
    }
  }
}
```

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "source_run_query",
    "arguments": {
      "sql": "SELECT * FROM [Account] WHERE [IsDeleted] = true"
    }
  }
}
```
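Payloads like the ones above are written to the server's stdin as JSON lines. The Python sketch below is a hypothetical helper, not part of this project: it omits the MCP `initialize` handshake a real client performs, and uses `cat` as a stand-in for the `java -jar ...` command so the request is simply echoed back:

```python
import json
import subprocess

def call_mcp_tool(server_cmd, request_id, tool_name, arguments=None):
    """Send one JSON-RPC tools/call request to a stdio process and
    return the first response line, parsed as JSON."""
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments or {}},
    }
    proc = subprocess.Popen(server_cmd, stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE, text=True)
    proc.stdin.write(json.dumps(payload) + "\n")
    proc.stdin.close()
    response = json.loads(proc.stdout.readline())
    proc.wait()
    return response

# Against the real server, server_cmd would be something like:
#   ["java", "-jar", "/PATH/TO/CDataMCP-jar-with-dependencies.jar",
#    "/PATH/TO/api-driver.prp"]
# Here "cat" just echoes the request back for demonstration:
echo = call_mcp_tool(["cat"], 1, "source_get_tables")
print(echo["params"]["name"])
```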
This MCP server is licensed under the MIT License. This means you are free to use, modify, and distribute the software, subject to the terms and conditions of the MIT License. For more details, please see the LICENSE file in the project repository.