VoidLang - LLM Native Machine Code MCP
A bytecode-style language designed for LLMs. The model writes numeric opcodes as JSON, the server returns a working binary, a React app, an iOS / Android scaffold, or a
docker-compose.yml. No syntax. No parsing. No token waste on punctuation.
LLM ──[opcode JSON]──▶ voidmcp ──▶ Go / React / Swift / Kotlin
                          │
                          └──▶ go build / npm build
                                    │
                                    └──▶ artifact URL (binary / zip)
- 9 compilation targets: linux, macos, windows, ios, android, web, pwa, wasm, docker.
- ~160 opcodes covering HTTP, SQL, Redis, JWT, bcrypt, web UI, mobile UI, file I/O, and Docker topology.
- One-shot ISA endpoint returns the entire instruction set in one call — no per-call schema discovery.
- Two transports: HTTP (any LLM, any IDE, any agent) and MCP stdio JSON-RPC (Claude Desktop / Claude Code).
- Production-grade: stdlib-only server, multi-stage Dockerfile, healthcheck, Railway-ready.
Why this exists
LLMs are terrible at writing valid syntax in a brand-new language, but
they are excellent at outputting JSON arrays. VoidLang flips the
contract: the language is a JSON array of [opcode, args…] pairs.
The compiler does all of the work — formatting, imports, error
handling, deployment topology — so the model only has to express
intent, not boilerplate.
A web API that talks to Postgres, authenticates with JWT, exposes CRUD on two tables, and deploys to docker-compose fits in ~80 instructions. A handwritten Go version of the same app runs ~1,500 lines.
Quick start
Run the server locally
make build && ./build/voidmcp --addr :7070
or without installing:
make dev
Open http://localhost:7070 for the landing page, or hit any endpoint:
curl -s http://localhost:7070/isa | jq '.opcodes | length'
# → ~95
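The condensed variant at /isa/quick (see the endpoint table below) is a fraction of that size. A quick size comparison, assuming a POSIX shell:
curl -s http://localhost:7070/isa | wc -c
curl -s http://localhost:7070/isa/quick | wc -c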
Compile your first void file
curl -sX POST http://localhost:7070/compile \
-H 'Content-Type: application/json' \
-d '{"void": {
"v": 1,
"name": "hello",
"tgt": ["linux"],
"ins": [
[1, "hello"],
[3, "linux"],
[242, "Hello from VoidLang!"]
]
}}' | jq .
The response contains the generated Go source, a go.mod, and (if go
is on the server's PATH) a URL to download the compiled binary.
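From there, fetching and running the artifact is one more step. A minimal sketch of the full round trip, assuming jq is installed, go is on the server's PATH, and the linux binary matches your architecture:
url=$(curl -sX POST http://localhost:7070/compile \
  -H 'Content-Type: application/json' \
  -d '{"void": {"v": 1, "name": "hello", "tgt": ["linux"], "ins": [[1, "hello"], [3, "linux"], [242, "Hello from VoidLang!"]]}}' \
  | jq -r '.targets[0].download_url')
curl -s "$url" -o hello && chmod +x hello && ./hello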
Or run it as a Claude Desktop / Claude Code MCP server
Add to ~/.config/claude/claude_desktop_config.json (or equivalent):
{
"mcpServers": {
"voidlang": {
"command": "/absolute/path/to/build/voidmcp",
"args": ["stdio"]
}
}
}
Claude will see four tools: isa, isa_quick, targets, compile.
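You can smoke-test the stdio transport without Claude by piping JSON-RPC at the binary. A rough sketch only: the message sequence follows the standard MCP handshake, but the exact protocolVersion this server expects is an assumption.
printf '%s\n' \
  '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"probe","version":"0.0.0"}}}' \
  '{"jsonrpc":"2.0","method":"notifications/initialized"}' \
  '{"jsonrpc":"2.0","id":2,"method":"tools/list"}' \
  | ./build/voidmcp stdio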
Endpoints
| Method | Path | What it does |
|---|---|---|
| GET | / | Landing page (handy for verifying a deploy). |
| GET | /isa | Full ISA — give this once to any LLM. |
| GET | /isa/quick | ~2k-token condensed ISA. |
| GET | /targets | Available compilation targets. |
| POST | /compile | {void, target?, name?} → artifacts. |
| POST | /run | Compile + run locally (opt-in, see below). |
| GET | /artifacts/{id} | Download a previously-compiled binary blob. |
| GET | /health | Liveness + toolchain capability report. |
| GET | /mcp | Lightweight tool manifest. |
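Before relying on binary downloads, it is worth probing /health, which reports whether the build toolchains are visible to the server (the exact response shape isn't documented here, so just pretty-print it):
curl -s http://localhost:7070/health | jq .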
POST /compile
// request
{
"void": { "v": 1, "name": "todo_api", "tgt": ["linux", "docker"], "ins": [/*…*/] },
"target": "linux", // optional override
"name": "todo_api" // optional override
}
// response
{
"ok": true,
"app": "todo_api",
"targets": [
{
"target": "linux",
"kind": "binary",
"binary_size": 9482240,
"download_url": "http://localhost:7070/artifacts/3f6a2b1c8e9d4f70",
"files": { "main.go": "…", "go.mod": "…" }
},
{
"target": "docker-compose",
"kind": "text",
"files": { "docker-compose.yml": "…", ".env.example": "…", "Dockerfile": "…" }
}
]
}
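The top-level target and name fields override what the void file declares. A sketch reusing the quick-start hello program; whether the override replaces tgt or is merged with it is an assumption worth verifying:
curl -sX POST http://localhost:7070/compile \
  -H 'Content-Type: application/json' \
  -d '{"void": {"v": 1, "name": "hello", "tgt": ["linux"], "ins": [[1, "hello"], [3, "linux"], [242, "Hello from VoidLang!"]]}, "target": "docker", "name": "hello_docker"}' \
  | jq '[.targets[].target]'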
POST /run
Disabled by default. Set VOIDMCP_ALLOW_RUN=1 on the server to enable.
The server will compile and execute the generated binary, returning
stdout/stderr/exit code. Do not enable this on a public Railway
deployment — only use it on a sandbox where running arbitrary code is
safe.
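A sketch of the opt-in flow, assuming /run accepts the same {void} body as /compile (the request shape isn't pinned down above):
# sandbox only: never on a public deployment
VOIDMCP_ALLOW_RUN=1 ./build/voidmcp --addr :7070 &
curl -sX POST http://localhost:7070/run \
  -H 'Content-Type: application/json' \
  -d '{"void": {"v": 1, "name": "hello", "tgt": ["linux"], "ins": [[1, "hello"], [3, "linux"], [242, "Hello from VoidLang!"]]}}' | jq .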
Deploy to Railway
The repo ships with a Dockerfile, a railway.json, and a nixpacks.toml fallback. To deploy:
# install the Railway CLI once
npm i -g @railway/cli
railway login
# inside this repo
railway init # pick "Empty Project"
railway up # builds + deploys the Dockerfile
railway domain # mint a public URL
The healthcheck path (/health) is wired through railway.json, so
Railway will fail-fast if the server can't boot.
After deploy:
curl https://your-app.up.railway.app/isa | jq '.opcodes | length'
Environment variables
| Var | Default | Meaning |
|---|---|---|
| PORT | 7070 | HTTP port (Railway sets this automatically). |
| VOIDMCP_ALLOW_RUN | unset | Set to 1 to enable POST /run. Do not enable in prod. |
| VOIDMCP_PUBLIC_BASE | autodetected | Override the base URL embedded in download_url. |
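For example, when running behind a reverse proxy where artifact links must carry the public hostname (assuming PORT is honoured when --addr is not passed):
VOIDMCP_PUBLIC_BASE=https://voidmcp.example.com PORT=8080 ./build/voidmcp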
Monetisation hooks
The repo is MIT-licensed, so you're free to deploy it as a paid SaaS. A common pattern:
- Put an API gateway (e.g. Kong, Cloudflare Workers, or a tiny proxy service) in front of voidmcp. Authenticate by API key. Meter /compile calls per key.
- Charge per-compile or per-month per-LLM-agent. The cost basis is the ~50–500 ms of CPU each call uses; price output value (a working binary), not CPU time.
- Optional: rate-limit /compile with a Redis-backed sliding window (the ISA exposes the opcodes you'd need to build this in VoidLang itself, recursively).
- Free tier idea: /isa, /isa/quick, and /targets are read-only and cheap — leave them unauthenticated to maximise model adoption.
How an LLM uses it
The expected loop is:
- Once per session: GET /isa — load the entire instruction set into the model's context (~30k tokens). Or GET /isa/quick for ~2k.
- For each user request: emit a void file as a JSON object and send it to POST /compile. Stream the response back to the user with a download link.
- Optional refinement: if the LLM made a mistake, the server returns build_log with the Go compiler's diagnostic. Feed it back to the LLM and ask for a fixed instruction array; see the sketch below.
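A sketch of that repair loop from the shell. Where exactly build_log sits in the response is an assumption, so the jq below searches for it recursively; $VOID_JSON is a placeholder for the LLM's emitted void file:
resp=$(curl -sX POST http://localhost:7070/compile \
  -H 'Content-Type: application/json' \
  -d "$VOID_JSON")
echo "$resp" | jq -r '.. | .build_log? // empty'   # surface compiler diagnostics, wherever nested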
A complete system prompt template is in docs/LLM_PROMPT.md.
Examples
- examples/hello.void — minimal Hello World.
- examples/todo_api.void — full CRUD API with Postgres + JWT auth + docker-compose.
Architecture
voidLang/
├── cmd/voidmcp/ main entrypoint (HTTP + stdio mode)
├── internal/
│ ├── isa/ opcode definitions + metadata
│ ├── void/ .void file decoder
│ ├── codegen/
│ │ ├── golang/ Go backend (linux/macos/windows/docker/wasm seed)
│ │ ├── web/ React + Vite project generator
│ │ ├── mobile/ iOS (SwiftUI) + Android (Compose) scaffolds
│ │ ├── wasm/ WebAssembly build helper
│ │ └── docker/ docker-compose.yml generator
│ └── mcp/ HTTP server + stdio JSON-RPC transport
├── examples/ sample .void files
├── docs/ ISA reference, deployment, LLM prompt template
├── Dockerfile multi-stage build for production
├── railway.json Railway deploy config
├── nixpacks.toml Railway nixpacks fallback
├── Makefile build / run / docker helpers
└── go.mod stdlib-only (no external deps)
The server has zero external Go dependencies. The generated
programs do depend on gin, pgx, go-redis, golang-jwt, and
x/crypto — those get downloaded on first compile and cached.
Development
make dev # run via `go run` (no install)
make stdio # run MCP stdio mode
make test # run unit tests
make fmt # gofmt
make docker-run # full Dockerised cycle
The server itself is stateless aside from a 30-minute in-memory cache
of compiled artifacts (so /artifacts/{id} links don't expire too
fast). Restart-and-go.
Docs
- docs/ISA.md — opcode-by-opcode reference.
- docs/LLM_PROMPT.md — drop-in system prompt for any LLM.
- docs/DEPLOY.md — Railway / Fly / bare-VM deploy notes.
License
MIT — a product of voidback. Use it, fork it, deploy it, charge for it, rewrite it in a language we've never heard of. voidback exists to advance the agentic AI era openly. Please contribute, toy around, and keep moving forward. There is no limit. Not even AGI.