VoidLang

VoidLang - LLM Native Machine Code MCP

A bytecode-style language designed for LLMs. The model writes numeric opcodes as JSON, the server returns a working binary, a React app, an iOS / Android scaffold, or a docker-compose.yml. No syntax. No parsing. No token waste on punctuation.

┌──────────────────────────────────────────────────────────────────────┐
│  LLM  ──[opcode JSON]──▶  voidmcp  ──▶  Go / React / Swift / Kotlin  │
│                                  │                                   │
│                                  └──▶  go build / npm build          │
│                                                                      │
│                                  ──▶  artifact URL (binary / zip)    │
└──────────────────────────────────────────────────────────────────────┘
  • 9 compilation targets: linux, macos, windows, ios, android, web, pwa, wasm, docker.
  • ~160 opcodes covering HTTP, SQL, Redis, JWT, bcrypt, web UI, mobile UI, file IO, and Docker topology.
  • One-shot ISA endpoint returns the entire instruction set in one call — no per-call schema discovery.
  • Two transports: HTTP (any LLM, any IDE, any agent) and MCP stdio JSON-RPC (Claude Desktop / Claude Code).
  • Production-grade: stdlib-only server, multi-stage Dockerfile, healthcheck, Railway-ready.

Why this exists

LLMs are terrible at writing valid syntax in a brand-new language, but they are excellent at outputting JSON arrays. VoidLang flips the contract: the language is a JSON array of [opcode, args…] pairs. The compiler does all of the work — formatting, imports, error handling, deployment topology — so the model only has to express intent, not boilerplate.

A web API that talks to Postgres, authenticates with JWT, has CRUD on two tables, and deploys via docker-compose fits in ~80 instructions. A handwritten Go version of the same app is ~1500 lines.


Quick start

Run the server locally

make build && ./build/voidmcp --addr :7070

or without installing:

make dev

Open http://localhost:7070 for the landing page, or hit any endpoint:

curl -s http://localhost:7070/isa | jq '.opcodes | length'
# → ~95

Compile your first void file

curl -sX POST http://localhost:7070/compile \
  -H 'Content-Type: application/json' \
  -d '{"void": {
    "v": 1,
    "name": "hello",
    "tgt": ["linux"],
    "ins": [
      [1, "hello"],
      [3, "linux"],
      [242, "Hello from VoidLang!"]
    ]
  }}' | jq .

The response contains the generated Go source, a go.mod, and (if go is on the server's PATH) a URL to download the compiled binary.
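
The same call is easy to script. A minimal Go client sketch; the struct below decodes only the /compile response fields documented under POST /compile further down:

package main

import (
  "bytes"
  "encoding/json"
  "fmt"
  "log"
  "net/http"
)

func main() {
  // The same hello program as the curl example above.
  req := map[string]any{
    "void": map[string]any{
      "v":    1,
      "name": "hello",
      "tgt":  []string{"linux"},
      "ins": []any{
        []any{1, "hello"},
        []any{3, "linux"},
        []any{242, "Hello from VoidLang!"},
      },
    },
  }
  body, _ := json.Marshal(req)

  resp, err := http.Post("http://localhost:7070/compile", "application/json", bytes.NewReader(body))
  if err != nil {
    log.Fatal(err)
  }
  defer resp.Body.Close()

  // Decode only the fields shown in the documented /compile response.
  var out struct {
    OK      bool `json:"ok"`
    Targets []struct {
      Target      string `json:"target"`
      Kind        string `json:"kind"`
      DownloadURL string `json:"download_url"`
    } `json:"targets"`
  }
  if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
    log.Fatal(err)
  }
  for _, t := range out.Targets {
    fmt.Println(t.Target, t.Kind, t.DownloadURL)
  }
}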

Or run it as a Claude Desktop / Claude Code MCP server

Add to ~/.config/claude/claude_desktop_config.json (or equivalent):

{
  "mcpServers": {
    "voidlang": {
      "command": "/absolute/path/to/build/voidmcp",
      "args": ["stdio"]
    }
  }
}

Claude will see four tools: isa, isa_quick, targets, compile.


Endpoints

Method  Path             What it does
GET     /                Landing page (handy for verifying a deploy).
GET     /isa             Full ISA — give this once to any LLM.
GET     /isa/quick       ~2k-token condensed ISA.
GET     /targets         Available compilation targets.
POST    /compile         {void, target?, name?} → artifacts.
POST    /run             Compile + run locally (opt-in, see below).
GET     /artifacts/{id}  Download a previously-compiled binary blob.
GET     /health          Liveness + toolchain capability report.
GET     /mcp             Lightweight tool manifest.

POST /compile

// request
{
  "void":   { "v": 1, "name": "todo_api", "tgt": ["linux", "docker"], "ins": [/*…*/] },
  "target": "linux",     // optional override
  "name":   "todo_api"   // optional override
}

// response
{
  "ok": true,
  "app": "todo_api",
  "targets": [
    {
      "target": "linux",
      "kind":   "binary",
      "binary_size": 9482240,
      "download_url": "http://localhost:7070/artifacts/3f6a2b1c8e9d4f70",
      "files": { "main.go": "…", "go.mod": "…" }
    },
    {
      "target": "docker-compose",
      "kind":   "text",
      "files": { "docker-compose.yml": "…", ".env.example": "…", "Dockerfile": "…" }
    }
  ]
}
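
Fetching a binary afterwards is plain HTTP against download_url. A sketch in Go; the output filename is an arbitrary choice here:

package main

import (
  "io"
  "log"
  "net/http"
  "os"
)

func main() {
  url := os.Args[1] // e.g. the download_url from a /compile response
  resp, err := http.Get(url)
  if err != nil {
    log.Fatal(err)
  }
  defer resp.Body.Close()
  if resp.StatusCode != http.StatusOK {
    log.Fatalf("GET %s: %s", url, resp.Status)
  }

  // Save the artifact and mark it executable.
  f, err := os.OpenFile("artifact.bin", os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
  if err != nil {
    log.Fatal(err)
  }
  defer f.Close()
  if _, err := io.Copy(f, resp.Body); err != nil {
    log.Fatal(err)
  }
}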

POST /run

Disabled by default. Set VOIDMCP_ALLOW_RUN=1 on the server to enable. The server will compile and execute the generated binary, returning stdout/stderr/exit code. Do not enable this on a public Railway deployment — only use it on a sandbox where running arbitrary code is safe.
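
If you do enable it in a sandbox, the call might look like the sketch below. One loud assumption: the /run request schema isn't documented here, so this reuses the /compile body shape and prints the raw response instead of guessing its fields:

package main

import (
  "bytes"
  "io"
  "log"
  "net/http"
  "os"
)

func main() {
  // ASSUMPTION: /run accepts the same {"void": ...} body as /compile.
  body := []byte(`{"void":{"v":1,"name":"hello","tgt":["linux"],"ins":[[1,"hello"],[3,"linux"],[242,"Hello from VoidLang!"]]}}`)
  resp, err := http.Post("http://localhost:7070/run", "application/json", bytes.NewReader(body))
  if err != nil {
    log.Fatal(err)
  }
  defer resp.Body.Close()
  io.Copy(os.Stdout, resp.Body) // the server returns stdout/stderr/exit code
}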


Deploy to Railway

The repo ships with a Dockerfile, a railway.json, and a nixpacks.toml fallback. To deploy:

# install the Railway CLI once
npm i -g @railway/cli
railway login

# inside this repo
railway init                      # pick "Empty Project"
railway up                        # builds + deploys the Dockerfile
railway domain                    # mint a public URL

The healthcheck path (/health) is wired through railway.json, so Railway will fail-fast if the server can't boot.

After deploy:

curl https://your-app.up.railway.app/isa | jq '.opcodes | length'

Environment variables

Var                  Default       Meaning
PORT                 7070          HTTP port (Railway sets this automatically).
VOIDMCP_ALLOW_RUN    unset         Set to 1 to enable POST /run. Do not enable in prod.
VOIDMCP_PUBLIC_BASE  autodetected  Override the base URL embedded in download_url.

Monetisation hooks

The repo is MIT-licensed, so you're free to deploy it as a paid SaaS. A common pattern:

  1. Put an API gateway (e.g. Kong, Cloudflare Workers, or a tiny proxy service) in front of voidmcp. Authenticate by API key. Meter /compile calls per key (a minimal proxy sketch follows this list).
  2. Charge per-compile or per-month per-LLM-agent. The cost basis is the ~50–500 ms of CPU each call uses; price output value (a working binary), not CPU time.
  3. Optional: rate-limit /compile with a Redis-backed sliding window (the ISA exposes the opcodes you'd need to build this in VoidLang itself, recursively).
  4. Free tier idea: /isa, /isa/quick, /targets are read-only and cheap — leave them unauthenticated to maximise model adoption.
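
The proxy from item 1, sketched with nothing but the Go standard library. The header name, key store, and port are illustrative choices, not part of voidmcp:

package main

import (
  "log"
  "net/http"
  "net/http/httputil"
  "net/url"
  "sync"
)

func main() {
  upstream, err := url.Parse("http://localhost:7070") // voidmcp behind the proxy
  if err != nil {
    log.Fatal(err)
  }
  proxy := httputil.NewSingleHostReverseProxy(upstream)

  keys := map[string]bool{"demo-key": true} // swap in your real key store
  var mu sync.Mutex
  usage := map[string]int{} // compile calls per key, for billing

  http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
    // Free tier: leave the cheap read-only endpoints unauthenticated.
    switch r.URL.Path {
    case "/isa", "/isa/quick", "/targets":
      proxy.ServeHTTP(w, r)
      return
    }
    key := r.Header.Get("X-API-Key")
    if !keys[key] {
      http.Error(w, "missing or invalid API key", http.StatusUnauthorized)
      return
    }
    if r.Method == http.MethodPost && r.URL.Path == "/compile" {
      mu.Lock()
      usage[key]++ // meter the billable call
      mu.Unlock()
    }
    proxy.ServeHTTP(w, r)
  })
  log.Fatal(http.ListenAndServe(":8080", nil))
}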

How an LLM uses it

The expected loop is:

  1. Once per session: GET /isa — load the entire instruction set into the model's context. ~30k tokens. (Or GET /isa/quick for ~2k.)
  2. For each user request: emit a void file as a JSON object, send to POST /compile. Stream the response back to the user with a download link.
  3. Optional refinement: if the LLM made a mistake, the server returns build_log with the Go compiler's diagnostic. Feed it back to the LLM and ask for a fixed instruction array.

A complete system prompt template is in docs/LLM_PROMPT.md.
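
Step 3 of the loop is mechanical enough to sketch in Go. One assumption to flag: build_log is treated here as a top-level field of the /compile response, which this README names but does not place precisely:

package main

import (
  "bytes"
  "encoding/json"
  "fmt"
  "log"
  "net/http"
)

// compileOnce posts a void file and reports success plus any build log.
func compileOnce(void []byte) (bool, string, error) {
  resp, err := http.Post("http://localhost:7070/compile", "application/json", bytes.NewReader(void))
  if err != nil {
    return false, "", err
  }
  defer resp.Body.Close()
  var out struct {
    OK       bool   `json:"ok"`
    BuildLog string `json:"build_log"` // ASSUMPTION: exact position in the JSON unverified
  }
  if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
    return false, "", err
  }
  return out.OK, out.BuildLog, nil
}

func main() {
  void := []byte(`{"void":{"v":1,"name":"hello","tgt":["linux"],"ins":[[1,"hello"],[3,"linux"],[242,"Hello from VoidLang!"]]}}`)
  ok, buildLog, err := compileOnce(void)
  if err != nil {
    log.Fatal(err)
  }
  if !ok {
    // A real agent hands buildLog back to the LLM, asks for a fixed
    // instruction array, and calls compileOnce again.
    fmt.Println("build failed:\n" + buildLog)
    return
  }
  fmt.Println("compiled OK")
}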

Examples

Sample .void files ship in examples/; each one compiles with a single POST /compile call.

Architecture

voidLang/
├── cmd/voidmcp/            main entrypoint (HTTP + stdio mode)
├── internal/
│   ├── isa/                opcode definitions + metadata
│   ├── void/               .void file decoder
│   ├── codegen/
│   │   ├── golang/         Go backend (linux/macos/windows/docker/wasm seed)
│   │   ├── web/            React + Vite project generator
│   │   ├── mobile/         iOS (SwiftUI) + Android (Compose) scaffolds
│   │   ├── wasm/           WebAssembly build helper
│   │   └── docker/         docker-compose.yml generator
│   └── mcp/                HTTP server + stdio JSON-RPC transport
├── examples/               sample .void files
├── docs/                   ISA reference, deployment, LLM prompt template
├── Dockerfile              multi-stage build for production
├── railway.json            Railway deploy config
├── nixpacks.toml           Railway nixpacks fallback
├── Makefile                build / run / docker helpers
└── go.mod                  stdlib-only (no external deps)

The server has zero external Go dependencies. The generated programs do depend on gin, pgx, go-redis, golang-jwt, and x/crypto — those get downloaded on first compile and cached.


Development

make dev        # run via `go run` (no install)
make stdio      # run MCP stdio mode
make test       # run unit tests
make fmt        # gofmt
make docker-run # full Dockerised cycle

The server itself is stateless aside from a 30-minute in-memory cache of compiled artifacts (so /artifacts/{id} links don't expire too fast). Restart-and-go.
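
For illustration, the expiring store described above can be modelled in a few lines. This is a sketch of the behaviour, not the server's actual code:

package main

import (
  "fmt"
  "sync"
  "time"
)

type entry struct {
  data    []byte
  expires time.Time
}

// artifactCache keeps compiled artifacts in memory until their TTL lapses.
type artifactCache struct {
  mu  sync.Mutex
  ttl time.Duration
  m   map[string]entry
}

func newArtifactCache(ttl time.Duration) *artifactCache {
  c := &artifactCache{ttl: ttl, m: map[string]entry{}}
  go func() { // sweep expired artifacts once a minute
    for range time.Tick(time.Minute) {
      c.mu.Lock()
      now := time.Now()
      for id, e := range c.m {
        if now.After(e.expires) {
          delete(c.m, id)
        }
      }
      c.mu.Unlock()
    }
  }()
  return c
}

func (c *artifactCache) Put(id string, data []byte) {
  c.mu.Lock()
  defer c.mu.Unlock()
  c.m[id] = entry{data: data, expires: time.Now().Add(c.ttl)}
}

func (c *artifactCache) Get(id string) ([]byte, bool) {
  c.mu.Lock()
  defer c.mu.Unlock()
  e, ok := c.m[id]
  if !ok || time.Now().After(e.expires) {
    return nil, false
  }
  return e.data, true
}

func main() {
  c := newArtifactCache(30 * time.Minute)
  c.Put("3f6a2b1c8e9d4f70", []byte{0x7f, 'E', 'L', 'F'})
  if data, ok := c.Get("3f6a2b1c8e9d4f70"); ok {
    fmt.Printf("%d bytes cached\n", len(data))
  }
}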


Docs

The ISA reference, deployment notes, and the LLM prompt template (docs/LLM_PROMPT.md) live in docs/.

License

MIT — a product of voidback. Use it, fork it, deploy it, charge for it, rewrite it in a language we've never heard of. voidback exists to advance the agentic AI era openly. Please contribute, toy around, and keep moving forward. There is no limit. Not even AGI.
