Cernion Grid Intelligence

87+ specialized tools for German and European energy data. Direct AI access to Marktstammdatenregister (MaStR), ENTSO-E, Redispatch 2.0, and Grid Operations for utilities and datacenters.

Cernion Energy Tools

MicroService Agent System for Energy Markets


A modular, scalable microservices platform built with Moleculer for developing energy market applications with AI integration (Google Gemini) and MCP (Model Context Protocol) support.

Features

  • 🚀 Moleculer Microservices Framework — Fast, modern, and powerful microservices framework
  • 🌐 API Gateway — HTTP REST API with automatic route generation
  • 🤖 AI Agent — Natural-language query planner powered by Google Gemini: describe your energy data need in plain text and the agent generates, executes, and interprets a multi-step microservice plan automatically
  • 🏢 Inhouse Data Sources — Register, infer, cache, and discover internal utility datasets (CSV, REST, GeoJSON, XLSX, DOCX, Scraper) alongside public energy tools
  • 🧩 Research Web App — Built-in single-page application at /app for interactive, browser-based testing of the AI agent — no separate tooling required
  • 📥 Live CSV Export — Every agent result exposes a parameterised GET endpoint (/api/agent/session/:id/csv?param=value) for zero-config integration with automation tools such as Microsoft Power Automate, Excel Power Query, or cron jobs
  • Datapoints — Named, versioned, health-monitored data sources backed by embedded PouchDB. Promote any agent session to a managed datapoint, track refresh history and schema stability, and retrieve live data as JSON or CSV via /api/datapoints. See the health overview for a dashboard of all registered datapoints.
  • 📸 Snapshots — Seal a group of datapoints as a consistent unit with SHA-256 provenance hashing. Create, validate (drift detection), list, and remove snapshots via /api/datapoints/snapshot* (v0.13)
  • 🌐 OSM Geo Layer — Grid infrastructure analysis via OpenStreetMap/Overpass: VNB assignment validation, nearby infrastructure, substation inventory, and grid topology (v0.10)
  • 🌐 OEP Connector — Read-only access to the Open Energy Platform (scenario data, NEP references, research datasets) via /api/oep/* (v0.12)
  • 🔌 Grid Connection Validation — Deterministic 6-step Netzanschluss pipeline (POST /api/grid-connection/validate): inventory → delta → capacity → EWK benchmark → Go/No-Go decision → audit trail. No LLM — identical inputs, identical findings. Reports sealed with PouchDB snapshots for EU AI Act Art. 12 compliance (v0.14)
  • 🤝 Energy Sharing Validation — Deterministic 6-step § 42c EnWG pipeline (POST /api/energy-sharing/validate): generator/consumer eligibility, MaLo validation, share-sum check, DV validation. Regulatory deadline: 01.06.2026 (v0.15)
  • 📊 MaStR Data Quality Audit — 8-step portfolio quality audit (POST /api/mastr-quality/audit): registration completeness, capacity plausibility, NAP/MeLo connectivity, duplicate detection, geo spot-check. Weighted 0–100 score across 5 dimensions (v0.17)
  • ⚡ Redispatch Ex-Post Audit — 7-step Redispatch 2.0 settlement readiness audit (POST /api/redispatch/audit): portfolio assembly (Weg A/B), NAP/MeLo/DV checks, curtailment data, financial risk scoring (v0.18)
  • 🗂️ Dashboard API — Read-only UI aggregator with 4 composite endpoints (GET /api/dashboard/*): VNB overview, market snapshot, quality summary, finding-codes reference. All upstream calls parallel via Promise.allSettled, graceful degradation, 5–15 min cache (v0.19)
  • 🧠 OEO / OEMetadata — Open Energy Ontology annotations on all 45+ REST endpoints, OEMetadata v2.0 export with optional JSON Schema validation (v0.11.4–v0.12)
  • 🔐 Data Provenance — SHA-256 provenance hashing on every datapoint refresh for EU AI Act Art. 12 compliance, plus explainability log for agent corrections (v0.11.5)
  • 🧹 Prompt Scrubber — Field-level PII masking with energy-domain allowlist before sending data to external LLMs (v0.11.5)
  • 🔌 MCP Support — Model Context Protocol SDK integration
  • 📝 OpenAPI Documentation — Automatic API documentation at /api/docs
  • 🧭 DSO/VNB Lookup — VNBdigital search/lookup and BDEW → MaStR resolution
  • 🛠️ CLI Tool — Command-line interface for calling microservices
  • 📦 Service Templates — Ready-to-use skeleton service template
  • 🔄 Hot Reload — Automatic service reloading during development
  • 🎯 Best Practices — ESLint, Prettier, and structured project layout
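Several of the features above (notably the Dashboard API) rely on parallel upstream calls with graceful degradation via Promise.allSettled. A minimal sketch of that pattern, with stand-in fetchers rather than the real services:

```javascript
// Sketch of the Promise.allSettled aggregation pattern: every upstream
// call runs in parallel, and a failed call degrades to an error section
// instead of failing the whole response. The fetchers are stand-ins,
// not the project's actual dashboard internals.
async function aggregate(fetchers) {
  const names = Object.keys(fetchers);
  const results = await Promise.allSettled(names.map((n) => fetchers[n]()));
  const out = {};
  results.forEach((res, i) => {
    out[names[i]] = res.status === 'fulfilled'
      ? { ok: true, data: res.value }
      : { ok: false, error: String(res.reason) }; // graceful degradation
  });
  return out;
}

// Example with one healthy and one failing upstream:
aggregate({
  vnbOverview: async () => ({ operators: 866 }), // illustrative data
  marketSnapshot: async () => { throw new Error('upstream timeout'); },
}).then((dashboard) => console.log(JSON.stringify(dashboard)));
```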

Documentation

CI/CD & Transparency

  • Pull requests and pushes to main run automated quality checks (lint, build, unit coverage gates, integration discovery sanity, OpenAPI audit, security audits).
  • Security analysis is continuously enforced with CodeQL.
  • Version tags (v*) trigger a release pipeline (release:check + build + GitHub Release).
  • llm.txt is validated in release checks and regenerated from source-of-truth files via npm run generate:llm.
  • In maintenance CI, llm.txt sync is checked strictly when CHANGELOG.md changes.
  • Coverage reports are uploaded and publicly visible via Codecov.
  • Recommended repository setting: enable branch protection on main and require Maintenance CI + CodeQL checks before merge.

Quick Start

Prerequisites

  • Node.js 18+
  • npm or yarn

Installation

# Clone the repository
git clone https://github.com/energychain/cernion-energy-tools.git
cd cernion-energy-tools

# Install dependencies
npm install

# Copy environment variables
cp .env.example .env

# Edit .env and add your API keys (see Configuration section)
nano .env

Running the Services

# Start all services
npm start

# Or use development mode with hot reload
npm run dev

The API Gateway will start on http://localhost:3000 by default.

| URL | Description |
| --- | --- |
| http://localhost:3000/app | Research Web App — AI agent UI for interactive testing |
| http://localhost:3000/api/docs | Swagger UI — full OpenAPI documentation |
| http://localhost:3000/api/openapi.json | Raw OpenAPI spec |

Using the CLI

# Call a microservice action
npm run cli -- skeleton.hello --name=John

# Health check
npm run cli -- skeleton.health

# Get help
npm run cli -- --help

Research Web App

The built-in web application at /app lets you explore all microservices using plain-text natural language โ€” no curl, no Swagger form, no coding required.

Workflow

  1. Describe your question — type in plain English or German, e.g. "Alle PV-Anlagen im Netz der Enercity in Hannover" ("all PV installations in Enercity's grid in Hannover")

  2. Review the plan โ€” the AI decomposes the question into a numbered sequence of microservice calls and shows you exactly which services will be called and with which parameters.

  3. Adjust parameters — concrete values extracted from your query (dates, postal codes, MeLo IDs, operator names, …) appear as pre-filled, editable form fields. Change any value without re-generating the plan.

  4. Run & explore โ€” results appear in a sortable, filterable table. The raw JSON from every step is available for debugging.

  5. Share or automate โ€” a shareable URL and a Live CSV link are generated automatically (see below).

Live CSV for Automation

Every completed analysis exposes a parameterised CSV endpoint:

GET /api/agent/session/<id>/csv?param1=value1&param2=value2
  • The query re-runs live against the real data sources every time it is called — data is never stale.
  • GET parameters override the saved values, so the same session URL can be reused with different dates, regions, or identifiers.
  • The CSV URL updates in real time in the UI as you change any form field.

Power Automate / Excel Power Query example:

http://10.0.0.8:3900/api/agent/session/2a70e478-90ce-4fa5-b996-6f98efdba7cf/csv?startDate=2026-03-01

Point an HTTP → Get file action or a Power Query Web data source at this URL. Change the startDate parameter to fetch a different reporting period — no re-analysis needed.

Other automation patterns:

  • Schedule a cron job / GitHub Action to pull fresh CSVs daily
  • Feed directly into pandas read_csv(url) in a Jupyter notebook
  • Use as a data source in Grafana, Power BI, or any tool that accepts a CSV URL
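Because GET parameters override the saved values, a single saved session can drive many scheduled pulls. As a sketch, the CSV URL can be built programmatically with Node's WHATWG URL API (base URL and session id are placeholders):

```javascript
// Build a live-CSV URL for a saved agent session, overriding saved
// parameters via the query string. Base URL and session id are
// illustrative placeholders, not real values.
function buildCsvUrl(baseUrl, sessionId, overrides = {}) {
  const url = new URL(`/api/agent/session/${sessionId}/csv`, baseUrl);
  for (const [key, value] of Object.entries(overrides)) {
    url.searchParams.set(key, String(value)); // GET params override saved values
  }
  return url.toString();
}

const csvUrl = buildCsvUrl(
  'http://localhost:3000',
  '2a70e478-90ce-4fa5-b996-6f98efdba7cf',
  { startDate: '2026-03-01' }
);
// A scheduled job could now fetch(csvUrl) and persist the fresh CSV.
console.log(csvUrl);
```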

Creating New Services

Using the Service Creator

# Create a new service interactively
npm run create

# Or specify a name directly
npm run create -- my-service

This creates a new service in custom-services/ from the skeleton template and generates a matching test in custom-tests/.

Custom services are local-only and ignored by git. Core services shipped with the project live in services/.

Manual Service Creation

  1. Copy the skeleton template:

    cp templates/skeleton.service.js custom-services/my-service.service.js
    
  2. Edit the service โ€” change the name property, add actions, events, and methods.

  3. Restart services:

    npm start
    

Custom Services & Tests

  • Custom services live in custom-services/ and are loaded at startup.
  • Custom tests live in custom-tests/ and are excluded from release coverage.
  • Run custom tests without global coverage thresholds:
    npm run test:custom -- my-service.service.test.js
    

Project Structure

cernion-energy-tools/
├── services/              # Core microservices (shipped with release)
│   ├── api.service.js     # API Gateway + Swagger UI
│   ├── agent.service.js   # AI agent — plan/execute/export
│   ├── assets.service.js  # MaStR installation assets
│   ├── datapoint.service.js # Named datapoints + snapshots (v0.11–v0.13)
│   ├── osm-geo.service.js # OSM geo layer (v0.10)
│   ├── oep.service.js     # Open Energy Platform (v0.12)
│   ├── datasource-registry.service.js
│   ├── datasource-connector.service.js
│   ├── datasource-cache.service.js
│   ├── datasource-discovery.service.js
│   ├── forecast.service.js
│   ├── gas-storage.service.js
│   ├── german-grid.service.js
│   ├── grid-operations.service.js
│   └── ...                # See services/ for full list
├── src/
│   ├── app.html           # Research Web App (single-page)
│   ├── connectors/        # Built-in datasource connector plugins
│   ├── mcp-client.js      # Centralised MCP tool caller
│   ├── async-job-poller.js # Async job polling
│   ├── prompt-scrubber.js  # PII masking for LLM prompts
│   ├── oeo-mappings.js    # OEO class mappings (~150 entries)
│   ├── validation-findings.js # Grid connection finding constants (v0.14)
│   └── oemetadata-builder.js # OEMetadata v2.0 builder
├── custom-services/       # Local/custom services (git-ignored)
├── custom-connectors/     # Local/custom datasource plugins (git-ignored)
├── custom-tests/          # Local/custom tests (git-ignored)
├── templates/
│   └── skeleton.service.js
├── tests/                 # Core test suite
├── scripts/               # Build / audit scripts
├── index.js               # Main entry point
├── cli.js                 # CLI tool
├── create-service.js      # Interactive service creator
├── moleculer.config.js    # Moleculer configuration
├── .env.example           # Environment variables template
└── package.json

Configuration

Environment Variables

Copy .env.example to .env and edit:

| Variable | Default | Description |
| --- | --- | --- |
| PORT | 3000 | API Gateway port |
| LOG_LEVEL | info | Logging level (info, debug, warn, error) |
| GEMINI_API_KEY | — | Google Gemini API key (required for AI agent) |
| GEMINI_MODEL | gemini-3-pro-preview | Gemini model name |
| MCP_SERVER_URL | — | MCP server URL |
| CERNION_TOKEN | — | Cernion MCP token (request here or email [email protected]) |
| NAMESPACE | — | Moleculer namespace for service isolation |
| TRANSPORTER | — | Message transporter (NATS, Redis, MQTT, …) |
| REQUEST_TIMEOUT_MS | 900000 | Broker request timeout in ms |
| RETRY_POLICY_ENABLED | false | Enable broker-level retries for retryable errors |
| CIRCUIT_BREAKER_ENABLED | false | Enable circuit breaker protection |
| BULKHEAD_ENABLED | false | Enable bulkhead concurrency protection |
| METRICS_ENABLED | false | Enable Moleculer metrics collection |
| TRACING_ENABLED | false | Enable Moleculer tracing |
| ASYNC_POLLER_DEBUG | false | Enable verbose async job poller debug logging |
| ASYNC_POLLER_LOG_MAX_CHARS | 400 | Max chars for poller debug payload snippets |
| DATASOURCE_MONGO_COLLECTION_REGISTRY | datasource_registry | Collection name for datasource definitions |
| DATASOURCE_MONGO_COLLECTION_CACHE | datasource_cache | Collection name for cached datasource rows |
| DATASOURCE_MONGO_COLLECTION_AUDIT | datasource_audit | Collection name for privacy/audit records |
| DATASOURCE_CONNECTOR_PLUGINS_DIR | src/connectors | Built-in datasource connector directory |
| DATASOURCE_CUSTOM_PLUGINS_DIR | custom-connectors | Custom datasource connector directory |
| DATASOURCE_MAX_INFER_SAMPLE_ROWS | 200 | Max sample rows used for schema inference |
| DATASOURCE_SCRAPER_TIMEOUT_MS | 30000 | Timeout for scraper connector page loads |
| DATASOURCE_DEFAULT_PRIVACY_CONTEXT | ai-agent | Default privacy mode for datasource reads |
| GRID_CONNECTION_DB_PATH | ./.grid-connections | PouchDB path for Netzanschluss validation reports (v0.14) |

For complete operational options (retry backoff, circuit-breaker thresholds, bulkhead queue limits), see .env.example.

Inhouse Data Sources (v0.9)

The v0.9 datasource layer adds a second data plane next to MCP-backed public energy tools: internal utility and grid-operator data.

Services

  • datasource-registry — CRUD for source definitions, cache policy, Data Dictionary, dictionary version history, and schema inference drafts
  • datasource-connector — plugin runtime for reading heterogeneous sources through built-in or custom connectors
  • datasource-cache — privacy-aware cached row access, status inspection, refresh, invalidation, and DSGVO audit trail
  • datasource-discovery — AI-ready inhouse source descriptors for the agent and future Logic Builder integrations

Built-in connector plugins

  • csv — delimited files from disk, including .gz
  • rest — JSON/CSV HTTP endpoints
  • geojson — feature flattening with centroid coordinates
  • xlsx — spreadsheet row extraction via SheetJS
  • docx — Word extraction scaffold (optional mammoth dependency)
  • scraper — HTML/table extraction scaffold via cheerio or puppeteer

Public REST endpoints

  • POST /api/datasources
  • GET /api/datasources
  • GET /api/datasources/:id
  • PUT /api/datasources/:id
  • DELETE /api/datasources/:id
  • GET /api/datasources/:id/dictionary
  • PUT /api/datasources/:id/dictionary
  • GET /api/datasources/:id/dictionary/history
  • GET /api/datasources/:id/dictionary/:version
  • POST /api/datasources/:id/infer
  • POST /api/datasources/:id/refresh
  • GET /api/datasource-cache/:sourceId
  • GET /api/datasource-cache/:sourceId/status
  • GET /api/datasource-cache/:sourceId/audit
  • POST /api/datasource-cache/:sourceId/refresh
  • DELETE /api/datasource-cache/:sourceId
  • GET /api/datasource-discovery
  • GET /api/datasource-discovery/search?q=...
  • GET /api/datasource-discovery/:sourceId/descriptor

Current implementation status

  • Implemented: service scaffolds, public REST exposure, OpenAPI tag grouping, in-memory cache/registry flow, connector loader, CSV/REST/GeoJSON/XLSX reads, discovery descriptors, and agent prompt integration
  • Scaffolded with optional dependencies: docx, scraper
  • Planned follow-up: persistent MongoDB backend, richer connector validation, and Logic Builder integration

Moleculer Configuration

Edit moleculer.config.js to customise logger settings, transporter, cacher, circuit breaker, metrics, and tracing.

Open Energy Ontology (OEO) Integration

Since v0.11.4 Cernion is annotated with machine-readable mappings to the Open Energy Ontology (v2.11.0).

| Layer | What it does |
| --- | --- |
| src/oeo-mappings.js | Static lookup (~150 entries): installation types, grid concepts, voltage levels, market types, ENTSO-E PSR codes, units. Includes German labels. |
| x-oeo-class in OpenAPI | Every REST endpoint carries x-oeo-class arrays linking to OEO class IRIs. |
| semanticHints.oeoClasses | Datasource discovery descriptors expose domain-level OEO annotations. |
| Classifier keyword boost | German OEO labels (e.g. "Solaranlage", "Stromnetz") enrich the heuristic scorer for German-language uploads. |
| GET /api/datapoints/oeo-context | JSON-LD @context document mapping datapoint fields to OEO IRIs. |
| scripts/sync-oeo.js | Validates mappings against upstream OEO releases. Run: npm run sync:oeo. |

Upstream dependency

The ontology is maintained by @OpenEnergyPlatform/ontology. All inline references are tagged with a // @OpenEnergyPlatform/ontology — OEO_XXXXX label so that GitHub search surfaces our dependency to upstream maintainers.

Available Scripts

| Script | Description |
| --- | --- |
| npm start | Start all services |
| npm run dev | Start with hot reload and REPL |
| npm run cli | Run CLI tool |
| npm run create | Create new service from template |
| npm run lint | Run ESLint |
| npm run lint:fix | Auto-fix ESLint issues |
| npm run format | Format code with Prettier |
| npm test | Run full test suite with coverage |
| npm run test:unit | Run unit/service tests with coverage thresholds |
| npm run test:unit:ci | CI-safe unit run (--runInBand --forceExit) |
| npm run test:integration | Run integration tests (*.integration.test.js) |
| npm run test:e2e | Run live end-to-end integration test (assets.integration.test.js) |
| npm run test:custom | Run custom tests (no coverage threshold) |
| npm run test:watch | Watch mode |
| npm run audit:openapi | Audit OpenAPI request/parameter quality |
| npm run audit:security | Run blocking dependency audit (critical severity) |
| npm run audit:security:advisory | Run advisory dependency audit (high+) |
| npm run export:openapi | Generate openapi-export.json with x-ui-page annotations |
| npm run release:check | Run core release gates (unit coverage, OpenAPI, critical security audit) |
| npm run sync:oeo | Validate/update OEO mappings from upstream release |
| npm run sync:oemetadata | Validate/update OEMetadata schema from upstream |
| npm run build | No-op passthrough for CI compatibility |

Operational Profiles

  • Local development: keep reliability toggles off (RETRY_POLICY_ENABLED=false, CIRCUIT_BREAKER_ENABLED=false, BULKHEAD_ENABLED=false).
  • Production baseline: enable at least CIRCUIT_BREAKER_ENABLED=true and BULKHEAD_ENABLED=true after validation in staging.
  • Incident debugging: temporarily enable ASYNC_POLLER_DEBUG=true with conservative ASYNC_POLLER_LOG_MAX_CHARS.
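A .env fragment matching the production baseline might look like this (a sketch; validate thresholds in staging first):

```
# Production baseline: enable after validation in staging
CIRCUIT_BREAKER_ENABLED=true
BULKHEAD_ENABLED=true
RETRY_POLICY_ENABLED=false

# Incident debugging only (revert afterwards)
# ASYNC_POLLER_DEBUG=true
# ASYNC_POLLER_LOG_MAX_CHARS=400
```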

Service Architecture

Each service follows this structure:

module.exports = {
  name: 'service-name',
  settings: { /* service-specific settings */ },
  actions: {
    myAction: {
      rest: 'GET /my-action',
      params: { param1: { type: 'string' } },
      openapi: { summary: '…', tags: ['MyService'] },
      async handler(ctx) { /* … */ }
    }
  },
  events: { /* event handlers */ },
  methods: { /* internal methods */ },
  created() {}, async started() {}, async stopped() {}
};
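A filled-in example of this structure (a hypothetical greeter service, not one shipped with the project); the action handler can be unit-tested by invoking it directly with a mock context:

```javascript
// Hypothetical minimal service following the structure above.
const greeterService = {
  name: 'greeter',
  settings: { defaultName: 'world' },
  actions: {
    hello: {
      rest: 'GET /hello',
      params: { name: { type: 'string', optional: true } },
      openapi: { summary: 'Say hello', tags: ['Greeter'] },
      async handler(ctx) {
        // Fall back to the service-level default when no name is given.
        const name = ctx.params.name || this.settings.defaultName;
        return { message: `Hello, ${name}!` };
      },
    },
  },
};

// Direct handler invocation with a mock ctx (no broker needed):
greeterService.actions.hello.handler
  .call({ settings: greeterService.settings }, { params: { name: 'grid' } })
  .then((res) => console.log(res.message)); // → Hello, grid!
```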

AI Agent

The agent service (services/agent.service.js) exposes four REST actions used by the Research Web App:

| Endpoint | Description |
| --- | --- |
| POST /api/agent/analyze | Generate a multi-step execution plan from a free-text query |
| POST /api/agent/execute | Run the plan and return results + an AI-generated summary |
| GET /api/agent/session/:id | Retrieve a saved session (shareable URL) |
| GET /api/agent/session/:id/csv?… | Re-run plan and download results as CSV |

Parameter Extraction (RULE 5)

Every concrete value from the user's message (dates, postal codes, IDs, operator names, …) is automatically surfaced as an editable form field with the extracted value pre-filled. Structural parameters (format, limit, type, …) remain hardcoded. This makes every generated query a reusable template that can be adjusted without re-analysis.

Robust Plan Execution

  • normalizePlan() — normalises varying key names from the LLM (useTool/args/label → action/params/description)
  • resolveChainedRef() — resolves __step_N.fieldPath references between steps, strips {{…}} wrappers
  • effectiveInputs — seeds from requiredInputs[].default, overlaid by user-supplied values; overrides hardcoded step params for any declared requiredInput name
  • Self-healing re-plan — if a step returns an empty result, the agent automatically retries with a re-generated plan (one attempt, guarded by repairAttempt flag)
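The chained-reference mechanism can be illustrated with a simplified re-implementation; the production resolveChainedRef in agent.service.js may differ in details:

```javascript
// Simplified sketch of __step_N.fieldPath resolution between plan
// steps: strip optional {{…}} wrappers, then walk the dotted path
// into the referenced step's result. Not the production code.
function resolveChainedRef(value, stepResults) {
  if (typeof value !== 'string') return value;
  const unwrapped = value.replace(/^\{\{\s*/, '').replace(/\s*\}\}$/, '');
  const match = unwrapped.match(/^__step_(\d+)\.(.+)$/);
  if (!match) return value; // not a chained reference, pass through
  const [, stepIndex, fieldPath] = match;
  return fieldPath
    .split('.')
    .reduce((obj, key) => (obj == null ? undefined : obj[key]), stepResults[Number(stepIndex)]);
}

// Example with an illustrative step result:
const stepResults = { 1: { operator: { mastrId: 'SNB900000000001' } } };
console.log(resolveChainedRef('{{__step_1.operator.mastrId}}', stepResults));
// → SNB900000000001
```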

API Gateway

The API Gateway (services/api.service.js) provides:

  • Base URL: http://localhost:3000/api
  • Research Web App: http://localhost:3000/app
  • API Docs: http://localhost:3000/api/docs
  • OpenAPI spec: http://localhost:3000/api/openapi.json

Release Checklist

  1. Update version in package.json and OpenAPI version in services/api.service.js
  2. Update CHANGELOG.md
  3. Run tests: npm test (must pass with coverage thresholds)
  4. Run lint: npm run lint
  5. Run OpenAPI audit: npm run audit:openapi
  6. Run dependency security audit: npm run audit:security
  7. Ensure custom-services/, custom-tests/, .sessions/, and .env are not committed
  8. Commit, tag, and push: see Release Process in copilot-instructions

Contributing

Contributions are welcome! Please read CONTRIBUTING.md before submitting a pull request.

  1. Fork the repository
  2. Create a feature branch (git checkout -b feat/my-feature)
  3. Make your changes with tests
  4. Run npm test and npm run lint
  5. Submit a pull request

Versioning

This project follows Semantic Versioning. See CHANGELOG.md for the full release history.

Security

Please report security issues privately. See SECURITY.md for the responsible disclosure policy.

Code of Conduct

Please follow our community guidelines in CODE_OF_CONDUCT.md.

License

GPL-3.0 โ€” see LICENSE for details.

Support

Acknowledgments

  • Moleculer — Microservices framework
  • Google Gemini — AI plan generation
  • MCP — Model Context Protocol
  • Cernion — German energy data backend
