# MnemoPay
Trust and reputation layer for AI agents that handle money. Agent Credit Score (300-850), hash-chained ledger, behavioral finance, real payment rails (Stripe, Paystack, Lightning), autonomous shopping with escrow.
## MnemoPay Mobile SDK
On-device persistent memory (encrypted SQLite + sqlite-vec), agent-to-agent payments, and spatial proofs. TypeScript / Node 20+.
## Development

```bash
npm ci
npm run lint    # tsc --noEmit
npm test        # unit tests (excludes tests/benchmarks/)
npm run build   # emits dist/
```
## Crypto keys and migration
`MnemoPay.create()` wires `NodeCrypto` with:

- `encryptionKey` — AES-GCM; defaults to `SHA256("mnemopay:" + agentId)` when omitted.
- `hmacKey` — memory integrity HMAC; defaults to `SHA256("mnemopay:mac:" + agentId)`.
- `signingKey` — Ed25519 seed; defaults to `SHA256("mnemopay:sign:" + agentId)`.
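The default derivation above can be sketched with Node's built-in `crypto` module. This is illustrative only — `defaultKeys` is a hypothetical helper, not part of the SDK API:

```typescript
import { createHash } from "node:crypto";

// Sketch of the documented defaults: each key is the SHA-256 digest of a
// fixed prefix plus the agentId, so keys are deterministic per agent.
// (Helper name is hypothetical, not exported by the SDK.)
function defaultKeys(agentId: string) {
  const sha256 = (s: string) => createHash("sha256").update(s).digest();
  return {
    encryptionKey: sha256(`mnemopay:${agentId}`),      // AES-GCM key
    hmacKey: sha256(`mnemopay:mac:${agentId}`),        // integrity HMAC key
    signingKey: sha256(`mnemopay:sign:${agentId}`),    // Ed25519 seed
  };
}

const keys = defaultKeys("agent-1");
console.log(keys.encryptionKey.length); // SHA-256 digests are 32 bytes
```

Because the derivation is a pure function of `agentId`, two processes (or two runs on the same device) arrive at identical key material.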
Older builds only fixed the encryption key and drew random HMAC/signing material per process. That broke cross-device sync and manifest signatures. If you open an existing database after upgrading:

- Same device, same code: keys are now deterministic per `agentId`, so behavior is stable.
- Existing rows written under random HMAC keys may fail integrity verification on recall unless you still have the old keys.

For production, set `encryptionKey`, `hmacKey`, and `signingKey` explicitly and store them in the platform keystore.
See `MnemoPayConfig` in `src/types/index.ts` for optional overrides.
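A minimal sketch of the explicit-keys setup, assuming the `MnemoPayConfig` fields named above accept raw key material; `keystore.getKey` stands in for whatever keychain API your platform provides (hypothetical, not part of the SDK):

```ts
// Hypothetical keystore accessor — substitute your platform's keychain API.
const encryptionKey = await keystore.getKey("mnemopay/enc");
const hmacKey = await keystore.getKey("mnemopay/hmac");
const signingKey = await keystore.getKey("mnemopay/sign");

const pay = await MnemoPay.create({
  agentId: "agent-1",
  encryptionKey, // AES-GCM
  hmacKey,       // memory integrity HMAC
  signingKey,    // Ed25519 seed
});
```

Supplying all three keys avoids any dependence on the derived defaults, so databases survive upgrades and sync across devices.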
## Memory embeddings
`MemoryStore` / `EncryptedSync` use one async embedder, configured on `MnemoPayConfig`:
| Option | Behavior |
|---|---|
| (default) | Hash — `embedHash()` (SHA-256 expanded + L2 normalize). Fast, deterministic, not semantic. |
| `embeddings: 'semantic'` | Xenova `Xenova/all-MiniLM-L6-v2` via ONNX Runtime (384-d, mean pooling, normalized). Requires optional peer `@xenova/transformers`. Also set `embeddingDimensions: 384` (default). |
| `embed: (text, dim) => …` | Custom — sync or async; overrides `embeddings`. Vector length must match `dim` / `memory_vectors` (384). |
Install the semantic backend when you need it:

```bash
npm install @xenova/transformers
```

```ts
MnemoPay.create({
  agentId: 'agent-1',
  embeddings: 'semantic',
  embeddingDimensions: 384,
});
```
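The `embed` override can be any sync or async function returning a vector of length `dim`. A self-contained sketch — a toy hash embedder with L2 normalization, illustrative only and distinct from the SDK's built-in `embedHash()`:

```typescript
import { createHash } from "node:crypto";

// Toy deterministic embedder: expand SHA-256 digests of the text into
// `dim` floats, then L2-normalize. Illustrative only — not the SDK's
// embedHash(), and not semantic.
function toyEmbed(text: string, dim: number): number[] {
  const vec: number[] = [];
  let counter = 0;
  while (vec.length < dim) {
    const digest = createHash("sha256").update(`${counter++}:${text}`).digest();
    for (const byte of digest) {
      if (vec.length === dim) break;
      vec.push(byte / 255 - 0.5); // center each byte in [-0.5, 0.5]
    }
  }
  const norm = Math.sqrt(vec.reduce((s, x) => s + x * x, 0)) || 1;
  return vec.map((x) => x / norm);
}

// Wired in via MnemoPayConfig, e.g.:
// MnemoPay.create({ agentId: 'agent-1', embed: toyEmbed, embeddingDimensions: 384 });
```

Whatever function you supply, its output length must match the `memory_vectors` dimension (384 by default) or recalls will fail.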
## LongMem eval (memory scale + recall)
```bash
npm run eval:longmem            # default hash embeddings
npm run eval:longmem:semantic   # same benchmark with Xenova MiniLM (peer dep installed)
```
| Variable | Default | Purpose |
|---|---|---|
| `LONGMEM_N` | 200 | How many memories to retain |
| `LONGMEM_SAMPLES` | scales with N | How many query points (spread across indices) |
| `LONGMEM_RECALL_LIMIT` | scales with N | `recall({ limit })`; sqlite-vec uses k ≈ limit × 3 internally |
| `LONGMEM_EMBEDDINGS` | (unset) | Set to `semantic` to match `eval:longmem:semantic` |
Examples:

```bash
LONGMEM_N=1000 npm run eval:longmem
LONGMEM_N=5000 LONGMEM_SAMPLES=64 LONGMEM_RECALL_LIMIT=60 npm run eval:longmem
```
The benchmark resets the in-process memory write rate limiter every 200 retains so `LONGMEM_N=5000` can finish in one run. Production apps still enforce normal limits.
The eval prints two JSON blocks:

- **exact query** — recall text identical to the stored line. With hash embeds this stays near 100% hit@3 at large N unless `k` is too small; with semantic embeds it should also stay very high for identical strings.
- **paraphrase query** — a natural-language question referencing the fact index without copying the stored string. Hash embeds yield near-zero hit@5/hit@15; semantic embeds should improve this materially (run `npm run eval:longmem:semantic` to measure).
Observed locally (hash, default `LONGMEM_RECALL_LIMIT`): exact hit@3 = 1.0 for `LONGMEM_N` through 5000; paraphrase hit@5 ≈ 0 (occasional hit@15). Raise `LONGMEM_RECALL_LIMIT` if exact recall starts missing at very large N.
The first semantic run downloads model weights into the Hugging Face cache (this can take a minute on CI; the default CI runs the hash-only eval).

This repo's Jest config uses `jest-environment-node-single-context` so `onnxruntime-node`'s `instanceof Float32Array` checks succeed under Jest (the default VM-isolated environment breaks typed-array identity).
## CI

GitHub Actions runs `npm test` and `npm run eval:longmem` (with a small `LONGMEM_N`) on push and pull requests. See `.github/workflows/ci.yml`.
## License

MIT — see `package.json`.