Tuteliq

AI-powered safety tools for detecting grooming, bullying, threats, and harmful interactions in conversations. The server integrates Tuteliq’s behavioral risk detection API via the Model Context Protocol (MCP), enabling AI assistants to analyze interaction patterns rather than relying on keyword moderation. Use cases include platform safety, chat moderation, child protection, and compliance with regulations such as the EU Digital Services Act (DSA), COPPA, and KOSA.


What is this?

Tuteliq MCP Server brings AI-powered child safety tools directly into Claude, Cursor, and other MCP-compatible AI assistants. Ask Claude to check messages for bullying, detect grooming patterns, or generate safety action plans.

Available Tools (41 MCP + 2 API-only)

Safety Detection

| Tool | Description |
| --- | --- |
| `detect_bullying` | Analyze text for bullying, harassment, or harmful language |
| `detect_grooming` | Detect grooming patterns and predatory behavior in conversations |
| `detect_unsafe` | Identify unsafe content (self-harm, violence, explicit material) |
| `analyze` | Quick comprehensive safety check (bullying + unsafe) |
| `analyse_multi` | Run multiple detection endpoints on a single piece of text in one call |
| `analyze_emotions` | Analyze emotional content and mental state indicators |
| `get_action_plan` | Generate age-appropriate guidance for safety situations |
| `generate_report` | Create incident reports from conversations |

Fraud & Harm Detection

| Tool | Description |
| --- | --- |
| `detect_social_engineering` | Detect social engineering tactics (pretexting, urgency fabrication, authority impersonation) |
| `detect_app_fraud` | Detect app-based fraud (fake investment platforms, phishing apps, subscription traps) |
| `detect_romance_scam` | Detect romance scam patterns (love-bombing, financial requests, identity deception) |
| `detect_mule_recruitment` | Detect money mule recruitment tactics (easy-money offers, bank account sharing) |
| `detect_gambling_harm` | Detect gambling-related harm indicators (chasing losses, concealment, distress) |
| `detect_coercive_control` | Detect coercive control patterns (isolation, financial control, monitoring, threats) |
| `detect_vulnerability_exploitation` | Detect exploitation of vulnerable individuals (elderly, disabled, financially distressed) |
| `detect_radicalisation` | Detect radicalisation indicators (extremist rhetoric, us-vs-them framing, ideological grooming) |

Voice, Image & Video Analysis

| Tool | Description |
| --- | --- |
| `analyze_voice` | Transcribe audio and run safety analysis on the transcript |
| `analyze_image` | Analyze images for visual safety + OCR text extraction |
| `analyze_video` | Analyze video files for safety concerns via key frame extraction (supports mp4, mov, avi, webm, mkv) |

Webhook Management

| Tool | Description |
| --- | --- |
| `list_webhooks` | List all configured webhooks |
| `create_webhook` | Create a new webhook endpoint |
| `update_webhook` | Update webhook configuration |
| `delete_webhook` | Delete a webhook |
| `test_webhook` | Send a test payload to verify webhook delivery |
| `regenerate_webhook_secret` | Regenerate the webhook signing secret |
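The `regenerate_webhook_secret` tool implies that webhook payloads are signed. A common pattern is an HMAC-SHA256 hex digest of the raw request body delivered in a signature header; the header name and scheme below are assumptions, so check the Tuteliq webhook documentation for the exact format. A minimal verification sketch in Node:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Verify a webhook payload against the signing secret.
// HMAC-SHA256 over the raw body is an assumed scheme, not the documented
// contract -- confirm the signature format in the Tuteliq webhook docs.
function verifyWebhookSignature(
  rawBody: string,
  signatureHex: string,
  secret: string,
): boolean {
  const expected = createHmac("sha256", secret).update(rawBody).digest("hex");
  const a = Buffer.from(expected, "hex");
  const b = Buffer.from(signatureHex, "hex");
  // timingSafeEqual throws if lengths differ, so guard first.
  return a.length === b.length && timingSafeEqual(a, b);
}
```

Always compare signatures with a constant-time function; a plain `===` comparison leaks timing information.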

Pricing

| Tool | Description |
| --- | --- |
| `get_pricing` | Get available pricing plans |
| `get_pricing_details` | Get detailed pricing with features and limits |

Usage & Billing

| Tool | Description |
| --- | --- |
| `get_usage_history` | Get daily usage history |
| `get_usage_by_tool` | Get usage by tool/endpoint |
| `get_usage_monthly` | Get monthly usage with billing info |

GDPR Account

| Tool | Description |
| --- | --- |
| `delete_account_data` | Delete all account data (Right to Erasure) |
| `export_account_data` | Export all account data as JSON (Data Portability) |
| `record_consent` | Record user consent for data processing |
| `get_consent_status` | Get current consent status |
| `withdraw_consent` | Withdraw a previously granted consent |
| `rectify_data` | Correct user data (Right to Rectification) |
| `get_audit_logs` | Get audit trail of all data operations |

Breach Management

| Tool | Description |
| --- | --- |
| `log_breach` | Log a new data breach (starts 72-hour notification clock) |
| `list_breaches` | List all data breaches, optionally filtered by status |
| `get_breach` | Get details of a specific data breach |
| `update_breach_status` | Update breach status and notification progress |

Verification (API & SDK only)

These tools are available via the REST API and the @tuteliq/sdk Node SDK — not yet exposed as MCP tools.

| Tool | Description |
| --- | --- |
| `verify_age` | Verify a user's age via document analysis, biometric estimation, or both. Methods: `document`, `biometric`, `combined`. Returns verified age range, confidence score, and minor status. Beta; requires Pro tier. 5 credits per call. |
| `verify_identity` | Confirm user identity with document authentication, face matching, and liveness detection. Returns match score, liveness result, and document authentication status. Beta; requires Business tier. 10 credits per call. |
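Since these endpoints are API/SDK-only, client code has to model the request itself. The shapes below are a sketch inferred from the `verify_age` description above; the field names (`documentImage`, `selfieImage`, `ageRange`, and so on) are assumptions for illustration, not the documented contract.

```typescript
// Hypothetical request/response shapes for verify_age, inferred from the
// description above. Only the three method names come from the docs.
type AgeVerificationMethod = "document" | "biometric" | "combined";

interface VerifyAgeRequest {
  method: AgeVerificationMethod;
  documentImage?: string; // base64, needed for "document" / "combined"
  selfieImage?: string;   // base64, needed for "biometric" / "combined"
}

interface VerifyAgeResponse {
  ageRange: string;   // e.g. an estimated range such as "18-25"
  confidence: number; // 0..1
  isMinor: boolean;
}

// Validate that the inputs match the chosen method before sending.
function buildVerifyAgeRequest(
  method: AgeVerificationMethod,
  images: { documentImage?: string; selfieImage?: string },
): VerifyAgeRequest {
  if ((method === "document" || method === "combined") && !images.documentImage) {
    throw new Error("document image required for this method");
  }
  if ((method === "biometric" || method === "combined") && !images.selfieImage) {
    throw new Error("selfie image required for this method");
  }
  return { method, ...images };
}
```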

Common Parameters

Context Fields

All detection tools accept an optional context object. These fields influence severity scoring and classification:

| Field | Type | Description |
| --- | --- | --- |
| `language` | string | ISO 639-1 code (e.g., `"en"`, `"sv"`). Auto-detected if omitted. |
| `ageGroup` | string | Age group (e.g., `"10-12"`, `"13-15"`, `"under 18"`). Triggers age-calibrated scoring. |
| `platform` | string | Platform name (e.g., `"Discord"`, `"Roblox"`). Adjusts detection for platform norms. |
| `relationship` | string | Relationship context (e.g., `"classmates"`, `"stranger"`). |
| `sender_trust` | string | Sender verification status: `"verified"`, `"trusted"`, or `"unknown"`. |
| `sender_name` | string | Name of the sender (used with `sender_trust`). |
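Putting the fields together, a detection request with context might look like the sketch below. The top-level `{ text, context }` shape is an assumption based on the field table; only the context field names themselves come from the docs.

```typescript
// Context fields as documented in the table above. All are optional.
interface DetectionContext {
  language?: string;
  ageGroup?: string;
  platform?: string;
  relationship?: string;
  sender_trust?: "verified" | "trusted" | "unknown";
  sender_name?: string;
}

// Assumed request body shape: the text to analyze plus optional context
// that influences severity scoring and classification.
const request: { text: string; context: DetectionContext } = {
  text: "Nobody likes you, just go away",
  context: {
    language: "en",
    ageGroup: "13-15",
    platform: "Discord",
    relationship: "classmates",
    sender_trust: "unknown",
  },
};
```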

sender_trust Behavior

When sender_trust is set to "verified" or "trusted":

  • AUTH_IMPERSONATION is fully suppressed — a verified sender cannot be impersonating an authority
  • URGENCY_FABRICATION is suppressed for routine time-sensitive information (schedules, deadlines, appointments)
  • Content is only flagged if it contains genuinely malicious elements (credential theft, phishing links, financial demands)
  • This prevents false positives on legitimate institutional messages (school notifications, hospital reminders, government advisories)

support_threshold

Controls when crisis support resources (helplines, text lines, web resources) are included in the response:

| Value | Behavior |
| --- | --- |
| `low` | Include support for Low severity and above |
| `medium` | Include support for Medium severity and above |
| `high` | Include support for High severity and above (default) |
| `critical` | Include support only for Critical severity |

Note: Critical severity always includes support resources regardless of the threshold setting.

analyse_multi Endpoint Values

The analyse_multi tool accepts up to 10 endpoints per call. Valid endpoint values:

| Endpoint ID | Description |
| --- | --- |
| `bullying` | Bullying and harassment detection |
| `grooming` | Grooming pattern detection |
| `unsafe` | Unsafe content detection (self-harm, violence, explicit material) |
| `social-engineering` | Social engineering and pretexting |
| `app-fraud` | App-based fraud patterns |
| `romance-scam` | Romance scam patterns |
| `mule-recruitment` | Money mule recruitment |
| `gambling-harm` | Gambling-related harm |
| `coercive-control` | Coercive control patterns |
| `vulnerability-exploitation` | Exploitation of vulnerable individuals |
| `radicalisation` | Radicalisation indicators |
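A small client-side guard can enforce the documented limits before calling `analyse_multi`. The endpoint IDs and the 10-endpoint cap come from the table above; the `{ text, endpoints }` body shape is an assumption.

```typescript
// Endpoint IDs from the table above.
const VALID_ENDPOINTS = new Set([
  "bullying", "grooming", "unsafe", "social-engineering", "app-fraud",
  "romance-scam", "mule-recruitment", "gambling-harm", "coercive-control",
  "vulnerability-exploitation", "radicalisation",
]);

// Build an analyse_multi request body, rejecting invalid input locally
// instead of burning an API call on a request that will fail.
function buildAnalyseMultiRequest(text: string, endpoints: string[]) {
  if (endpoints.length === 0 || endpoints.length > 10) {
    throw new Error("analyse_multi accepts 1-10 endpoints per call");
  }
  for (const e of endpoints) {
    if (!VALID_ENDPOINTS.has(e)) throw new Error(`unknown endpoint: ${e}`);
  }
  return { text, endpoints };
}
```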

Installation

Claude Desktop (Recommended)

  1. Open Claude Desktop and go to Settings > Connectors
  2. Click Add custom connector
  3. Set the name to Tuteliq and the URL to:
    https://api.tuteliq.ai/mcp
    
  4. When prompted, enter your Tuteliq API key

That's it — Tuteliq tools will be available in your next conversation.

Cursor

Add to your Cursor MCP settings:

```json
{
  "mcpServers": {
    "tuteliq": {
      "url": "https://api.tuteliq.ai/mcp",
      "headers": {
        "Authorization": "Bearer your-api-key"
      }
    }
  }
}
```

Other MCP clients (npx)

For clients that support stdio transport:

```json
{
  "mcpServers": {
    "tuteliq": {
      "command": "npx",
      "args": ["-y", "@tuteliq/mcp"],
      "env": {
        "TUTELIQ_API_KEY": "your-api-key"
      }
    }
  }
}
```

Usage Examples

Once configured, you can ask Claude:

Bullying Detection

"Check if this message is bullying: 'Nobody likes you, just go away'"

Response:

```markdown
## ⚠️ Bullying Detected

**Severity:** 🟠 Medium
**Confidence:** 92%
**Risk Score:** 75%

**Types:** exclusion, verbal_abuse

### Rationale
The message contains direct exclusionary language...

### Recommended Action
`flag_for_moderator`
```

Grooming Detection

"Analyze this conversation for grooming patterns..."

Quick Safety Check

"Is this message safe? 'I don't want to be here anymore'"

Emotion Analysis

"Analyze the emotions in: 'I'm so stressed about school and nobody understands'"

Action Plan

"Give me an action plan for a 12-year-old being cyberbullied"

Incident Report

"Generate an incident report from these messages..."

Voice Analysis

"Analyze this audio file for safety: /path/to/recording.mp3"

Image Analysis

"Check this screenshot for harmful content: /path/to/screenshot.png"

Webhook Management

"List my webhooks"

"Create a webhook for critical incidents at https://example.com/webhook"

Usage

"Show my monthly usage"

Fraud Detection

"Check this message for social engineering: 'Your account will be suspended unless you verify now'"

"Is this a romance scam? 'I know we just met online but I need help with a medical bill'"


Get Started (Free)

  1. Create a free Tuteliq account
  2. Go to your Dashboard and generate an API Key
  3. For Claude Desktop and other MCP plugins, generate a Secure Token under Settings > Plugins
  4. Use the API key for direct API/SDK access, or the Secure Token when connecting via MCP

Requirements

  • Node.js 18+
  • Tuteliq API key

Supported Languages (27)

Language is auto-detected when not specified. Beta languages offer good accuracy but may miss edge cases that English handles.

| Language | Code | Status |
| --- | --- | --- |
| English | `en` | Stable |
| Spanish | `es` | Beta |
| Portuguese | `pt` | Beta |
| French | `fr` | Beta |
| German | `de` | Beta |
| Italian | `it` | Beta |
| Dutch | `nl` | Beta |
| Polish | `pl` | Beta |
| Romanian | `ro` | Beta |
| Turkish | `tr` | Beta |
| Greek | `el` | Beta |
| Czech | `cs` | Beta |
| Hungarian | `hu` | Beta |
| Bulgarian | `bg` | Beta |
| Croatian | `hr` | Beta |
| Slovak | `sk` | Beta |
| Slovenian | `sl` | Beta |
| Lithuanian | `lt` | Beta |
| Latvian | `lv` | Beta |
| Estonian | `et` | Beta |
| Maltese | `mt` | Beta |
| Irish | `ga` | Beta |
| Swedish | `sv` | Beta |
| Norwegian | `no` | Beta |
| Danish | `da` | Beta |
| Finnish | `fi` | Beta |
| Ukrainian | `uk` | Beta |

Best Practices

Message Batching

The bullying and unsafe content tools analyze a single text field per request. If you're analyzing a conversation, concatenate a sliding window of recent messages into one string rather than sending each message individually. Single words or short fragments lack context for accurate detection and can be exploited to bypass safety filters.

The grooming tool already accepts a messages[] array and analyzes the full conversation in context.
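For the single-text tools, the sliding window described above can be as simple as joining the last N messages into one string. The window size and the newline separator are illustrative choices, not API requirements:

```typescript
// Concatenate a sliding window of recent messages so single-text
// detection tools (bullying, unsafe) see conversational context instead
// of isolated fragments.
function buildAnalysisWindow(messages: string[], windowSize = 10): string {
  return messages.slice(-windowSize).join("\n");
}
```

Send the result as the `text` field of a detection request each time a new message arrives; the window slides forward with the conversation.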

PII Redaction

Enable PII_REDACTION_ENABLED=true on your Tuteliq API to automatically strip emails, phone numbers, URLs, social handles, IPs, and other PII from detection summaries and webhook payloads. The original text is still analyzed in full — only stored outputs are scrubbed.


Supported Languages

Tuteliq supports 27 languages with automatic detection — no configuration required.

English (stable) and 26 beta languages: Spanish, Portuguese, Ukrainian, Swedish, Norwegian, Danish, Finnish, German, French, Dutch, Polish, Italian, Turkish, Romanian, Greek, Czech, Hungarian, Bulgarian, Croatian, Slovak, Lithuanian, Latvian, Estonian, Slovenian, Maltese, and Irish.

All 24 EU official languages + Ukrainian, Norwegian, and Turkish. Each language includes culture-specific safety guidelines covering local slang, grooming patterns, self-harm coded vocabulary, and filter evasion techniques.

See the Language Support docs for details.


Support


License

MIT License - see LICENSE for details.


Get Certified — Free

Tuteliq offers a free certification program for anyone who wants to deepen their understanding of online child safety. Complete a track, pass the quiz, and earn your official Tuteliq certificate — verified and shareable.

Three tracks available:

| Track | Who it's for | Duration |
| --- | --- | --- |
| Parents & Caregivers | Parents, guardians, grandparents, teachers, coaches | ~90 min |
| Young People (10–16) | Young people who want to learn to spot manipulation | ~60 min |
| Companies & Platforms | Product managers, trust & safety teams, CTOs, compliance officers | ~120 min |

Start here → tuteliq.ai/certify

  • 100% Free — no login required
  • Verifiable certificate on completion
  • Covers grooming recognition, sextortion, cyberbullying, regulatory obligations (KOSA, EU DSA), and more

The Mission: Why This Matters

Before you decide to contribute or sponsor, read these numbers. They are not projections. They are not estimates from a pitch deck. They are verified statistics from the University of Edinburgh, UNICEF, NCMEC, and Interpol.

  • 302 million children are victims of online sexual exploitation and abuse every year. That is 10 children every second. (Childlight / University of Edinburgh, 2024)
  • 1 in 8 children globally have been victims of non-consensual sexual imagery in the past year. (Childlight, 2024)
  • 370 million girls and women alive today experienced rape or sexual assault in childhood. An estimated 240–310 million boys and men experienced the same. (UNICEF, 2024)
  • 29.2 million incidents of suspected child sexual exploitation were reported to NCMEC's CyberTipline in 2024 alone — containing 62.9 million files (images, videos). (NCMEC, 2025)
  • 546,000 reports of online enticement (adults grooming children) in 2024 — a 192% increase from the year before. (NCMEC, 2025)
  • 1,325% increase in AI-generated child sexual abuse material reports between 2023 and 2024. The technology that should protect children is being weaponized against them. (NCMEC, 2025)
  • 100 sextortion reports per day to NCMEC. Since 2021, at least 36 teenage boys have taken their own lives because they were victimized by sextortion. (NCMEC, 2025)
  • 84% of reports resolve outside the United States. This is not an American problem. This is a global emergency. (NCMEC, 2025)

End-to-end encryption is making platforms blind. In 2024, platforms reported 7 million fewer incidents than the year before — not because abuse stopped, but because they can no longer see it. The tools that catch known images are failing. The systems that rely on human moderators are overwhelmed. The technology to detect behavior — grooming patterns, escalation, manipulation — in real-time text conversations exists right now. It is running at api.tuteliq.ai.

The question is not whether this technology is possible. The question is whether we build the company to put it everywhere it needs to be.

Every second we wait, another child is harmed.

We have the technology. We need the support.

If this mission matters to you, consider sponsoring our open-source work so we can keep building the tools that protect children — and keep them free and accessible for everyone.


Related Servers