mobile-device-mcp

MCP server for AI-powered mobile device control — 49 tools for screenshots, UI inspection, touch interaction, and AI visual analysis. Supports Anthropic Claude & Google Gemini.


MCP server that gives AI coding assistants (Claude Code, Cursor, Windsurf) the ability to see and interact with mobile devices. 49 tools for screenshots, UI inspection, touch interaction, AI-powered visual analysis, Flutter widget tree inspection, video recording, and test generation.

AI assistants can read your code but can't see your phone. This fixes that.

Why This One?

| Feature | mobile-device-mcp | mobile-next/mobile-mcp | appium/appium-mcp |
| --- | --- | --- | --- |
| Total tools | 49 | 20 | ~15 |
| Setup | npx (30 sec) | npx | Requires Appium server |
| AI visual analysis | 12 tools (Claude + Gemini) | None | Vision-based finding |
| Flutter widget tree | 10 tools (Dart VM Service) | None | None |
| Smart element finding | 4-tier (<1ms local search) | Accessibility tree only | XPath/selectors |
| Companion app (23x faster UI tree) | Yes | No | No |
| Video recording | Yes | No | No |
| Test script generation | TS, Python, JSON | No | Java/TestNG only |
| iOS simulator support | Yes | Yes | Yes |
| iOS real device | Planned | Yes | Yes |
| Screenshot compression | 89% (251KB -> 28KB) | None | 50-80% |
| Multi-provider AI | Claude + Gemini | N/A | Single provider |
| Price | Free + Pro (₹499/mo) | Free | Free |

The Problem

Web developers have browser DevTools, Playwright, and Puppeteer -- AI assistants can click around, take screenshots, and verify fixes. Mobile developers? They're stuck manually screenshotting, copying logs, and describing what's on screen. They're human middleware between the AI and the device.

What This Does

Developer: "The login button doesn't work"

Without this tool:                    With this tool:
  1. Manually screenshot              1. AI calls take_screenshot -> sees the screen
  2. Paste into AI chat               2. AI calls smart_tap("login button") -> taps it
  3. AI guesses what's wrong          3. AI calls verify_screen("error message shown") -> sees result
  4. Apply fix, rebuild               4. AI calls visual_diff -> confirms fix worked
  5. Repeat 4-5 times                 5. Done.

Quick Start

Install

npx mobile-device-mcp

No global install needed. Runs directly via npx.

Prerequisites

Setup (One-time, 30 seconds)

  1. Get a Google AI key (free tier available): aistudio.google.com/apikey

  2. Add .mcp.json to your project root:

macOS / Linux:

{
  "mcpServers": {
    "mobile-device": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "mobile-device-mcp"],
      "env": {
        "GOOGLE_API_KEY": "your-google-api-key"
      }
    }
  }
}

Windows:

{
  "mcpServers": {
    "mobile-device": {
      "type": "stdio",
      "command": "cmd",
      "args": ["/c", "npx", "-y", "mobile-device-mcp"],
      "env": {
        "GOOGLE_API_KEY": "your-google-api-key"
      }
    }
  }
}

With Pro license key (after purchasing Pro):

macOS / Linux (Pro):

{
  "mcpServers": {
    "mobile-device": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "mobile-device-mcp"],
      "env": {
        "GOOGLE_API_KEY": "your-google-api-key",
        "MOBILE_MCP_LICENSE_KEY": "MDMCP-XXXXX-XXXXX-XXXXX-XXXXX"
      }
    }
  }
}

Windows (Pro):

{
  "mcpServers": {
    "mobile-device": {
      "type": "stdio",
      "command": "cmd",
      "args": ["/c", "npx", "-y", "mobile-device-mcp"],
      "env": {
        "GOOGLE_API_KEY": "your-google-api-key",
        "MOBILE_MCP_LICENSE_KEY": "MDMCP-XXXXX-XXXXX-XXXXX-XXXXX"
      }
    }
  }
}

  3. Open your AI coding assistant from that directory. That's it.

The server starts and stops automatically -- you never run it manually. Your AI assistant manages it as a background process via the MCP protocol.

Verify It Works

Claude Code: type /mcp -- you should see mobile-device: Connected

Cursor: check MCP panel in settings

Then just talk to your phone:

You: "Open my app, tap the login button, type [email protected] in the email field"
AI:  [takes screenshot -> sees the screen -> smart_tap("login button") -> smart_type("email field", "[email protected]")]

You: "Find all the bugs on this screen"
AI:  [analyze_screen -> inspects layout, checks for overflow, missing labels, broken states]

You: "Navigate to settings and verify dark mode works"
AI:  [smart_tap("settings") -> take_screenshot -> smart_tap("dark mode toggle") -> visual_diff -> reports result]

No test scripts. No manual screenshots. Just describe what you want in plain English.

Works with Any AI Coding Assistant

| Tool | Config file | Docs |
| --- | --- | --- |
| Claude Code | .mcp.json in project root | claude.ai/docs |
| Cursor | .cursor/mcp.json | cursor.com/docs |
| VS Code + Copilot | MCP settings | code.visualstudio.com |
| Windsurf | MCP settings | windsurf.com |

All use the same JSON config -- just put it in the right file for your editor.

Drop Into Any Project

Copy .mcp.json into any mobile project -- Flutter, React Native, Kotlin, Swift -- and your AI assistant gets device superpowers in that directory. No global install needed.

Free vs Pro

Free (14 tools) -- no license key needed

| Tool | What it does |
| --- | --- |
| `list_devices` | List all connected Android devices/emulators |
| `get_device_info` | Model, manufacturer, Android version, SDK level |
| `get_screen_size` | Screen resolution in pixels |
| `take_screenshot` | Capture screenshot (PNG or JPEG, configurable quality & resize) |
| `get_ui_elements` | Get the accessibility/UI element tree as structured JSON |
| `tap` | Tap at coordinates |
| `double_tap` | Double tap at coordinates |
| `long_press` | Long press at coordinates |
| `swipe` | Swipe between two points |
| `type_text` | Type text into the focused field |
| `press_key` | Press a key (home, back, enter, volume, etc.) |
| `list_apps` | List installed apps |
| `get_current_app` | Get the foreground app |
| `get_logs` | Get logcat entries with filtering |

Pro (35 additional tools) -- ₹499/mo

Get Pro License -- unlock all 49 tools. After payment, you'll receive your license key via email within 1 hour. Add it to your .mcp.json:

{
  "mcpServers": {
    "mobile-device": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "mobile-device-mcp"],
      "env": {
        "GOOGLE_API_KEY": "your-google-api-key",
        "MOBILE_MCP_LICENSE_KEY": "your-license-key"
      }
    }
  }
}

AI Visual Analysis (12 tools)

Use AI vision (Claude or Gemini) to understand what's on screen.

| Tool | What it does |
| --- | --- |
| `analyze_screen` | AI describes the screen: app name, screen type, interactive elements, visible text, suggestions |
| `find_element` | Find a UI element by description: "the login button", "email input field" |
| `smart_tap` | Find an element by description and tap it in one step |
| `smart_type` | Find an input field by description, focus it, and type text |
| `suggest_actions` | Plan actions to achieve a goal: "log into the app", "add item to cart" |
| `visual_diff` | Compare current screen with a previous screenshot -- what changed? |
| `extract_text` | Extract all visible text from the screen (AI-powered OCR) |
| `verify_screen` | Verify an assertion: "the login was successful", "error message is showing" |
| `wait_for_settle` | Wait until the screen stops changing |
| `wait_for_element` | Wait for a specific element to appear on screen |
| `handle_popup` | Detect and dismiss popups, dialogs, permission prompts |
| `fill_form` | Fill multiple form fields in one step |

Flutter Widget Tree (10 tools)

Connect to running Flutter apps via Dart VM Service Protocol. Maps every widget to its source code location (file:line).

| Tool | What it does |
| --- | --- |
| `flutter_connect` | Discover and connect to a running Flutter app on the device |
| `flutter_disconnect` | Disconnect from the Flutter app and clean up resources |
| `flutter_get_widget_tree` | Get the full widget tree (summary or detailed) |
| `flutter_get_widget_details` | Get detailed properties of a specific widget by ID |
| `flutter_find_widget` | Search the widget tree by type, text, or description |
| `flutter_get_source_map` | Map every widget to its source code location (file:line:column) |
| `flutter_screenshot_widget` | Screenshot a specific widget in isolation |
| `flutter_debug_paint` | Toggle debug paint overlay (shows widget boundaries & padding) |
| `flutter_hot_reload` | Hot reload Flutter app (preserves state) |
| `flutter_hot_restart` | Hot restart Flutter app (resets state) |
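
Under the hood, the Dart VM Service speaks JSON-RPC 2.0 over a WebSocket. As a rough illustration of the request shape a driver like this sends, here is a minimal sketch; the extension method name and `isolateId` value are illustrative assumptions, not this server's actual calls:

```typescript
// Sketch of a JSON-RPC 2.0 request such as a Flutter driver might send over
// the Dart VM Service WebSocket. Method and param values are illustrative.
let nextId = 0;

function vmServiceRequest(method: string, params: Record<string, unknown>) {
  // Each request gets a unique id so the response can be matched back to it.
  return { jsonrpc: "2.0" as const, id: ++nextId, method, params };
}

const req = vmServiceRequest("ext.flutter.inspector.getRootWidgetSummaryTree", {
  isolateId: "isolates/12345",      // assumed isolate id, obtained via getVM
  objectGroup: "mcp-inspector",     // inspector object-group name (assumption)
});
// A real client would send JSON.stringify(req) over the socket and await the
// response whose `id` matches.
```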

iOS Simulator (4 tools)

macOS only. Control iOS simulators via xcrun simctl.

| Tool | What it does |
| --- | --- |
| `ios_list_simulators` | List available iOS simulators |
| `ios_boot_simulator` | Boot a simulator by name or UDID |
| `ios_shutdown_simulator` | Shut down a running simulator |
| `ios_screenshot` | Take a screenshot of a simulator |
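
`xcrun simctl list devices --json` reports devices grouped by runtime, so listing boils down to parsing that JSON. A sketch of that parsing, using a fabricated sample payload rather than live output:

```typescript
// Sketch: find booted simulators in `xcrun simctl list devices --json` output.
// The sample payload below is fabricated for illustration.
interface SimDevice {
  udid: string;
  name: string;
  state: string;        // e.g. "Booted" or "Shutdown"
  isAvailable: boolean;
}

function bootedSimulators(json: string): SimDevice[] {
  // simctl groups devices under runtime identifiers; flatten all groups.
  const parsed = JSON.parse(json) as { devices: Record<string, SimDevice[]> };
  return Object.values(parsed.devices).flat().filter((d) => d.state === "Booted");
}

const sample = JSON.stringify({
  devices: {
    "com.apple.CoreSimulator.SimRuntime.iOS-17-0": [
      { udid: "AAAA-1111", name: "iPhone 15", state: "Booted", isAvailable: true },
      { udid: "BBBB-2222", name: "iPad Air", state: "Shutdown", isAvailable: true },
    ],
  },
});
const booted = bootedSimulators(sample);
```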

Video Recording (2 tools)

| Tool | What it does |
| --- | --- |
| `record_screen` | Start recording the device screen |
| `stop_recording` | Stop recording and save the video |

Test Generation (3 tools)

| Tool | What it does |
| --- | --- |
| `start_test_recording` | Start recording your MCP tool calls |
| `stop_test_recording` | Stop recording and generate a test script |
| `get_recorded_actions` | Get recorded actions as TypeScript, Python, or JSON |
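
Conceptually, the recorder captures each tool call as a (tool, args) pair and the generator maps that list to script lines. The sketch below is hypothetical: the real action shape and the emitted API may differ.

```typescript
// Hypothetical sketch of recorded-action -> TypeScript generation.
// `RecordedAction` and the emitted `mcp.call(...)` API are assumptions.
interface RecordedAction {
  tool: string;
  args: Record<string, unknown>;
}

function toTypeScript(actions: RecordedAction[]): string {
  // One awaited call per recorded action, in recording order.
  return actions
    .map((a) => `await mcp.call(${JSON.stringify(a.tool)}, ${JSON.stringify(a.args)});`)
    .join("\n");
}

const script = toTypeScript([
  { tool: "smart_tap", args: { description: "login button" } },
  { tool: "type_text", args: { text: "hello" } },
]);
```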

App Management (4 tools)

| Tool | What it does |
| --- | --- |
| `launch_app` | Launch an app by package name |
| `stop_app` | Force stop an app |
| `install_app` | Install an APK |
| `uninstall_app` | Uninstall an app |

Performance

The server is optimized to minimize latency and AI token costs:

  • 4-tier element search: companion app (instant) -> local text match (<1ms) -> cached AI -> fresh AI. smart_tap is 35x faster than naive AI calls (205ms vs 7.6s).
  • Companion app: AccessibilityService-based Android app provides UI tree in 105ms (23x faster than UIAutomator's 2448ms). Auto-installs on first use.
  • Screenshot compression: AI tools auto-compress to JPEG q=60, 400w -- 89% smaller (251KB -> 28KB) with zero AI quality loss.
  • Parallel capture: Screenshot + UI tree fetched simultaneously via Promise.all().
  • TTL caching: 5-second cache avoids redundant ADB calls for rapid-fire tool usage.
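
The TTL cache in the last bullet can be pictured as a small map keyed by device/query, with entries that expire after a fixed window. This is an illustrative sketch, not the server's actual implementation; the class name and 5-second default are assumptions taken from the description above.

```typescript
// Illustrative TTL cache sketch matching the behavior described above.
// Not the server's real code; names and defaults are assumptions.
class TtlCache<V> {
  private store = new Map<string, { value: V; expiresAt: number }>();
  constructor(private ttlMs = 5_000) {}

  get(key: string, now = Date.now()): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (now >= entry.expiresAt) {
      this.store.delete(key); // evict the stale entry
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: V, now = Date.now()): void {
    this.store.set(key, { value, expiresAt: now + this.ttlMs });
  }
}

// Example: skip a redundant ADB round-trip inside the 5s window.
const cache = new TtlCache<string>(5_000);
cache.set("emulator-5554:ui-tree", "<hierarchy/>", 0);
const hit = cache.get("emulator-5554:ui-tree", 4_999);  // still fresh
const miss = cache.get("emulator-5554:ui-tree", 5_000); // expired
```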

Environment Variables

| Variable | Description | Default |
| --- | --- | --- |
| `GOOGLE_API_KEY` or `GEMINI_API_KEY` | Google API key for Gemini vision (recommended) | -- |
| `ANTHROPIC_API_KEY` | Anthropic API key for Claude vision | -- |
| `MOBILE_MCP_LICENSE_KEY` | License key to unlock Pro tools | -- |
| `MCP_AI_PROVIDER` | Force AI provider: "anthropic" or "google" | Auto-detected |
| `MCP_AI_MODEL` | Override AI model | gemini-2.5-flash / claude-sonnet-4-20250514 |
| `MCP_ADB_PATH` | Custom ADB binary path | Auto-discovered |
| `MCP_DEFAULT_DEVICE` | Default device serial | Auto-discovered |
| `MCP_SCREENSHOT_FORMAT` | "png" or "jpeg" | jpeg |
| `MCP_SCREENSHOT_QUALITY` | JPEG quality (1-100) | 80 |
| `MCP_SCREENSHOT_MAX_WIDTH` | Resize screenshots to this max width | 720 |
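
A sketch of how the screenshot-related variables above resolve to their documented defaults; the `ScreenshotConfig` shape and helper name are hypothetical, not the server's real internals:

```typescript
// Hypothetical sketch of env-var resolution with the documented defaults.
type Env = Record<string, string | undefined>;

interface ScreenshotConfig {
  format: "png" | "jpeg"; // MCP_SCREENSHOT_FORMAT, default "jpeg"
  quality: number;        // MCP_SCREENSHOT_QUALITY, default 80
  maxWidth: number;       // MCP_SCREENSHOT_MAX_WIDTH, default 720
}

function loadScreenshotConfig(env: Env): ScreenshotConfig {
  return {
    format: env.MCP_SCREENSHOT_FORMAT === "png" ? "png" : "jpeg",
    quality: Number(env.MCP_SCREENSHOT_QUALITY ?? 80),
    maxWidth: Number(env.MCP_SCREENSHOT_MAX_WIDTH ?? 720),
  };
}

const cfg = loadScreenshotConfig({}); // no overrides -> documented defaults
```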

Architecture

src/
|-- index.ts              # CLI entry point (auto-discovery, env config)
|-- server.ts             # MCP server factory
|-- license.ts            # License validation and tier gating
|-- types.ts              # Shared interfaces
|-- drivers/android/      # ADB driver (DeviceDriver implementation)
|   |-- adb.ts            # Low-level ADB command wrapper
|   |-- companion-client.ts # TCP client for companion app
|   +-- index.ts          # AndroidDriver class (4-strategy UI element retrieval)
|-- drivers/flutter/      # Dart VM Service driver
|   |-- index.ts          # FlutterDriver (discovery, inspection, source mapping, hot reload)
|   +-- vm-service.ts     # JSON-RPC 2.0 WebSocket client (DDS redirect handling)
|-- drivers/ios/          # iOS Simulator driver (macOS only)
|   |-- index.ts          # IOSSimulatorDriver via xcrun simctl
|   +-- simctl.ts         # Low-level simctl command wrapper
|-- tools/                # MCP tool registrations (free + pro gating)
|   |-- device-tools.ts   # Device management
|   |-- screen-tools.ts   # Screenshots & UI inspection
|   |-- interaction-tools.ts # Touch, type, keys
|   |-- app-tools.ts      # App management
|   |-- log-tools.ts      # Logcat
|   |-- ai-tools.ts       # AI-powered tools
|   |-- flutter-tools.ts  # Flutter widget inspection
|   |-- ios-tools.ts      # iOS simulator tools
|   |-- video-tools.ts    # Screen recording
|   +-- recording-tools.ts # Test generation
|-- recording/            # Test script generation
|   |-- recorder.ts       # ActionRecorder (records MCP tool calls)
|   +-- generator.ts      # TestGenerator (TypeScript/Python/JSON output)
|-- ai/                   # AI visual analysis engine
|   |-- client.ts         # Multi-provider client (Anthropic + Google)
|   |-- prompts.ts        # System prompts & UI element summarizer
|   |-- analyzer.ts       # ScreenAnalyzer orchestrator (caching, parallel capture)
|   +-- element-search.ts # Local element search (text/alias matching, no AI needed)
+-- utils/
    |-- discovery.ts      # ADB auto-discovery
    +-- image.ts          # PNG parsing, JPEG compression, bilinear resize

companion-app/            # Android companion app (Kotlin)
                          # AccessibilityService + TCP JSON-RPC for fast UI tree
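
Since the companion app exchanges JSON-RPC over a plain TCP socket, messages need some framing so the reader can split a byte stream back into messages. Newline-delimited JSON is one common choice; the sketch below uses it as an illustrative assumption, not the app's confirmed wire format:

```typescript
// Illustrative newline-delimited JSON-RPC framing for a TCP channel.
// The framing choice and method name are assumptions for illustration.
function encodeFrame(msg: object): string {
  return JSON.stringify(msg) + "\n"; // one message per line
}

function decodeFrames(buffer: string): { messages: object[]; rest: string } {
  const parts = buffer.split("\n");
  const rest = parts.pop() ?? ""; // trailing partial frame, kept for next read
  const messages = parts.filter((p) => p.length > 0).map((p) => JSON.parse(p));
  return { messages, rest };
}

// A complete frame followed by the start of a second, still-incomplete one.
const wire = encodeFrame({ id: 1, method: "getUiTree" }) + '{"id":2,"met';
const { messages, rest } = decodeFrames(wire);
```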

Roadmap

  • iOS physical device support
  • Multi-device orchestration
  • CI/CD integration
  • Cloud device farm support

Tested On

  • Devices: Pixel 8 (Android 16), Samsung Galaxy series, Android emulators
  • Apps: Telegram, Instagram, Spotify, WhatsApp, YouTube, Chrome, Settings, and Flutter apps
  • AI Providers: Google Gemini 2.5 Flash, Anthropic Claude
  • Platforms: Windows 11, macOS (iOS simulators)
  • Connection: USB and wireless ADB

License

Business Source License 1.1

  • Free for individuals and non-commercial use
  • Commercial use requires a paid license
  • Converts to Apache 2.0 on March 23, 2030

See LICENSE for full terms.
