
Squidler MCP Server

AI coding agents excel at the Build phase of a web app, but struggle with the Verify phase. Squidler is designed to validate your web app the way a human would, from natural-language use cases, without writing brittle, DOM-dependent tests.

Squidler is not just a testing tool; it enables your coding agent to analyze its own work from a user perspective, decoupled from the implementation. Squidler closes the autonomous Build → Verify → Fix loop, and enables you to act as the true human-in-the-loop who manages the outcome rather than the output.

Squidler MCP Server

The Squidler MCP server lets your AI coding agents, such as Cursor, Claude Code, Replit, and Lovable, work with Squidler. Your coding agent can build a feature, describe the use case for Squidler in natural language, ask Squidler to test it, retrieve the details of the test run, and review every problem or point of friction found while using your web app.

Below are details and examples for getting started with Squidler via MCP. If you have any questions, don't hesitate to reach out to [email protected].

Configuration

Add to your MCP client configuration:

{
  "mcpServers": {
    "squidler": {
      "transport": "http",
      "url": "https://mcp.squidler.io",
      "headers": {
        "Authorization": "Bearer YOUR_SQUIDLER_API_TOKEN"
      }
    }
  }
}

Get your API token from squidler.io/integrations/api-keys.
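If you prefer to generate the configuration rather than edit JSON by hand, a small script can build it from an environment variable. This is a sketch: the SQUIDLER_API_TOKEN variable name is illustrative, not part of the Squidler docs; the URL, transport, and header layout match the configuration above.

```python
import json
import os

def build_squidler_config(token: str) -> dict:
    """Build the MCP client configuration shown above."""
    return {
        "mcpServers": {
            "squidler": {
                "transport": "http",
                "url": "https://mcp.squidler.io",
                "headers": {"Authorization": f"Bearer {token}"},
            }
        }
    }

if __name__ == "__main__":
    # SQUIDLER_API_TOKEN is an illustrative variable name; get the actual
    # token from squidler.io/integrations/api-keys.
    token = os.environ.get("SQUIDLER_API_TOKEN", "YOUR_SQUIDLER_API_TOKEN")
    print(json.dumps(build_squidler_config(token), indent=2))
```

Keeping the token in an environment variable avoids committing it to version control along with the rest of your MCP client configuration.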

Built-in Agent Guidance

The MCP server includes built-in instructions that teach your AI agent a Test-After-Feature Workflow. When connected, the agent automatically knows how to use Squidler without you having to explain it. The workflow it follows:

  1. Run existing tests — verify nothing is broken by the new feature.
  2. Investigate failures — use test run outcomes and events to understand what failed. Fix the code if it's a bug, or update/delete the test if it's obsolete.
  3. Label the feature — create a label for the completed feature (e.g., "User Login", "Checkout Flow") to organize your test catalog.
  4. Cover the new feature — create test cases for the new behavior using the guided conversation (create_test_conversation), tag them with the feature label, and run them to confirm they pass.
    • For pages with no clickable link (hidden endpoints, deep links), write steps as: "Navigate directly to /path"
  5. Confirm green — re-run until all tests pass, then move on to the next feature.

Over time this builds a growing catalog of tests organized by feature that prevents regressions and documents how the system is meant to work.

Resources

Resource URI — Description
squidler://help — Server documentation and available capabilities
squidler://sites — List of all available sites

Tools

Sites

Tool — Description
sites_list — List all available sites
site_details — Get site information (owner, URL, name, frontend link)
site_create — Create a new site

Problems

Tool — Description
problems_summary — Overview of problems (total count by type)
problems_list — List problems with pagination and filtering
problem_get — Get detailed information about a specific problem
problem_resolve — Mark a problem as resolved
problem_dismiss — Mark a problem as false positive

Test Case Creation

Tool — Description
create_test_conversation — Preferred. Start a guided conversation to create a high-quality test case. Leverages site page knowledge, existing test coverage, and testing best practices.
test_conversation_respond — Continue a test case creation conversation: answer questions, review the draft, or confirm creation.
test_case_create — Create a test case directly (simple cases only; includes automatic validation feedback)

Test Cases

Tool — Description
test_cases_list — List all test cases
test_case_get — Get test case details
test_cases_list_by_label — List test cases with a specific label
validate_test_case — Validate a test case for quality, feasibility, and best practices
test_case_update — Update an existing test case (includes automatic validation feedback)
test_case_delete — Delete a test case
test_case_run — Run a single test case
test_cases_run_all — Run all test cases
test_cases_run_by_label — Run test cases with a specific label

Test Runs

Tool — Description
test_runs_list — List execution history for a test case
test_case_poll — Long-poll for test execution status updates and events
test_run_outcome — Get pass/fail outcome and goal results for a test run
test_run_events — Get detailed execution events and screenshots
test_run_stop — Stop a currently running test

Labels

Tool — Description
labels_list — List all labels
label_get — Get label details
label_create — Create a new label
label_update — Update a label
label_delete — Delete a label
test_case_labels_add — Add labels to a test case
test_case_label_remove — Remove a label from a test case

Headers

Tool — Description
headers_list — List custom headers for a site
header_add — Add or update a custom header
header_delete — Remove a custom header

Credentials

Tool — Description
credentials_list — List available login credentials
credentials_create_userpass — Create credentials with explicit email/username and password
credentials_create_email — Create credentials with an auto-generated @squidlermail.io email for verification testing
credentials_create_oidc — Create OIDC credentials for sites using OpenID Connect

Persona Reviews (UX Feedback)

Tool — Description
personas_list — List all available personas for UX reviews
persona_review_run — Run a persona review on a validated test case
persona_review_findings — Get detailed findings from a completed persona review
persona_findings_list — List persona findings for a site and persona, filtered by status
persona_finding_triage — Update finding status (RESOLVED, DISMISSED, IN_PROGRESS, OPEN)

Account Management

Tool — Description
account_status — Check current plan, remaining test quota, and available upgrades
account_upgrade_url — Generate a short-lived URL to upgrade your plan

Prompts

Prompt — Description
create-test-case — Guided test case creation with goals, steps, and failure strategies
site-health-check — Quick overview of site quality status with actionable recommendations
run-test-suite — Run all or labeled tests and monitor results
get-started-with-squidler — Complete onboarding: create site, generate initial tests, run them
suggest-initial-tests — Analyze codebase to suggest 2-4 initial test cases
enhance-tests-from-runs — Analyze test run events to identify and fix test weaknesses
identify-ux-improvements — Discover usability issues from AI tester friction patterns
correlate-accessibility-and-ux — Cross-reference accessibility problems with test execution friction
run-persona-review — Run persona reviews on validated test cases to discover UX issues
migrate-legacy-tests — Migrate legacy test cases to current format

Example Prompts

What you can say — What it does
"Create a site for https://myapp.com called 'My App'" — Register your website with Squidler to start monitoring and testing
"Analyze the codebase and create test cases for the key user flows" — Uses guided conversations to create comprehensive test cases covering happy flows, edge cases, and negative tests
"Run all my test cases" — Execute your test cases in a real browser with goal validation and issue detection
"Show me the results of my latest test runs" — See which goals passed or failed, and understand what happened during execution
"Get the detailed test events for test case #1 and analyze the UX" — Review execution logs to identify UX friction and usability issues
"Fix the issues preventing test case #1 from passing" — Analyzes test failures and fixes the underlying code issues
"Get problem #42 and help me fix it" — Address specific accessibility, functionality, or content issues
"Fix all accessibility issues in my site" — Batch-fix WCAG compliance issues by iterating through problems
