PageLens AI — MCP Server
AI agents can run senior-level website audits in seconds.
An MCP (Model Context Protocol) server that gives AI agents direct access to PageLens AI — automated website reviews covering UX, SEO, Performance, Accessibility, Security, Conversion, and QA journey audits.
Plug it into Cursor, Claude, or any MCP-compatible client and your agent can read scan results, drill into findings, surface quick wins, and close the fix loop — all without leaving the IDE.
MCP Endpoint
https://www.pagelensai.com/api/mcp
This is a remote MCP server — no local install required.
Quick Start
Cursor
Add to your ~/.cursor/mcp.json (or workspace .cursor/mcp.json):
```json
{
  "mcpServers": {
    "pagelens": {
      "url": "https://www.pagelensai.com/api/mcp"
    }
  }
}
```
Claude Desktop
Add to claude_desktop_config.json:
```json
{
  "mcpServers": {
    "pagelens": {
      "url": "https://www.pagelensai.com/api/mcp"
    }
  }
}
```
Authentication
This server uses OAuth 2.0. Your MCP client will open a browser window to authorize with your PageLens account on first connect. No API keys to manage.
Scopes granted on authorization:
| Scope | What it unlocks |
|---|---|
| read:scans | List and read your scan results |
| read:findings | Read findings and quick wins |
| write:feedback | Submit finding feedback and owner decisions |
Available Tools
whoami
Confirm which PageLens account this MCP session is operating on, plus granted OAuth scopes.
```json
{}
```
list_domains
List the domains you've verified ownership of, along with badge tier and the scan currently anchored to the public badge.
```json
{
  "include_unverified": false
}
```
list_scans
List your most recent PageLens scans, filtered by status, domain, or date. Returns a slim summary per scan (id, URL, score, grade, severity counts). QA Audit scans also include a compact qaAudit summary: confidence, journey-step count, pages reviewed versus page budget, blocked and needs-review step counts, auth-profile status, and, when fewer pages than the budget were captured, the reason the safe same-origin link graph was exhausted.
```json
{
  "limit": 20,
  "status": "COMPLETE",
  "domain": "example.com",
  "since": "2026-01-01T00:00:00Z"
}
```
Status values: PENDING · RUNNING · COMPLETE · FAILED · CANCELLED
get_scan
Read the full summary for a single scan: score, grade, severity counts, executive summary, top-5 highest-priority findings, and per-persona reviews. For QA Audit scans, get_scan also returns a qaAudit block containing the product-flow synthesis, journey replay, safe blocked actions, confidence, authenticated-route context, and page-budget coverage.
```json
{
  "scan_id": "clxxxxxxxxxxxxxxx"
}
```
Available Resources
pagelensai://scan/{id}/markdown
Fetch the same agent-flavoured Markdown report available from the PageLens UI. For QA Audit scans, this includes front matter such as qa_journey_event_count, qa_confidence, and qa_needs_review_step_count, followed by the application interpretation, journey replay, blocked/risky paths, safe actions, and next QA tests.
Use this when an agent needs rich context to reason about a QA Audit:
pagelensai://scan/clxxxxxxxxxxxxxxx/markdown
pagelensai://scan/{id}/summary.json
Fetch compact JSON for a scan. For QA Audit scans, the qaAudit object includes journey metadata, synthesis, page-budget coverage, and any queue_exhausted reason explaining why fewer than the purchased page count were discoverable.
pagelensai://scan/clxxxxxxxxxxxxxxx/summary.json
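As an illustration, an agent can parse this summary payload and report page-budget coverage in one line. Of the `qaAudit` fields below, only `queue_exhausted` is named in the docs above; the other keys (`pagesReviewed`, `pageBudget`) are hypothetical stand-ins for the schema's coverage fields.

```python
import json

# Hypothetical summary.json payload; field names inside "qaAudit" other
# than "queue_exhausted" are illustrative guesses, not a documented schema.
raw = json.dumps({
    "score": 82,
    "grade": "B",
    "qaAudit": {
        "pagesReviewed": 7,
        "pageBudget": 10,
        "queue_exhausted": "safe same-origin link graph exhausted after 7 pages",
    },
})

def coverage_note(payload: str) -> str:
    """Summarise QA Audit page-budget coverage from a summary.json payload."""
    data = json.loads(payload)
    qa = data.get("qaAudit")
    if qa is None:
        return "not a QA Audit scan"
    reviewed, budget = qa["pagesReviewed"], qa["pageBudget"]
    note = f"{reviewed}/{budget} pages reviewed"
    # Surface the queue_exhausted reason only when coverage fell short.
    if reviewed < budget and qa.get("queue_exhausted"):
        note += f" ({qa['queue_exhausted']})"
    return note

print(coverage_note(raw))
```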
list_findings
Page through all findings for a scan. Filter by severity, category, persona, page URL, or rule ID. Use format: "full" to include descriptions, suggestions, and evidence.
```json
{
  "scan_id": "clxxxxxxxxxxxxxxx",
  "severity": "HIGH",
  "category": "SECURITY",
  "format": "full",
  "limit": 50
}
```
Severity levels: CRITICAL · HIGH · MEDIUM · LOW · INFO
Categories: UX · SEO · PERFORMANCE · ACCESSIBILITY · SECURITY · CONTENT · HEADERS · DESIGN · ERROR
Personas: MARKETER · CRO · UX · ACCESSIBILITY · BRAND · EXECUTIVE · PERFORMANCE · SEO
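The severity enum is ordered, so paged list_findings results can be triaged client-side. A minimal sketch, assuming finding dicts shaped like the tool's output (only the enum values above come from the docs; the sample rule IDs are made up):

```python
# Severity ranking for client-side triage of list_findings results.
# Lower index = more severe, matching the documented enum order.
SEVERITY_ORDER = ["CRITICAL", "HIGH", "MEDIUM", "LOW", "INFO"]

def triage(findings, min_severity="HIGH"):
    """Keep findings at or above min_severity, worst first."""
    cutoff = SEVERITY_ORDER.index(min_severity)
    kept = [f for f in findings if SEVERITY_ORDER.index(f["severity"]) <= cutoff]
    return sorted(kept, key=lambda f: SEVERITY_ORDER.index(f["severity"]))

# Illustrative findings; rule IDs are hypothetical.
findings = [
    {"ruleId": "ssl-expiry", "severity": "HIGH"},
    {"ruleId": "alt-text", "severity": "MEDIUM"},
    {"ruleId": "exposed-env", "severity": "CRITICAL"},
]
print([f["ruleId"] for f in triage(findings)])  # ['exposed-env', 'ssl-expiry']
```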
get_quick_wins
Return the top N quick-win findings — high impact, low-to-moderate effort — ranked by the same Impact × Effort scorer used in the PageLens dashboard. Optionally override the scan's preset to re-rank under a different lens.
```json
{
  "scan_id": "clxxxxxxxxxxxxxxx",
  "limit": 5,
  "preset_override": "CONVERSION"
}
```
Presets: PRE_SALES · PRE_LAUNCH · CONVERSION · INVESTOR · BRAND_POLISH
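PageLens's actual Impact × Effort scorer is not public; this toy ranker only illustrates the "high impact, low-to-moderate effort" idea behind quick wins. All names and numbers below are hypothetical.

```python
# Toy Impact x Effort ranker -- NOT the PageLens scorer, just the concept.
def quick_win_score(impact: int, effort: int) -> float:
    """Higher impact and lower effort yield a higher score (1-5 scales assumed)."""
    return impact / effort

# Hypothetical candidate findings with assumed impact/effort ratings.
candidates = [
    {"id": "meta-desc", "impact": 4, "effort": 1},
    {"id": "image-cdn", "impact": 5, "effort": 4},
    {"id": "h1-dup", "impact": 2, "effort": 1},
]
ranked = sorted(
    candidates,
    key=lambda c: quick_win_score(c["impact"], c["effort"]),
    reverse=True,
)
print([c["id"] for c in ranked])  # ['meta-desc', 'h1-dup', 'image-cdn']
```

A preset_override would, in effect, reweight impact per category before ranking; the ordering logic stays the same.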
report_finding_feedback
Flag a finding as a false positive, wrong severity, wrong category, or not actionable. Requires a paragraph of reasoning and a concrete evidence snippet.
```json
{
  "finding_id": "clxxxxxxxxxxxxxxx",
  "kind": "FALSE_POSITIVE",
  "reason": "The selector .pointer-events-auto matches 100+ utility classes, not a single offending element.",
  "evidence": "<div class=\"pointer-events-auto ...\"> — Tailwind utility, not an event-handler.",
  "proposed_severity": "LOW"
}
```
Feedback kinds: FALSE_POSITIVE · INCORRECT_SEVERITY · INCORRECT_CATEGORY · NOT_ACTIONABLE · OTHER
acknowledge_finding_decision
Attach owner-controlled context to a finding when the issue is real, but reflects an intentional architecture, security, or product tradeoff.
This does not hide the finding, edit the report, or change the PageLens score. It records the rationale so the report can show that the owner has acknowledged the tradeoff, and future scans can recognise the same finding as previously acknowledged.
```json
{
  "finding_id": "clxxxxxxxxxxxxxxx",
  "decision": "INTENTIONAL_TRADEOFF",
  "reason": "Next.js Cache Components and PPR currently prevent us from using per-request CSP nonces safely.",
  "evidence": "proxy.ts documents the CSP tradeoff: script-src uses 'unsafe-inline' because cached HTML shells cannot vary nonces per request.",
  "expires_at": "2026-10-01T00:00:00Z"
}
```
Decision kinds: ACKNOWLEDGED · ACCEPTED_RISK · INTENTIONAL_TRADEOFF · WONT_FIX_NOW
clear_finding_decision
Clear a previously acknowledged decision so it stops appearing on current and future reports. The audit history is preserved.
```json
{
  "decision_id": "clxxxxxxxxxxxxxxx",
  "reason": "We have migrated to a nonce-compatible rendering path."
}
```
You can also clear by finding_id when you do not have the decision_id.
What PageLens Checks
Each scan runs a deterministic rule engine + AI reviewer pipeline across every page:
| Area | Examples |
|---|---|
| SEO | Title/meta length, canonical URL, Open Graph, heading hierarchy |
| Performance | Core Web Vitals (LCP, CLS, INP), page weight, render-blocking resources, DOM size, third-party load |
| Security | HTTP→HTTPS redirect, exposed .env/.git files, SSL expiry, XSS surfaces, source maps, exposed secrets in page source |
| Accessibility | Focus-visible CSS, reduced-motion support, ARIA patterns |
| UX | Hero hierarchy, mobile menu patterns, CTA structure |
| Content | Placeholder text, stale copyright year, mixed-content references |
| QA Audit | Journey replay, safe form/input exploration, blocked risky actions, app-flow synthesis, page-budget coverage |
Findings include severity, effort estimate, copy-pasteable evidence, and a one-line fix suggestion.
QA Audit scans are different from standard technical scans: the primary artifact is the agentic journey report. PageLens will attempt to review the purchased page budget (for example, up to 10 pages on QA Audit) and should only return fewer pages when the safe same-origin link graph is exhausted. It can use validated auth profiles for scoped post-login routes, while still avoiding SSO, CAPTCHA, MFA, signup, payment, destructive changes, and other committing actions.
Example Agent Workflows
Pre-launch audit in Cursor:
"Run PageLens on my staging site, list all CRITICAL and HIGH findings, and create GitHub issues for the top 5."
CRO review:
"Fetch the latest scan for example.com, get the CONVERSION quick wins, and summarise what to fix before the campaign launch."
Security sweep:
"List all SECURITY findings with severity HIGH or above from my last scan and show me the evidence for each."
Accepted architecture decision:
"For the CSP unsafe-inline finding, acknowledge this as an intentional Next.js/PPR tradeoff with the rationale from our security docs. Do not mark it as a false positive."
Post-fix validation:
"After I've fixed the findings, start a new scan and compare the score to the previous one."
QA Audit review:
"Fetch the latest QA Audit for example.com, read the markdown resource, and tell me which user journeys were verified, which actions were blocked safely, and whether PageLens reviewed the full page budget."
Pricing
PageLens is pay-per-scan — no subscription required for one-off audits.
| Tier | Price | Pages per scan |
|---|---|---|
| Starter | $1 | 3 pages |
| QA Audit | $10 | Up to 10 pages · agentic journey review |
| Professional | $15 | 25 pages |
| Monitor | $5 / month | Weekly automated scans · 5 pages |
Every technical scan tier produces the same full report — Starter through Professional differ only by page-count cap, not report depth. QA Audit produces a journey-first report focused on application flow, safe exploration, design/UX critique, and next QA tests. The Monitor subscription runs weekly scans automatically and surfaces drift between runs.
Links
- Website: https://www.pagelensai.com
- MCP endpoint: https://www.pagelensai.com/api/mcp
- Dashboard: https://www.pagelensai.com/dashboard
- GitHub: https://github.com/PageLens-AI/pagelensai-mcp-server
- MCP docs: https://www.pagelensai.com/mcp
License
MIT © PageLens AI