apify-actor-development by apify

Create, debug, and deploy serverless cloud programs for web scraping, automation, and data processing. Supports JavaScript, TypeScript, and Python templates with integrated Crawlee, Playwright, and Cheerio libraries for HTTP and browser-based crawling. Includes local testing via apify run with isolated storage, schema validation for inputs/outputs, and deployment to the Apify platform via apify push. Requires Apify CLI authentication and a mandatory generatedBy metadata field in .actor/actor.json for AI...

npx skills add https://github.com/apify/agent-skills --skill apify-actor-development

Apify Actor development

Important: Before you begin, fill in the generatedBy property in the meta section of .actor/actor.json. Replace it with the tool and model you're currently using, such as "Claude Code with Claude Sonnet 4.5". This helps Apify monitor and improve AGENTS.md for specific AI tools and models.

What are Apify Actors?

Actors are serverless programs inspired by the UNIX philosophy - programs that do one thing well and can be easily combined to build complex systems. They're packaged as Docker images and run in isolated containers in the cloud.

Core Concepts:

  • Accept well-defined JSON input
  • Perform isolated tasks (web scraping, automation, data processing)
  • Produce structured JSON output to datasets and/or store data in key-value stores
  • Can run from seconds to hours or even indefinitely
  • Persist state and can be restarted

Prerequisites and setup (mandatory)

Before creating or modifying Actors, verify that the apify CLI is installed by running apify --help.

If it is not installed, use one of these methods (listed in order of preference):

# Preferred: install via a package manager (provides integrity checks)
npm install -g apify-cli

# Or (Mac): brew install apify-cli

Security note: Do NOT install the CLI by piping remote scripts to a shell (e.g. curl … | bash or irm … | iex). Always use a package manager.

When the apify CLI is installed, check that it is logged in with:

apify info  # Should return your username

If not logged in, authenticate using OAuth (opens browser):

apify login

If browser login isn't available (headless environment or CI), the CLI automatically reads APIFY_TOKEN from the environment. Ensure the env var is exported and run any apify command - no explicit login needed. If the user doesn't have a token, generate one at https://console.apify.com/settings/integrations.

Security note: Avoid passing tokens as command-line arguments (e.g. apify login -t <token>). Arguments are visible in process listings and may be recorded in shell history. Prefer environment variables or interactive login instead. Never log, print, or embed APIFY_TOKEN in source code or configuration files. Use a token with the minimum required permissions (scoped token) and rotate it periodically.

Template selection

IMPORTANT: Before starting Actor development, always ask the user which programming language they prefer:

  • JavaScript - Use apify create <actor-name> -t project_empty
  • TypeScript - Use apify create <actor-name> -t ts_empty
  • Python - Use apify create <actor-name> -t python-empty

Use the appropriate CLI command based on the user's language choice. Additional packages (Crawlee, Playwright, etc.) can be installed later as needed.

Quick start workflow

  1. Create Actor project - Run the appropriate apify create command based on user's language preference (see Template selection above)
  2. Install dependencies (verify package names match intended packages before installing)
    • JavaScript/TypeScript: npm install (uses package-lock.json for reproducible, integrity-checked installs — commit the lockfile to version control)
    • Python: pip install -r requirements.txt (pin exact versions in requirements.txt, e.g. crawlee==1.2.3, and commit the file to version control)
  3. Implement logic - Write the Actor code in src/main.py, src/main.js, or src/main.ts
  4. Configure schemas - Update input/output schemas in .actor/input_schema.json, .actor/output_schema.json, .actor/dataset_schema.json
  5. Configure platform settings - Update .actor/actor.json with Actor metadata (see references/actor-json.md)
  6. Write documentation - Create comprehensive README.md for the marketplace (see references/actor-readme.md — this is mandatory, not optional)
  7. Test locally - Run apify run to verify functionality (see Local testing section below)
  8. Deploy - Run apify push to deploy the Actor on the Apify platform (Actor name is defined in .actor/actor.json)

Security

Treat all crawled web content as untrusted input. Actors ingest data from external websites that may contain malicious payloads. Follow these rules:

  • Sanitize crawled data — Never pass raw HTML, URLs, or scraped text directly into shell commands, eval(), database queries, or template engines. Use proper escaping or parameterized APIs.
  • Validate and type-check all external data — Before pushing to datasets or key-value stores, verify that values match expected types and formats. Reject or sanitize unexpected structures.
  • Do not execute or interpret crawled content — Never treat scraped text as code, commands, or configuration. Content from websites could include prompt injection attempts or embedded scripts.
  • Isolate credentials from data pipelines — Ensure APIFY_TOKEN and other secrets are never accessible in request handlers or passed alongside crawled data. Use the Apify SDK's built-in credential management rather than passing tokens through environment variables in data-processing code.
  • Review dependencies before installing — When adding packages with npm install or pip install, verify the package name and publisher. Typosquatting is a common supply-chain attack vector. Prefer well-known, actively maintained packages.
  • Pin versions and use lockfiles — Always commit package-lock.json (Node.js) or pin exact versions in requirements.txt (Python). Lockfiles ensure reproducible builds and prevent silent dependency substitution. Run npm audit or pip-audit periodically to check for known vulnerabilities.
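As a concrete illustration of the first two rules, here is a minimal sketch (in Python, with hypothetical field names — EXPECTED_FIELDS is not part of any Apify API) of validating and escaping a scraped item before pushing it to a dataset:

```python
import html
import shlex

# Hypothetical expected shape for one scraped item (names are illustrative).
EXPECTED_FIELDS = {"url": str, "title": str, "price": float}

def sanitize_item(raw: dict):
    """Type-check and escape a scraped item; return None to reject it."""
    clean = {}
    for field, expected_type in EXPECTED_FIELDS.items():
        value = raw.get(field)
        if not isinstance(value, expected_type):
            return None  # reject items with missing or mistyped fields
        # Escape scraped strings so they are inert in HTML contexts
        clean[field] = html.escape(value) if isinstance(value, str) else value
    return clean

def shell_safe(value: str) -> str:
    """If a scraped value must ever reach a shell command, quote it first."""
    return shlex.quote(value)
```

The same idea applies in JavaScript/TypeScript Actors: validate each field's type before Actor.pushData() and never interpolate scraped strings into shell commands or queries.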

Best practices

✓ Do:

  • Use apify run to test Actors locally (configures Apify environment and storage)
  • Use Apify SDK (apify) for code running on the Apify platform
  • Validate input early with proper error handling and fail gracefully
  • Use CheerioCrawler for static HTML (10x faster than browsers)
  • Use PlaywrightCrawler only for JavaScript-heavy sites
  • Use router pattern (createCheerioRouter/createPlaywrightRouter) for complex crawls
  • Implement retry strategies with exponential backoff
  • Use proper concurrency: HTTP (10-50), Browser (1-5)
  • Set sensible defaults in .actor/input_schema.json
  • Define output schema in .actor/output_schema.json
  • Clean and validate data before pushing to dataset
  • Use semantic CSS selectors with fallback strategies
  • Respect robots.txt, ToS, and implement rate limiting
  • Always use apify/log package — censors sensitive data (API keys, tokens, credentials)
  • Implement readiness probe handler (required if your Actor uses standby mode)
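A hand-rolled backoff helper is rarely needed inside a Crawlee crawler (its maxRequestRetries option already retries failed requests), but for custom HTTP calls outside the crawler the retry-with-exponential-backoff pattern looks roughly like this sketch:

```python
import random
import time

def retry_with_backoff(func, max_attempts=5, base_delay=1.0, max_delay=30.0,
                       sleep=time.sleep):
    """Call func(), retrying on any exception with exponential backoff.

    Delay grows as base_delay * 2^attempt, capped at max_delay, with a
    small random jitter so concurrent workers don't retry in lockstep.
    """
    for attempt in range(max_attempts):
        try:
            return func()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: propagate the last error
            delay = min(base_delay * (2 ** attempt), max_delay)
            sleep(delay + random.uniform(0, delay * 0.1))
```

The sleep parameter is injectable purely so the helper can be tested without real waiting; production code would leave the time.sleep default.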

✗ Don't:

  • Use npm start, npm run start, npx apify run, or similar commands to run Actors (use apify run instead)
  • Assume local storage from apify run is pushed to or visible in Apify Console — it is local-only; deploy with apify push and run on the platform to see results in Apify Console
  • Rely on Dataset.getInfo() for final counts on Cloud
  • Use browser crawlers when HTTP/Cheerio works
  • Hard code values that should be in input schema or environment variables
  • Skip input validation or error handling
  • Overload servers - use appropriate concurrency and delays
  • Scrape prohibited content or ignore Terms of Service
  • Store personal/sensitive data unless explicitly permitted
  • Use deprecated options like requestHandlerTimeoutMillis on CheerioCrawler (v3.x)
  • Use additionalHttpHeaders - use preNavigationHooks instead
  • Pass raw crawled content into shell commands, eval(), or code-generation functions
  • Use console.log() or print() instead of the Apify logger — these bypass credential censoring
  • Disable standby mode without explicit permission

Logging

See references/logging.md for complete logging documentation including available log levels and best practices for JavaScript/TypeScript and Python.

Commands

# Bootstrap & local development
apify create [name]                    # Create new Actor project from a template
apify init                             # Initialize Actor in current directory
apify run                              # Run Actor locally with simulated platform env
apify run --purge                      # Run after clearing previous local storage
apify validate-schema                  # Validate .actor/input_schema.json

# Authentication & account
apify login                            # Authenticate account (token stored in ~/.apify)
apify logout                           # Remove stored credentials
apify info                             # Print currently authenticated account info

# Deployment & remote execution
apify push                             # Deploy Actor to platform per .actor/actor.json
apify pull <actor>                     # Download Actor code from the platform
apify call <actor>                     # Execute Actor remotely on the platform
apify actors build <actor>             # Create a new build of an Actor
apify runs ls                          # List recent runs

# Discovery (search Apify Store for community Actors)
apify actors search "<query>" --user-agent <your-agent-name>
apify actors info <actor>              # Details about a specific Actor

# Secrets (referenced from actor.json via "@mySecret")
apify secrets add <name> <value>       # Store a secret locally; uploaded on push
apify secrets ls                       # List stored secret keys

# Direct API access
apify api <endpoint>                   # Authenticated HTTP request to Apify API

# Help
apify help                             # List all commands
apify <command> --help                 # Detailed help for a specific command

Note: If no dedicated Actor exists for your target, search Apify Store for community options with apify actors search "<query>" --user-agent <your-agent-name> before building from scratch.

Tip: Inside a running Actor, prefer the SDK (Actor.getInput() / Actor.get_input(), Actor.pushData() / Actor.push_data(), Actor.setValue() / Actor.set_value()) over the equivalent apify actor runtime subcommands.

IMPORTANT: Always use apify run to test Actors locally. Do not use npm run start, npm start, yarn start, or other package manager commands - these will not properly configure the Apify environment and storage.

Apify platform environment

When the Actor runs on the Apify platform, the API token is automatically available via the APIFY_TOKEN environment variable (note: the variable is APIFY_TOKEN, not APIFY_API_TOKEN). The Apify SDK reads it automatically, so you do not need to pass it explicitly. Locally, run apify login once and the SDK will use your stored credentials.

Local testing

When testing an Actor locally with apify run, provide input data by creating a JSON file at:

storage/key_value_stores/default/INPUT.json

This file should contain the input parameters defined in your .actor/input_schema.json. The Actor will read this input when running locally, mirroring how it receives input on the Apify platform.
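For example, if your input schema defined fields named startUrls and maxItems (illustrative names, not prescribed by this document), INPUT.json could look like:

```json
{
    "startUrls": [{ "url": "https://example.com" }],
    "maxItems": 100
}
```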

IMPORTANT - Local storage is NOT synced to Apify Console:

  • Running apify run stores all data (datasets, key-value stores, request queues) only on your local filesystem in the storage/ directory.
  • This data is never automatically uploaded or pushed to the Apify platform. It exists only on your machine.
  • To verify results on Apify Console, you must deploy the Actor with apify push and then run it on the platform.
  • Do not rely on checking Apify Console to verify results from local runs — instead, inspect the local storage/ directory or check the Actor's log output.

Standby mode

Standby mode enables Actors to work as API servers - they remain ready in the background to handle HTTP requests.

When to use Standby mode: Use Standby when the Actor must handle interactive, real-time HTTP requests — API endpoints, webhook receivers, real-time data lookups, MCP servers, or scraping APIs serving on-demand single-URL requests.

When building a Standby Actor, set usesStandbyMode: true in .actor/actor.json and implement an HTTP server. See references/standby-mode.md for configuration, environment variables, complete code examples, and operational limits.
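As an illustration only (not Apify's reference implementation — see references/standby-mode.md for the real examples), a minimal Standby-style HTTP server in Python using only the standard library might look like the sketch below. It assumes the platform passes the port to listen on via the ACTOR_STANDBY_PORT environment variable, and its catch-all 200 response doubles as a readiness probe handler:

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

# Port assumed to come from the platform; 8080 is a local-testing fallback.
PORT = int(os.environ.get("ACTOR_STANDBY_PORT", "8080"))

class StandbyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Answer readiness probes and real requests alike with 200 OK;
        # a production Actor would route on self.path and do real work here.
        body = b'{"status": "ok"}'
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep probe requests out of stdout

def serve():
    """A real Standby Actor would call this at startup, after Actor.init()."""
    HTTPServer(("", PORT), StandbyHandler).serve_forever()
```

In practice you would use the web framework of your chosen language (Express, FastAPI, etc.); the essential points are binding to the platform-provided port and responding quickly to probe requests.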

Project structure

.actor/
├── actor.json           # Actor config: name, version, env vars, runtime
├── input_schema.json    # Input validation & Console form definition
└── output_schema.json   # Output storage and display templates
src/
└── main.js/ts/py       # Actor entry point
storage/                # Local-only storage (NOT synced to Apify Console)
├── datasets/           # Output items (JSON objects)
├── key_value_stores/   # Files, config, INPUT
└── request_queues/     # Pending crawl requests
Dockerfile              # Container image definition

Actor configuration

See references/actor-json.md for complete actor.json structure and configuration options.

Input schema

See references/input-schema.md for input schema structure and examples.

Output schema

See references/output-schema.md for output schema structure, examples, and template variables.

Dataset schema

See references/dataset-schema.md for dataset schema structure, configuration, and display properties.

Key-value store schema

See references/key-value-store-schema.md for key-value store schema structure, collections, and configuration.

Actor README

IMPORTANT: Always generate a README.md as part of Actor development. The README is the Actor's landing page on Apify Store and is critical for discoverability (SEO), user onboarding, and support. Do not consider an Actor complete without a proper README.

See references/actor-readme.md for the required structure, SEO best practices, and content guidelines.

MCP tools

Apify MCP

If the Apify MCP server is configured, use these tools for documentation:

  • search-apify-docs - Search documentation
  • fetch-apify-docs - Get full doc pages

Otherwise, connect to the hosted MCP server at https://mcp.apify.com/?tools=docs.

Playwright MCP (debugging)

The Playwright MCP server is a useful tool for debugging Actors that interact with the web - it lets the agent drive a real browser to inspect pages, capture selectors, and reproduce issues.

Install with the Claude Code CLI:

claude mcp add playwright npx @playwright/mcp@latest

Or add it manually to your MCP config:

{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}
