spark-authoring-cli by microsoft

npx skills add https://github.com/microsoft/skills-for-fabric --skill spark-authoring-cli

Update Check — ONCE PER SESSION (mandatory)

The first time this skill is used in a session, run the check-updates skill before proceeding.

  • GitHub Copilot CLI / VS Code: invoke the check-updates skill.
  • Claude Code / Cowork / Cursor / Windsurf / Codex: compare local vs remote package.json version.
  • Skip if the check was already performed earlier in this session.

CRITICAL NOTES

  1. To find workspace details (including its ID) from a workspace name: list all workspaces, then filter with JMESPath (see the sketch below).
  2. To find item details (including its ID) from a workspace ID, item type, and item name: list all items of that type in the workspace, then filter with JMESPath.
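
A minimal sketch of both lookups (the names "DataEng-Dev" and "MyNotebook" are hypothetical placeholders; see COMMON-CLI.md § Finding Workspaces and Items in Fabric for the canonical recipes):

# Workspace ID from workspace name — az rest's --query applies a JMESPath filter to the list response
workspace_id=$(az rest --method get --resource "https://api.fabric.microsoft.com" \
  --url "https://api.fabric.microsoft.com/v1/workspaces" \
  --query "value[?displayName=='DataEng-Dev'].id | [0]" --output tsv)

# Item ID from workspace ID + item type + item name
notebook_id=$(az rest --method get --resource "https://api.fabric.microsoft.com" \
  --url "https://api.fabric.microsoft.com/v1/workspaces/$workspace_id/items?type=Notebook" \
  --query "value[?displayName=='MyNotebook'].id | [0]" --output tsv)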

Spark Authoring — CLI Skill

This skill covers two complementary areas: (1) managing Fabric Spark artifacts via REST APIs (workspaces, lakehouses, notebooks, jobs, pipelines) and (2) writing code inside Fabric Notebook cells (PySpark, Scala, SparkR, and SQL with correct lakehouse access, notebookutils, and Spark configuration). For notebook code authoring fundamentals and shared modules, you MUST read SPARK-NOTEBOOK-AUTHORING-CORE.md.

Table of Contents

| Task | Reference | Notes |
| --- | --- | --- |
| RULES — Read these first, follow them always | SKILL.md § RULES | MUST read — 4 rules for this skill |
| Finding Workspaces and Items in Fabric | COMMON-CLI.md § Finding Workspaces and Items in Fabric | Mandatory — READ link first [needed for finding workspace id by its name, or item id by its name, item type, and workspace id] |
| Fabric Topology & Key Concepts | COMMON-CORE.md § Fabric Topology & Key Concepts | |
| Environment URLs | COMMON-CORE.md § Environment URLs | |
| Authentication & Token Acquisition | COMMON-CORE.md § Authentication & Token Acquisition | Wrong audience = 401; read before any auth issue |
| Core Control-Plane REST APIs | COMMON-CORE.md § Core Control-Plane REST APIs | |
| Pagination | COMMON-CORE.md § Pagination | |
| Long-Running Operations (LRO) | COMMON-CORE.md § Long-Running Operations (LRO) | |
| Rate Limiting & Throttling | COMMON-CORE.md § Rate Limiting & Throttling | |
| OneLake Data Access | COMMON-CORE.md § OneLake Data Access | Requires storage.azure.com token, not Fabric token |
| Definition Envelope | ITEM-DEFINITIONS-CORE.md § Definition Envelope | Definition payload structure |
| Per-Item-Type Definitions | ITEM-DEFINITIONS-CORE.md § Per-Item-Type Definitions | Support matrix, decoded content, part paths — REST specs, CLI recipes |
| Job Execution | COMMON-CORE.md § Job Execution | |
| Capacity Management | COMMON-CORE.md § Capacity Management | |
| Gotchas & Troubleshooting | COMMON-CORE.md § Gotchas & Troubleshooting | |
| Best Practices | COMMON-CORE.md § Best Practices | |
| Tool Selection Rationale | COMMON-CLI.md § Tool Selection Rationale | |
| Authentication Recipes | COMMON-CLI.md § Authentication Recipes | az login flows and token acquisition |
| Fabric Control-Plane API via az rest | COMMON-CLI.md § Fabric Control-Plane API via az rest | Always pass --resource https://api.fabric.microsoft.com or az rest fails |
| Pagination Pattern | COMMON-CLI.md § Pagination Pattern | |
| Long-Running Operations (LRO) Pattern | COMMON-CLI.md § Long-Running Operations (LRO) Pattern | |
| OneLake Data Access via curl | COMMON-CLI.md § OneLake Data Access via curl | Use curl, not az rest (different token audience) |
| SQL / TDS Data-Plane Access | COMMON-CLI.md § SQL / TDS Data-Plane Access | |
| Job Execution (CLI) | COMMON-CLI.md § Job Execution | |
| Job Scheduling | COMMON-CLI.md § Job Scheduling | URL is /jobs/{jobType}/schedules; endDateTime required |
| OneLake Shortcuts | COMMON-CLI.md § OneLake Shortcuts | |
| Capacity Management (CLI) | COMMON-CLI.md § Capacity Management | |
| Composite Recipes | COMMON-CLI.md § Composite Recipes | |
| Gotchas & Troubleshooting (CLI-Specific) | COMMON-CLI.md § Gotchas & Troubleshooting (CLI-Specific) | az rest audience, shell escaping, token expiry |
| Quick Reference: az rest Template | COMMON-CLI.md § Quick Reference: az rest Template | |
| Quick Reference: Token Audience ↔ CLI Tool Matrix | COMMON-CLI.md § Quick Reference: Token Audience ↔ CLI Tool Matrix | Which --resource + tool for each service |
| Relationship to SPARK-CONSUMPTION-CORE.md | SPARK-AUTHORING-CORE.md § Relationship to SPARK-CONSUMPTION-CORE.md | |
| Data Engineering Authoring Capability Matrix | SPARK-AUTHORING-CORE.md § Data Engineering Authoring Capability Matrix | |
| Lakehouse Management | SPARK-AUTHORING-CORE.md § Lakehouse Management | |
| Notebook Management | SPARK-AUTHORING-CORE.md § Notebook Management | |
| Notebook Execution & Job Management | SPARK-AUTHORING-CORE.md § Notebook Execution & Job Management | |
| CI/CD & Automation Patterns | SPARK-AUTHORING-CORE.md § CI/CD & Automation Patterns | |
| Infrastructure-as-Code | SPARK-AUTHORING-CORE.md § Infrastructure-as-Code | |
| Performance Optimization & Resource Management | SPARK-AUTHORING-CORE.md § Performance Optimization & Resource Management | |
| Authoring Gotchas and Troubleshooting | SPARK-AUTHORING-CORE.md § Authoring Gotchas and Troubleshooting | |
| Quick Reference: Authoring Decision Guide | SPARK-AUTHORING-CORE.md § Quick Reference: Authoring Decision Guide | |
| Recommended Patterns (Data Engineering) | data-engineering-patterns.md § Recommended patterns | |
| Data Ingestion Principles | data-engineering-patterns.md § Data Ingestion Principles | |
| Transformation Patterns | data-engineering-patterns.md § Transformation Patterns | |
| Delta Lake Best Practices | data-engineering-patterns.md § Delta Lake Best Practices | |
| Quality Assurance Strategies | data-engineering-patterns.md § Quality Assurance Strategies | |
| Recommended Patterns (Development Workflow) | development-workflow.md § Recommended patterns | |
| Notebook Lifecycle | development-workflow.md § Notebook Lifecycle | |
| Parameterization Patterns | development-workflow.md § Parameterization Patterns | |
| Variable Library (notebook + pipeline usage) | development-workflow.md § Method 4: Variable Library | getLibrary() + dot notation in notebooks; libraryVariables + @pipeline().libraryVariables in pipelines |
| Variable Library Definition | ITEM-DEFINITIONS-CORE.md § VariableLibrary | Definition parts, decoded content, types, pipeline mappings, gotchas |
| Local Testing Strategy | development-workflow.md § Local Testing Strategy | |
| Debugging Patterns | development-workflow.md § Debugging Patterns | |
| Recommended Patterns (Infrastructure) | infrastructure-orchestration.md § Recommended patterns | |
| Workspace Provisioning Principles | infrastructure-orchestration.md § Workspace Provisioning Principles | |
| Lakehouse Configuration Guidance | infrastructure-orchestration.md § Lakehouse Configuration Guidance | |
| Pipeline Design Patterns | infrastructure-orchestration.md § Pipeline Design Patterns | |
| CI/CD Integration Strategy | infrastructure-orchestration.md § CI/CD Integration Strategy | |
| Notebook API — Which Endpoint to Use | notebook-api-operations.md § Quick Decision | Start here for remote notebook edits — getDefinition vs updateDefinition |
| Notebook Modification Workflow | notebook-api-operations.md § Workflow | Five-step flow: retrieve, decode, modify, encode, upload |
| Notebook API Error Reference | notebook-api-operations.md § Error Reference | 411, 400 (updateMetadata), 401, 403 explained |
| Notebook API Gotchas | notebook-api-operations.md § Gotchas | /result suffix, empty body, \n per-line rule, format=ipynb |
| Default Lakehouse Binding | notebook-api-operations.md § Default Lakehouse Binding | .ipynb metadata vs .py # METADATA block; discover IDs dynamically |
| Public URL Data Ingestion | notebook-api-operations.md § Public URL Data Ingestion | Use real source URL, stage into Files/, then read with Spark |
| getDefinition (read notebook content) | notebook-api-operations.md § Step 1 — Retrieve Notebook Content | LRO flow, ?format=ipynb, empty body (--body '{}') requirement |
| Decode Base64 Notebook Payload | notebook-api-operations.md § Step 2 — Decode the Notebook Content | Extract payload, base64 decode, ipynb JSON structure |
| Modify Notebook Cells | notebook-api-operations.md § Step 3 — Modify the Notebook Content | Find cell, insert/replace lines, \n per-line rule |
| updateDefinition (write notebook content) | notebook-api-operations.md § Step 4 — Re-encode and Upload | Re-encode, upload, LRO poll, updateMetadata flag pitfall |
| Verify Notebook Update (Optional) | notebook-api-operations.md § Step 5 — Verify the Update | Skip unless you suspect a silent failure — Succeeded from updateDefinition is sufficient (see Rule 2) |
| Notebook API End-to-End Script | notebook-api-operations.md § Complete End-to-End Script | Full bash: get → decode → modify → encode → update → verify |
| Quick Start Examples | SKILL.md § Quick Start Examples | Minimal examples for common operations |
| — Notebook Code Authoring (shared modules) — | | |
| Notebook Authoring Core | SPARK-NOTEBOOK-AUTHORING-CORE.md | READ FIRST for notebook code tasks — fundamentals, code gen approach, module index |

Must/Prefer/Avoid

MUST DO

  • Check for recent jobs BEFORE creating new notebook runs — Query job instances from the last 5 minutes; if a recent job exists, monitor it instead of creating a duplicate (see the sketch after this list)
  • Capture job instance ID immediately after POST — Store job ID before any other operations to enable proper monitoring
  • Verify workspace capacity assignment before operations — Workspace must have capacity assigned and active
  • When user provides a public data URL, follow the Public URL Data Ingestion policy — keep detailed behavior in the linked resource section to avoid drift/duplication
  • Format notebook cells correctly — Each line in cell source array MUST end with \n to prevent code merging
  • Use correct Lakehouse Livy session body format — Send a FLAT JSON with name, driverMemory, driverCores, executorMemory, executorCores. Do NOT wrap in {"payload": ...} or send only {"kind": "pyspark"} — that causes HTTP 500. Use valid memory values (28g, 56g, 112g, 224g). See Create Lakehouse Livy Session example below and SPARK-CONSUMPTION-CORE.md.
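
A hedged sketch of the recent-job check (assumes jq is available and $workspace_id/$notebook_id are already resolved; the startTimeUtc field name is per the List Item Job Instances API — verify the timestamp format returned by your tenant):

five_min_ago=$(date -u -d '5 minutes ago' '+%Y-%m-%dT%H:%M:%S')   # GNU date; on macOS: date -u -v-5M '+%Y-%m-%dT%H:%M:%S'
recent_job=$(az rest --method get --resource "https://api.fabric.microsoft.com" \
  --url "https://api.fabric.microsoft.com/v1/workspaces/$workspace_id/items/$notebook_id/jobs/instances" \
  | jq -r --arg t "$five_min_ago" '[.value[] | select(.startTimeUtc != null and .startTimeUtc >= $t)][0].id // empty')
[ -n "$recent_job" ] && echo "Recent job $recent_job exists; monitor it instead of submitting a new run"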

PREFER

  • Poll job status with proper intervals — 10-30 seconds between polls; timeout after reasonable duration (e.g., 30 minutes)
  • Check job history when POST response is unreadable — If POST returns "No Content" or an unreadable response, query recent jobs (last 1 minute) before retrying
  • Use Starter Pool for development — Development/testing workloads should use useStarterPool: true
  • Use Workspace Pool for production — Production workloads need consistent performance with useWorkspacePool: true
  • Enable lakehouse schemas during creation — Set creationPayload.enableSchemas: true for better table organization
  • Implement idempotency checks — Prevent duplicate operations by checking existing state first

AVOID

  • Never retry POST with same parameters — If you have a job ID, only use GET to check status; don't create duplicate job instances
  • Don't skip capacity verification — Operations will fail if workspace capacity is paused or unassigned
  • Avoid immediate POST retries on failures — Check for existing/active jobs first to prevent duplicates
  • Don't create new runs if monitoring existing job — One job at a time; wait for completion before submitting new runs
  • Don't hardcode workspace/lakehouse IDs — Discover dynamically via item listing or catalog search APIs
  • Do NOT use Lakehouse Livy sessions to run a Fabric notebook — Lakehouse Livy sessions (the public Livy API) are for ad-hoc interactive Spark code execution. To run a notebook as a job, use the Jobs API (RunNotebook) which creates a Notebook Spark session internally. See SPARK-AUTHORING-CORE.md § Notebook Execution & Job Management

RULES — Read these first, follow them always

Rule 1 — Validate prerequisites before operations. Verify that the workspace has a capacity assigned (see COMMON-CORE.md Create Workspace and Capacity Management) and that resource IDs exist before attempting operations.

Rule 2 — Trust updateDefinition success. A Succeeded poll result from updateDefinition is sufficient confirmation that content and lakehouse bindings persisted. Do NOT call getDefinition after every upload — it is an async LRO that adds significant latency. Only use getDefinition for its intended purpose: reading current notebook content before making modifications.

Rule 3 — Prevent duplicate jobs and monitor execution properly. Before submitting a new notebook run, ALWAYS check for recent job instances first (last 5 minutes). If a recent job exists, monitor it instead of creating a duplicate. After submission, capture the job instance ID immediately and poll status; never retry the POST. See SPARK-AUTHORING-CORE.md Job Monitoring for patterns.
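
A minimal sketch of the capture-then-poll flow (curl is used for the POST because the job instance URL is returned in the Location header; terminal status values per COMMON-CORE.md § Job Execution):

token=$(az account get-access-token --resource "https://api.fabric.microsoft.com" --query accessToken --output tsv)
job_url=$(curl -s -D - -o /dev/null -X POST \
  "https://api.fabric.microsoft.com/v1/workspaces/$workspace_id/items/$notebook_id/jobs/instances?jobType=RunNotebook" \
  -H "Authorization: Bearer $token" -H "Content-Type: application/json" -d '{}' \
  | awk 'tolower($1) == "location:" {print $2}' | tr -d '\r')   # capture the job instance URL immediately
while :; do
  status=$(az rest --method get --resource "https://api.fabric.microsoft.com" --url "$job_url" --query "status" --output tsv)
  echo "status: $status"
  case "$status" in Completed|Failed|Cancelled|Deduped) break ;; esac
  sleep 15   # 10-30 s between polls per the PREFER guidance
done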

Rule 4 — For notebook code authoring, MUST follow SPARK-NOTEBOOK-AUTHORING-CORE.md. When writing code inside notebook cells, MUST read SPARK-NOTEBOOK-AUTHORING-CORE.md first — it defines the code generation approach, rules, and a Module Index linking to detailed guides (lakehouse paths, connections, context, orchestration, etc.). Use the Spark-specific resources in this skill (data-engineering-patterns.md, development-workflow.md) for Spark-only implementation details.


Quick Start Examples

For detailed patterns, authentication, and comprehensive API usage, see:

  • COMMON-CORE.md — Fabric REST API patterns, authentication, item discovery
  • COMMON-CLI.md — az rest usage, environment detection, token acquisition
  • SPARK-AUTHORING-CORE.md — Notebook deployment, lakehouse creation, job execution

Below are minimal quick-start examples. Always reference the COMMON-* files for production use.

Create Workspace & Lakehouse

# See COMMON-CORE.md Environment URLs and SPARK-AUTHORING-CORE.md for full patterns
cat > /tmp/body.json << 'EOF'
{"displayName": "DataEng-Dev"}
EOF
workspace_id=$(az rest --method post --resource "https://api.fabric.microsoft.com" \
  --url "https://api.fabric.microsoft.com/v1/workspaces" \
  --body @/tmp/body.json --query "id" --output tsv)

cat > /tmp/body.json << 'EOF'
{"displayName": "DevLakehouse", "type": "Lakehouse", "creationPayload": {"enableSchemas": true}}
EOF
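# Note: item creation can return 202 + LRO with an empty body; if $lakehouse_id comes back
# empty, follow COMMON-CLI.md § Long-Running Operations (LRO) Pattern to resolve the new item's ID.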
lakehouse_id=$(az rest --method post --resource "https://api.fabric.microsoft.com" \
  --url "https://api.fabric.microsoft.com/v1/workspaces/$workspace_id/items" \
  --body @/tmp/body.json --query "id" --output tsv)
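
Per Rule 1, assign a capacity before running Spark operations on the new workspace — a minimal sketch assuming $capacity_id is already known (see COMMON-CORE.md § Capacity Management for discovering it):

cat > /tmp/body.json << EOF
{"capacityId": "$capacity_id"}
EOF
az rest --method post --resource "https://api.fabric.microsoft.com" \
  --url "https://api.fabric.microsoft.com/v1/workspaces/$workspace_id/assignToCapacity" \
  --body @/tmp/body.json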

Organize Lakehouse Tables with Schemas

# See SPARK-AUTHORING-CORE.md Lakehouse Schema Organization for table organization patterns
# Create schemas for medallion architecture
spark.sql("CREATE SCHEMA IF NOT EXISTS bronze")
spark.sql("CREATE SCHEMA IF NOT EXISTS silver")
spark.sql("CREATE SCHEMA IF NOT EXISTS gold")

Create Lakehouse Livy Session

# See SPARK-CONSUMPTION-CORE.md for Lakehouse Livy session configuration and management
# IMPORTANT: Body MUST be flat JSON with memory/cores — do NOT wrap in {"payload": ...}
cat > /tmp/body.json << 'EOF'
{"name": "dev-session", "driverMemory": "56g", "driverCores": 8, "executorMemory": "56g", "executorCores": 8, "conf": {"spark.dynamicAllocation.enabled": "true", "spark.fabric.pool.name": "Starter Pool"}}
EOF
az rest --method post --resource "https://api.fabric.microsoft.com" \
  --url "https://api.fabric.microsoft.com/v1/workspaces/$workspace_id/lakehouses/$lakehouse_id/livyapi/versions/2023-12-01/sessions" \
  --body @/tmp/body.json

Lakehouse Livy Session Body — Common Mistakes

  • {"payload": {"kind": "pyspark"}} → HTTP 500 (wrong wrapper, missing required fields)
  • {"kind": "pyspark"} → HTTP 500 (missing driverMemory, executorMemory, etc.)
  • ✅ Flat JSON with name, driverMemory, driverCores, executorMemory, executorCores (and optionally conf with Starter Pool)
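
Once the session is created and reaches the idle state, submit code against it. A hedged sketch following the standard Livy statement API (the endpoint shape mirrors the session URL above; verify against SPARK-CONSUMPTION-CORE.md):

livy_base="https://api.fabric.microsoft.com/v1/workspaces/$workspace_id/lakehouses/$lakehouse_id/livyapi/versions/2023-12-01"
session_id="<id from the create-session response>"

# Check session state (repeat until it reports idle)
az rest --method get --resource "https://api.fabric.microsoft.com" \
  --url "$livy_base/sessions/$session_id" --query "state" --output tsv

# Submit a PySpark statement
cat > /tmp/stmt.json << 'EOF'
{"code": "print(spark.range(100).count())", "kind": "pyspark"}
EOF
az rest --method post --resource "https://api.fabric.microsoft.com" \
  --url "$livy_base/sessions/$session_id/statements" --body @/tmp/stmt.json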

Spark Performance Configs

For detailed workload-specific configurations, see data-engineering-patterns.md Delta Lake Best Practices.

Quick reference:

# Write-heavy (Bronze): Disable V-Order, enable autoCompact
# Balanced (Silver): Enable V-Order, adaptive execution  
# Read-heavy (Gold): Vectorized reads, optimal parallelism
# See data-engineering-patterns.md for complete config tables
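
A hedged PySpark sketch of these toggles (property names vary across Fabric runtime versions; confirm against the config tables in data-engineering-patterns.md):

# Bronze (write-heavy): skip V-Order at write time, let autoCompact merge small files
spark.conf.set("spark.sql.parquet.vorder.enabled", "false")
spark.conf.set("spark.databricks.delta.autoCompact.enabled", "true")

# Silver (balanced): V-Order on, adaptive query execution on
spark.conf.set("spark.sql.parquet.vorder.enabled", "true")
spark.conf.set("spark.sql.adaptive.enabled", "true")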

Focus: Essential CLI patterns for Spark/data engineering development and notebook code authoring, with intelligent routing to specialized resources. For comprehensive patterns, always reference COMMON-* files and resource documents.
