spark-consumption-cli (Microsoft)

npx skills add https://github.com/microsoft/skills-for-fabric --skill spark-consumption-cli

Update Check — ONCE PER SESSION (mandatory) The first time this skill is used in a session, run the check-updates skill before proceeding.

  • GitHub Copilot CLI / VS Code: invoke the check-updates skill.
  • Claude Code / Cowork / Cursor / Windsurf / Codex: compare the local vs remote package.json version (a sketch follows this list).
  • Skip if the check was already performed earlier in this session.
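
A minimal sketch of that version comparison, assuming the skill is installed under ./skills/spark-consumption-cli and the repository keeps a package.json at the same path (both paths are assumptions; adjust to your layout):

# Compare local vs remote skill versions (paths are assumed, not authoritative)
local=$(jq -r .version skills/spark-consumption-cli/package.json)
remote=$(curl -s https://raw.githubusercontent.com/microsoft/skills-for-fabric/main/skills/spark-consumption-cli/package.json | jq -r .version)
[[ "$local" == "$remote" ]] || echo "Skill update available: $local -> $remote"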

CRITICAL NOTES

  1. To find workspace details (including its ID) from a workspace name: list all workspaces, then filter with JMESPath.
  2. To find item details (including its ID) from a workspace ID, item type, and item name: list all items of that type in the workspace, then filter with JMESPath (a sketch of both lookups follows this list).
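
A hedged sketch of both lookups, assuming the $FABRIC_API_URL and $FABRIC_RESOURCE_SCOPE variables from the Quick Start below; "Sales" and "SalesLakehouse" are hypothetical display names:

# Note 1: workspace ID from a display name (apply the pagination pattern from
# COMMON-CLI.md if your tenant has more than one page of workspaces)
workspaceId=$(az rest --method get --resource "$FABRIC_RESOURCE_SCOPE" --url "$FABRIC_API_URL/workspaces" --query "value[?displayName=='Sales'] | [0].id" --output tsv)

# Note 2: item ID from workspace ID, item type, and display name
lakehouseId=$(az rest --method get --resource "$FABRIC_RESOURCE_SCOPE" --url "$FABRIC_API_URL/workspaces/$workspaceId/items?type=Lakehouse" --query "value[?displayName=='SalesLakehouse'] | [0].id" --output tsv)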

Data Engineering Consumption — CLI Skill

Table of Contents

| Task | Reference | Notes |
| --- | --- | --- |
| Fabric Topology & Key Concepts | COMMON-CORE.md § Fabric Topology & Key Concepts | |
| Environment URLs | COMMON-CORE.md § Environment URLs | |
| Authentication & Token Acquisition | COMMON-CORE.md § Authentication & Token Acquisition | Wrong audience = 401; read before any auth issue |
| Core Control-Plane REST APIs | COMMON-CORE.md § Core Control-Plane REST APIs | |
| Pagination | COMMON-CORE.md § Pagination | |
| Long-Running Operations (LRO) | COMMON-CORE.md § Long-Running Operations (LRO) | |
| Rate Limiting & Throttling | COMMON-CORE.md § Rate Limiting & Throttling | |
| OneLake Data Access | COMMON-CORE.md § OneLake Data Access | Requires storage.azure.com token, not Fabric token |
| Job Execution | COMMON-CORE.md § Job Execution | |
| Capacity Management | COMMON-CORE.md § Capacity Management | |
| Gotchas & Troubleshooting | COMMON-CORE.md § Gotchas & Troubleshooting | |
| Best Practices | COMMON-CORE.md § Best Practices | |
| Tool Selection Rationale | COMMON-CLI.md § Tool Selection Rationale | |
| Finding Workspaces and Items in Fabric | COMMON-CLI.md § Finding Workspaces and Items in Fabric | Mandatory: read link first (needed for finding a workspace ID by name, or an item ID by name, item type, and workspace ID) |
| Authentication Recipes | COMMON-CLI.md § Authentication Recipes | az login flows and token acquisition |
| Fabric Control-Plane API via az rest | COMMON-CLI.md § Fabric Control-Plane API via az rest | Always pass --resource https://api.fabric.microsoft.com or az rest fails |
| Pagination Pattern | COMMON-CLI.md § Pagination Pattern | |
| Long-Running Operations (LRO) Pattern | COMMON-CLI.md § Long-Running Operations (LRO) Pattern | |
| OneLake Data Access via curl | COMMON-CLI.md § OneLake Data Access via curl | Use curl, not az rest (different token audience) |
| SQL / TDS Data-Plane Access | COMMON-CLI.md § SQL / TDS Data-Plane Access | sqlcmd (Go): connect, query, CSV export |
| Job Execution (CLI) | COMMON-CLI.md § Job Execution | |
| OneLake Shortcuts | COMMON-CLI.md § OneLake Shortcuts | |
| Capacity Management (CLI) | COMMON-CLI.md § Capacity Management | |
| Composite Recipes | COMMON-CLI.md § Composite Recipes | |
| Gotchas & Troubleshooting (CLI-Specific) | COMMON-CLI.md § Gotchas & Troubleshooting (CLI-Specific) | az rest audience, shell escaping, token expiry |
| Quick Reference: az rest Template | COMMON-CLI.md § Quick Reference: az rest Template | |
| Quick Reference: Token Audience ↔ CLI Tool Matrix | COMMON-CLI.md § Quick Reference: Token Audience ↔ CLI Tool Matrix | Which --resource + tool for each service |
| Relationship to SPARK-AUTHORING-CORE.md | SPARK-CONSUMPTION-CORE.md § Relationship to SPARK-AUTHORING-CORE.md | |
| Data Engineering Consumption Capability Matrix | SPARK-CONSUMPTION-CORE.md § Data Engineering Consumption Capability Matrix | |
| OneLake Table APIs (Schema-enabled Lakehouses) | SPARK-CONSUMPTION-CORE.md § OneLake Table APIs (Schema-enabled Lakehouses) | Unity Catalog-compatible metadata; requires storage.azure.com token |
| Lakehouse Livy Session Management | SPARK-CONSUMPTION-CORE.md § Livy Session Management | Lakehouse Livy API: session creation, states, lifecycle, termination |
| Interactive Data Exploration | SPARK-CONSUMPTION-CORE.md § Interactive Data Exploration | Statement execution, output retrieval, data discovery |
| PySpark Analytics Patterns | SPARK-CONSUMPTION-CORE.md § PySpark Analytics Patterns | Cross-lakehouse 3-part naming, performance optimization |
| Must/Prefer/Avoid | SKILL.md § Must/Prefer/Avoid | MUST DO / AVOID / PREFER checklists |
| Quick Start | SKILL.md § Quick Start | CLI-specific Lakehouse Livy session setup and data exploration |
| Key Fabric Patterns | SKILL.md § Key Fabric Patterns | Spark pattern quick-reference table |
| Session Cleanup | SKILL.md § Session Cleanup | Clean up idle Lakehouse Livy sessions via CLI |

Must/Prefer/Avoid

MUST DO

  • Check for existing idle sessions before creating new ones
  • Use dynamic workspace/lakehouse discovery
  • Follow API patterns from COMMON-CLI.md

PREFER

  • sqldw-consumption-cli for simple lakehouse queries — row counts, SELECT, schema exploration, filtering, and aggregation on lakehouse Delta tables should use the SQL Endpoint via sqlcmd, not Spark (a sketch follows this list). Only use this skill when the user explicitly requests PySpark, DataFrames, or Spark-specific features.
  • SQL Endpoint for Delta tables
  • Livy for unstructured/JSON data or complex Python analytics
  • Session reuse over creation
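
A hedged sketch of the first PREFER item, using go-sqlcmd against the lakehouse SQL Endpoint. $SQL_ENDPOINT, $lakehouseName, and the table name are placeholders; COMMON-CLI.md § SQL / TDS Data-Plane Access is the authoritative recipe:

# Row count through the SQL Endpoint instead of a Spark session.
# $SQL_ENDPOINT is the endpoint's host (look it up in the portal or via the
# API); the lakehouse name doubles as the database name.
sqlcmd -S "$SQL_ENDPOINT" -d "$lakehouseName" --authentication-method ActiveDirectoryDefault -Q "SELECT COUNT(*) AS row_count FROM dbo.your_table"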

AVOID

  • Hardcoded workspace IDs
  • Creating unnecessary sessions
  • Large result sets without LIMIT
  • Confusing Lakehouse Livy sessions with Notebook Spark sessions — this skill covers Lakehouse Livy sessions (the public Livy API at /lakehouses/{lhId}/livyapi/.../sessions). Notebook Spark sessions are created internally when a notebook runs via the Jobs API (RunNotebook) and are NOT managed through the Livy API. To run a notebook as a job, see SPARK-AUTHORING-CORE.md § Notebook Execution & Job Management (a contrasting sketch follows this list).
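
For contrast with the Livy sessions below, a hedged sketch of running a notebook as an on-demand job; $notebookId is assumed to be discovered via the item-discovery pattern above, and SPARK-AUTHORING-CORE.md remains the full recipe:

# Trigger a notebook run; the Notebook Spark session is created internally.
# Returns 202 Accepted; poll the job instance per the LRO pattern in COMMON-CLI.md.
az rest --method post --resource "$FABRIC_RESOURCE_SCOPE" --url "$FABRIC_API_URL/workspaces/$workspaceId/items/$notebookId/jobs/instances?jobType=RunNotebook"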

Quick Start

Environment Setup

Apply environment detection from COMMON-CORE.md Environment Detection Pattern to set:

  • $FABRIC_API_BASE and $FABRIC_RESOURCE_SCOPE
  • $FABRIC_API_URL and $LIVY_API_PATH for Livy operations

Authentication: use token acquisition from COMMON-CLI.md § Environment Detection and API Configuration.
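
For reference, a minimal sketch of that acquisition (COMMON-CLI.md remains authoritative); the az rest calls below handle this implicitly via --resource:

# Acquire a Fabric control-plane token explicitly (e.g., for curl-based calls)
token=$(az account get-access-token --resource "https://api.fabric.microsoft.com" --query accessToken --output tsv)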

Workspace & Item Discovery

Preferred: Use the COMMON-CLI.md item discovery patterns (§ Finding Workspaces and Items in Fabric) to find workspaces and items by name.

Fallback (when workspace is already known):

# List workspaces
az rest --method get --resource "$FABRIC_RESOURCE_SCOPE" --url "$FABRIC_API_URL/workspaces" --query "value[].{name:displayName, id:id}" --output table
read -p "Workspace ID: " workspaceId

# List lakehouses in workspace
az rest --method get --resource "$FABRIC_RESOURCE_SCOPE" --url "$FABRIC_API_URL/workspaces/$workspaceId/items?type=Lakehouse" --query "value[].{name:displayName, id:id}" --output table  
read -p "Lakehouse ID: " lakehouseId

Lakehouse Livy Session Management

Two types of Spark sessions in Fabric — This skill manages Lakehouse Livy sessions, created via the public Livy API endpoint (/lakehouses/{lhId}/livyapi/.../sessions). These are ad-hoc interactive sessions for remote clients. Notebook Spark sessions are a separate mechanism — they are created internally when a Fabric Notebook is executed (via portal or Jobs API RunNotebook), and are managed through the notebook lifecycle, not the Livy API.

# Check for existing idle Lakehouse Livy session (avoid resource waste)
sessionId=$(az rest --method get --resource "$FABRIC_RESOURCE_SCOPE" --url "$FABRIC_API_URL/workspaces/$workspaceId/lakehouses/$lakehouseId/$LIVY_API_PATH/sessions" --query "sessions[?state=='idle'] | [0].id" --output tsv)

# Create if none available - FORCE STARTER POOL USAGE
if [[ -z "$sessionId" ]]; then
    cat > /tmp/body.json << 'EOF'
{
    "name":"analysis",
    "driverMemory":"56g",
    "driverCores":8,
    "executorMemory":"56g",
    "executorCores":8,
    "conf": {
        "spark.dynamicAllocation.enabled": "true",
        "spark.fabric.pool.name": "Starter Pool"
    }
}
EOF
    sessionId=$(az rest --method post --resource "$FABRIC_RESOURCE_SCOPE" --url "$FABRIC_API_URL/workspaces/$workspaceId/lakehouses/$lakehouseId/$LIVY_API_PATH/sessions" --body @/tmp/body.json --query "id" --output tsv)
    
    echo "⏳ Waiting for starter pool session to be ready..." 
    # With starter pools, this should be 3-5 seconds
    timeout=30  # Reduced from 90s since starter pools are fast
    while [ $timeout -gt 0 ]; do
        state=$(az rest --resource "$FABRIC_RESOURCE_SCOPE" --url "$FABRIC_API_URL/workspaces/$workspaceId/lakehouses/$lakehouseId/$LIVY_API_PATH/sessions/$sessionId" --query "state" --output tsv)
        if [[ "$state" == "idle" ]]; then
            echo "✅ Session ready in starter pool!"
            break
        fi
        echo "   Session state: $state (${timeout}s remaining)"
        sleep 3
        timeout=$((timeout - 3))
    done
fi

Data Exploration (Fabric-Specific Patterns)

# Execute statement (LLM knows Python/Spark syntax)
cat > /tmp/body.json << 'EOF'
{
  "code": "spark.sql(\"SHOW TABLES\").show(); df = spark.table(\"your_table\"); df.describe().show()",
  "kind": "pyspark"
}
EOF
# Capture the statement id so the result can be polled (see sketch below)
stmtId=$(az rest --method post --resource "$FABRIC_RESOURCE_SCOPE" --url "$FABRIC_API_URL/workspaces/$workspaceId/lakehouses/$lakehouseId/$LIVY_API_PATH/sessions/$sessionId/statements" --body @/tmp/body.json --query "id" --output tsv)
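
Statement execution is asynchronous: the POST above returns immediately and the result must be polled. A hedged sketch, with state and output field names taken from the open-source Livy statement protocol:

# Poll until the statement completes (Livy states: waiting -> running -> available)
while true; do
    state=$(az rest --method get --resource "$FABRIC_RESOURCE_SCOPE" --url "$FABRIC_API_URL/workspaces/$workspaceId/lakehouses/$lakehouseId/$LIVY_API_PATH/sessions/$sessionId/statements/$stmtId" --query "state" --output tsv)
    [[ "$state" == "available" ]] && break
    sleep 2
done

# Print the statement's console output
az rest --method get --resource "$FABRIC_RESOURCE_SCOPE" --url "$FABRIC_API_URL/workspaces/$workspaceId/lakehouses/$lakehouseId/$LIVY_API_PATH/sessions/$sessionId/statements/$stmtId" --query 'output.data."text/plain"' --output tsv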

Key Fabric Patterns

| Pattern | Code | Use Case |
| --- | --- | --- |
| Table Discovery | spark.sql("SHOW TABLES") | List available tables |
| Cross-Lakehouse | spark.sql("SELECT * FROM other_workspace.table") | Query across workspaces |
| Delta Features | DeltaTable.forName(spark, "t").history(); spark.read.option("versionAsOf", 1).table("t") | Time travel, versioning |
| Schema Evolution | df.printSchema() | Understand structure |
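
As a sketch, the Delta Features row packaged as a Livy statement body; 'your_table' is a placeholder, and the DeltaTable/versionAsOf calls are standard Delta Lake APIs assumed available in the Fabric runtime:

cat > /tmp/body.json << 'EOF'
{
  "code": "from delta.tables import DeltaTable; DeltaTable.forName(spark, 'your_table').history().select('version', 'timestamp', 'operation').show(); spark.read.option('versionAsOf', 0).table('your_table').show(5)",
  "kind": "pyspark"
}
EOF
# POST to the same .../statements endpoint as above and poll as shown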

Lakehouse Livy Session Cleanup

# Clean up idle Lakehouse Livy sessions (optional)
az rest --method get --resource "$FABRIC_RESOURCE_SCOPE" --url "$FABRIC_API_URL/workspaces/$workspaceId/lakehouses/$lakehouseId/$LIVY_API_PATH/sessions" --query "sessions[?state=='idle'].id" --output tsv | xargs -I {} az rest --method delete --resource "$FABRIC_RESOURCE_SCOPE" --url "$FABRIC_API_URL/workspaces/$workspaceId/lakehouses/$lakehouseId/$LIVY_API_PATH/sessions/{}"

Focus: This skill provides Fabric-specific REST API patterns. LLM already knows Python/Spark syntax — we focus on Fabric integration, session management, and API endpoints.
