databricks-migration by Microsoft

npx skills add https://github.com/microsoft/skills-for-fabric --skill databricks-migration

Update Check — ONCE PER SESSION (mandatory): the first time this skill is used in a session, run the check-updates skill before proceeding.

  • GitHub Copilot CLI / VS Code: invoke the check-updates skill.
  • Claude Code / Cowork / Cursor / Windsurf / Codex: compare local vs remote package.json version.
  • Skip if the check was already performed earlier in this session.

CRITICAL NOTES

  1. To find workspace details (including its ID) from a workspace name: list all workspaces, then use JMESPath filtering
  2. To find item details (including its ID) from workspace ID, item type, and item name: list all items of that type in that workspace, then use JMESPath filtering (see the sketch after this list)
  3. dbutils.widgets has no direct equivalent in Fabric — use notebook parameters (cell tag parameters) or notebookutils.runtime.context for context injection
  4. dbutils.library (runtime library install) has no equivalent — use Fabric Environments for reproducible library management
  5. Unity Catalog uses a 3-level namespace (catalog.schema.table); Fabric Lakehouse uses 2-level (schema.table within a named Lakehouse)
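
A minimal Python sketch of the discovery pattern in notes 1–2, assuming a Fabric API bearer token is already available (see COMMON-CORE.md § Authentication) and that the jmespath package is installed; workspace and notebook names are placeholders and pagination is omitted:

import requests
import jmespath  # mirrors the JMESPath filtering used in the CLI flows (COMMON-CLI.md)

token = "<bearer token acquired per COMMON-CORE.md § Authentication>"  # placeholder
base = "https://api.fabric.microsoft.com/v1"
headers = {"Authorization": f"Bearer {token}"}

# 1. List all workspaces, then filter by display name to resolve the workspace ID.
workspaces = requests.get(f"{base}/workspaces", headers=headers).json()
workspace_id = jmespath.search("value[?displayName=='Sales Analytics'].id | [0]", workspaces)

# 2. List items of one type in that workspace, then filter by item name to resolve the item ID.
items = requests.get(f"{base}/workspaces/{workspace_id}/items",
                     headers=headers, params={"type": "Notebook"}).json()
notebook_id = jmespath.search("value[?displayName=='ingest_orders'].id | [0]", items)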

Databricks → Microsoft Fabric Migration

Prerequisite Knowledge

Read these companion documents before executing migration tasks:

  • COMMON-CORE.md — Fabric REST API patterns, authentication, token audiences, item discovery
  • COMMON-CLI.md — az rest, az login, token acquisition, Fabric REST via CLI
  • SPARK-AUTHORING-CORE.md — Notebook deployment, lakehouse creation, Spark job execution

For notebook and Lakehouse creation, see spark-authoring-cli. For Fabric Warehouse DDL/DML authoring, see sqldw-authoring-cli.


Table of Contents

| Topic | Reference |
| --- | --- |
| Migration Workload Map | § Migration Workload Map |
| Complete dbutils → notebookutils Mapping | dbutils-to-notebookutils.md |
| Unity Catalog → Fabric Lakehouse Schemas | catalog-migration.md |
| Before/After Code Patterns | code-patterns.md |
| Cluster Config → Fabric Spark Pools | § Cluster Config → Fabric Spark Pools |
| Databricks Jobs → Spark Job Definitions | § Databricks Jobs → Spark Job Definitions |
| Delta Sharing → OneLake Shortcuts | § Delta Sharing → OneLake Shortcuts |
| MLflow → Fabric ML Experiments | § MLflow → Fabric ML Experiments |
| Must / Prefer / Avoid | § Must / Prefer / Avoid |
| Authentication & Token Acquisition | COMMON-CORE.md § Authentication |
| Lakehouse Management | SPARK-AUTHORING-CORE.md § Lakehouse Management |
| Notebook Management | SPARK-AUTHORING-CORE.md § Notebook Management |

Migration Workload Map

| Databricks Component | Fabric Target | Notes |
| --- | --- | --- |
| All-purpose cluster (notebooks, REPL) | Fabric Notebook (Starter Pool or Custom Pool) | No persistent cluster — Fabric provisions compute on session start |
| Job cluster (automated jobs) | Spark Job Definition (SJD) | SJD maps one-to-one with Databricks Jobs on job clusters |
| Unity Catalog | Fabric Lakehouse (schema per namespace) | See catalog-migration.md |
| Databricks Repos (Git-backed notebooks) | Fabric Git Integration | Connect workspace to Azure DevOps or GitHub; notebooks are synced |
| Delta Live Tables (DLT) | Fabric Notebooks + Data Pipelines | No DLT equivalent — rewrite DLT datasets as parameterized notebook cells with pipeline orchestration |
| Databricks SQL Warehouses | Fabric Warehouse or Lakehouse SQL Endpoint | SQL warehouse sessions → Warehouse (for write) or SQL Endpoint (for read-only) |
| MLflow Tracking | Fabric ML Experiments | MLflow SDK is supported in Fabric — see § MLflow |
| Delta Sharing | OneLake Shortcuts + Fabric external data sharing | See § Delta Sharing → OneLake Shortcuts |
| Databricks Feature Store | Fabric Feature Store (preview) | Direct conceptual equivalent; APIs differ |
| dbutils (all sub-modules) | notebookutils (most sub-modules) | See dbutils-to-notebookutils.md for full mapping |

dbutils → notebookutils Quick Reference

The complete side-by-side API table is in dbutils-to-notebookutils.md. The key mappings are:

| dbutils Call | notebookutils Equivalent | Compatibility Note |
| --- | --- | --- |
| dbutils.fs.ls(path) | notebookutils.fs.ls(path) | Direct replacement |
| dbutils.fs.cp(src, dest) | notebookutils.fs.cp(src, dest) | Direct replacement |
| dbutils.fs.mv(src, dest) | notebookutils.fs.mv(src, dest) | Direct replacement |
| dbutils.fs.rm(path, recurse) | notebookutils.fs.rm(path, recurse) | Direct replacement |
| dbutils.fs.mkdirs(path) | notebookutils.fs.mkdirs(path) | Direct replacement |
| dbutils.fs.put(path, contents) | notebookutils.fs.put(path, contents) | Direct replacement |
| dbutils.fs.head(path, maxBytes) | notebookutils.fs.head(path, maxBytes) | Direct replacement |
| dbutils.fs.mount(...) | Not available — use OneLake Shortcuts | Fabric uses token-based access; no FUSE mounts |
| dbutils.secrets.get(scope, key) | notebookutils.credentials.getSecret(keyVaultUrl, secretName) | Scope → Key Vault URL; key → secret name |
| dbutils.notebook.run(path, timeout, args) | notebookutils.notebook.run(name, timeout, args) | path → notebook name (relative to workspace) |
| dbutils.notebook.exit(value) | notebookutils.notebook.exit(value) | Direct replacement |
| dbutils.widgets.get(name) | See § Widgets Migration | No direct equivalent |
| dbutils.library.install(...) | Not available — use Fabric Environments | Runtime library install not supported |
| dbutils.data.summarize(df) | display(df.summary()) | Use display() or pandas describe() |
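
Notebook orchestration calls change only in how the child notebook is addressed — a hedged before/after sketch (notebook names and arguments are placeholders):

# Databricks — child addressed by workspace path
result = dbutils.notebook.run("/Repos/etl/child_notebook", 600, {"run_date": "2024-01-01"})

# Fabric — child addressed by notebook name within the workspace
result = notebookutils.notebook.run("child_notebook", 600, {"run_date": "2024-01-01"})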

Widgets Migration

dbutils.widgets has no direct equivalent in Fabric. Use these patterns instead:

| Use Case | Fabric Pattern |
| --- | --- |
| Pass parameter from parent notebook | notebookutils.notebook.run("child", args={"param": "value"}) — read via notebookutils.runtime.context["parameters"] |
| Pipeline-driven parameterization | Mark cell with parameters tag in notebook; pipeline injects values via notebook activity |
| Interactive selection in notebook | Use display() with input cells or Fabric Data Activator |
| Read pipeline-injected value in code | import notebookutils; params = notebookutils.runtime.context.get("parameters", {}) |
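
A minimal before/after sketch for the common widget cases (parameter names and defaults are placeholders):

# Databricks — widget-driven parameter
run_date = dbutils.widgets.get("run_date")

# Fabric, option 1 — put defaults in a cell tagged "parameters";
# a pipeline notebook activity or calling notebook overrides them at runtime
run_date = "2024-01-01"

# Fabric, option 2 — read values passed by a caller (per the table above)
params = notebookutils.runtime.context.get("parameters", {})
run_date = params.get("run_date", "2024-01-01")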

Cluster Config → Fabric Spark Pools

| Databricks Cluster Concept | Fabric Spark Equivalent | Notes |
| --- | --- | --- |
| All-purpose cluster (interactive) | Starter Pool | Auto-provisioned; no config; ideal for notebooks |
| Job cluster (single-use for jobs) | Custom Pool (or Starter Pool) attached to SJD | Configure node size, autoscale in Fabric capacity settings |
| Node type (e.g., Standard_DS3_v2) | Fabric node size (Small/Medium/Large/X-Large/XX-Large) | Map by vCore/memory ratio |
| Autoscale min/max workers | Custom Pool min/max node settings | Available in workspace Spark settings |
| spark.conf in cluster settings | Fabric Environment Spark properties | Move to Environment item; attach to workspace or notebook |
| init_scripts (cluster init) | Fabric Environment install script | Not fully equivalent — only library installs are supported |
| Databricks Runtime version | Fabric Runtime (1.1 = Spark 3.3, 1.2 = Spark 3.4, 1.3 = Spark 3.5) | Choose matching Spark version; test deprecated APIs |
| Photon accelerator | Fabric Native Execution Engine (NEE) | Enable in workspace Spark settings; vectorized execution similar to Photon |
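
Where cluster-level spark.conf entries cannot move into a Fabric Environment immediately, a session-scoped interim sketch (the property shown is illustrative; per the AVOID list, proprietary spark.databricks.* keys should be dropped rather than ported):

# Databricks cluster JSON carried e.g. "spark.sql.shuffle.partitions": "200" in spark_conf
# Fabric interim — set it per session until an Environment item owns the setting
spark.conf.set("spark.sql.shuffle.partitions", "200")

# Proprietary keys (spark.databricks.*) have no Fabric meaning — remove them during migration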

Databricks Jobs → Spark Job Definitions

| Databricks Jobs Concept | Fabric SJD Equivalent | Notes |
| --- | --- | --- |
| Job with single notebook task | SJD referencing a notebook | Attach a default Lakehouse; pass parameters via SJD args |
| Multi-task job (DAG of tasks) | Fabric Data Pipeline orchestrating multiple SJDs/notebooks | Pipeline activities map to job tasks; dependencies = activity dependencies |
| Job schedule (cron) | Pipeline schedule trigger | Cron expression → recurrence trigger in pipeline |
| Job parameters | SJD default arguments or notebook cell parameters | Parameters cell in notebook is injected at runtime |
| Job clusters per task | Pool attached to SJD | Each SJD can specify its Spark pool independently |
| Databricks Workflows | Fabric Data Pipelines | Full DAG orchestration with conditions, loops, and failure branches |

Delegate to spark-authoring-cli for SJD creation and notebook deployment.


Delta Sharing → OneLake Shortcuts

| Databricks Delta Sharing Pattern | Fabric Equivalent |
| --- | --- |
| Provider publishes a Delta share | Fabric external data sharing (preview) or OneLake Shortcut to ADLS Gen2 where Delta data resides |
| Recipient reads shared data | Create a OneLake Shortcut pointing to the ADLS Gen2 Delta table; access via Lakehouse |
| Cross-workspace table sharing within org | OneLake Shortcuts pointing to another workspace's Lakehouse tables — no data copy |
| Cross-tenant sharing | Fabric external data sharing (GA roadmap) — use ADLS Gen2 shortcut as interim |
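
On the recipient side, once a shortcut is surfaced in a Lakehouse, reads are plain Spark — a hedged sketch with placeholder share, shortcut, and path names:

# Databricks — read a table exposed through Delta Sharing (share mounted as a catalog)
df = spark.read.table("share_catalog.sales.orders")

# Fabric — read the same data through a OneLake Shortcut surfaced in the Tables area
df = spark.read.table("shared_orders")

# Or load a Files-area shortcut directly as Delta
df = spark.read.format("delta").load("Files/shared/orders")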

MLflow → Fabric ML Experiments

Fabric ML Experiments are built on the MLflow SDK — most code is directly portable:

| Databricks MLflow Pattern | Fabric Equivalent | Migration Action |
| --- | --- | --- |
| mlflow.set_tracking_uri("databricks") | Remove — Fabric tracking is automatic | Delete this line in Fabric notebooks |
| mlflow.set_experiment("/path/exp") | mlflow.set_experiment("experiment_name") | Use name only (not path); Fabric creates the Experiment item |
| mlflow.log_metric(...) | mlflow.log_metric(...) — identical | No change |
| mlflow.log_artifact(...) | mlflow.log_artifact(...) — identical | No change |
| mlflow.autolog() | mlflow.autolog() — identical | No change |
| mlflow.register_model(...) | mlflow.register_model(...) — identical | Model Registry is available in Fabric ML |
| Databricks Model Serving | Azure ML Online Endpoints or Fabric Data Activator | No direct Fabric model serving yet — use Azure ML |
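
A before/after tracking snippet reflecting the table above (experiment and metric names are placeholders):

# Databricks
import mlflow
mlflow.set_tracking_uri("databricks")
mlflow.set_experiment("/Users/me@contoso.com/churn-model")
with mlflow.start_run():
    mlflow.log_metric("auc", 0.91)

# Fabric — tracking URI is automatic; reference the experiment by name only
import mlflow
mlflow.set_experiment("churn-model")
with mlflow.start_run():
    mlflow.log_metric("auc", 0.91)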

Must / Prefer / Avoid

MUST DO

  • Replace all dbutils.* calls using the mapping in dbutils-to-notebookutils.md — dbutils is not available in Fabric notebooks
  • Replace dbutils.fs.mount() with OneLake Shortcuts — Fabric uses token-based identity access; FUSE mounts are not supported
  • Replace dbutils.secrets.get(scope, key) with notebookutils.credentials.getSecret(keyVaultUrl, secretName) — secret scopes map to Azure Key Vault URLs
  • Redesign widget-based parameter passing using notebook parameter cells (tagged parameters) or notebookutils.runtime.context
  • Replace dbutils.library.install() with Fabric Environments — runtime library installs are not supported in production workloads
  • Adapt Unity Catalog 3-level namespaces (catalog.schema.table) to Fabric 2-level (schema.table within a Lakehouse) — see catalog-migration.md
  • Map Databricks cluster init scripts to Fabric Environments — cluster-level library installs must move to Environment items

PREFER

  • Fabric Native Execution Engine (NEE) as the Photon equivalent — enable in workspace Spark settings for vectorized execution on Delta Lake
  • OneLake Shortcuts over data copy for Delta tables that already exist in ADLS Gen2 — point directly without re-ingesting
  • Fabric Git Integration as the replacement for Databricks Repos — connect workspace to ADO or GitHub for notebook version control
  • Fabric ML Experiments for direct MLflow continuity — tracking code requires minimal changes (remove set_tracking_uri)
  • Medallion architecture when restructuring migrated Databricks catalogs — align bronze, silver, gold Unity Catalog schemas to separate Fabric Lakehouses
  • Starter Pool for migrating interactive notebook workflows — eliminates cluster startup time that was a common pain point in Databricks job clusters

AVOID

  • Do not import dbutils or attempt dbutils = ... assignments in Fabric notebooks — this will raise NameError; always use notebookutils
  • Do not assume Unity Catalog governance policies transfer automatically — RBAC, row-level security, and column masking must be reconfigured in Fabric using workspace roles and Lakehouse permissions
  • Do not use %pip install in production Fabric notebooks at runtime — use Fabric Environments for stable, versioned library management
  • Do not attempt to port Delta Live Tables (DLT) pipelines verbatim — DLT has no Fabric equivalent; rewrite as parameterized notebooks orchestrated by Fabric Pipelines
  • Do not rely on Databricks-specific Spark configurations (e.g., spark.databricks.*) — these are proprietary and will be silently ignored or raise errors in Fabric
  • Do not use DBFS paths (dbfs:/...) — there is no DBFS in Fabric; all paths must use OneLake abfss:// or Lakehouse-relative paths

Examples

See dbutils-to-notebookutils.md and code-patterns.md for the full mapping. Key quick references:

dbutils.fs → notebookutils.fs

# Databricks
dbutils.fs.ls("/mnt/bronze/orders/")
dbutils.fs.cp("/mnt/raw/file.csv", "/mnt/archive/file.csv")

# Fabric (replace DBFS/mount paths with OneLake relative paths)
notebookutils.fs.ls("Files/bronze/orders/")
notebookutils.fs.cp("Files/raw/file.csv", "Files/archive/file.csv")

dbutils.secrets → notebookutils.credentials

# Databricks
pwd = dbutils.secrets.get(scope="prod", key="db-password")

# Fabric (scope → Key Vault URL, key → secret name)
pwd = notebookutils.credentials.getSecret("https://myvault.vault.azure.net/", "db-password")

Unity Catalog namespace → Lakehouse schema

# Databricks
df = spark.read.table("prod.silver.customers")

# Fabric (catalog dropped; Lakehouse context provides it)
df = spark.read.table("silver.customers")
