huggingface-datasets by huggingface

Use this skill for Hugging Face Dataset Viewer API workflows that fetch subset/split metadata, paginate rows, search text, apply filters, download parquet URLs, and read size or statistics.

npx skills add https://github.com/huggingface/skills --skill huggingface-datasets

name: huggingface-datasets
description: Use this skill for Hugging Face Dataset Viewer API workflows that fetch subset/split metadata, paginate rows, search text, apply filters, download parquet URLs, and read size or statistics.

Hugging Face Dataset Viewer

Use this skill to execute read-only Dataset Viewer API calls for dataset exploration and extraction.

Core workflow

  1. Optionally validate dataset availability with /is-valid.
  2. Resolve config + split with /splits.
  3. Preview with /first-rows.
  4. Paginate content with /rows using offset and length (max 100).
  5. Use /search for text matching and /filter for row predicates.
  6. Retrieve parquet links via /parquet and totals/metadata via /size and /statistics.
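
A minimal sketch of steps 1-3 (stanfordnlp/imdb and the jq filters are illustrative; jq is assumed to be installed):

# 1. Check the dataset is processable by the viewer
curl -s "https://datasets-server.huggingface.co/is-valid?dataset=stanfordnlp/imdb"
# 2. Discover configs and splits
curl -s "https://datasets-server.huggingface.co/splits?dataset=stanfordnlp/imdb" | jq '.splits'
# 3. Preview the first rows of a chosen config/split
curl -s "https://datasets-server.huggingface.co/first-rows?dataset=stanfordnlp/imdb&config=plain_text&split=train" | jq '.rows[0]'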

Defaults

  • Base URL: https://datasets-server.huggingface.co
  • Default API method: GET
  • Query params should be URL-encoded.
  • offset is 0-based.
  • length max is usually 100 for row-like endpoints.
  • Gated/private datasets require Authorization: Bearer <HF_TOKEN>.
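
A hedged example of an authenticated request against a gated or private dataset (assumes HF_TOKEN is exported; the repo name and params are placeholders):

curl -s -H "Authorization: Bearer $HF_TOKEN" "https://datasets-server.huggingface.co/rows?dataset=<namespace>/<gated-repo>&config=default&split=train&offset=0&length=10"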

Dataset Viewer

  • Validate dataset: /is-valid?dataset=<namespace/repo>
  • List subsets and splits: /splits?dataset=<namespace/repo>
  • Preview first rows: /first-rows?dataset=<namespace/repo>&config=<config>&split=<split>
  • Paginate rows: /rows?dataset=<namespace/repo>&config=<config>&split=<split>&offset=<int>&length=<int>
  • Search text: /search?dataset=<namespace/repo>&config=<config>&split=<split>&query=<text>&offset=<int>&length=<int>
  • Filter with predicates: /filter?dataset=<namespace/repo>&config=<config>&split=<split>&where=<predicate>&orderby=<sort>&offset=<int>&length=<int>
  • List parquet shards: /parquet?dataset=<namespace/repo>
  • Get size totals: /size?dataset=<namespace/repo>
  • Get column statistics: /statistics?dataset=<namespace/repo>&config=<config>&split=<split>
  • Get Croissant metadata (if available): /croissant?dataset=<namespace/repo>
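
A sketch of reading totals and per-column statistics (field paths follow the documented response shapes; the jq filters are illustrative):

# dataset-level row and byte totals
curl -s "https://datasets-server.huggingface.co/size?dataset=stanfordnlp/imdb" | jq '.size.dataset'
# per-column statistics for one config/split
curl -s "https://datasets-server.huggingface.co/statistics?dataset=stanfordnlp/imdb&config=plain_text&split=train" | jq '.statistics[] | {column_name, column_type}'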

Pagination pattern:

curl "https://datasets-server.huggingface.co/rows?dataset=stanfordnlp/imdb&config=plain_text&split=train&offset=0&length=100"
curl "https://datasets-server.huggingface.co/rows?dataset=stanfordnlp/imdb&config=plain_text&split=train&offset=100&length=100"

When pagination is partial, use response fields such as num_rows_total, num_rows_per_page, and partial to drive continuation logic.
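
A minimal continuation loop in bash, assuming jq; it reads num_rows_total once, then pages through /rows in steps of 100:

BASE="https://datasets-server.huggingface.co/rows?dataset=stanfordnlp/imdb&config=plain_text&split=train"
total=$(curl -s "$BASE&offset=0&length=1" | jq '.num_rows_total')
offset=0
while [ "$offset" -lt "$total" ]; do
  # emit one JSON object per row
  curl -s "$BASE&offset=$offset&length=100" | jq -c '.rows[].row'
  offset=$((offset + 100))
done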

Search/filter notes:

  • /search matches text in string columns (full-text-style matching; exact behavior is internal to the API).
  • /filter requires predicate syntax in where and optional sort in orderby.
  • Keep filtering and searches read-only and side-effect free.
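
Hedged examples of both endpoints (the where predicate is SQL-style and must be URL-encoded: %22label%22%3D0 decodes to "label"=0, and the column name and query text are illustrative):

# text search over string columns
curl -s "https://datasets-server.huggingface.co/search?dataset=stanfordnlp/imdb&config=plain_text&split=train&query=masterpiece&offset=0&length=10"
# row filter with a URL-encoded predicate
curl -s "https://datasets-server.huggingface.co/filter?dataset=stanfordnlp/imdb&config=plain_text&split=train&where=%22label%22%3D0&offset=0&length=10"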

For CLI-based parquet URL discovery or SQL, use the hf-cli skill with hf datasets parquet and hf datasets sql.

Creating and Uploading Datasets

Use one of these flows depending on dependency constraints.

Zero local dependencies (Hub UI):

  • Create dataset repo in browser: https://huggingface.co/new-dataset
  • Upload parquet files in the repo "Files and versions" page.
  • Verify shards appear in Dataset Viewer:
curl -s "https://datasets-server.huggingface.co/parquet?dataset=<namespace>/<repo>"

Low-dependency CLI flow (npx @huggingface/hub / hfjs):

  • Set auth token:
export HF_TOKEN=<your_hf_token>
  • Upload parquet folder to a dataset repo (auto-creates repo if missing):
npx -y @huggingface/hub upload datasets/<namespace>/<repo> ./local/parquet-folder data
  • Upload as private repo on creation:
npx -y @huggingface/hub upload datasets/<namespace>/<repo> ./local/parquet-folder data --private

After upload, call /parquet to discover <config>/<split>/<shard> values for querying via the @~parquet revision (the Hub's alias for the auto-converted refs/convert/parquet branch).
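
A sketch of that discovery step (jq assumed; the config/split/filename fields come from the documented parquet_files entries):

curl -s "https://datasets-server.huggingface.co/parquet?dataset=<namespace>/<repo>" | jq -r '.parquet_files[] | "\(.config)/\(.split)/\(.filename)"'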

Agent Traces

The Hub supports raw agent session traces from Claude Code, Codex, and Pi Agent. Upload them to Hugging Face Datasets as the original JSONL files, and the Hub can auto-detect the trace format, tag the dataset as Traces, and enable the trace viewer for browsing sessions, turns, tool calls, and model responses. Common local session directories:

  • Claude Code: ~/.claude/projects
  • Codex: ~/.codex/sessions
  • Pi: ~/.pi/agent/sessions

Default to private dataset repos because traces can contain prompts, file paths, tool outputs, secrets, or PII. Preserve the raw .jsonl files and nest them by project/cwd instead of uploading every session at the dataset root.

# create a private dataset repo for traces (idempotent with --exist-ok)
hf repo create <namespace>/<repo> --repo-type dataset --private --exist-ok
# upload raw Codex sessions, nested under a project/cwd path
hf upload <namespace>/<repo> ~/.codex/sessions codex/<project-or-cwd> --repo-type dataset

More skills from huggingface

Hugging Face Cli
by huggingface
Execute Hugging Face Hub operations using the `hf` CLI. Use when the user needs to download models/datasets/spaces, upload files to Hub repositories, create repos, manage local cache, or run compute jobs on HF infrastructure. Covers authentication, file transfers, repository creation, cache operations, and cloud compute.
Hugging Face Datasets
by huggingface
Create and manage datasets on Hugging Face Hub. Supports initializing repos, defining configs/system prompts, streaming row updates, and SQL-based dataset querying/transformation. Designed to work alongside HF MCP server for comprehensive dataset workflows.
Hugging Face Evaluation
by huggingface
Add and manage evaluation results in Hugging Face model cards. Supports extracting eval tables from README content, importing scores from Artificial Analysis API, and running custom model evaluations with vLLM/lighteval. Works with the model-index metadata format.
Hugging Face Jobs
by huggingface
Run any workload on Hugging Face Jobs infrastructure. Covers UV scripts, Docker-based jobs, hardware selection, cost estimation, authentication with tokens, secrets management, timeout configuration, and result persistence. Designed for general-purpose compute workloads including data processing, inference, experiments, batch jobs, and any Python-based tasks.
Hugging Face Model Trainer
by huggingface
Train or fine-tune language models using TRL (Transformer Reinforcement Learning) on Hugging Face Jobs infrastructure. Covers SFT, DPO, GRPO and reward modeling training methods, plus GGUF conversion for local deployment. Includes guidance on dataset preparation, hardware selection, cost estimation, and model persistence.
Hugging Face Paper Publisher
by huggingface
Publish and manage research papers on Hugging Face Hub. Supports creating paper pages, linking papers to models/datasets, claiming authorship, and generating professional markdown-based research articles.
Hugging Face Tool Builder
by huggingface
Build reusable scripts and tools using the Hugging Face API. Useful when chaining or combining API calls, or when tasks will be repeated/automated. Creates reusable command line scripts to fetch, enrich, or process data from Hugging Face Hub.
Hugging Face Trackio
by huggingface
Track and visualize ML training experiments with Trackio. Use when logging metrics during training (Python API) or retrieving/analyzing logged metrics (CLI). Supports real-time dashboard visualization, HF Space syncing, and JSON output for automation.
