
# CLI Commands

The Snowpack CLI provides three commands for interactive use. Run all commands with `uv run snowpack` from the project root.

Every command accepts `--spark-host`, `--spark-port`, and `--catalog` flags. These can also be set via the `SNOWPACK_SPARK_HOST`, `SNOWPACK_SPARK_PORT`, and `SNOWPACK_CATALOG` environment variables. Explicit flags take precedence over environment variables. The global `--verbose` flag must be passed before the command name, for example `uv run snowpack --verbose health my_db my_table`.
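The flag/env/default precedence described above can be sketched as a small resolver. This is a hedged illustration of the rule, not Snowpack's actual code, and the hostnames are invented:

```python
import os

def resolve(flag_value, env_var, default):
    """Explicit CLI flag beats the environment variable, which beats the built-in default."""
    if flag_value is not None:
        return flag_value
    return os.environ.get(env_var, default)

# An environment variable alone is picked up:
os.environ["SNOWPACK_SPARK_HOST"] = "spark-env.example.com"
print(resolve(None, "SNOWPACK_SPARK_HOST", "localhost"))
# → spark-env.example.com

# An explicit --spark-host value wins over the env var:
print(resolve("spark-flag.example.com", "SNOWPACK_SPARK_HOST", "localhost"))
# → spark-flag.example.com
```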

## snowpack tables

Lists all Iceberg tables visible in the catalog, optionally filtered to a single database.

### Flags

| Flag | Default | Description |
| --- | --- | --- |
| `--spark-host` | config default | Spark Thrift Server / Kyuubi hostname. |
| `--spark-port` | config default | Spark Thrift Server / Kyuubi port. |
| `--catalog` | config default | Iceberg catalog name. |
| `--database`, `-d` | (all) | Filter to a single database. |

### Examples

List all tables across every database:

```sh
uv run snowpack tables
```

List tables in a specific database:

```sh
uv run snowpack tables --database my_database
```

Use a non-default Spark host:

```sh
uv run snowpack tables --spark-host spark.internal.example.com
```

## snowpack health

Fetches live health metrics for a single table directly from the PyIceberg catalog. Returns the small file count, snapshot count, manifest count, and position delete file count.

### Flags

| Flag | Default | Description |
| --- | --- | --- |
| `--spark-host` | config default | Spark Thrift Server / Kyuubi hostname. |
| `--spark-port` | config default | Spark Thrift Server / Kyuubi port. |
| `--catalog` | config default | Iceberg catalog name. |

### Positional arguments

| Argument | Description |
| --- | --- |
| `database` | Database name. |
| `table` | Table name. |

### Examples

Check health for a single table:

```sh
uv run snowpack health my_database my_table
```

Verbose output with full metric breakdown:

```sh
uv run snowpack --verbose health my_database my_table
```
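To make the small file count metric concrete, here is a toy sketch of the idea: data files below a size threshold are counted as compaction candidates. The threshold reuses the 384 MB `--min-file-size-mb` default from `snowpack maintain`; whether `snowpack health` applies the same cutoff is an assumption, and the file sizes are invented:

```python
SMALL_FILE_THRESHOLD_MB = 384  # assumption: mirrors maintain's --min-file-size-mb default

def count_small_files(file_sizes_mb):
    """Count data files below the threshold, i.e. compaction candidates."""
    return sum(1 for size in file_sizes_mb if size < SMALL_FILE_THRESHOLD_MB)

# Invented per-file sizes (MB) for illustration:
print(count_small_files([12, 512, 90, 400, 37]))  # → 3
```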

## snowpack maintain

Runs one or more maintenance actions directly against a single table. This CLI path bypasses the REST API, the Postgres queue, the KEDA workers, and job history. At least one `--action` flag is required.

### Flags

| Flag | Default | Description |
| --- | --- | --- |
| `--spark-host` | config default | Spark Thrift Server / Kyuubi hostname. |
| `--spark-port` | config default | Spark Thrift Server / Kyuubi port. |
| `--catalog` | config default | Iceberg catalog name. |
| `--action` | required | Maintenance action to run. Repeatable: pass multiple `--action` flags to run several actions in sequence. Valid values: `rewrite_data_files`, `rewrite_position_delete_files`, `expire_snapshots`, `rewrite_manifests`, `remove_orphan_files`. Actions are executed in Snowpack's fixed enum order. |
| `--target-file-size-mb` | 512 | Target file size in MB for compaction. |
| `--min-file-size-mb` | 384 | Minimum file size in MB. Files smaller than this are candidates for compaction. |
| `--dry-run` | false | Log the maintenance plan without executing it. |
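The fixed enum order behavior can be illustrated with a small planner sketch. This is hypothetical code, not Snowpack's implementation (whether duplicate actions are deduplicated is also an assumption), but it shows the key point: the order of `--action` flags on the command line does not affect execution order.

```python
# Canonical action order, as listed in the --action flag description above.
ACTION_ORDER = [
    "rewrite_data_files",
    "rewrite_position_delete_files",
    "expire_snapshots",
    "rewrite_manifests",
    "remove_orphan_files",
]

def plan(requested):
    """Return the requested actions in enum order, dropping duplicates (assumed)."""
    wanted = set(requested)
    return [action for action in ACTION_ORDER if action in wanted]

# Flags passed "out of order" still run in the canonical sequence:
print(plan(["expire_snapshots", "rewrite_data_files"]))
# → ['rewrite_data_files', 'expire_snapshots']
```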

### Positional arguments

| Argument | Description |
| --- | --- |
| `database` | Database name. |
| `table` | Table name. |

### Examples

Run only compaction and snapshot expiration:

```sh
uv run snowpack maintain \
  --action rewrite_data_files \
  --action expire_snapshots \
  my_database my_table
```

Dry run with custom file size targets:

```sh
uv run snowpack maintain \
  --dry-run \
  --action rewrite_data_files \
  --target-file-size-mb 256 \
  --min-file-size-mb 192 \
  my_database my_table
```

Verbose output against a remote Spark host (note that `--verbose` goes before the command name):

```sh
uv run snowpack --verbose maintain \
  --spark-host spark.internal.example.com \
  --action rewrite_data_files \
  my_database my_table
```