
Iceberg

Module iceberg

Support Status: Testing

Important Capabilities

| Capability | Status | Notes |
| --- | --- | --- |
| Data Profiling | ✅ | Optionally enabled via configuration. |
| Descriptions | ✅ | Enabled by default. |
| Detect Deleted Entities | ✅ | Enabled via stateful ingestion. |
| Domains | ❌ | Currently not supported. |
| Extract Ownership | ✅ | Optionally enabled via configuration by specifying which Iceberg table property holds user or group ownership. |
| Partition Support | ❌ | Currently not supported. |
| Platform Instance | ✅ | Optionally enabled via configuration; an Iceberg platform instance represents the data lake name where the table is stored. |

Integration Details

The DataHub Iceberg source plugin extracts metadata from Iceberg tables stored in a distributed or local file system. Typically, Iceberg tables are stored in a distributed file system such as S3 or Azure Data Lake Storage (ADLS) and registered in a catalog. There are various catalog implementations: filesystem-based, RDBMS-based, and even REST-based catalogs. This source plugin relies on the Iceberg `python_legacy` library, whose catalog support is limited at the moment; a new version of the Iceberg Python library is in development and should address this. Because of this limitation, the plugin will only ingest `HadoopCatalog`-based tables that have a `version-hint.text` metadata file.

Ingestion of tables happens in two steps:

  1. Discover Iceberg tables stored in the file system.
  2. Load the discovered tables using the Iceberg `python_legacy` library.

The current implementation of the Iceberg source plugin only discovers tables stored in a local file system or in ADLS. Support for S3 could be added fairly easily.
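For illustration, the discovery step can be sketched as follows. The warehouse root and table path below are hypothetical; the key point is that each table folder must contain a `metadata/version-hint.text` file for the plugin to pick it up:

```shell
# Build a minimal HadoopCatalog-style layout in a scratch directory
# (hypothetical table "my/namespace/table" under a warehouse root):
cd "$(mktemp -d)"
mkdir -p warehouse/my/namespace/table/metadata
echo "1" > warehouse/my/namespace/table/metadata/version-hint.text

# Discovery walks the warehouse looking for version-hint.text markers:
find warehouse -name version-hint.text
```

The `max_path_depth` option (see Config Details below) bounds how deep this walk goes.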

CLI based Ingestion

Install the Plugin

```shell
pip install 'acryl-datahub[iceberg]'
```

Starter Recipe

Check out the following recipe to get started with ingestion! See below for full configuration options.

For general pointers on writing and running a recipe, see our main recipe guide.

```yaml
source:
  type: "iceberg"
  config:
    env: PROD
    adls:
      # Will be translated to https://{account_name}.dfs.core.windows.net
      account_name: my_adls_account
      # Can use sas_token or account_key
      sas_token: "${SAS_TOKEN}"
      # account_key: "${ACCOUNT_KEY}"
      container_name: warehouse
      base_path: iceberg
    platform_instance: my_iceberg_catalog
    table_pattern:
      allow:
        - marketing.*
    profiling:
      enabled: true

sink:
  # sink configs
```


Config Details

Note that a . is used to denote nested fields in the YAML recipe.

| Field [Required] | Type | Description | Default |
| --- | --- | --- | --- |
| group_ownership_property | string | Iceberg table property to look for a CorpGroup owner. Can only hold a single group value. If the property has no value, no owner information will be emitted. | None |
| localfs | string | Local path to crawl for Iceberg tables. This is one filesystem type supported by this source and only one can be configured. | None |
| max_path_depth | integer | Maximum folder depth to crawl for Iceberg tables. Folders deeper than this value will be silently ignored. | 2 |
| platform_instance | string | The instance of the platform that all assets produced by this recipe belong to. | None |
| user_ownership_property | string | Iceberg table property to look for a CorpUser owner. Can only hold a single user value. If the property has no value, no owner information will be emitted. | owner |
| env | string | The environment that all assets produced by this connector belong to. | PROD |
| adls | AdlsSourceConfig | Azure Data Lake Storage to crawl for Iceberg tables. This is one filesystem type supported by this source and only one can be configured. | None |
| adls.account_name [❓ (required if adls is set)] | string | Name of the Azure storage account. See Microsoft official documentation on how to create a storage account. | None |
| adls.container_name [❓ (required if adls is set)] | string | Azure storage account container name. | None |
| adls.account_key | string | Azure storage account access key that can be used as a credential. An account key, a SAS token or a client secret is required for authentication. | None |
| adls.base_path | string | Base folder in hierarchical namespaces to start from. | / |
| adls.client_id | string | Azure client (Application) ID required when a client_secret is used as a credential. | None |
| adls.client_secret | string | Azure client secret that can be used as a credential. An account key, a SAS token or a client secret is required for authentication. | None |
| adls.sas_token | string | Azure storage account Shared Access Signature (SAS) token that can be used as a credential. An account key, a SAS token or a client secret is required for authentication. | None |
| adls.tenant_id | string | Azure tenant (Directory) ID required when a client_secret is used as a credential. | None |
| table_pattern | AllowDenyPattern | Regex patterns for tables to filter in ingestion. | {'allow': ['.*'], 'deny': [], 'ignoreCase': True} |
| table_pattern.allow | array(string) | | None |
| table_pattern.deny | array(string) | | None |
| table_pattern.ignoreCase | boolean | Whether to ignore case sensitivity during pattern matching. | True |
| profiling | IcebergProfilingConfig | | {'enabled': False, 'include_field_null_count': True, 'include_field_min_value': True, 'include_field_max_value': True} |
| profiling.enabled | boolean | Whether profiling should be done. | None |
| profiling.include_field_max_value | boolean | Whether to profile for the max value of numeric columns. | True |
| profiling.include_field_min_value | boolean | Whether to profile for the min value of numeric columns. | True |
| profiling.include_field_null_count | boolean | Whether to profile for the number of nulls for each column. | True |
| stateful_ingestion | StatefulStaleMetadataRemovalConfig | Iceberg Stateful Ingestion Config. | None |
| stateful_ingestion.enabled | boolean | Whether or not to enable stateful ingestion. | None |
| stateful_ingestion.ignore_new_state | boolean | If set to True, ignores the current checkpoint state. | None |
| stateful_ingestion.ignore_old_state | boolean | If set to True, ignores the previous checkpoint state. | None |
| stateful_ingestion.remove_stale_metadata | boolean | Soft-deletes the entities present in the last successful run but missing in the current run, when stateful_ingestion is enabled. | True |
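As a concrete sketch, the ownership and stateful-ingestion options can be combined in a recipe fragment like the following. The `localfs` path and the `owning_team` property name are hypothetical; `owner` is the documented default for `user_ownership_property`:

```yaml
source:
  type: iceberg
  config:
    localfs: /data/warehouse                 # hypothetical local warehouse root
    user_ownership_property: owner           # table property holding a CorpUser (the default)
    group_ownership_property: owning_team    # hypothetical table property holding a CorpGroup
    stateful_ingestion:
      enabled: true
      remove_stale_metadata: true            # soft-delete tables missing from the current run
```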

Concept Mapping

This ingestion source maps the following Source System Concepts to DataHub Concepts:

| Source Concept | DataHub Concept | Notes |
| --- | --- | --- |
| iceberg | Data Platform | |
| Table | Dataset | Each Iceberg table maps to a Dataset named using the parent folders. If a table is stored under my/namespace/table, the dataset name will be my.namespace.table. If a Platform Instance is configured, it will be used as a prefix: <platform_instance>.my.namespace.table. |
| Table property | User (a.k.a CorpUser) | The value of a table property can be used as the name of a CorpUser owner. This table property name can be configured with the source option user_ownership_property. |
| Table property | CorpGroup | The value of a table property can be used as the name of a CorpGroup owner. This table property name can be configured with the source option group_ownership_property. |
| Table parent folders (excluding warehouse catalog location) | Container | Available in a future release. |
| Table schema | SchemaField | Maps to the fields defined within the Iceberg table schema definition. |
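The Table-to-Dataset naming rule above can be illustrated with a short sketch. `dataset_name` is a hypothetical helper written for this example, not part of the plugin's API:

```python
def dataset_name(table_path, platform_instance=None):
    """Illustrate how an Iceberg table folder path becomes a DataHub dataset name:
    path separators become dots, and an optional platform instance is prefixed."""
    name = table_path.strip("/").replace("/", ".")
    return f"{platform_instance}.{name}" if platform_instance else name

print(dataset_name("my/namespace/table"))                        # my.namespace.table
print(dataset_name("my/namespace/table", "my_iceberg_catalog"))  # my_iceberg_catalog.my.namespace.table
```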


Code Coordinates

  • Class Name: datahub.ingestion.source.iceberg.iceberg.IcebergSource
  • Browse on GitHub

Questions

If you've got any questions on configuring ingestion for Iceberg, feel free to ping us on our Slack.