diff --git a/website/www/site/content/en/documentation/io/managed-io.md b/website/www/site/content/en/documentation/io/managed-io.md index fab9e79e71a6..f4e6c7472a9b 100644 --- a/website/www/site/content/en/documentation/io/managed-io.md +++ b/website/www/site/content/en/documentation/io/managed-io.md @@ -58,31 +58,6 @@ and Beam SQL is invoked via the Managed API under the hood. Read Configuration Write Configuration - - ICEBERG - - table (str)
- catalog_name (str)
- catalog_properties (map[str, str])
- config_properties (map[str, str])
- drop (list[str])
- filter (str)
- keep (list[str])
- - - table (str)
- catalog_name (str)
- catalog_properties (map[str, str])
- config_properties (map[str, str])
- direct_write_byte_limit (int32)
- drop (list[str])
- keep (list[str])
- only (str)
- partition_fields (list[str])
- table_properties (map[str, str])
- triggering_frequency_seconds (int32)
- - KAFKA @@ -111,6 +86,31 @@ and Beam SQL is invoked via the Managed API under the hood. schema (str)
+ + ICEBERG + + table (str)
+ catalog_name (str)
+ catalog_properties (map[str, str])
+ config_properties (map[str, str])
+ drop (list[str])
+ filter (str)
+ keep (list[str])
+ + + table (str)
+ catalog_name (str)
+ catalog_properties (map[str, str])
+ config_properties (map[str, str])
+ direct_write_byte_limit (int32)
+ drop (list[str])
+ keep (list[str])
+ only (str)
+ partition_fields (list[str])
+ table_properties (map[str, str])
+ triggering_frequency_seconds (int32)
+ + ICEBERG_CDC @@ -134,9 +134,10 @@ and Beam SQL is invoked via the Managed API under the hood. - SQLSERVER + MYSQL jdbc_url (str)
+ connection_init_sql (list[str])
connection_properties (str)
disable_auto_commit (boolean)
fetch_size (int32)
@@ -152,6 +153,7 @@ and Beam SQL is invoked via the Managed API under the hood. jdbc_url (str)
autosharding (boolean)
batch_size (int64)
+ connection_init_sql (list[str])
connection_properties (str)
location (str)
password (str)
@@ -160,10 +162,27 @@ and Beam SQL is invoked via the Managed API under the hood. - MYSQL + BIGQUERY + + kms_key (str)
+ query (str)
+ row_restriction (str)
+ fields (list[str])
+ table (str)
+ + + table (str)
+ drop (list[str])
+ keep (list[str])
+ kms_key (str)
+ only (str)
+ triggering_frequency_seconds (int64)
+ + + + SQLSERVER jdbc_url (str)
- connection_init_sql (list[str])
connection_properties (str)
disable_auto_commit (boolean)
fetch_size (int32)
@@ -179,7 +198,6 @@ and Beam SQL is invoked via the Managed API under the hood. jdbc_url (str)
autosharding (boolean)
batch_size (int64)
- connection_init_sql (list[str])
connection_properties (str)
location (str)
password (str)
@@ -187,24 +205,6 @@ and Beam SQL is invoked via the Managed API under the hood. write_statement (str)
- - BIGQUERY - - kms_key (str)
- query (str)
- row_restriction (str)
- fields (list[str])
- table (str)
- - - table (str)
- drop (list[str])
- keep (list[str])
- kms_key (str)
- only (str)
- triggering_frequency_seconds (int64)
- - POSTGRES @@ -235,7 +235,7 @@ and Beam SQL is invoked via the Managed API under the hood. ## Configuration Details -### `ICEBERG` Write +### `KAFKA` Read
@@ -246,141 +246,162 @@ and Beam SQL is invoked via the Managed API under the hood. + + + + + + + + + + + + + + +
- table + bootstrap_servers str - A fully-qualified table identifier. You may also provide a template to write to multiple dynamic destinations, for example: `dataset.my_{col1}_{col2.nested}_table`. + A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping—this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form `host1:port1,host2:port2,...`
- catalog_name + topic str - Name of the catalog containing the table. + n/a
- catalog_properties + allow_duplicates - map[str, str] + boolean - Properties used to set up the Iceberg catalog. + Whether the Kafka read allows duplicate records.
- config_properties + confluent_schema_registry_subject - map[str, str] + str - Properties passed to the Hadoop Configuration. + n/a
- direct_write_byte_limit + confluent_schema_registry_url - int32 + str - For a streaming pipeline, sets the limit for lifting bundles into the direct write path. + n/a
- drop + consumer_config_updates - list[str] + map[str, str] - A list of field names to drop from the input record before writing. Is mutually exclusive with 'keep' and 'only'. + A list of key-value pairs that act as configuration parameters for Kafka consumers. Most of these configurations will not be needed, but if you need to customize your Kafka consumer, you may use this. See a detailed list: https://docs.confluent.io/platform/current/installation/configuration/consumer-configs.html
- keep + file_descriptor_path - list[str] + str - A list of field names to keep in the input record. All other fields are dropped before writing. Is mutually exclusive with 'drop' and 'only'. + The path to the Protocol Buffer File Descriptor Set file. This file is used for schema definition and message serialization.
- only + format str - The name of a single record field that should be written. Is mutually exclusive with 'keep' and 'drop'. + The encoding format for the data stored in Kafka. Valid options are: RAW, STRING, AVRO, JSON, PROTO
- partition_fields + message_name - list[str] + str - Fields used to create a partition spec that is applied when tables are created. For a field 'foo', the available partition transforms are: - -- `foo` -- `truncate(foo, N)` -- `bucket(foo, N)` -- `hour(foo)` -- `day(foo)` -- `month(foo)` -- `year(foo)` -- `void(foo)` - -For more information on partition transforms, please visit https://iceberg.apache.org/spec/#partition-transforms. + The name of the Protocol Buffer message to be used for schema extraction and data conversion.
- table_properties + offset_deduplication - map[str, str] + boolean - Iceberg table properties to be set on the table when it is created. -For more information on table properties, please visit https://iceberg.apache.org/docs/latest/configuration/#table-properties. + Whether the redistribute step uses offset deduplication mode.
- triggering_frequency_seconds + redistribute_by_record_key + + boolean + + Whether to redistribute using the Kafka record key. +
+ redistribute_num_keys int32 - For a streaming pipeline, sets the frequency at which snapshots are produced. + The number of keys for redistributing Kafka inputs. +
+ redistributed + + boolean + + Whether the Kafka read should be redistributed. +
+ schema + + str + + The schema in which the data is encoded in the Kafka topic. For AVRO data, this is a schema defined with AVRO schema syntax (https://avro.apache.org/docs/1.10.2/spec.html#schemas). For JSON data, this is a schema defined with JSON-schema syntax (https://json-schema.org/). If a URL to Confluent Schema Registry is provided, then this field is ignored, and the schema is fetched from Confluent Schema Registry.
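Taken together, the options above form a plain config map handed to the Managed API. A minimal sketch in Python (the topic, server, and schema values are illustrative placeholders; the commented `beam.managed.Read` call assumes a runner that can expand the underlying transform):

```python
# Illustrative KAFKA read config; topic/server/schema values are placeholders.
kafka_read_config = {
    "bootstrap_servers": "broker1:9092,broker2:9092",  # host1:port1,host2:port2,...
    "topic": "my-topic",
    "format": "JSON",
    # For JSON data, the schema uses JSON-schema syntax.
    "schema": '{"type": "object", "properties": {"id": {"type": "integer"}}}',
}

# In a pipeline this would feed the Managed read, e.g.:
#   p | beam.managed.Read(beam.managed.KAFKA, config=kafka_read_config)
assert kafka_read_config["format"] in {"RAW", "STRING", "AVRO", "JSON", "PROTO"}
```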
-### `ICEBERG` Read +### `KAFKA` Write
@@ -391,85 +412,85 @@ For more information on table properties, please visit https://iceberg.apache.or
- table + bootstrap_servers str - Identifier of the Iceberg table. + A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping—this list only impacts the initial hosts used to discover the full set of servers. | Format: host1:port1,host2:port2,...
- catalog_name + format str - Name of the catalog containing the table. + The encoding format for the data stored in Kafka. Valid options are: RAW, JSON, AVRO, PROTO
- catalog_properties + topic - map[str, str] + str - Properties used to set up the Iceberg catalog. + n/a
- config_properties + file_descriptor_path - map[str, str] + str - Properties passed to the Hadoop Configuration. + The path to the Protocol Buffer File Descriptor Set file. This file is used for schema definition and message serialization.
- drop + message_name - list[str] + str - A subset of column names to exclude from reading. If null or empty, all columns will be read. + The name of the Protocol Buffer message to be used for schema extraction and data conversion.
- filter + producer_config_updates - str + map[str, str] - SQL-like predicate to filter data at scan time. Example: "id > 5 AND status = 'ACTIVE'". Uses Apache Calcite syntax: https://calcite.apache.org/docs/reference.html + A list of key-value pairs that act as configuration parameters for Kafka producers. Most of these configurations will not be needed, but if you need to customize your Kafka producer, you may use this. See a detailed list: https://docs.confluent.io/platform/current/installation/configuration/producer-configs.html
- keep + schema - list[str] + str - A subset of column names to read exclusively. If null or empty, all columns will be read. + n/a
-### `KAFKA` Read +### `ICEBERG` Read
@@ -480,245 +501,224 @@ For more information on table properties, please visit https://iceberg.apache.or - - - - - - - - - - +
- bootstrap_servers - - str - - A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping—this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form `host1:port1,host2:port2,...` -
- topic + table str - n/a -
- allow_duplicates - - boolean - - If the Kafka read allows duplicates. + Identifier of the Iceberg table.
- confluent_schema_registry_subject + catalog_name str - n/a + Name of the catalog containing the table.
- confluent_schema_registry_url + catalog_properties - str + map[str, str] - n/a + Properties used to set up the Iceberg catalog.
- consumer_config_updates + config_properties map[str, str] - A list of key-value pairs that act as configuration parameters for Kafka consumers. Most of these configurations will not be needed, but if you need to customize your Kafka consumer, you may use this. See a detailed list: https://docs.confluent.io/platform/current/installation/configuration/consumer-configs.html + Properties passed to the Hadoop Configuration.
- file_descriptor_path + drop - str + list[str] - The path to the Protocol Buffer File Descriptor Set file. This file is used for schema definition and message serialization. + A subset of column names to exclude from reading. If null or empty, all columns will be read.
- format + filter str - The encoding format for the data stored in Kafka. Valid options are: RAW,STRING,AVRO,JSON,PROTO + SQL-like predicate to filter data at scan time. Example: "id > 5 AND status = 'ACTIVE'". Uses Apache Calcite syntax: https://calcite.apache.org/docs/reference.html
- message_name + keep - str + list[str] - The name of the Protocol Buffer message to be used for schema extraction and data conversion. + A subset of column names to read exclusively. If null or empty, all columns will be read.
+
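The read options above compose into a small dictionary. A sketch, assuming placeholder table and catalog identifiers (the commented pipeline line presumes the usual Managed API entry point):

```python
# Illustrative ICEBERG read config; table/catalog identifiers are placeholders.
iceberg_read_config = {
    "table": "db.users",
    "catalog_name": "my_catalog",
    "catalog_properties": {"warehouse": "gs://my-bucket/warehouse"},
    "keep": ["id", "status"],        # read only these columns
    "filter": "status = 'ACTIVE'",   # Calcite-syntax predicate, applied at scan time
}

# Pipeline usage, e.g.:
#   p | beam.managed.Read(beam.managed.ICEBERG, config=iceberg_read_config)
```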
+ +### `ICEBERG` Write + +
+ - - - + + + -
- offset_deduplication - - boolean - - If the redistribute is using offset deduplication mode. - ConfigurationTypeDescription
- redistribute_by_record_key + table - boolean + str - If the redistribute keys by the Kafka record key. + A fully-qualified table identifier. You may also provide a template to write to multiple dynamic destinations, for example: `dataset.my_{col1}_{col2.nested}_table`.
- redistribute_num_keys + catalog_name - int32 + str - The number of keys for redistributing Kafka inputs. + Name of the catalog containing the table.
- redistributed + catalog_properties - boolean + map[str, str] - If the Kafka read should be redistributed. + Properties used to set up the Iceberg catalog.
- schema + config_properties - str + map[str, str] - The schema in which the data is encoded in the Kafka topic. For AVRO data, this is a schema defined with AVRO schema syntax (https://avro.apache.org/docs/1.10.2/spec.html#schemas). For JSON data, this is a schema defined with JSON-schema syntax (https://json-schema.org/). If a URL to Confluent Schema Registry is provided, then this field is ignored, and the schema is fetched from Confluent Schema Registry. + Properties passed to the Hadoop Configuration.
-
- -### `KAFKA` Write - -
- - - - - -
ConfigurationTypeDescription
- bootstrap_servers + direct_write_byte_limit - str + int32 - A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping—this list only impacts the initial hosts used to discover the full set of servers. | Format: host1:port1,host2:port2,... + For a streaming pipeline, sets the limit for lifting bundles into the direct write path.
- format + drop - str + list[str] - The encoding format for the data stored in Kafka. Valid options are: RAW,JSON,AVRO,PROTO + A list of field names to drop from the input record before writing. Is mutually exclusive with 'keep' and 'only'.
- topic + keep - str + list[str] - n/a + A list of field names to keep in the input record. All other fields are dropped before writing. Is mutually exclusive with 'drop' and 'only'.
- file_descriptor_path + only str - The path to the Protocol Buffer File Descriptor Set file. This file is used for schema definition and message serialization. + The name of a single record field that should be written. Is mutually exclusive with 'keep' and 'drop'.
- message_name + partition_fields - str + list[str] - The name of the Protocol Buffer message to be used for schema extraction and data conversion. + Fields used to create a partition spec that is applied when tables are created. For a field 'foo', the available partition transforms are: + +- `foo` +- `truncate(foo, N)` +- `bucket(foo, N)` +- `hour(foo)` +- `day(foo)` +- `month(foo)` +- `year(foo)` +- `void(foo)` + +For more information on partition transforms, please visit https://iceberg.apache.org/spec/#partition-transforms.
- producer_config_updates + table_properties map[str, str] - A list of key-value pairs that act as configuration parameters for Kafka producers. Most of these configurations will not be needed, but if you need to customize your Kafka producer, you may use this. See a detailed list: https://docs.confluent.io/platform/current/installation/configuration/producer-configs.html + Iceberg table properties to be set on the table when it is created. +For more information on table properties, please visit https://iceberg.apache.org/docs/latest/configuration/#table-properties.
- schema + triggering_frequency_seconds - str + int32 - n/a + For a streaming pipeline, sets the frequency at which snapshots are produced.
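Since `drop`, `keep`, and `only` are mutually exclusive on the write side, a config sketch can guard against setting more than one. Table, catalog, and field names below are placeholders:

```python
# Illustrative ICEBERG write config; identifiers are placeholders.
iceberg_write_config = {
    "table": "dataset.my_{col1}_table",                 # dynamic-destination template
    "catalog_name": "my_catalog",
    "partition_fields": ["bucket(id, 16)", "day(created_at)"],
    "drop": ["internal_notes"],
    "triggering_frequency_seconds": 30,                 # streaming snapshot cadence
}

# 'drop', 'keep', and 'only' are mutually exclusive; a quick guard:
set_filters = [k for k in ("drop", "keep", "only") if k in iceberg_write_config]
assert len(set_filters) <= 1, f"only one of drop/keep/only may be set: {set_filters}"
```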
@@ -890,7 +890,7 @@ For more information on table properties, please visit https://iceberg.apache.or
-### `SQLSERVER` Write +### `MYSQL` Write
@@ -932,6 +932,17 @@ For more information on table properties, please visit https://iceberg.apache.or n/a + + + + +
+ connection_init_sql + list[str] + Sets the connection init SQL statements used by the driver. Only MySQL and MariaDB support this. +
connection_properties @@ -990,7 +1001,7 @@ For more information on table properties, please visit https://iceberg.apache.or
-### `SQLSERVER` Read +### `MYSQL` Read
@@ -1010,6 +1021,17 @@ For more information on table properties, please visit https://iceberg.apache.or Connection URL for the JDBC source. + + + + +
+ connection_init_sql + list[str] + Sets the connection init SQL statements used by the driver. Only MySQL and MariaDB support this. +
connection_properties @@ -1123,7 +1145,7 @@ For more information on table properties, please visit https://iceberg.apache.or
-### `MYSQL` Read +### `BIGQUERY` Write
@@ -1134,140 +1156,141 @@ For more information on table properties, please visit https://iceberg.apache.or +
- jdbc_url + table str - Connection URL for the JDBC source. + The BigQuery table to write to. Format: [${PROJECT}:]${DATASET}.${TABLE}
- connection_init_sql + drop list[str] - Sets the connection init sql statements used by the Driver. Only MySQL and MariaDB support this. + A list of field names to drop from the input record before writing. Is mutually exclusive with 'keep' and 'only'.
- connection_properties + keep - str + list[str] - Used to set connection properties passed to the JDBC driver not already defined as standalone parameter (e.g. username and password can be set using parameters above accordingly). Format of the string must be "key1=value1;key2=value2;". + A list of field names to keep in the input record. All other fields are dropped before writing. Is mutually exclusive with 'drop' and 'only'.
- disable_auto_commit + kms_key - boolean + str - Whether to disable auto commit on read. Defaults to true if not provided. The need for this config varies depending on the database platform. Informix requires this to be set to false while Postgres requires this to be set to true. + Use this Cloud KMS key to encrypt your data.
- fetch_size + only - int32 + str - This method is used to override the size of the data that is going to be fetched and loaded in memory per every database call. It should ONLY be used if the default value throws memory errors. + The name of a single record field that should be written. Is mutually exclusive with 'keep' and 'drop'.
- location + triggering_frequency_seconds - str + int64 - Name of the table to read from. + Determines how often to 'commit' progress into BigQuery. Default is every 5 seconds.
+
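The `[${PROJECT}:]${DATASET}.${TABLE}` format above makes the project segment optional. A sketch of a write config plus a loose shape check (project, dataset, and field names are placeholders):

```python
import re

# Illustrative BIGQUERY write config; project/dataset/table are placeholders.
bq_write_config = {
    "table": "my-project:my_dataset.events",   # [${PROJECT}:]${DATASET}.${TABLE}
    "keep": ["event_id", "ts", "payload"],
    "triggering_frequency_seconds": 5,         # the documented default cadence
}

# Loose check of the table format: optional "project:" prefix, then dataset.table.
TABLE_RE = re.compile(r"^(?:[\w.-]+:)?\w+\.\w+$")
assert TABLE_RE.match(bq_write_config["table"])
assert TABLE_RE.match("my_dataset.events")     # project segment may be omitted
```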
+ +### `BIGQUERY` Read + +
+ - - - + + +
- num_partitions - - int32 - - The number of partitions - ConfigurationTypeDescription
- output_parallelization + kms_key - boolean + str - Whether to reshuffle the resulting PCollection so results are distributed to all workers. + Use this Cloud KMS key to encrypt your data.
- partition_column + query str - Name of a column of numeric type that will be used for partitioning. + The SQL query to be executed to read from the BigQuery table.
- password + row_restriction str - Password for the JDBC source. + Read only rows that match this filter, which must be compatible with Google standard SQL. This is not supported when reading via query.
- read_query + fields - str + list[str] - SQL query used to query the JDBC source. + Read only the specified fields (columns) from a BigQuery table. Fields may not be returned in the order specified. If no value is specified, then all fields are returned. Example: "col1, col2, col3"
- username + table str - Username for the JDBC source. + The fully-qualified name of the BigQuery table to read from. Format: [${PROJECT}:]${DATASET}.${TABLE}
-### `MYSQL` Write +### `SQLSERVER` Read
@@ -1284,240 +1307,217 @@ For more information on table properties, please visit https://iceberg.apache.or str -
- Connection URL for the JDBC sink. + Connection URL for the JDBC source.
- autosharding + connection_properties - boolean + str - If true, enables using a dynamically determined number of shards to write. + Used to set connection properties passed to the JDBC driver not already defined as standalone parameter (e.g. username and password can be set using parameters above accordingly). Format of the string must be "key1=value1;key2=value2;".
- batch_size + disable_auto_commit - int64 + boolean - n/a + Whether to disable auto commit on read. Defaults to true if not provided. The need for this config varies depending on the database platform. Informix requires this to be set to false while Postgres requires this to be set to true.
- connection_init_sql + fetch_size - list[str] + int32 - Sets the connection init sql statements used by the Driver. Only MySQL and MariaDB support this. + This method is used to override the size of the data that is going to be fetched and loaded in memory per every database call. It should ONLY be used if the default value throws memory errors.
- connection_properties + location str - Used to set connection properties passed to the JDBC driver not already defined as standalone parameter (e.g. username and password can be set using parameters above accordingly). Format of the string must be "key1=value1;key2=value2;". + Name of the table to read from.
- location + num_partitions - str + int32 - Name of the table to write to. + The number of partitions
- password + output_parallelization - str + boolean - Password for the JDBC source. + Whether to reshuffle the resulting PCollection so results are distributed to all workers.
- username + partition_column str - Username for the JDBC source. + Name of a column of numeric type that will be used for partitioning.
- write_statement + password str - SQL query used to insert records into the JDBC sink. + Password for the JDBC source.
-
- -### `BIGQUERY` Write - -
- - - - - - +
ConfigurationTypeDescription
- table + read_query str - The bigquery table to write to. Format: [${PROJECT}:]${DATASET}.${TABLE} + SQL query used to query the JDBC source.
- drop + username - list[str] + str - A list of field names to drop from the input record before writing. Is mutually exclusive with 'keep' and 'only'. + Username for the JDBC source.
+
+ +### `SQLSERVER` Write + +
+ - - - + + + -
- keep - - list[str] - - A list of field names to keep in the input record. All other fields are dropped before writing. Is mutually exclusive with 'drop' and 'only'. - ConfigurationTypeDescription
- kms_key + jdbc_url str - Use this Cloud KMS key to encrypt your data + Connection URL for the JDBC sink.
- only + autosharding - str + boolean - The name of a single record field that should be written. Is mutually exclusive with 'keep' and 'drop'. + If true, enables using a dynamically determined number of shards to write.
- triggering_frequency_seconds + batch_size int64 - Determines how often to 'commit' progress into BigQuery. Default is every 5 seconds. + n/a
-
- -### `BIGQUERY` Read - -
- - - - - -
ConfigurationTypeDescription
- kms_key + connection_properties str - Use this Cloud KMS key to encrypt your data + Used to set connection properties passed to the JDBC driver not already defined as standalone parameter (e.g. username and password can be set using parameters above accordingly). Format of the string must be "key1=value1;key2=value2;".
- query + location str - The SQL query to be executed to read from the BigQuery table. + Name of the table to write to.
- row_restriction + password str - Read only rows that match this filter, which must be compatible with Google standard SQL. This is not supported when reading via query. + Password for the JDBC source.
- fields + username - list[str] + str - Read only the specified fields (columns) from a BigQuery table. Fields may not be returned in the order specified. If no value is specified, then all fields are returned. Example: "col1, col2, col3" + Username for the JDBC source.
- table + write_statement str - The fully-qualified name of the BigQuery table to read from. Format: [${PROJECT}:]${DATASET}.${TABLE} + SQL query used to insert records into the JDBC sink.
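All of the JDBC-backed configs (MYSQL, SQLSERVER, POSTGRES) share the `"key1=value1;key2=value2;"` convention for `connection_properties`. A small helper sketch for checking such a string (the property names below are illustrative, not from any specific driver):

```python
def parse_connection_properties(props: str) -> dict:
    """Parse a "key1=value1;key2=value2;" JDBC property string into a dict."""
    return dict(
        pair.split("=", 1)          # split on the first '=' only
        for pair in props.rstrip(";").split(";")
        if pair                     # tolerate a trailing or doubled ';'
    )

parse_connection_properties("ssl=true;loginTimeout=30;")
# -> {'ssl': 'true', 'loginTimeout': '30'}
```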