[USTORAGE-11846]: Update CLI for tableflow catalog integration #3330
Deep Patel (deeppatel1010) wants to merge 3 commits into `main`.
Conversation
🎉 All Contributor License Agreements have been signed. Ready to merge.
Pull request overview
Adds CLI support for Tableflow catalog integration “custom” attributes (custom_database / custom_namespace / custom_schema) across create/update flows and corresponding test fixtures.
Changes:
- Add new CLI flags for custom database/namespace/schema on catalog-integration create/update.
- Extend output rendering to include the new custom attributes.
- Update test server handlers and golden fixtures to cover create/update scenarios for the new attributes.
Reviewed changes
Copilot reviewed 22 out of 23 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| internal/tableflow/command_catalog_integration_create.go | Adds create flags and wires them into the request spec for aws/snowflake/unity. |
| internal/tableflow/command_catalog_integration_update.go | Adds update flags, includes them in the "one required" group, and populates the update request config. |
| internal/tableflow/command_catalog_integration.go | Extends the output struct and describe/update printing to include the new custom fields. |
| test/test-server/tableflow_handlers.go | Updates mock server catalog integration create/update/list/get behavior to include the custom fields. |
| test/tableflow_test.go | Adds CLI test cases for create/update with the new flags. |
| test/fixtures/output/tableflow/catalog-integration/create-aws-glue-custom-database.golden | New golden fixture for AWS Glue create with a custom database. |
| test/fixtures/output/tableflow/catalog-integration/create-snowflake-custom-namespace.golden | New golden fixture for Snowflake create with a custom namespace. |
| test/fixtures/output/tableflow/catalog-integration/create-unity-custom-schema.golden | New golden fixture for Unity create with a custom schema. |
| test/fixtures/output/tableflow/catalog-integration/update-aws-glue-custom-database.golden | New golden fixture for AWS Glue update with a custom database. |
| test/fixtures/output/tableflow/catalog-integration/update-snowflake-custom-namespace.golden | New golden fixture for Snowflake update with a custom namespace. |
| test/fixtures/output/tableflow/catalog-integration/update-unity-custom-schema.golden | New golden fixture for Unity update with a custom schema. |
| test/fixtures/output/tableflow/catalog-integration/update-fail-no-flags.golden | Updates error/help output to reflect the new flags being part of the required flag group. |
| test/fixtures/output/tableflow/catalog-integration/update-name.golden | Updates output expectations because the custom fields are now present in mock data. |
| test/fixtures/output/tableflow/catalog-integration/update-snowflake.golden | Updates output expectations to include the custom namespace. |
| test/fixtures/output/tableflow/catalog-integration/update-snowflake-with-name.golden | Updates output expectations to include the custom namespace. |
| test/fixtures/output/tableflow/catalog-integration/list.golden | Updates the list table header to include the new columns. |
| test/fixtures/output/tableflow/catalog-integration/describe-aws-glue.golden | Updates describe output to include the custom database. |
| test/fixtures/output/tableflow/catalog-integration/describe-aws-glue-json.golden | Updates JSON describe output to include the custom database. |
| test/fixtures/output/tableflow/catalog-integration/describe-snowflake.golden | Updates describe output to include the custom namespace. |
| test/fixtures/output/tableflow/catalog-integration/describe-unity.golden | Updates describe output to include the custom schema. |
| .golangci.yml | Allows a specific `replace` directive via the gomoddirectives `replace-allow-list`. |
| go.mod | Adds a `replace` from the public tableflow SDK module to an internal module version. |
| go.sum | Updates sums for the replaced/internal tableflow SDK module. |
```go
if body.Spec.GetConfig().TableflowV1CatalogIntegrationSnowflakeUpdateSpec != nil {
	catalogIntegration.Spec.Config.TableflowV1CatalogIntegrationSnowflakeSpec.SetEndpoint(body.Spec.GetConfig().TableflowV1CatalogIntegrationSnowflakeUpdateSpec.GetEndpoint())
	catalogIntegration.Spec.Config.TableflowV1CatalogIntegrationSnowflakeSpec.SetClientId(body.Spec.GetConfig().TableflowV1CatalogIntegrationSnowflakeUpdateSpec.GetClientId())
	catalogIntegration.Spec.Config.TableflowV1CatalogIntegrationSnowflakeSpec.SetClientSecret(body.Spec.GetConfig().TableflowV1CatalogIntegrationSnowflakeUpdateSpec.GetClientSecret())
	catalogIntegration.Spec.Config.TableflowV1CatalogIntegrationSnowflakeSpec.SetWarehouse(body.Spec.GetConfig().TableflowV1CatalogIntegrationSnowflakeUpdateSpec.GetWarehouse())
	catalogIntegration.Spec.Config.TableflowV1CatalogIntegrationSnowflakeSpec.SetAllowedScope(body.Spec.GetConfig().TableflowV1CatalogIntegrationSnowflakeUpdateSpec.GetAllowedScope())
	if body.Spec.GetConfig().TableflowV1CatalogIntegrationSnowflakeUpdateSpec.GetCustomNamespace() != "" {
		catalogIntegration.Spec.Config.TableflowV1CatalogIntegrationSnowflakeSpec.SetCustomNamespace(body.Spec.GetConfig().TableflowV1CatalogIntegrationSnowflakeUpdateSpec.GetCustomNamespace())
	}
}
```
In the Snowflake update handler, the response spec fields (endpoint/client credentials/warehouse/allowed-scope) are always overwritten from the update payload whenever TableflowV1CatalogIntegrationSnowflakeUpdateSpec is non-nil. For partial updates (e.g., updating only custom_namespace), those getters will be empty/default and this handler will incorrectly clear existing fields, making the test server behave unlike a PATCH-style update API. Update the handler to only set each field when it is actually present in the request (e.g., check the underlying pointer/nullable field on the update spec, not the getter’s empty-string default).
Suggested change:

```go
snowflakeUpdateSpec := body.Spec.GetConfig().TableflowV1CatalogIntegrationSnowflakeUpdateSpec
if snowflakeUpdateSpec != nil {
	if snowflakeUpdateSpec.Endpoint != nil {
		catalogIntegration.Spec.Config.TableflowV1CatalogIntegrationSnowflakeSpec.SetEndpoint(snowflakeUpdateSpec.GetEndpoint())
	}
	if snowflakeUpdateSpec.ClientId != nil {
		catalogIntegration.Spec.Config.TableflowV1CatalogIntegrationSnowflakeSpec.SetClientId(snowflakeUpdateSpec.GetClientId())
	}
	if snowflakeUpdateSpec.ClientSecret != nil {
		catalogIntegration.Spec.Config.TableflowV1CatalogIntegrationSnowflakeSpec.SetClientSecret(snowflakeUpdateSpec.GetClientSecret())
	}
	if snowflakeUpdateSpec.Warehouse != nil {
		catalogIntegration.Spec.Config.TableflowV1CatalogIntegrationSnowflakeSpec.SetWarehouse(snowflakeUpdateSpec.GetWarehouse())
	}
	if snowflakeUpdateSpec.AllowedScope.IsSet() {
		catalogIntegration.Spec.Config.TableflowV1CatalogIntegrationSnowflakeSpec.SetAllowedScope(snowflakeUpdateSpec.GetAllowedScope())
	}
	if snowflakeUpdateSpec.CustomNamespace != nil {
		catalogIntegration.Spec.Config.TableflowV1CatalogIntegrationSnowflakeSpec.SetCustomNamespace(snowflakeUpdateSpec.GetCustomNamespace())
	}
}
```
```go
CustomDatabase  string `human:"Custom Database,omitempty" serialized:"custom_database,omitempty"`
CustomNamespace string `human:"Custom Namespace,omitempty" serialized:"custom_namespace,omitempty"`
CustomSchema    string `human:"Custom Schema,omitempty" serialized:"custom_schema,omitempty"`
```
catalogIntegrationOut now includes Custom Database/Namespace/Schema, and printCatalogIntegrationTable populates them, but the list command builds catalogIntegrationOut manually and doesn’t set these fields. This will cause tableflow catalog-integration list to show the new columns with blank values even when the API returns them. Consider centralizing the mapping from TableflowV1CatalogIntegration -> catalogIntegrationOut (or at least populating these three fields in the list path) so list/describe output stays consistent.
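The suggested centralization could look like the following. This is a minimal sketch with simplified stand-in types; the real structs come from the tableflow SDK and `internal/tableflow`, and `buildCatalogIntegrationOut` is a hypothetical helper name, not an existing function in the codebase.

```go
package main

import "fmt"

// Simplified stand-in for the SDK's catalog integration spec (the real type
// lives in ccloud-sdk-go-v2/tableflow).
type catalogIntegrationSpec struct {
	Kind            string
	CustomDatabase  string
	CustomNamespace string
	CustomSchema    string
}

// Simplified stand-in for the CLI's printable output struct, mirroring the
// tags added in this PR.
type catalogIntegrationOut struct {
	Kind            string
	CustomDatabase  string `human:"Custom Database,omitempty" serialized:"custom_database,omitempty"`
	CustomNamespace string `human:"Custom Namespace,omitempty" serialized:"custom_namespace,omitempty"`
	CustomSchema    string `human:"Custom Schema,omitempty" serialized:"custom_schema,omitempty"`
}

// buildCatalogIntegrationOut is the single place that maps an API object to
// the printable struct, so list and describe output cannot drift apart.
func buildCatalogIntegrationOut(spec catalogIntegrationSpec) catalogIntegrationOut {
	return catalogIntegrationOut{
		Kind:            spec.Kind,
		CustomDatabase:  spec.CustomDatabase,
		CustomNamespace: spec.CustomNamespace,
		CustomSchema:    spec.CustomSchema,
	}
}

func main() {
	out := buildCatalogIntegrationOut(catalogIntegrationSpec{Kind: "AwsGlue", CustomDatabase: "my_db"})
	fmt.Println(out.CustomDatabase) // prints: my_db
}
```

With a helper like this, both the list and describe code paths call the same mapper, and any future spec field only has to be wired up once.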
```go
cmd.Flags().String("custom-database", "", "Specify the custom database name for AWS Glue.")
cmd.Flags().String("custom-namespace", "", "Specify the custom namespace for Snowflake Open Catalog.")
cmd.Flags().String("custom-schema", "", "Specify the custom schema name for Unity Catalog.")

pcmd.AddClusterFlag(cmd, c.AuthenticatedCLICommand)
pcmd.AddEnvironmentFlag(cmd, c.AuthenticatedCLICommand)
pcmd.AddContextFlag(cmd, c.CLICommand)
pcmd.AddOutputFlag(cmd)
```

```diff
-cmd.MarkFlagsOneRequired("name", "endpoint", "warehouse", "allowed-scope", "client-id", "client-secret")
+cmd.MarkFlagsOneRequired("name", "endpoint", "warehouse", "allowed-scope", "client-id", "client-secret", "custom-database", "custom-namespace", "custom-schema")
```
The update command accepts type-specific flags for multiple catalog integration types at once (e.g., --custom-database and --custom-schema) and will silently let the last SetConfig(...) win, producing an invalid/misleading request. Also, there’s no validation that a flag matches the actual integration type (e.g., --custom-database on a Snowflake integration will send an AwsGlue update spec). It would be safer to determine the integration type once (from the backend) and reject incompatible flag combinations, or otherwise enforce mutual exclusivity between aws/snowflake/unity-specific flag sets.
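One possible shape for that validation is sketched below. The flag names match this PR, but the function, the map, and the integration-type strings are hypothetical illustrations, not the CLI's actual helpers; the real implementation would fetch the integration type from the backend before applying this check.

```go
package main

import "fmt"

// typeSpecificFlags maps each catalog integration type to the update flags
// that are only valid for it (flag names are from the PR; the map is a sketch).
var typeSpecificFlags = map[string][]string{
	"AwsGlue":   {"custom-database"},
	"Snowflake": {"endpoint", "warehouse", "allowed-scope", "client-id", "client-secret", "custom-namespace"},
	"Unity":     {"custom-schema"},
}

// validateFlagsForType rejects any set flag that belongs to a different
// integration type than the one the update targets.
func validateFlagsForType(integrationType string, setFlags map[string]bool) error {
	allowed := map[string]bool{"name": true} // renaming is valid for every type
	for _, f := range typeSpecificFlags[integrationType] {
		allowed[f] = true
	}
	for f, isSet := range setFlags {
		if isSet && !allowed[f] {
			return fmt.Errorf("flag --%s is not valid for a %s catalog integration", f, integrationType)
		}
	}
	return nil
}

func main() {
	// Passing an AWS Glue-only flag to a Snowflake integration is rejected.
	err := validateFlagsForType("Snowflake", map[string]bool{"custom-database": true})
	fmt.Println(err)
}
```

Running the check once, right after resolving the integration's type, avoids both failure modes the comment describes: silently letting the last `SetConfig(...)` win, and sending a type-mismatched update spec.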
…egration resources Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Force-pushed from a2230a6 to fa4aeec.
…space/schema flags
Force-pushed from fa4aeec to 4b9536a.
[Screenshots of manual test output, 2025-12-18 (image attachments not included)]
Release Notes
New Features
Checklist
- Indicated in the What section below whether this PR applies to Confluent Cloud, Confluent Platform, or both.
- Filled out the Test & Review section below.
- Filled out the Blast Radius section below.

What
Applies to Confluent Cloud.
Adds support for custom database/namespace/schema flags to the `confluent tableflow catalog-integration` commands (`create`, `update`, `list`, `describe`). Each flag is specific to a catalog integration type:

- `--custom-database` for AWS Glue: allows users to specify a custom database name.
- `--custom-namespace` for Snowflake Open Catalog: allows users to specify a custom namespace.
- `--custom-schema` for Unity Catalog: allows users to specify a custom schema name.

These are optional flags on `create` and `update`, and the values are displayed in `list` and `describe` output when set. The Tableflow public SDK dependency (`ccloud-sdk-go-v2/tableflow`) was bumped from v0.5.0 to v0.6.0 to pick up the new spec fields. Integration tests and golden files have been added/updated accordingly.

Blast Radius
The `confluent tableflow catalog-integration` `create`, `update`, `list`, and `describe` commands could be impacted if something goes wrong.

References
Test & Review
I built the CLI binary locally using `make build` and ran commands to validate correctness. This MD file has all the details of the manual testing done:
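For illustration, a manual verification run for the update/list/describe paths might look like the following. This is a sketch: the integration ID and namespace value are placeholders, and only the subcommands and `--custom-*` flags shown are confirmed by this PR.

```shell
make build

# Update only the custom namespace on a Snowflake integration
# (tci-abc123 and my_ns are placeholder values).
confluent tableflow catalog-integration update tci-abc123 --custom-namespace my_ns

# Confirm the new fields appear in list and describe output.
confluent tableflow catalog-integration list
confluent tableflow catalog-integration describe tci-abc123
```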