perf: Hoist table_metadata at remaining repeat-access sites in snapshot update#3301
rynewang wants to merge 3 commits into apache:main from
…ot update

Follow-up to apache#2674. `Transaction.table_metadata` replays all staged updates via `model_copy` on every access; this applies the apache#2674 hoist pattern to three more sites in snapshot.py that still read the property more than once per invocation:

- `_SnapshotProducer._summary`: hoist `spec()`/`schema()` out of the per-data-file loop
- `_DeleteFiles._compute_deletes`: hoist `table_metadata`/`schema` once (was 3 accesses at method entry)
- `_MergeAppendFiles.__init__`: 3 consecutive `.properties` reads -> 1

Adds a regression test asserting that the `_summary()` access count is independent of file count and that `_MergeAppendFiles.__init__` adds exactly one access over its superclass.

Avoids a mypy attr-defined error on `Transaction.table_metadata.fget` by counting calls to the underlying `update_table_metadata` function (the actual expensive operation) via `mock.patch` with `wraps`.
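The hoist pattern this change applies can be sketched with a toy stand-in. The classes and method names below are illustrative only, not pyiceberg's actual implementation; the point is the move of an expensive property read from inside a loop to method entry:

```python
class Transaction:
    """Toy stand-in: the property does expensive work on every access."""

    def __init__(self, base):
        self._base = base
        self.access_count = 0  # instrumentation for this example only

    @property
    def table_metadata(self):
        # Stands in for replaying all staged updates via model_copy
        # on every access.
        self.access_count += 1
        return dict(self._base)


def summary_unhoisted(txn, files):
    # One property access per file: O(N) expensive copies.
    return [(f, txn.table_metadata["schema"]) for f in files]


def summary_hoisted(txn, files):
    # Hoisted: the metadata is invariant for the method, so read it once.
    schema = txn.table_metadata["schema"]
    return [(f, schema) for f in files]


txn = Transaction({"schema": "s1"})
summary_unhoisted(txn, ["a", "b", "c"])
print(txn.access_count)  # 3 accesses, one per file

txn2 = Transaction({"schema": "s1"})
summary_hoisted(txn2, ["a", "b", "c"])
print(txn2.access_count)  # 1 access, regardless of file count
```

Because nothing inside the method stages a new update, hoisting changes the access count from O(files) to O(1) without changing behaviour.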
Summary
Follow-up to #2674.
`Transaction.table_metadata` replays all staged updates via `model_copy(deep=True)` on every access, so reading it (or `spec()`/`schema()` derived from it) repeatedly within a single snapshot-producer method is redundant deep-copy work. #2674 hoisted the property access in `_summary()`; this PR extends the same pattern to three more call sites in `pyiceberg/table/update/snapshot.py` that still read the property more than once per invocation.

Changes

- `_SnapshotProducer._summary`: hoist `spec()`/`schema()` out of the per-data-file loop (they are invariant across files; still called 2× per file before this change)
- `_DeleteFiles._compute_deletes`: hoist `table_metadata`/`schema` once at method entry (was 3 accesses: two via `self.schema()` for the metrics evaluators and one direct for `snapshot_by_id`)
- `_MergeAppendFiles.__init__`: 3 consecutive `self._transaction.table_metadata.properties` accesses → 1

All hoists are at method entry. Nothing inside these methods stages a transaction update (the `AddSnapshotUpdate` is staged by the caller after `_commit()` returns), so `table_metadata` is invariant for the duration of each method.

Not touched here: the `new_manifest_writer(self.spec(id))` calls inside per-manifest loops in `_write_delete_manifest`/`_compute_deletes`/`_OverwriteFiles._existing_manifests` also trigger 2–3 property accesses per iteration via the `schema()`/`spec()`/`new_manifest_writer()` helpers. Those loops are O(partition groups or rewritten manifests) rather than O(files), and fixing them cleanly would mean changing the helper signatures; happy to do that in a follow-up if there's interest.

Testing
The new `test_snapshot_producer_bounded_metadata_access` wraps `Transaction.table_metadata` with a call counter and asserts:

- the `_summary()` access count is identical for 10 vs 100 appended files (independent of N), and ≤ 2
- `_MergeAppendFiles.__init__` makes exactly 1 more access than `_FastAppendFiles.__init__` (was 3 before this change; verified the test fails with the production diff reverted)

The test constructs `_FastAppendFiles`/`_MergeAppendFiles` directly rather than going through the public append path, since the public path writes manifest Avro files; the property-access count it measures is the behaviour under test and doesn't require I/O.

Existing `tests/table/test_snapshots.py` still passes.

Motivation
For appends/deletes/overwrites touching large numbers of files or manifests, the per-iteration property access dominates wall-clock time (each access replays the staged-updates list through pydantic's `model_copy`). This change keeps the cost constant per method call.
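The counting technique the Testing section describes, wrapping the underlying expensive function with `mock.patch` and `wraps` so `call_count` tracks accesses without touching the property's `fget`, can be sketched in a self-contained form. The module layout and names below are stand-ins, not pyiceberg's actual structure:

```python
import types
from unittest import mock

# Stand-in module holding an "expensive" function, analogous in spirit to
# the update_table_metadata function the real test counts.
metadata_mod = types.SimpleNamespace()
metadata_mod.update_table_metadata = lambda meta: dict(meta)


class Txn:
    """Toy transaction whose property calls the expensive function."""

    def __init__(self, meta):
        self._meta = meta

    @property
    def table_metadata(self):
        return metadata_mod.update_table_metadata(self._meta)


def produce_snapshot(txn, n_files):
    # Hoisted pattern: one property access regardless of file count.
    md = txn.table_metadata
    return [md for _ in range(n_files)]


txn = Txn({"k": "v"})
# wraps= delegates to the real function while recording every call,
# so behaviour is unchanged and call_count measures accesses.
with mock.patch.object(
    metadata_mod, "update_table_metadata", wraps=metadata_mod.update_table_metadata
) as counter:
    produce_snapshot(txn, 10)
    count_10 = counter.call_count
    counter.reset_mock()
    produce_snapshot(txn, 100)
    count_100 = counter.call_count

print(count_10, count_100)  # 1 1 — access count independent of file count
```

Counting the wrapped function rather than the property itself sidesteps the mypy attr-defined complaint on `property.fget` while still measuring the operation that is actually expensive.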