
Implemented ML Pipeline Continuous new table rows RunInference #37647

Open

aIbrahiim wants to merge 5 commits into apache:master from aIbrahiim:ml-pipeline-continuous-inference

Conversation

@aIbrahiim
Contributor

Please add a meaningful description for your change here


Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:

  • Mention the appropriate issue in your description (for example: addresses #123), if applicable. This will automatically add a link to the pull request in the issue. If you would like the issue to automatically close on merging the pull request, comment fixes #<ISSUE NUMBER> instead.
  • Update CHANGES.md with noteworthy changes.
  • If this contribution is large, please file an Apache Individual Contributor License Agreement.

See the Contributor Guide for more tips on how to make the review process smoother.

To check the build health, please visit https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md

GitHub Actions Tests Status (on master branch)

Build python source distribution and wheels
Python tests
Java tests
Go tests

See CI.md for more information about GitHub Actions CI or the workflows README to see a list of phrases to trigger workflows.

@gemini-code-assist
Contributor

Summary of Changes

Hello @aIbrahiim, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request extends Apache Beam's ML examples with a pipeline for continuous machine learning inference on structured table data. Built around the RunInference transform, the pipeline handles both batch and streaming input, reading rows from files or Pub/Sub, applying scikit-learn models, and writing schema-preserving results to BigQuery. The change also includes supporting infrastructure: data-generation utilities, unit tests, and integration with the performance benchmarking system, so the example is well tested, documented, and measurable.
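
To make the pipeline shape concrete, here is a minimal hedged sketch of a RunInference pipeline over JSON table rows. It is not the PR's table_row_inference.py: the GCS paths, feature names, and BigQuery table are placeholder assumptions; only the Beam APIs shown (RunInference, SklearnModelHandlerNumpy, WriteToBigQuery) are real.

```python
# Illustrative sketch only -- not the code added in this PR. Paths, the
# feature names ('f0', 'f1'), and the BigQuery table are assumptions.
import json

import apache_beam as beam
import numpy as np
from apache_beam.ml.inference.base import RunInference
from apache_beam.ml.inference.sklearn_inference import SklearnModelHandlerNumpy
from apache_beam.options.pipeline_options import PipelineOptions

# Loads a pickled scikit-learn model from GCS; one handler serves all bundles.
model_handler = SklearnModelHandlerNumpy(model_uri='gs://my-bucket/model.pkl')


def to_bq_row(result):
  # PredictionResult carries the input example and the model's inference.
  return {
      'features': json.dumps(result.example.tolist()),
      'prediction': float(result.inference),
  }


with beam.Pipeline(options=PipelineOptions()) as p:
  _ = (
      p
      # Batch mode reads newline-delimited JSON; a streaming variant would
      # instead read from Pub/Sub, e.g.:
      #   beam.io.ReadFromPubSub(topic='projects/my-project/topics/rows')
      | 'ReadRows' >> beam.io.ReadFromText('gs://my-bucket/rows.jsonl')
      | 'Parse' >> beam.Map(json.loads)
      | 'ToNumpy' >> beam.Map(lambda r: np.array([r['f0'], r['f1']]))
      | 'RunInference' >> RunInference(model_handler)
      | 'Format' >> beam.Map(to_bq_row)
      | 'WriteToBQ' >> beam.io.WriteToBigQuery(
          'my-project:my_dataset.predictions',
          schema='features:STRING,prediction:FLOAT64',
          write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
          create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED))
```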

Highlights

  • New ML Pipeline for Table Row Inference: Introduced a new RunInference pipeline example (table_row_inference.py) designed to process continuous structured table rows, supporting both batch and streaming modes with BigQuery output.
  • Batch-Only Inference Example: Added a simplified batch-only version of the table row inference pipeline (table_row_inference_batch.py) for clearer demonstration and use cases.
  • Comprehensive Utilities and Testing: Added utility functions (table_row_inference_utils.py) for generating sample models and data and for managing Pub/Sub resources, plus dedicated unit tests (table_row_inference_test.py) for the new pipeline components; a hedged sketch of such a utility follows this list.
  • Performance Benchmarking Integration: Integrated the new table row inference pipelines into the performance benchmarking framework, adding new benchmark definitions and updating existing cost benchmark logic to support streaming throughput metrics.
  • Documentation and Metrics Updates: Updated documentation and Looker metrics configurations to reflect the new table row inference pipelines, including dedicated performance pages for both batch and streaming variants.
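
As a hedged illustration of the utilities highlight above, the sketch below shows what a "create sample model and data" helper could look like. The function names, file names, and two-feature row schema are assumptions for illustration, not the actual table_row_inference_utils.py API.

```python
# Hypothetical utility sketch; names and schema are illustrative only.
import json
import pickle

import numpy as np
from sklearn.linear_model import LinearRegression


def create_sample_model(path='model.pkl', n_rows=100, seed=0):
  """Train a toy regressor on random two-feature rows and pickle it."""
  rng = np.random.default_rng(seed)
  X = rng.normal(size=(n_rows, 2))
  y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=n_rows)
  model = LinearRegression().fit(X, y)
  with open(path, 'wb') as f:
    pickle.dump(model, f)


def create_sample_jsonl(path='rows.jsonl', n_rows=10, seed=1):
  """Write newline-delimited JSON rows matching the model's features."""
  rng = np.random.default_rng(seed)
  with open(path, 'w') as f:
    for i in range(n_rows):
      row = {'id': i, 'f0': float(rng.normal()), 'f1': float(rng.normal())}
      f.write(json.dumps(row) + '\n')
```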


Changelog
  • .test-infra/tools/refresh_looker_metrics.py
    • Updated Looker metric IDs to include new Table Row Inference Sklearn Batch and Streaming pipelines.
  • sdks/python/apache_beam/examples/inference/README.md
    • Documented the new table_row_inference.py example, detailing prerequisites, model/data setup, and execution instructions for batch and streaming modes.
  • sdks/python/apache_beam/examples/inference/table_row_inference.py
    • Added a new RunInference pipeline for continuous table row processing, supporting batch (file input) and streaming (Pub/Sub) modes with BigQuery output.
  • sdks/python/apache_beam/examples/inference/table_row_inference_batch.py
    • Added a simplified batch-only inference pipeline for table rows, offering file or BigQuery output.
  • sdks/python/apache_beam/examples/inference/table_row_inference_test.py
    • Created unit tests for the table_row_inference.py pipeline components, including JSON parsing, schema building, model handling, and pipeline integration.
  • sdks/python/apache_beam/examples/inference/table_row_inference_utils.py
    • Introduced utility functions to create sample scikit-learn models, generate sample JSONL data, and manage Pub/Sub resources for testing and deployment.
  • sdks/python/apache_beam/ml/inference/table_row_inference_requirements.txt
    • Added a new requirements file specifying dependencies for table row inference, including scikit-learn, numpy, google-cloud-monitoring, protobuf, and requests.
  • sdks/python/apache_beam/testing/benchmarks/inference/README.md
    • Updated the README to include documentation for the new Table Row Inference (Sklearn) benchmark, detailing GCS artifacts and pipeline options.
  • sdks/python/apache_beam/testing/benchmarks/inference/table_row_inference_benchmark.py
    • Implemented a new benchmark test for the table row inference pipeline, extending DataflowCostBenchmark to measure performance metrics for both batch and streaming modes.
  • sdks/python/apache_beam/testing/load_tests/dataflow_cost_benchmark.py
    • Modified DataflowCostBenchmark to improve worker time interval extraction, add streaming throughput metrics from Pub/Sub, and refine PCollection throughput metric filtering; a hedged Cloud Monitoring sketch follows this file list.
  • sdks/python/apache_beam/testing/load_tests/load_test.py
    • Updated LoadTest to allow TestPipeline initialization with specific options classes, improving flexibility for benchmark configurations.
  • sdks/python/apache_beam/testing/test_pipeline.py
    • Added a get_options_list class method to TestPipeline for parsing command-line arguments related to test pipeline options; a hedged sketch appears after the Ignored Files list below.
  • website/www/site/content/en/performance/_index.md
    • Updated the performance index page to include links to the new Table Row Inference Sklearn Streaming and Batch performance pages.
  • website/www/site/content/en/performance/tablerowinference/_index.md
    • Added a new performance documentation page for 'Table Row Inference Sklearn Batch', detailing the model, accelerator, host, and linking to cost, version, and date metrics.
  • website/www/site/content/en/performance/tablerowinferencestreaming/_index.md
    • Added a new performance documentation page for 'Table Row Inference Sklearn Streaming', detailing the model, accelerator, host, and linking to cost, version, and date metrics.
  • website/www/site/data/performance.yaml
    • Configured new Looker dashboard IDs for 'tablerowinference' and 'tablerowinferencestreaming' performance metrics, including cost, date, and version tracking.
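
The dataflow_cost_benchmark.py entry above mentions streaming throughput metrics from Pub/Sub. As a hedged illustration only (the PR's actual logic is not reproduced here), the sketch below shows one way such a number could be pulled from Cloud Monitoring with the google-cloud-monitoring client listed in the new requirements file; the metric name, window, and function name are assumptions.

```python
# Hedged sketch: reading Pub/Sub delivery counts from Cloud Monitoring.
import time

from google.cloud import monitoring_v3


def pubsub_sent_message_count(project_id, subscription_id, window_secs=3600):
  """Sum messages delivered to a subscription over the last window."""
  client = monitoring_v3.MetricServiceClient()
  now = int(time.time())
  interval = monitoring_v3.TimeInterval(
      {'start_time': {'seconds': now - window_secs},
       'end_time': {'seconds': now}})
  results = client.list_time_series(
      request={
          'name': f'projects/{project_id}',
          'filter': (
              'metric.type = '
              '"pubsub.googleapis.com/subscription/sent_message_count" '
              f'AND resource.labels.subscription_id = "{subscription_id}"'),
          'interval': interval,
          'view': monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
      })
  # sent_message_count is a DELTA/INT64 metric, so summing points gives a
  # rough total for the window.
  return sum(
      point.value.int64_value
      for series in results
      for point in series.points)
```
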
Ignored Files
  • Ignored by pattern: .github/workflows/** (3)
    • .github/workflows/beam_Inference_Python_Benchmarks_Dataflow.yml
    • .github/workflows/load-tests-pipeline-options/beam_Inference_Python_Benchmarks_Dataflow_Table_Row_Inference_Batch.txt
    • .github/workflows/load-tests-pipeline-options/beam_Inference_Python_Benchmarks_Dataflow_Table_Row_Inference_Stream.txt
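
As a rough illustration of the test_pipeline.py changelog entry above, here is a plausible sketch of how a get_options_list class method could pull test-pipeline flags out of the command line; the real method's signature and behavior may differ. It mimics TestPipeline's existing --test-pipeline-options flag, which holds a quoted string of pipeline arguments.

```python
# Plausible sketch only -- the PR's actual get_options_list may differ.
import argparse
import shlex
import sys


class TestPipelineSketch:
  """Stand-in for apache_beam.testing.test_pipeline.TestPipeline."""

  @classmethod
  def get_options_list(cls, argv=None):
    """Return the value of --test-pipeline-options as a flat argument list."""
    parser = argparse.ArgumentParser(add_help=False)
    parser.add_argument('--test-pipeline-options', default='')
    known, _unknown = parser.parse_known_args(
        argv if argv is not None else sys.argv[1:])
    # shlex.split honors quoting, so '--opt "a b"' stays a single argument.
    return shlex.split(known.test_pipeline_options)


# Example:
# TestPipelineSketch.get_options_list(
#     ['--test-pipeline-options=--runner=DataflowRunner --project=my-proj'])
# -> ['--runner=DataflowRunner', '--project=my-proj']
```
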
Activity
  • The pull request introduces new ML pipeline examples, utilities, tests, and performance benchmarks. No specific human activity (comments, reviews, etc.) was provided in the context.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature              | Command             | Description
Code Review          | /gemini review      | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary     | Provides a summary of the current pull request in its current state.
Comment              | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help                 | /gemini help        | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any comments that are incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in sharing feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@github-actions
Contributor

Checks are failing. Will not request review until checks are succeeding. If you'd like to override that behavior, comment assign set of reviewers

@github-actions
Contributor

Assigning reviewers:

R: @tvalentyn for label python.
R: @Abacn for label build.
R: @Abacn for label website.

Note: If you would like to opt out of this review, comment assign to next reviewer.

Available commands:

  • stop reviewer notifications - opt out of the automated review tooling
  • remind me after tests pass - tag the comment author after tests pass
  • waiting on author - shift the attention set back to the author (any comment or push by the author will return the attention set to the reviewers)

The PR bot will only process comments in the main thread (not review comments).

@codecov

codecov bot commented Feb 20, 2026

Codecov Report

❌ Patch coverage is 2.83019% with 103 lines in your changes missing coverage. Please review.
✅ Project coverage is 40.06%. Comparing base (600bd61) to head (331aa64).
⚠️ Report is 181 commits behind head on master.

Files with missing lines                              | Patch % | Lines
...beam/testing/load_tests/dataflow_cost_benchmark.py | 0.00%   | 52 Missing ⚠️
...chmarks/inference/table_row_inference_benchmark.py | 0.00%   | 46 Missing ⚠️
sdks/python/apache_beam/testing/test_pipeline.py      | 37.50%  | 5 Missing ⚠️
Additional details and impacted files
@@              Coverage Diff              @@
##             master   #37647       +/-   ##
=============================================
- Coverage     57.13%   40.06%   -17.07%     
  Complexity     3515     3515               
=============================================
  Files          1228     1225        -3     
  Lines        189092   188725      -367     
  Branches       3656     3656               
=============================================
- Hits         108039    75615    -32424     
- Misses        77637   109694    +32057     
  Partials       3416     3416               
Flag   | Coverage Δ
python | 39.68% <2.83%> (-41.12%) ⬇️

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.
