
Fixed the calib size bug #1178

Merged

kevalmorabia97 merged 1 commit into main from jingyux/diffusion-calib-size-bug on Apr 6, 2026

Conversation

@jingyu-ml
Contributor

@jingyu-ml jingyu-ml commented Apr 4, 2026

What does this PR do?

Type of change: Bug fix

Usage

# Add a code snippet demonstrating how to use this

Testing

Before your PR is "Ready for review"

Make sure you read and follow Contributor guidelines and your commits are signed (git commit -s -S).

Make sure you read and follow the Security Best Practices (e.g. avoiding hardcoded trust_remote_code=True, torch.load(..., weights_only=False), pickle, etc.).

  • Is this change backward compatible?: ✅ / ❌ / N/A
  • If you copied code from any other sources or added a new PIP dependency, did you follow guidance in CONTRIBUTING.md: ✅ / ❌ / N/A
  • Did you write any new necessary tests?: ✅ / ❌ / N/A
  • Did you update Changelog?: ✅ / ❌ / N/A

Additional Information

Summary by CodeRabbit

  • Bug Fixes
    • Fixed quantization configuration calibration batch counting to properly account for partial final batches during the calibration process.

Signed-off-by: Jingyu Xin <jingyux@nvidia.com>
@jingyu-ml jingyu-ml requested a review from a team as a code owner April 4, 2026 06:37
@jingyu-ml jingyu-ml requested a review from Edwardf0t1 April 4, 2026 06:37
@copy-pr-bot

copy-pr-bot bot commented Apr 4, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@coderabbitai
Contributor

coderabbitai bot commented Apr 4, 2026

No actionable comments were generated in the recent review. 🎉


📥 Commits

Reviewing files that changed from the base of the PR and between df80a0f and 5b747a4.

📒 Files selected for processing (1)
  • examples/diffusers/quantization/quantize_config.py

📝 Walkthrough

The change modifies the calibration batch count calculation in CalibrationConfig.num_batches to use ceiling division instead of floor division, ensuring that partial final batches are counted when computing the number of calibration batches required.

Changes

  • Calibration Config Update — examples/diffusers/quantization/quantize_config.py
    Added math import and updated the num_batches property from floor division (//) to ceiling division (math.ceil()), ensuring partial final batches are included in the batch count.
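The change described above can be sketched as follows. This is a minimal illustration, not the actual contents of quantize_config.py: the property name num_batches comes from the walkthrough, but the field names calib_size and batch_size are assumptions for the sake of the example.

```python
import math
from dataclasses import dataclass


@dataclass
class CalibrationConfig:
    """Illustrative sketch of the config in quantize_config.py; field names are assumed."""

    calib_size: int = 128  # total number of calibration samples
    batch_size: int = 32   # samples per batch

    @property
    def num_batches(self) -> int:
        # Before the fix, floor division silently dropped a partial final batch:
        #   100 // 32 == 3, so 4 samples would never be used for calibration.
        # Ceiling division counts the partial final batch as well:
        #   math.ceil(100 / 32) == 4.
        return math.ceil(self.calib_size / self.batch_size)


cfg = CalibrationConfig(calib_size=100, batch_size=32)
print(cfg.num_batches)  # 4
```

With 100 samples and a batch size of 32, the old floor division yields 3 batches (96 samples), while ceiling division yields 4, so the trailing partial batch of 4 samples is included.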

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~3 minutes

🚥 Pre-merge checks | ✅ 3 | ❌ 1

❌ Failed checks (1 inconclusive)

  • Title check — ❓ Inconclusive. The title "Fixed the calib size bug" is vague and uses generic language ("bug") without specifying what actually changed. While it references "calib size," a reviewer cannot tell from the title alone that this changes floor division to ceiling division in CalibrationConfig.num_batches. Resolution: use a more specific title such as "Fix CalibrationConfig to use ceiling division for batch calculation."

✅ Passed checks (3 passed)

  • Description Check — ✅ Passed. Check skipped: CodeRabbit's high-level summary is enabled.
  • Docstring Coverage — ✅ Passed. Docstring coverage is 100.00%, above the required threshold of 80.00%.
  • Security Anti-Patterns — ✅ Passed. The PR modifies quantize_config.py with a math import and a ceiling-division bug fix; no security anti-patterns detected.



@kevalmorabia97 kevalmorabia97 enabled auto-merge (squash) April 4, 2026 06:40
@github-actions
Contributor

github-actions bot commented Apr 4, 2026

PR Preview Action v1.8.1
Preview removed because the pull request was closed.
2026-04-06 13:48 UTC

@codecov

codecov bot commented Apr 4, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 76.05%. Comparing base (df80a0f) to head (5b747a4).
⚠️ Report is 2 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1178      +/-   ##
==========================================
+ Coverage   74.77%   76.05%   +1.28%     
==========================================
  Files         351      351              
  Lines       40072    40072              
==========================================
+ Hits        29964    30478     +514     
+ Misses      10108     9594     -514     
Flag       Coverage Δ
examples   43.85% <ø> (+3.63%) ⬆️
unit       54.74% <ø> (-0.01%) ⬇️

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.

Contributor

@Edwardf0t1 Edwardf0t1 left a comment


LGTM

@kevalmorabia97
Collaborator

/ok to test 5b747a4

@kevalmorabia97 kevalmorabia97 merged commit 4a5ef01 into main Apr 6, 2026
43 checks passed
@kevalmorabia97 kevalmorabia97 deleted the jingyux/diffusion-calib-size-bug branch April 6, 2026 13:46
kevalmorabia97 pushed a commit that referenced this pull request Apr 6, 2026