
[WIP]: Fix mtp experts #4520

Open

RunningLeon wants to merge 8 commits into InternLM:main from RunningLeon:fix-mtp-experts

Conversation

@RunningLeon
Collaborator

Motivation

Please describe the motivation of this PR and the goal you want to achieve through this PR.

Modification

Please briefly describe what modification is made in this PR.

BC-breaking (Optional)

Does the modification introduce changes that break the backward-compatibility of the downstream repositories?
If so, please describe how it breaks the compatibility and how the downstream projects should modify their code to keep compatibility with this PR.

Use cases (Optional)

If this PR introduces a new feature, it is better to list some use cases here, and update the documentation.

Checklist

  1. Pre-commit or other linting tools are used to fix the potential lint issues.
  2. The modification is covered by complete unit tests. If not, please add more unit tests to ensure the correctness.
  3. If the modification has a dependency on downstream projects of a newer version, this PR should be tested with all supported versions of downstream projects.
  4. The documentation has been modified accordingly, like docstring or example tutorials.

Copilot AI review requested due to automatic review settings · April 13, 2026 03:04
Contributor

Copilot AI left a comment


Pull request overview

Adjusts AR-speculative decoding to correctly track/trim routed MoE expert history and to fix stop-word masking behavior during stopping-criteria evaluation.

Changes:

  • Add a SchedulerSequenceARSpec.routed_experts override so routed-experts retrieval is based on num_valid_ids (sketched after the file table below).
  • Resize the routed-expert history when a sequence is truncated via set_stop_pos (also covered in that sketch).
  • Fix stop-criteria masking by combining with stop_mask via logical OR instead of XOR (see the sketch after this list).
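
The snippet below is a minimal sketch of the masking fix from the last bullet. The tensor names (`stopped`, `stop_mask`) are illustrative, not the exact variables in `model_agent.py`; the point is that XOR flips an already-set bit, while OR is monotonic.

```python
import torch

# Assumed boolean state: which sequences have already stopped, and
# which hit a stop word in the current step.
stopped = torch.tensor([False, True, False, True])
stop_mask = torch.tensor([False, False, True, True])

# Buggy: XOR clears entries where both flags are True, "reviving"
# a sequence that had already stopped.
buggy = stopped ^ stop_mask   # tensor([False,  True,  True, False])

# Fixed: logical OR keeps a sequence stopped once either flag fires.
fixed = stopped | stop_mask   # tensor([False,  True,  True,  True])
```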

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 1 comment.

File | Description
lmdeploy/pytorch/strategies/ar_spec/sequence.py | Adds a routed-experts accessor for AR-spec sequences and attempts to keep routed-expert history aligned when truncating tokens.
lmdeploy/pytorch/strategies/ar_spec/model_agent.py | Fixes stop-word masking logic in ARSpec stopping criteria.
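
As a rough illustration of the sequence-side changes (the first two bullets above), here is a self-contained sketch. The backing array `_routed_experts`, the `append_step` helper, and the exact `set_stop_pos` signature are assumptions made for illustration; the real `SchedulerSequenceARSpec` carries far more state.

```python
import numpy as np

class SequenceSketch:
    """Toy stand-in for SchedulerSequenceARSpec (assumed internals)."""

    def __init__(self, top_k: int = 8):
        # One row of routed-expert ids per generated token.
        self._routed_experts = np.empty((0, top_k), dtype=np.int64)
        self.num_valid_ids = 0

    @property
    def routed_experts(self) -> np.ndarray:
        # Base retrieval on num_valid_ids so unverified speculative
        # tokens never expose stale expert routings.
        return self._routed_experts[:self.num_valid_ids]

    def append_step(self, expert_ids: np.ndarray):
        # Record the experts routed for one newly generated token.
        self._routed_experts = np.vstack([self._routed_experts, expert_ids])
        self.num_valid_ids = len(self._routed_experts)

    def set_stop_pos(self, pos: int):
        # Truncating the token sequence must also trim the expert
        # history so the two stay the same length.
        self.num_valid_ids = min(self.num_valid_ids, pos)
        self._routed_experts = self._routed_experts[:self.num_valid_ids]
```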


Comment thread on lmdeploy/pytorch/strategies/ar_spec/sequence.py (outdated)
