fix(provider): restore parameter transparency in core LLM provider adapters #6934

Merged
Soulter merged 2 commits into AstrBotDevs:master from SXP-Simon:fix/provider-kwargs
Mar 26, 2026

Conversation

@SXP-Simon
Contributor

@SXP-Simon SXP-Simon commented Mar 25, 2026

Modifications

The core chat adapters (OpenAI, Anthropic, Gemini) did not merge kwargs when preparing the request payload, so custom parameters passed in from the plugin layer (such as max_tokens, temperature, and timeout) were silently dropped and requests fell back to the provider's conservative defaults. This fix ensures that each mainstream model adapter passes request parameters through in full.
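A minimal sketch of the fix, with hypothetical names (build_chat_payload is illustrative, not the project's actual adapter code): merge the caller's kwargs into the payload instead of discarding them.

```python
def build_chat_payload(model: str, messages: list[dict], **kwargs) -> dict:
    """Assemble a chat request payload, forwarding caller-supplied kwargs."""
    payload = {
        "model": model,
        "messages": messages,
    }
    # Before the fix, kwargs such as max_tokens / temperature / timeout
    # were never copied in, so requests fell back to provider defaults.
    payload.update(kwargs)
    return payload


payload = build_chat_payload(
    "gpt-4o",
    [{"role": "user", "content": "hi"}],
    max_tokens=2048,
    temperature=0.2,
)
# payload now carries max_tokens and temperature through to the API call.
```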

  • This is NOT a breaking change.

Checklist / 检查清单

  • 😊 If there are new features added in the PR, I have discussed it with the authors through issues/emails, etc.

  • 👀 My changes have been well-tested, and "Verification Steps" and "Screenshots" have been provided above.

  • 🤓 I have ensured that no new dependencies are introduced, OR if new dependencies are introduced, they have been added to the appropriate locations in requirements.txt and pyproject.toml.

  • 😮 My changes do not introduce malicious code.

Summary by Sourcery

Bug Fixes:

  • Restore propagation of caller-supplied kwargs (e.g., runtime parameters) into chat request payloads for OpenAI, Anthropic, and Gemini providers so they no longer fall back to provider defaults.

…apters

The core chat adapters (OpenAI, Anthropic, Gemini) did not merge kwargs when preparing the request payload, so custom parameters passed in from the plugin layer (such as max_tokens, temperature, and timeout) were silently dropped and requests fell back to the provider's conservative defaults. This fix ensures that each mainstream model adapter passes request parameters through in full.
@auto-assign auto-assign bot requested review from Fridemn and anka-afk March 25, 2026 09:30
@dosubot dosubot bot added size:XS This PR changes 0-9 lines, ignoring generated files. area:provider The bug / feature is about AI Provider, Models, LLM Agent, LLM Agent Runner. labels Mar 25, 2026
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This PR addresses a key issue in the core LLM provider adapters where custom parameters were ignored in requests, leading to inconsistent model behavior. By ensuring kwargs are merged correctly, this change restores the flexibility and control users expect, allowing them to configure model requests effectively across different providers.

Highlights

  • Parameter pass-through fix: Fixes the core LLM provider adapters (OpenAI, Anthropic, Gemini) failing to merge kwargs when preparing the request payload, ensuring that custom parameters passed in from the plugin layer (such as max_tokens, temperature, timeout) are passed through in full instead of falling back to default values.

Contributor

@sourcery-ai sourcery-ai bot left a comment


Hey - I've left some high level feedback:

  • Merging kwargs directly into payloads will allow callers to overwrite core keys like model and messages; consider either documenting this explicitly as supported behavior or protecting those keys from being overridden.
  • For the new kwargs passthrough, it may be safer to validate or whitelist provider-specific parameters before adding them to payloads to avoid sending unsupported or malformed options to the upstream APIs.
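The two suggestions above could be sketched like this, assuming a hypothetical per-provider whitelist (ALLOWED_OPENAI_PARAMS, PROTECTED_KEYS, and merge_kwargs are illustrative names, not part of the codebase):

```python
# Hypothetical whitelist of parameters a provider's API accepts.
ALLOWED_OPENAI_PARAMS = {"max_tokens", "temperature", "top_p", "timeout"}

# Core keys the adapter generates internally; callers must not override them.
PROTECTED_KEYS = {"model", "messages"}


def merge_kwargs(payload: dict, kwargs: dict, allowed: set[str]) -> dict:
    """Merge caller kwargs into a payload, dropping unknown and protected keys."""
    safe = {
        k: v
        for k, v in kwargs.items()
        if k in allowed and k not in PROTECTED_KEYS
    }
    # Payload keys are written last in a {**a, **b} merge only if they appear
    # later; here safe cannot contain protected keys, so order is harmless.
    return {**payload, **safe}


payload = {"model": "gpt-4o", "messages": []}
merged = merge_kwargs(
    payload,
    {"temperature": 0.7, "model": "spoofed", "bogus_option": 1},
    ALLOWED_OPENAI_PARAMS,
)
# "model" is protected and "bogus_option" is not whitelisted, so only
# "temperature" passes through.
```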

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request enhances the Anthropic, Gemini, and OpenAI chat functions by allowing additional keyword arguments (kwargs) to be passed directly into the API request payload. The review comments suggest an improvement to the merging logic for these kwargs, recommending the use of dictionary unpacking ({**kwargs, ...}) to ensure that internally generated messages and model parameters are not inadvertently overwritten by values present in kwargs, thus improving the robustness and predictability of the API calls.
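The dictionary-unpacking order the review recommends can be illustrated as follows (build_payload and its caller are hypothetical, not the merged code). Because later keys win in a dict literal, placing the caller's extras first guarantees the internally generated core keys cannot be overwritten:

```python
def build_payload(model: str, messages: list[dict], extra: dict) -> dict:
    """Build a payload where core keys always take precedence over extras."""
    return {
        **extra,              # caller-supplied extras first...
        "model": model,       # ...then core keys, which therefore always win
        "messages": messages,
    }


p = build_payload(
    "claude-sonnet",
    [],
    {"model": "spoofed", "temperature": 0.1},
)
# The spoofed "model" is overwritten by the adapter's own value, while the
# harmless "temperature" extra survives.
```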

@dosubot dosubot bot added size:S This PR changes 10-29 lines, ignoring generated files. and removed size:XS This PR changes 0-9 lines, ignoring generated files. labels Mar 25, 2026

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 645c482f69


@dosubot dosubot bot added the lgtm This PR has been approved by a maintainer label Mar 26, 2026
@Soulter Soulter merged commit 1ad7e10 into AstrBotDevs:master Mar 26, 2026
7 checks passed
@Soulter
Member

Soulter commented Mar 26, 2026

Hi, this change has caused numerous incompatibilities. I will keep it reverted until a more thorough audit can be conducted.

@SXP-Simon
Contributor Author

Hi, this change has caused numerous incompatibilities. I will keep it reverted until a more thorough audit can be conducted.

Acknowledged.

