Conversation
Summary of Changes

Hello @alcholiclg, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request enhances the usability and configurability of the application. It resolves a console display issue for Windows users by providing a dedicated startup script, ensuring a smoother experience. It also introduces a system for managing deep research configurations, allowing users to customize LLM parameters for individual agents and the search summarizer through both backend logic and a new frontend interface. This change moves away from static configurations, offering greater flexibility and control.
Code Review
This pull request introduces fixes for the WebUI on Windows and adds configuration capabilities for the deep research feature. The changes include a new PowerShell script to handle text encoding issues on Windows, updates to the README files, and the removal of a hardcoded URL in the deep research configuration. New backend APIs and a corresponding frontend UI in the settings dialog have been added to allow users to configure deep research agents. My review focuses on the new configuration logic and has identified a few areas for improvement, including a potential regression where multi-provider support was inadvertently removed, some code inconsistencies, and opportunities for refactoring to improve maintainability.
```diff
 if model:
     llm_override['model'] = model

-if llm_override['service'] == 'modelscope':
-    if api_key:
-        llm_override['modelscope_api_key'] = api_key
-    if base_url:
-        llm_override['modelscope_base_url'] = base_url
-elif llm_override['service'] == 'anthropic':
-    if api_key:
-        llm_override['anthropic_api_key'] = api_key
-    if base_url:
-        llm_override['anthropic_base_url'] = base_url
-else:
-    if api_key:
-        llm_override['openai_api_key'] = api_key
-    if base_url:
-        llm_override['openai_base_url'] = base_url
+if api_key:
+    llm_override['openai_api_key'] = api_key
+if base_url:
+    llm_override['openai_base_url'] = base_url
```
The logic for handling different LLM providers has been removed, and it now hardcodes openai_api_key and openai_base_url. This is a regression from the previous implementation and will cause issues if a provider other than OpenAI (or an OpenAI-compatible one) is configured in the global settings (e.g., 'modelscope', 'anthropic'). The provider-specific logic should be restored to ensure the correct API key and base URL parameter names are used in the configuration override.
```python
if model:
    llm_override['model'] = model
provider = (llm_config.get('provider') or 'openai').strip()
llm_override['service'] = provider
if provider == 'modelscope':
    if api_key:
        llm_override['modelscope_api_key'] = api_key
    if base_url:
        llm_override['modelscope_base_url'] = base_url
elif provider == 'anthropic':
    if api_key:
        llm_override['anthropic_api_key'] = api_key
    if base_url:
        llm_override['anthropic_base_url'] = base_url
else:  # 'openai', 'dashscope', etc.
    if api_key:
        llm_override['openai_api_key'] = api_key
    if base_url:
        llm_override['openai_base_url'] = base_url
```
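The effect of the restored dispatch can be checked in isolation. The sketch below is hypothetical (the real code builds `llm_override` inline from an existing config dict, and `build_llm_override` is not a function in this PR), but it shows how the override key names must differ per provider:

```python
def build_llm_override(llm_config: dict) -> dict:
    """Map a global LLM config onto provider-specific override keys."""
    override = {}
    model = llm_config.get('model')
    api_key = llm_config.get('api_key')
    base_url = llm_config.get('base_url')
    if model:
        override['model'] = model
    provider = (llm_config.get('provider') or 'openai').strip()
    override['service'] = provider
    # Each provider expects its own key names in the override dict;
    # anything unrecognized falls back to the OpenAI-compatible keys.
    prefix = provider if provider in ('modelscope', 'anthropic') else 'openai'
    if api_key:
        override[f'{prefix}_api_key'] = api_key
    if base_url:
        override[f'{prefix}_base_url'] = base_url
    return override

# An Anthropic config must not produce openai_* keys.
cfg = {'provider': 'anthropic', 'model': 'claude-3-5-sonnet', 'api_key': 'sk-test'}
print(build_llm_override(cfg))
# → {'model': 'claude-3-5-sonnet', 'service': 'anthropic', 'anthropic_api_key': 'sk-test'}
```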
```python
class DeepResearchAgentConfig(BaseModel):
    model: Optional[str] = ''
    api_key: Optional[str] = ''
    base_url: Optional[str] = ''


class DeepResearchSearchConfig(BaseModel):
    summarizer_model: Optional[str] = ''
    summarizer_api_key: Optional[str] = ''
    summarizer_base_url: Optional[str] = ''
```
For consistency with other Pydantic models in this file (e.g., LLMConfig) and to make the intent clearer, it's better to use None as the default value for optional fields instead of an empty string ''. This explicitly indicates that a value has not been set, whereas '' could be a valid (though empty) value.
```diff
 class DeepResearchAgentConfig(BaseModel):
-    model: Optional[str] = ''
-    api_key: Optional[str] = ''
-    base_url: Optional[str] = ''
+    model: Optional[str] = None
+    api_key: Optional[str] = None
+    base_url: Optional[str] = None


 class DeepResearchSearchConfig(BaseModel):
-    summarizer_model: Optional[str] = ''
-    summarizer_api_key: Optional[str] = ''
-    summarizer_base_url: Optional[str] = ''
+    summarizer_model: Optional[str] = None
+    summarizer_api_key: Optional[str] = None
+    summarizer_base_url: Optional[str] = None
```
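The practical difference: with `None` defaults, unset fields can be filtered out at serialization time, so an empty override never clobbers a configured value. A minimal sketch (assuming the Pydantic v2 API):

```python
from typing import Optional
from pydantic import BaseModel

class DeepResearchAgentConfig(BaseModel):
    model: Optional[str] = None
    api_key: Optional[str] = None
    base_url: Optional[str] = None

cfg = DeepResearchAgentConfig(model='qwen-max')
# With None defaults, fields the user never set can be dropped cleanly
# when merging into the global config, instead of overwriting with ''.
print(cfg.model_dump(exclude_none=True))
# → {'model': 'qwen-max'}
```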
```python
'deep_research': {
    'researcher': {
        'model': '',
        'api_key': '',
        'base_url': ''
    },
    'searcher': {
        'model': '',
        'api_key': '',
        'base_url': ''
    },
    'reporter': {
        'model': '',
        'api_key': '',
        'base_url': ''
    },
    'search': {
        'summarizer_model': '',
        'summarizer_api_key': '',
        'summarizer_base_url': ''
    }
}
```
To align with the suggested change in api.py to use None as the default for optional config fields, the DEFAULT_CONFIG for deep_research should also be updated to use None instead of empty strings (''). This ensures consistency across the application.
```python
'deep_research': {
    'researcher': {
        'model': None,
        'api_key': None,
        'base_url': None
    },
    'searcher': {
        'model': None,
        'api_key': None,
        'base_url': None
    },
    'reporter': {
        'model': None,
        'api_key': None,
        'base_url': None
    },
    'search': {
        'summarizer_model': None,
        'summarizer_api_key': None,
        'summarizer_base_url': None
    }
},
```
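One way these `None` defaults pay off is at merge time. The helper below is a hypothetical sketch (not code from this PR) of a recursive merge that treats `None` and empty strings as "not set", so a blank field in the deep research config never overwrites a configured default:

```python
def merge_config(defaults: dict, overrides: dict) -> dict:
    """Recursively merge user overrides onto defaults, treating
    None and '' as 'not set' so they never clobber defaults."""
    merged = dict(defaults)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_config(merged[key], value)
        elif value not in (None, ''):
            merged[key] = value
    return merged

defaults = {'researcher': {'model': None, 'api_key': None}}
user = {'researcher': {'model': 'qwen-max', 'api_key': ''}}
print(merge_config(defaults, user))
# → {'researcher': {'model': 'qwen-max', 'api_key': None}}
```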
```tsx
        </TabPanel>

        {/* Deep Research Tab */}
        <TabPanel value={tabValue} index={3}>
```
The onChange handlers for the TextField components in the 'Deep Research' tab are very repetitive. To improve code maintainability and reduce duplication, consider creating a helper function to handle state updates for the deepResearchConfig.
For example, you could define a handler like this:
```tsx
const handleDeepResearchChange = (
  agent: keyof DeepResearchConfig,
  field: keyof DeepResearchAgentConfig | keyof DeepResearchSearchConfig,
  value: string
) => {
  setDeepResearchConfig((prev) => ({
    ...prev,
    [agent]: {
      ...(prev[agent] as any),
      [field]: value,
    },
  }));
};
```

Then, you can use it in your TextField components like this:
```tsx
<TextField
  ...
  onChange={(e) => handleDeepResearchChange('researcher', 'model', e.target.value)}
/>
```

This will make the code cleaner and easier to manage.
Change Summary
Related issue number
Checklist
Ran `pre-commit install` and `pre-commit run --all-files` before git commit, and passed lint check.