Conversation
Thank you for your contribution to Dash! 🎉 This PR is exempt from requiring a linked issue due to its labels.
T4rk1n left a comment
Looks good, but the tests need to be in the style of the rest of the repo:
- A single function per test, no class unless necessary.
- A code for the tests, e.g. `test_mscp00x_*` for tests in mcp schema component proptypes (see the sketch after this list).
- More test files, for separation and splitting.
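A test following that convention might look like this sketch (the code `mscp001`, the function name, and the assertion are illustrative, not taken from this PR):

```python
# Illustrative only: a plain single-function test in the test_mscp00x_*
# (mcp schema component proptypes) series; the body is a stand-in.
def test_mscp001_string_hint_gives_string_schema():
    schema = {"type": "string"}  # placeholder for real schema derivation
    assert schema["type"] == "string"
```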
```yaml
- name: Run lint
  env:
    PYLINT_EXTRA_ARGS: ${{ matrix.python-version == '3.8' && '--ignored-modules=mcp' || '' }}
```
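This conditional expression passes `--ignored-modules=mcp` to pylint only on Python 3.8, presumably because the `mcp` package (which requires Python >= 3.10) cannot be installed there.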
Maybe out of scope for this PR, but are we really running 3.8? The min should be 3.9.
Yeah, all the CI tests still run on 3.8.
We should definitely consider bumping the min version. The mcp package requires >=3.10, so if we made that the min version, we would be able to run all tests without any conditions.
I also considered that out of scope for this PR, but maybe it's worth revisiting?
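In the meantime, a guard like the sketch below (standard pytest `skipif`; the marker name and the test are illustrative, not from this PR) keeps mcp-dependent tests from failing on 3.8:

```python
import sys

import pytest

# The mcp package requires Python >= 3.10, so skip its tests elsewhere.
requires_mcp = pytest.mark.skipif(
    sys.version_info < (3, 10),
    reason="the mcp package requires Python >= 3.10",
)

@requires_mcp
def test_mscp002_tools_listing():
    ...
```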
```python
tool = _user_tool(_tools_list(app))
assert "Your Name" in _desc_for(tool)
```

```python
def test_html_for_not_adjacent(self):
```
These tests need a code and number like the rest of the codebase. It makes a test easier to find, and you can also easily filter which tests run.
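As a sketch of the requested change (the code `mscp003` is made up), the method above would become a numbered module-level function:

```python
# The existing test, renamed with a code and moved out of its class;
# the body is elided because the diff does not show it.
def test_mscp003_html_for_not_adjacent():
    ...
```

A name like this can then be selected on its own with `pytest -k mscp003`.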
```python
# ---------------------------------------------------------------------------


class TestDatePickerDescriptions:
```
I'd rather see different files and simple functions for the tests instead of classes as sections.
More files == more even splitting in the CI.
…managed centrally
Summary

This PR turns Dash callbacks into fully described MCP tools. Each tool has four main parts:

Tool description (`descriptions/`)

A human-readable summary of what the callback does. Built from pluggable sources that each contribute lines:

- `description_outputs.py` — semantic summary of what the callback returns (e.g. "Returns chart/visualization data" for a `Graph.figure` output)
- `description_docstring.py` — the callback function's Python docstring

Each source extracts what it needs from a given `CallbackAdapter` in order to produce text descriptions. New sources can be added by appending to the `_SOURCES` list, as sketched below.
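A new source might look like the following; the source signature and the `CallbackAdapter` attribute used here are assumptions based on the description above, not the PR's actual API:

```python
# Hypothetical description source: takes a CallbackAdapter and returns
# zero or more description lines.
def description_from_module(adapter):
    """Contribute the callback's defining module as a description line."""
    module = getattr(adapter, "module_name", None)  # assumed attribute
    return [f"Defined in {module}"] if module else []

# Registered by appending to the pluggable-source list:
# _SOURCES.append(description_from_module)
```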
Input schema (`input_schemas/`)

JSON Schema that describes valid inputs for each callback parameter. Each schema is derived from:

- type hints on the callback (`def callback(metric: str):` → `{"type": "string"}`)
- the component's prop types (`Input("dropdown", "value")` → `{"type": {"anyOf": ["string", "number", "boolean", null]}}`)
- multi-value components (`multi=True`)
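For the type-hint part alone, the derivation is conceptually like this self-contained sketch (the real code in `input_schemas/` also folds in prop types and `multi=True` handling; the helper names here are illustrative):

```python
import typing

# Minimal mapping from Python type hints to JSON Schema fragments.
_HINT_TO_SCHEMA = {
    str: {"type": "string"},
    float: {"type": "number"},
    bool: {"type": "boolean"},
}

def schema_for_param(func, param_name):
    """Derive a JSON Schema fragment for one callback parameter."""
    hint = typing.get_type_hints(func).get(param_name)
    return _HINT_TO_SCHEMA.get(hint, {})

def callback(metric: str):
    return metric

assert schema_for_param(callback, "metric") == {"type": "string"}
```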
Input description (`input_descriptions/`)

Each parameter also gets a text description assembled from various sources:

- an `html.Label` that is associated with the input
- component props (`options`, `min`/`max`, `value`, etc.) and any chained callbacks that set values for this input
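For example, in a layout like the following (plain Dash API; the ids and values are made up), the label text, the `options`, and the initial `value` would all be available to these sources:

```python
from dash import Dash, dcc, html

app = Dash(__name__)
app.layout = html.Div(
    [
        # The label is tied to the input below via htmlFor.
        html.Label("Metric", htmlFor="metric-dropdown"),
        dcc.Dropdown(
            id="metric-dropdown",
            options=["revenue", "users"],
            value="revenue",
        ),
    ]
)
```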
Output schema (`output_schemas/`)

JSON Schema for the tool's `outputSchema` field, derived from the component prop types used by `dash-renderer`.

Manual testing
You can see the JSON that will be presented to LLMs by testing manually: