Conversation
License Check Results 🚀 The license check job ran with the Bazel command: `bazel run --lockfile_mode=error //src:license-check`
The created documentation from the pull request is available at: docu-html
AlexanderLanin
left a comment
see discussion in #485
closed #485 @AlexanderLanin can you please check again?
MaximilianSoerenPollak
left a comment
Some comments for things I noticed.
Will take another look after lunch.
Overall looks like genuine good work that we can build upon.
        if isinstance(item, str) and item.strip():
            out.append(item.strip())
    return out
return []
Since you are already inside Sphinx (due to it being inside the docs-as-code extensions), you could also just get all the sphinx-needs objects from Sphinx itself.
You can do so via:
needs_data = SphinxNeedsData(app.env)
need_ids = {need["id"] for need in needs_data.get_needs_view().values()}
if you only wanted to have the IDs in the end.
Though you then need to have this as a full extension. If you want, I can convert this in a future PR to a full extension to enable this & other things.
implemented = str(need.get("implemented", "")).upper().strip()
if implemented not in {"YES", "PARTIAL"}:
Implemented is tool_req specific. Unsure if this should be hidden / hardcoded in the code?
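One way to avoid hardcoding this could look roughly like the sketch below (illustrative only, not from the PR): the status field name and the accepted values become parameters, so a tool_req-specific convention stays out of the core logic.

```python
from typing import Any, Mapping


def is_implemented(
    need: Mapping[str, Any],
    status_field: str = "implemented",
    accepted: frozenset[str] = frozenset({"YES", "PARTIAL"}),
) -> bool:
    """Return True if the need's status field holds one of the accepted values.

    `status_field` and `accepted` are illustrative defaults; in a real
    extension they would come from configuration rather than code.
    """
    value = str(need.get(status_field, "")).upper().strip()
    return value in accepted
```

A caller with a different metamodel could then pass e.g. `status_field="status"` without touching the function.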
def filter_requirements(
    all_needs: Sequence[Any],
    requirement_types: set[str],
    include_not_implemented: bool,
) -> list[Any]:
    """Extract requirements by type and implementation state."""
    requirements: list[dict[str, Any]] = []
    for need in all_needs:
        need_type = str(need.get("type", "")).strip()
        if need_type not in requirement_types:
            continue
        if not include_not_implemented:
            implemented = str(need.get("implemented", "")).upper().strip()
            if implemented not in {"YES", "PARTIAL"}:
                continue
        requirements.append(need)
    return requirements
If this was a sphinx-extension, you could also make use of sphinx-needs internal functions etc. here to get the needs list / objects for filtering.
As mentioned above, I can take this PR (after merge) and convert it in a future one.
def calculate_requirement_metrics(
    requirements: Sequence[Any],
) -> dict[str, Any]:
    """Calculate requirement traceability statistics for links and completeness."""
    total = len(requirements)
    with_code = sum(
        1 for need in requirements if is_non_empty(need.get("source_code_link"))
    )
    with_test = sum(1 for need in requirements if is_non_empty(need.get("testlink")))
    fully_linked = sum(
        1
        for need in requirements
        if is_non_empty(need.get("source_code_link"))
        and is_non_empty(need.get("testlink"))
    )

    missing_code_ids = [
        str(need.get("id", ""))
        for need in requirements
        if not is_non_empty(need.get("source_code_link")) and need.get("id")
    ]
    missing_test_ids = [
        str(need.get("id", ""))
        for need in requirements
        if not is_non_empty(need.get("testlink")) and need.get("id")
    ]
    not_fully_linked_ids = [
        str(need.get("id", ""))
        for need in requirements
        if (
            (
                not is_non_empty(need.get("source_code_link"))
                or not is_non_empty(need.get("testlink"))
            )
            and need.get("id")
        )
    ]
Can be done in a future PR:
I would like to have this at least a tiny bit more efficient.
Loops are one of the least efficient things in Python; even though this already greatly improves things through the use of comprehensions, I think it should be possible with just one iteration through all requirements instead of six.
But as said, a future PR improvement; for me currently not a blocker, as we don't have 10k+ reqs to look through.
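For reference, a single-pass variant could look roughly like this (a sketch reusing the PR's need-dict shape; `is_non_empty` is re-stated in simplified form so the snippet is self-contained):

```python
from typing import Any, Mapping, Sequence


def is_non_empty(value: Any) -> bool:
    # Simplified stand-in for the PR's helper: a non-blank string counts as set.
    return isinstance(value, str) and bool(value.strip())


def calculate_requirement_metrics_single_pass(
    requirements: Sequence[Mapping[str, Any]],
) -> dict[str, Any]:
    """Same numbers as the reviewed function, but one loop instead of six."""
    with_code = with_test = fully_linked = 0
    missing_code_ids: list[str] = []
    missing_test_ids: list[str] = []
    not_fully_linked_ids: list[str] = []
    for need in requirements:
        has_code = is_non_empty(need.get("source_code_link"))
        has_test = is_non_empty(need.get("testlink"))
        with_code += has_code
        with_test += has_test
        fully_linked += has_code and has_test
        need_id = need.get("id")
        if need_id:  # only report needs that actually have an ID
            if not has_code:
                missing_code_ids.append(str(need_id))
            if not has_test:
                missing_test_ids.append(str(need_id))
            if not (has_code and has_test):
                not_fully_linked_ids.append(str(need_id))
    return {
        "total": len(requirements),
        "with_code": with_code,
        "with_test": with_test,
        "fully_linked": fully_linked,
        "missing_code_ids": missing_code_ids,
        "missing_test_ids": missing_test_ids,
        "not_fully_linked_ids": not_fully_linked_ids,
    }
```

The per-need link checks are computed once and reused for both the counters and the ID lists, which is where the duplicated work in the six-comprehension version sits.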
   --needs-json bazel-bin/needs_json/_build/needs/needs.json \
   --test-types unit-test

Use lower thresholds during rollout and tighten towards 100% over time.
We can also achieve this '100%' thing with two further changes, at least in some repos:
1. Make both source_code_link & testlink mandatory options for each requirement (bar some exceptions).
2. Make the pytest plugin or sphinx extension throw an error if a test does not have the linking properties => therefore isn't linked.
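The second point could be sketched as a small check over the collected testcase needs. This is illustrative only: the property names `partially_verifies`/`fully_verifies` are taken from elsewhere in this thread, and the error type is an assumption.

```python
from typing import Any, Mapping, Sequence

# Assumed property names a testcase need uses to link to requirements.
LINK_PROPERTIES = ("partially_verifies", "fully_verifies")


def check_testcases_linked(testcases: Sequence[Mapping[str, Any]]) -> None:
    """Raise if any testcase need carries none of the link properties."""
    unlinked = [
        str(tc.get("id", "<no id>"))
        for tc in testcases
        if not any(str(tc.get(prop, "")).strip() for prop in LINK_PROPERTIES)
    ]
    if unlinked:
        raise ValueError(f"Testcases without requirement links: {unlinked}")
```

A pytest plugin or sphinx extension would call something like this at collection/build time so the failure surfaces in CI rather than in the coverage report.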
.. needpie:: Requirements Status
   :labels: not implemented, implemented but not tested, implemented and tested
   :labels: not implemented, implemented but incomplete docs, fully documented
If I read this correctly, this seems like it does something different than before, statistics-wise?
def _write_needs_json(tmp_path: Path) -> Path:
    needs_json = tmp_path / "needs.json"
    payload = {
        "current_version": "main",
I don't think current_version can be a non-ID (in a realistic scenario).
Though I don't think it matters.
"partially_verifies": "",
"fully_verifies": "",
This would not be allowed.
If a test does not have the right properties (either fully or partially verifies, & other things), the testcase need will not be generated.
To enforce traceability in CI:

1. Run tests.
2. Generate ``needs.json``.
This is a bit flawed, as currently testcase needs are generated as external needs, and those are excluded from the needs.json.
They can be included if we change the needs.json build filter, which is possible, so external needs will also be in there.
It's set like this in conf.py, or probably in the score_metamodel extension:
needs_builder_filter = '(is_external==True and need_type==testcase)'
The exact syntax we would have to figure out, but something like this.
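For what it's worth, sphinx-needs does expose a `needs_builder_filter` option (default `'is_external == False'`), so a conf.py change along these lines might work. The exact filter expression below is an assumption and would need to be verified against the metamodel:

```python
# conf.py (sketch): also export external testcase needs into needs.json.
# The default filter 'is_external == False' drops all external needs; this
# variant keeps local needs and additionally lets testcase needs through.
# NOTE: 'type == "testcase"' is an assumed condition, not verified syntax.
needs_builder_filter = 'is_external == False or type == "testcase"'
```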
| `bazel run //:docs` | Builds documentation |
| `bazel run //:docs_check` | Verifies documentation correctness |
| `bazel run //:docs_combo` | Builds combined documentation with all external dependencies included |
| `bazel run //scripts_bazel:traceability_coverage -- --needs-json bazel-bin/needs_json/needs.json --min-req-code 100 --min-req-test 100 --min-req-fully-linked 100 --min-tests-linked 100 --fail-on-broken-test-refs` | Calculates requirement/test traceability percentages and fails if thresholds are not met |
These commands are intended to work in any score repo. So it should be:
| `bazel run @score_docs_as_code//scripts_bazel:traceability_coverage -- --needs-json bazel-bin/needs_json/needs.json --min-req-code 100 --min-req-test 100 --min-req-fully-linked 100 --min-tests-linked 100 --fail-on-broken-test-refs` | Calculates requirement/test traceability percentages and fails if thresholds are not met |
.. code-block:: bash

   bazel test //...
   bazel build //:needs_json
Makes no sense to me to run this build manually. The coverage check can simply depend on it and Bazel takes care of the execution.
.. code-block:: bash

   bazel test //...
We need a way to "build" a test report and then a coverage check can depend on that.
bazel test //...
bazel build //:needs_json
bazel run //scripts_bazel:traceability_coverage -- \
    --needs-json bazel-bin/needs_json/_build/needs/needs.json \
Manually using files in bazel-bin is a red flag. Use Bazel dependencies instead.
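A sketch of what that could look like in a BUILD file (target and file names are assumptions; the point is the `data` dependency plus a Bazel-resolved path instead of a hand-written bazel-bin path):

```starlark
# BUILD (sketch): the coverage check depends on the needs.json target,
# so `bazel run` builds it first and resolves its path automatically.
py_binary(
    name = "traceability_coverage",
    srcs = ["traceability_coverage.py"],
    data = ["//:needs_json"],  # assumed label of the needs.json build
    args = [
        "--needs-json",
        "$(rootpath //:needs_json)",  # expanded by Bazel, no bazel-bin path
    ],
)
```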
# SPDX-License-Identifier: Apache-2.0
# *******************************************************************************

"""Compute requirement and test traceability coverage from sphinx-needs output."""
I believe this script, a CI quality gate, (and related files) should actually be in tooling and not in docs-as-code.
# bug: This file is created in repo root on test discovery.
/consumer_test.log
.clwb
Also a general thing: if there is any 'substantial' or partial generation from AI, please add the disclaimer in the header, with the model name, meaning 'GPT' or 'Copilot' or 'Claude' etc.
Co-authored-by: Maximilian Sören Pollak <maximilian.pollak@qorix.com> Signed-off-by: Frank Scholter Peres(MBTI) <145544737+FScholPer@users.noreply.github.com>
I will add. But we should think about using tools like https://github.com/git-ai-project/git-ai in general
Might be worth a look to see if we could use that. I can bring it up in infra.
@FScholPer I would propose that I take the ideas & proposals you have made in this PR and re-implement them into Docs-As-Code, so they fit better architecturally and make sense with how testcases etc. actually behave & are created inside the DaC frameworks that deal with them. I fear it would be a lot more work to merge this PR and then rebuild / change it quite heavily later when we want to integrate it. Do you think this would make sense and is a good way to move forward with this issue? @AlexanderLanin @a-zw please also advise here what, in your opinion, would be a good approach.
-> see eclipse-score/score#2774
Frank Scholter Peres frank.scholter_peres@mercedes-benz.com, Mercedes-Benz Tech Innovation GmbH