Score 2774 traceability #484
base: main
Changes from all commits
Commits: fd1f3dd, 59b1ced, 9250faa, 1677f66, 499d02f, e1dbfe7, ee7b148, b98669f
@@ -26,3 +26,4 @@ __pycache__/
# bug: This file is created in repo root on test discovery.
/consumer_test.log
.clwb
@@ -53,3 +53,43 @@ Limitations
- Partial properties will lead to no Testlink creation.
  If you want a test to be linked, please ensure all requirement properties are provided.
- Tests must be executed by Bazel first so `test.xml` files exist.

CI/CD Gate for Linkage Percentage
---------------------------------

To enforce traceability in CI:

1. Run tests.
2. Generate ``needs.json``.
Contributor comment: This is a bit flawed, as currently testcase needs are generated as external needs. They can be included if we change the needs.json build filter, which is possible, so external needs will also be in there. The exact syntax we would have to figure out, but something like this.
3. Execute the traceability checker.

.. code-block:: bash

   bazel test //...
Contributor comment: We need a way to "build" a test report, and then a coverage check can depend on that.
   bazel build //:needs_json
Contributor comment: Makes no sense to me to run this build manually. The coverage check can simply depend on it, and Bazel takes care of the execution.
   bazel run //scripts_bazel:traceability_coverage -- \
     --needs-json bazel-bin/needs_json/_build/needs/needs.json \
Contributor comment: Manually using files in bazel-bin is a red flag. Use Bazel dependencies instead.
     --min-req-code 100 \
     --min-req-test 100 \
     --min-req-fully-linked 100 \
     --min-tests-linked 100 \
     --fail-on-broken-test-refs
Contributor comment on lines +70 to +77: Does this work? Tough question. Is there a specific
The checker reports:

- Percentage of implemented requirements with ``source_code_link``
- Percentage of implemented requirements with ``testlink``
- Percentage of implemented requirements with both links (fully linked)
- Percentage of test cases linked to at least one requirement
- Broken testcase references to unknown requirement IDs
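As an illustration of how these figures relate to the needs.json contents, here is a minimal, hypothetical sketch of the percentage computation. It is not the actual traceability_coverage.py code; it only assumes the need fields used elsewhere in this PR (`implemented`, `source_code_link`, `testlink`, `partially_verifies`, `fully_verifies`), and the function name `summarize` is made up for illustration.

```python
# Illustrative sketch only; NOT the actual traceability_coverage.py code.
# Field names follow the needs.json payload used in the tests in this PR.

def summarize(needs: dict) -> dict:
    """Compute link percentages over a flat {need_id: need} mapping."""
    # "Implemented requirements" = tool_req needs with implemented YES/PARTIAL.
    reqs = [
        n for n in needs.values()
        if n.get("type") == "tool_req"
        and n.get("implemented") in ("YES", "PARTIAL")
    ]
    tests = [n for n in needs.values() if n.get("type") == "testcase"]

    def pct(part: int, total: int) -> float:
        # Treat an empty denominator as trivially satisfied.
        return 100.0 * part / total if total else 100.0

    with_code = sum(1 for n in reqs if n.get("source_code_link"))
    with_test = sum(1 for n in reqs if n.get("testlink"))
    fully = sum(
        1 for n in reqs if n.get("source_code_link") and n.get("testlink")
    )
    # A testcase counts as linked if it verifies at least one requirement,
    # partially or fully (broken references still count as "linked" here).
    linked_tests = sum(
        1 for n in tests
        if n.get("partially_verifies") or n.get("fully_verifies")
    )
    return {
        "req_code_pct": pct(with_code, len(reqs)),
        "req_test_pct": pct(with_test, len(reqs)),
        "req_fully_linked_pct": pct(fully, len(reqs)),
        "tests_linked_pct": pct(linked_tests, len(tests)),
    }
```

With the sample payload from the tests in this PR (two implemented requirements, one with a code link, one with a test link, none with both; two of three test cases linked), this yields 50%, 50%, 0%, and about 66.7%.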
To check only unit tests, filter testcase types:

.. code-block:: bash

   bazel run //scripts_bazel:traceability_coverage -- \
     --needs-json bazel-bin/needs_json/_build/needs/needs.json \
     --test-types unit-test

Use lower thresholds during rollout and tighten towards 100% over time.
Contributor comment: We can also achieve this "100%" thing with two further changes, at least in some repos: make both source_code_link and testlink mandatory options for each requirement (bar some exceptions), and make the pytest plugin or Sphinx extension throw an error if a test does not have properties and therefore isn't linked.
@@ -20,12 +20,9 @@ Overview
--------

.. needpie:: Requirements Status
-   :labels: not implemented, implemented but not tested, implemented and tested
+   :labels: not implemented, implemented but incomplete docs, fully documented
Contributor comment: If I read this correctly, this seems like it does something different than before?
   :colors: red, yellow, green

-   type == 'tool_req' and implemented == 'NO'
-   type == 'tool_req' and testlink == '' and (implemented == 'YES' or implemented == 'PARTIAL')
-   type == 'tool_req' and testlink != '' and (implemented == 'YES' or implemented == 'PARTIAL')
+   :filter-func: src.extensions.score_metamodel.checks.traceability_dashboard.pie_requirements_status(tool_req)

In Detail
---------
@@ -48,9 +45,7 @@ In Detail
.. needpie:: Requirements with Codelinks
   :labels: no codelink, with codelink
   :colors: red, green

-   type == 'tool_req' and source_code_link == ''
-   type == 'tool_req' and source_code_link != ''
+   :filter-func: src.extensions.score_metamodel.checks.traceability_dashboard.pie_requirements_with_code_links(tool_req)

.. grid-item-card::
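The ``:filter-func:`` options in this diff replace inline filter strings with Python functions. As a rough sketch of what such a function could look like, assuming the sphinx-needs filter-func convention (the function receives the list of needs plus a results list and appends one count per pie label, with directive arguments such as ``tool_req`` arriving via kwargs) — the real implementation lives in ``src/extensions/score_metamodel/checks/traceability_dashboard.py`` and may differ:

```python
# Hypothetical sketch of a needpie filter function; the real one lives in
# src/extensions/score_metamodel/checks/traceability_dashboard.py.
# Assumes the sphinx-needs filter-func convention: (needs, results, **kwargs).

def pie_requirements_with_code_links(needs, results, **kwargs):
    """Append counts for the labels: "no codelink", "with codelink"."""
    # An argument passed as ":filter-func: ...(tool_req)" is assumed to
    # arrive as kwargs["arg1"]; default to "tool_req" for this sketch.
    need_type = kwargs.get("arg1", "tool_req")
    reqs = [n for n in needs if n.get("type") == need_type]
    # One appended count per :labels: entry, in order.
    results.append(sum(1 for n in reqs if not n.get("source_code_link")))
    results.append(sum(1 for n in reqs if n.get("source_code_link")))
```

This mirrors the inline filters it replaces (``source_code_link == ''`` vs ``source_code_link != ''``) while keeping the counting logic in one testable place.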
@@ -5,6 +5,7 @@
| `bazel run //:docs` | Builds documentation |
| `bazel run //:docs_check` | Verifies documentation correctness |
| `bazel run //:docs_combo` | Builds combined documentation with all external dependencies included |
| `bazel run //scripts_bazel:traceability_coverage -- --needs-json bazel-bin/needs_json/needs.json --min-req-code 100 --min-req-test 100 --min-req-fully-linked 100 --min-tests-linked 100 --fail-on-broken-test-refs` | Calculates requirement/test traceability percentages and fails if thresholds are not met |
Contributor comment: These commands are intended to work in any score repo. So it should be:

Suggested change
| `bazel run //:live_preview` | Creates a live preview of the documentation viewable in a local server |
| `bazel run //:live_preview_combo_experimental` | Creates a live preview of the full documentation with all dependencies viewable in a local server |
| `bazel run //:ide_support` | Sets up a Python venv for esbonio (Remember to restart VS Code!) |
@@ -0,0 +1,233 @@
# *******************************************************************************
# Copyright (c) 2026 Contributors to the Eclipse Foundation
#
# See the NOTICE file(s) distributed with this work for additional
# information regarding copyright ownership.
#
# This program and the accompanying materials are made available under the
# terms of the Apache License Version 2.0 which is available at
# https://www.apache.org/licenses/LICENSE-2.0
#
# SPDX-License-Identifier: Apache-2.0
# *******************************************************************************

"""Tests for traceability_coverage.py."""

import json
import os
import subprocess
import sys
from pathlib import Path

_MY_PATH = Path(__file__).parent


def _write_needs_json(tmp_path: Path) -> Path:
    needs_json = tmp_path / "needs.json"
    payload = {
        "current_version": "main",
Contributor comment: I don't think
        "versions": {
            "main": {
                "needs": {
                    "REQ_1": {
                        "id": "REQ_1",
                        "type": "tool_req",
                        "implemented": "YES",
                        "source_code_link": "src/foo.py:10",
                        "testlink": "",
                    },
                    "REQ_2": {
                        "id": "REQ_2",
                        "type": "tool_req",
                        "implemented": "PARTIAL",
                        "source_code_link": "",
                        "testlink": "tests/test_foo.py::test_bar",
                    },
                    "REQ_3": {
                        "id": "REQ_3",
                        "type": "tool_req",
                        "implemented": "NO",
                        "source_code_link": "",
                        "testlink": "",
                    },
                    "TC_1": {
                        "id": "TC_1",
                        "type": "testcase",
                        "partially_verifies": "REQ_1, REQ_2",
                        "fully_verifies": "",
                    },
                    "TC_2": {
                        "id": "TC_2",
                        "type": "testcase",
                        "partially_verifies": "",
                        "fully_verifies": "",
                    },
Contributor comment on lines +62 to +63: This would not be allowed. If a test does not have the right properties (either fully or partially verifies, among other things), the testcase need will not be generated.
                    "TC_3": {
                        "id": "TC_3",
                        "type": "testcase",
                        "partially_verifies": "",
                        "fully_verifies": "REQ_UNKNOWN",
                    },
                }
            }
        },
    }
    needs_json.write_text(json.dumps(payload), encoding="utf-8")
    return needs_json


def test_traceability_coverage_thresholds_pass(tmp_path: Path) -> None:
    needs_json = _write_needs_json(tmp_path)
    output_json = tmp_path / "summary.json"

    result = subprocess.run(
        [
            sys.executable,
            _MY_PATH.parent / "traceability_coverage.py",
            "--needs-json",
            str(needs_json),
            "--min-req-code",
            "50",
            "--min-req-test",
            "50",
            "--min-req-fully-linked",
            "0",
            "--min-tests-linked",
            "60",
            "--json-output",
            str(output_json),
        ],
        capture_output=True,
        text=True,
    )

    assert result.returncode == 0
    assert "Threshold check passed." in result.stdout
    assert output_json.exists()

    summary = json.loads(output_json.read_text(encoding="utf-8"))
    assert summary["requirements"]["total"] == 2
    assert summary["requirements"]["with_code_link"] == 1
    assert summary["requirements"]["with_test_link"] == 1
    assert summary["requirements"]["fully_linked"] == 0
    assert summary["tests"]["total"] == 3
    assert summary["tests"]["linked_to_requirements"] == 2
    assert len(summary["tests"]["broken_references"]) == 1


def test_traceability_coverage_thresholds_fail(tmp_path: Path) -> None:
    needs_json = _write_needs_json(tmp_path)

    result = subprocess.run(
        [
            sys.executable,
            _MY_PATH.parent / "traceability_coverage.py",
            "--needs-json",
            str(needs_json),
            "--min-req-code",
            "80",
            "--min-req-test",
            "80",
            "--min-req-fully-linked",
            "80",
            "--min-tests-linked",
            "80",
        ],
        capture_output=True,
        text=True,
    )

    assert result.returncode == 2
    assert "Threshold check failed:" in result.stdout


def test_traceability_coverage_fails_on_broken_refs(tmp_path: Path) -> None:
    needs_json = _write_needs_json(tmp_path)

    result = subprocess.run(
        [
            sys.executable,
            _MY_PATH.parent / "traceability_coverage.py",
            "--needs-json",
            str(needs_json),
            "--min-req-code",
            "0",
            "--min-req-test",
            "0",
            "--min-req-fully-linked",
            "0",
            "--min-tests-linked",
            "0",
            "--fail-on-broken-test-refs",
        ],
        capture_output=True,
        text=True,
    )

    assert result.returncode == 2
    assert "broken testcase references found:" in result.stdout


def test_traceability_coverage_prints_unlinked_requirements(tmp_path: Path) -> None:
    needs_json = _write_needs_json(tmp_path)

    result = subprocess.run(
        [
            sys.executable,
            _MY_PATH.parent / "traceability_coverage.py",
            "--needs-json",
            str(needs_json),
            "--min-req-code",
            "0",
            "--min-req-test",
            "0",
            "--min-req-fully-linked",
            "0",
            "--min-tests-linked",
            "0",
            "--print-unlinked-requirements",
        ],
        capture_output=True,
        text=True,
    )

    assert result.returncode == 0
    assert "Unlinked requirement details:" in result.stdout
    assert "Missing source_code_link: REQ_2" in result.stdout
    assert "Missing testlink: REQ_1" in result.stdout
    assert "Not fully linked: REQ_1, REQ_2" in result.stdout


def test_traceability_coverage_accepts_workspace_relative_needs_json(
    tmp_path: Path,
) -> None:
    workspace = tmp_path / "workspace"
    workspace.mkdir()
    needs_json = _write_needs_json(workspace)

    env = dict(os.environ)
    env["BUILD_WORKSPACE_DIRECTORY"] = str(workspace)

    result = subprocess.run(
        [
            sys.executable,
            _MY_PATH.parent / "traceability_coverage.py",
            "--needs-json",
            "needs.json",
            "--min-req-code",
            "0",
            "--min-req-test",
            "0",
            "--min-req-fully-linked",
            "0",
            "--min-tests-linked",
            "0",
        ],
        capture_output=True,
        text=True,
        cwd=tmp_path,
        env=env,
    )

    assert result.returncode == 0
    assert f"Traceability input: {needs_json}" in result.stdout
Contributor comment: What are clwb files?