1 change: 1 addition & 0 deletions .gitignore
@@ -26,3 +26,4 @@ __pycache__/

# bug: This file is created in repo root on test discovery.
/consumer_test.log
.clwb
What are clwb files?

40 changes: 40 additions & 0 deletions docs/how-to/test_to_doc_links.rst
@@ -53,3 +53,43 @@ Limitations
- Partial properties will lead to no Testlink creation.
If you want a test to be linked, please ensure all requirement properties are provided.
- Tests must be executed by Bazel first so ``test.xml`` files exist.


CI/CD Gate for Linkage Percentage
---------------------------------

FScholPer marked this conversation as resolved.

To enforce traceability in CI:

1. Run tests.
2. Generate ``needs.json``.
This is a bit flawed, as currently testcase needs are generated as external needs, and those are excluded from the needs.json.

They can be included if we change the needs.json build filter, which is possible, so external needs will also be in there. It would be set in conf.py, or probably in the score_metamodel extension:

needs_builder_filter = '(is_external==True and need_type==testcase)'

The exact syntax we would have to figure out, but something like this.
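A hedged sketch of that conf.py change (the filter syntax here is a guess, as the comment notes — sphinx-needs excludes external needs from needs.json by default, so the filter would need to admit them alongside the regular needs):

```python
# conf.py — sketch only: keep the default non-external needs in the
# needs.json build AND additionally include external testcase needs.
# The exact sphinx-needs filter-string syntax is unverified.
needs_builder_filter = "is_external == False or type == 'testcase'"
```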

3. Execute the traceability checker.

.. code-block:: bash

bazel test //...
We need a way to "build" a test report and then a coverage check can depend on that.

bazel build //:needs_json
Makes no sense to me to run this build manually. The coverage check can simply depend on it and Bazel takes care of the execution.

bazel run //scripts_bazel:traceability_coverage -- \
--needs-json bazel-bin/needs_json/_build/needs/needs.json \
Manually using files in bazel-bin is a red flag. Use Bazel dependencies instead.

--min-req-code 100 \
--min-req-test 100 \
--min-req-fully-linked 100 \
--min-tests-linked 100 \
--fail-on-broken-test-refs
Comment on lines +70 to +77
Does this work? I'm unsure if all our extensions run during bazel build //:needs_json; if that is the case, then yes, this could work.

Tough question, though: is there a specific want to be able to execute this as a standalone, so to speak? Or would it be fine just running alongside doc creation?


The checker reports:

- Percentage of implemented requirements with ``source_code_link``
- Percentage of implemented requirements with ``testlink``
- Percentage of implemented requirements with both links (fully linked)
- Percentage of test cases linked to at least one requirement
- Broken testcase references to unknown requirement IDs
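For orientation, the reported percentages can be sketched roughly as follows. This is illustrative only: the field names (``source_code_link``, ``testlink``, ``partially_verifies``, ``fully_verifies``) mirror the sample needs.json used in the tests in this PR, not necessarily the real script's logic.

```python
import json


def summarize(needs_json_path):
    """Sketch of the metrics the traceability checker reports.

    Illustrative only: field names follow the sample needs.json in the
    tests of this PR; the real traceability_coverage.py may differ.
    """
    with open(needs_json_path, encoding="utf-8") as f:
        data = json.load(f)
    needs = data["versions"][data["current_version"]]["needs"]

    # "Implemented" requirements: tool_req needs not marked implemented == NO.
    reqs = [n for n in needs.values()
            if n.get("type") == "tool_req" and n.get("implemented") != "NO"]
    tests = [n for n in needs.values() if n.get("type") == "testcase"]
    req_ids = {n["id"] for n in needs.values() if n.get("type") == "tool_req"}

    def pct(part, whole):
        return 100.0 * part / whole if whole else 100.0

    def refs(test):
        # Requirement IDs referenced by a testcase, from both link fields.
        raw = test.get("partially_verifies", "") + "," + test.get("fully_verifies", "")
        return [r.strip() for r in raw.split(",") if r.strip()]

    linked = sum(1 for t in tests if refs(t))
    broken = [r for t in tests for r in refs(t) if r not in req_ids]
    return {
        "req_code_pct": pct(sum(1 for r in reqs if r.get("source_code_link")),
                            len(reqs)),
        "req_test_pct": pct(sum(1 for r in reqs if r.get("testlink")),
                            len(reqs)),
        "req_fully_pct": pct(sum(1 for r in reqs
                                 if r.get("source_code_link") and r.get("testlink")),
                             len(reqs)),
        "tests_linked_pct": pct(linked, len(tests)),
        "broken_refs": broken,
    }
```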

To check only unit tests, filter testcase types:

.. code-block:: bash

bazel run //scripts_bazel:traceability_coverage -- \
--needs-json bazel-bin/needs_json/_build/needs/needs.json \
--test-types unit-test

Use lower thresholds during rollout and tighten towards 100% over time.
We can also achieve this '100%' goal with two further changes, at least in some repos:

Make both source_code_link and testlink mandatory options for each requirement (bar some exceptions).

Make the pytest plugin or sphinx extension throw an error if a test does not have the required properties and therefore isn't linked.
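A minimal sketch of what such a mandatory-option check could look like, assuming a plain dict of needs rather than the real score_metamodel check API (the function name and signature here are hypothetical):

```python
def check_mandatory_links(needs, exceptions=frozenset()):
    """Return error strings for tool_req needs missing the (hypothetically
    mandatory) source_code_link or testlink options.

    Sketch only: `needs` is assumed to be a dict of id -> option dict,
    not the real score_metamodel check interface.
    """
    errors = []
    for need_id, need in needs.items():
        if need.get("type") != "tool_req" or need_id in exceptions:
            continue
        for option in ("source_code_link", "testlink"):
            if not need.get(option):
                errors.append(f"{need_id}: missing mandatory option '{option}'")
    return errors
```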

11 changes: 3 additions & 8 deletions docs/internals/requirements/implementation_state.rst
@@ -20,12 +20,9 @@ Overview
--------

.. needpie:: Requirements Status
:labels: not implemented, implemented but not tested, implemented and tested
:labels: not implemented, implemented but incomplete docs, fully documented
If I read this correctly, this seems to do something different than before, statistics-wise?

:colors: red,yellow, green

type == 'tool_req' and implemented == 'NO'
type == 'tool_req' and testlink == '' and (implemented == 'YES' or implemented == 'PARTIAL')
type == 'tool_req' and testlink != '' and (implemented == 'YES' or implemented == 'PARTIAL')
:filter-func: src.extensions.score_metamodel.checks.traceability_dashboard.pie_requirements_status(tool_req)

In Detail
---------
@@ -48,9 +45,7 @@ In Detail
.. needpie:: Requirements with Codelinks
:labels: no codelink, with codelink
:colors: red, green

type == 'tool_req' and source_code_link == ''
type == 'tool_req' and source_code_link != ''
:filter-func: src.extensions.score_metamodel.checks.traceability_dashboard.pie_requirements_with_code_links(tool_req)

.. grid-item-card::

1 change: 1 addition & 0 deletions docs/reference/commands.md
@@ -5,6 +5,7 @@
| `bazel run //:docs` | Builds documentation |
| `bazel run //:docs_check` | Verifies documentation correctness |
| `bazel run //:docs_combo` | Builds combined documentation with all external dependencies included |
| `bazel run //scripts_bazel:traceability_coverage -- --needs-json bazel-bin/needs_json/needs.json --min-req-code 100 --min-req-test 100 --min-req-fully-linked 100 --min-tests-linked 100 --fail-on-broken-test-refs` | Calculates requirement/test traceability percentages and fails if thresholds are not met |
These commands are intended to work in any score repo. So it should be:

Suggested change
| `bazel run //scripts_bazel:traceability_coverage -- --needs-json bazel-bin/needs_json/needs.json --min-req-code 100 --min-req-test 100 --min-req-fully-linked 100 --min-tests-linked 100 --fail-on-broken-test-refs` | Calculates requirement/test traceability percentages and fails if thresholds are not met |
| `bazel run @score_docs_as_code//scripts_bazel:traceability_coverage -- --needs-json bazel-bin/needs_json/needs.json --min-req-code 100 --min-req-test 100 --min-req-fully-linked 100 --min-tests-linked 100 --fail-on-broken-test-refs` | Calculates requirement/test traceability percentages and fails if thresholds are not met |

| `bazel run //:live_preview` | Creates a live_preview of the documentation viewable in a local server |
| `bazel run //:live_preview_combo_experimental` | Creates a live_preview of the full documentation with all dependencies viewable in a local server |
| `bazel run //:ide_support` | Sets up a Python venv for esbonio (Remember to restart VS Code!) |
8 changes: 8 additions & 0 deletions scripts_bazel/BUILD
@@ -37,3 +37,11 @@ py_binary(
main = "merge_sourcelinks.py",
visibility = ["//visibility:public"],
)

py_binary(
name = "traceability_coverage",
srcs = ["traceability_coverage.py"],
main = "traceability_coverage.py",
visibility = ["//visibility:public"],
deps = all_requirements + ["//src/extensions/score_metamodel:score_metamodel"],
)
9 changes: 9 additions & 0 deletions scripts_bazel/tests/BUILD
@@ -32,3 +32,12 @@ score_pytest(
] + all_requirements,
pytest_config = "//:pyproject.toml",
)

score_pytest(
name = "traceability_coverage_test",
srcs = ["traceability_coverage_test.py"],
deps = [
"//scripts_bazel:traceability_coverage",
] + all_requirements,
pytest_config = "//:pyproject.toml",
)
233 changes: 233 additions & 0 deletions scripts_bazel/tests/traceability_coverage_test.py
@@ -0,0 +1,233 @@
# *******************************************************************************
# Copyright (c) 2026 Contributors to the Eclipse Foundation
#
# See the NOTICE file(s) distributed with this work for additional
# information regarding copyright ownership.
#
# This program and the accompanying materials are made available under the
# terms of the Apache License Version 2.0 which is available at
# https://www.apache.org/licenses/LICENSE-2.0
#
# SPDX-License-Identifier: Apache-2.0
# *******************************************************************************

"""Tests for traceability_coverage.py."""

import json
import os
import subprocess
import sys
from pathlib import Path

_MY_PATH = Path(__file__).parent


def _write_needs_json(tmp_path: Path) -> Path:
needs_json = tmp_path / "needs.json"
payload = {
"current_version": "main",
I don't think current_version can be a non-ID in a realistic scenario, though I don't think it matters here.

"versions": {
"main": {
"needs": {
"REQ_1": {
"id": "REQ_1",
"type": "tool_req",
"implemented": "YES",
"source_code_link": "src/foo.py:10",
"testlink": "",
},
"REQ_2": {
"id": "REQ_2",
"type": "tool_req",
"implemented": "PARTIAL",
"source_code_link": "",
"testlink": "tests/test_foo.py::test_bar",
},
"REQ_3": {
"id": "REQ_3",
"type": "tool_req",
"implemented": "NO",
"source_code_link": "",
"testlink": "",
},
"TC_1": {
"id": "TC_1",
"type": "testcase",
"partially_verifies": "REQ_1, REQ_2",
"fully_verifies": "",
},
"TC_2": {
"id": "TC_2",
"type": "testcase",
"partially_verifies": "",
"fully_verifies": "",
Comment on lines +62 to +63
This would not be allowed.

If a test does not have the right properties (either fully or partially verifies, among other things), the testcase need will not be generated.

},
"TC_3": {
"id": "TC_3",
"type": "testcase",
"partially_verifies": "",
"fully_verifies": "REQ_UNKNOWN",
},
}
}
},
}
needs_json.write_text(json.dumps(payload), encoding="utf-8")
return needs_json


def test_traceability_coverage_thresholds_pass(tmp_path: Path) -> None:
needs_json = _write_needs_json(tmp_path)
output_json = tmp_path / "summary.json"

result = subprocess.run(
[
sys.executable,
_MY_PATH.parent / "traceability_coverage.py",
"--needs-json",
str(needs_json),
"--min-req-code",
"50",
"--min-req-test",
"50",
"--min-req-fully-linked",
"0",
"--min-tests-linked",
"60",
"--json-output",
str(output_json),
],
capture_output=True,
text=True,
)

assert result.returncode == 0
assert "Threshold check passed." in result.stdout
assert output_json.exists()

summary = json.loads(output_json.read_text(encoding="utf-8"))
assert summary["requirements"]["total"] == 2
assert summary["requirements"]["with_code_link"] == 1
assert summary["requirements"]["with_test_link"] == 1
assert summary["requirements"]["fully_linked"] == 0
assert summary["tests"]["total"] == 3
assert summary["tests"]["linked_to_requirements"] == 2
assert len(summary["tests"]["broken_references"]) == 1


def test_traceability_coverage_thresholds_fail(tmp_path: Path) -> None:
needs_json = _write_needs_json(tmp_path)

result = subprocess.run(
[
sys.executable,
_MY_PATH.parent / "traceability_coverage.py",
"--needs-json",
str(needs_json),
"--min-req-code",
"80",
"--min-req-test",
"80",
"--min-req-fully-linked",
"80",
"--min-tests-linked",
"80",
],
capture_output=True,
text=True,
)

assert result.returncode == 2
assert "Threshold check failed:" in result.stdout


def test_traceability_coverage_fails_on_broken_refs(tmp_path: Path) -> None:
needs_json = _write_needs_json(tmp_path)

result = subprocess.run(
[
sys.executable,
_MY_PATH.parent / "traceability_coverage.py",
"--needs-json",
str(needs_json),
"--min-req-code",
"0",
"--min-req-test",
"0",
"--min-req-fully-linked",
"0",
"--min-tests-linked",
"0",
"--fail-on-broken-test-refs",
],
capture_output=True,
text=True,
)

assert result.returncode == 2
assert "broken testcase references found:" in result.stdout


def test_traceability_coverage_prints_unlinked_requirements(tmp_path: Path) -> None:
needs_json = _write_needs_json(tmp_path)

result = subprocess.run(
[
sys.executable,
_MY_PATH.parent / "traceability_coverage.py",
"--needs-json",
str(needs_json),
"--min-req-code",
"0",
"--min-req-test",
"0",
"--min-req-fully-linked",
"0",
"--min-tests-linked",
"0",
"--print-unlinked-requirements",
],
capture_output=True,
text=True,
)

assert result.returncode == 0
assert "Unlinked requirement details:" in result.stdout
assert "Missing source_code_link: REQ_2" in result.stdout
assert "Missing testlink: REQ_1" in result.stdout
assert "Not fully linked: REQ_1, REQ_2" in result.stdout


def test_traceability_coverage_accepts_workspace_relative_needs_json(
tmp_path: Path,
) -> None:
workspace = tmp_path / "workspace"
workspace.mkdir()
needs_json = _write_needs_json(workspace)

env = dict(os.environ)
env["BUILD_WORKSPACE_DIRECTORY"] = str(workspace)

result = subprocess.run(
[
sys.executable,
_MY_PATH.parent / "traceability_coverage.py",
"--needs-json",
"needs.json",
"--min-req-code",
"0",
"--min-req-test",
"0",
"--min-req-fully-linked",
"0",
"--min-tests-linked",
"0",
],
capture_output=True,
text=True,
cwd=tmp_path,
env=env,
)

assert result.returncode == 0
assert f"Traceability input: {needs_json}" in result.stdout