Score 2774 traceability#484

Open
FScholPer wants to merge 8 commits into main from score-2774-traceability

Conversation

@FScholPer
Contributor

-> see eclipse-score/score#2774

Frank Scholter Peres frank.scholter_peres@mercedes-benz.com, Mercedes-Benz Tech Innovation GmbH
Provider Information

@github-actions

github-actions bot commented Apr 13, 2026

License Check Results

🚀 The license check job ran with the Bazel command:

bazel run --lockfile_mode=error //src:license-check

Status: ⚠️ Needs Review

Click to expand output
[License Check Output]
Extracting Bazel installation...
Starting local Bazel server (8.4.2) and connecting to it...
INFO: Invocation ID: 542302cb-aa12-4d8e-9587-9e3c8ba57f4f
Computing main repo mapping: 
Loading: 
Loading: 0 packages loaded
Loading: 0 packages loaded
WARNING: Target pattern parsing failed.
ERROR: Skipping '//src:license-check': no such target '//src:license-check': target 'license-check' not declared in package 'src' defined by /home/runner/work/docs-as-code/docs-as-code/src/BUILD
ERROR: no such target '//src:license-check': target 'license-check' not declared in package 'src' defined by /home/runner/work/docs-as-code/docs-as-code/src/BUILD
INFO: Elapsed time: 5.062s
INFO: 0 processes.
ERROR: Build did NOT complete successfully
ERROR: Build failed. Not running target

@github-actions

The created documentation from the pull request is available at: docu-html

Comment thread scripts_bazel/traceability_coverage.py
Comment thread docs/how-to/test_to_doc_links.rst
Comment thread scripts_bazel/traceability_coverage.py
Comment thread scripts_bazel/traceability_coverage.py
Member

@AlexanderLanin left a comment


see discussion in #485

@FScholPer
Contributor Author

Closed #485. @AlexanderLanin, can you please check again?

Contributor

@MaximilianSoerenPollak left a comment


Some comments for things I noticed.

Will take another look after lunch.

Overall this looks like genuinely good work that we can build upon.

Comment thread src/extensions/score_metamodel/traceability_metrics.py Outdated
if isinstance(item, str) and item.strip():
out.append(item.strip())
return out
return []
Contributor

Since you are already inside Sphinx (this lives in the docs-as-code extensions), you could also just get all the sphinx-needs objects from Sphinx itself.

You can do so via:

 Needs_Data = SphinxNeedsData(app.env)
 need_ids = {x["id"] for x in Needs_Data.get_needs_view().values()}

If you only want the IDs in the end.

Though you then need to have this as a full extension. If you want, I can convert this into a full extension in a future PR to enable this & other things.

Comment on lines +67 to +68
implemented = str(need.get("implemented", "")).upper().strip()
if implemented not in {"YES", "PARTIAL"}:
Contributor

`implemented` is tool_req-specific. Unsure if this should be hidden / hardcoded in the code?

Comment on lines +55 to +71
def filter_requirements(
all_needs: Sequence[Any],
requirement_types: set[str],
include_not_implemented: bool,
) -> list[Any]:
"""Extract requirements by type and implementation state."""
requirements: list[dict[str, Any]] = []
for need in all_needs:
need_type = str(need.get("type", "")).strip()
if need_type not in requirement_types:
continue
if not include_not_implemented:
implemented = str(need.get("implemented", "")).upper().strip()
if implemented not in {"YES", "PARTIAL"}:
continue
requirements.append(need)
return requirements
Contributor

If this were a sphinx extension, it could also make use of sphinx-needs internal functions etc. to get the needs list / objects for filtering.
As mentioned above, I can take this PR (after merge) and convert it in a future one.

Comment on lines +74 to +110
def calculate_requirement_metrics(
requirements: Sequence[Any],
) -> dict[str, Any]:
"""Calculate requirement traceability statistics for links and completeness."""
total = len(requirements)
with_code = sum(
1 for need in requirements if is_non_empty(need.get("source_code_link"))
)
with_test = sum(1 for need in requirements if is_non_empty(need.get("testlink")))
fully_linked = sum(
1
for need in requirements
if is_non_empty(need.get("source_code_link"))
and is_non_empty(need.get("testlink"))
)

missing_code_ids = [
str(need.get("id", ""))
for need in requirements
if not is_non_empty(need.get("source_code_link")) and need.get("id")
]
missing_test_ids = [
str(need.get("id", ""))
for need in requirements
if not is_non_empty(need.get("testlink")) and need.get("id")
]
not_fully_linked_ids = [
str(need.get("id", ""))
for need in requirements
if (
(
not is_non_empty(need.get("source_code_link"))
or not is_non_empty(need.get("testlink"))
)
and need.get("id")
)
]
Contributor

Can be done in a future PR:

I would like to have this at least a tiny bit more efficient.
Explicit loops are among the slower constructs in Python; even though this version already improves things through comprehensions, it should be possible with just one iteration through all requirements instead of six.

But as said, a future PR improvement; for me it's currently not a blocker, as we don't have 10k+ reqs to look through.
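The single-pass version suggested here could be sketched like this. The tallying structure and the return keys are my own illustration, not the PR's actual code; `is_non_empty` mirrors the helper used in the diff above:

```python
from typing import Any, Sequence


def is_non_empty(value: Any) -> bool:
    """True if the field carries a non-blank string (mirrors the script's helper)."""
    return bool(str(value).strip()) if value is not None else False


def calculate_requirement_metrics(requirements: Sequence[dict[str, Any]]) -> dict[str, Any]:
    """Tally all link statistics in a single iteration over the requirements."""
    total = with_code = with_test = fully_linked = 0
    missing_code_ids: list[str] = []
    missing_test_ids: list[str] = []
    not_fully_linked_ids: list[str] = []
    for need in requirements:
        total += 1
        has_code = is_non_empty(need.get("source_code_link"))
        has_test = is_non_empty(need.get("testlink"))
        with_code += has_code
        with_test += has_test
        fully_linked += has_code and has_test
        need_id = need.get("id")
        if need_id:
            if not has_code:
                missing_code_ids.append(str(need_id))
            if not has_test:
                missing_test_ids.append(str(need_id))
            if not (has_code and has_test):
                not_fully_linked_ids.append(str(need_id))
    return {
        "total": total,
        "with_code": with_code,
        "with_test": with_test,
        "fully_linked": fully_linked,
        "missing_code_ids": missing_code_ids,
        "missing_test_ids": missing_test_ids,
        "not_fully_linked_ids": not_fully_linked_ids,
    }
```

Each need is examined exactly once, so the cost stays linear in the number of requirements regardless of how many counters and ID lists are collected.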

--needs-json bazel-bin/needs_json/_build/needs/needs.json \
--test-types unit-test

Use lower thresholds during rollout and tighten towards 100% over time.
Contributor

We can also achieve this '100%' goal with two further changes, at least in some repos:

1. Make both source_code_link & testlink a mandatory option for each requirement (bar some exceptions).

2. Make the pytest plugin or sphinx extension throw an error if a test does not have the verify properties and therefore isn't linked.
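A minimal sketch of the second point. The property names (`partially_verifies` / `fully_verifies`) are taken from the diff in this PR, but the helper and the way it would be wired into a pytest hook or sphinx extension are hypothetical, not the actual plugin API:

```python
# Hypothetical core of an "every test must be linked" CI gate.
VERIFY_PROPERTIES = ("partially_verifies", "fully_verifies")


def unlinked_tests(test_properties: dict[str, dict[str, str]]) -> list[str]:
    """Return the names of tests that carry no non-empty verify property.

    `test_properties` maps test name -> collected marker/property values.
    A pytest plugin hook (or the sphinx extension) could raise an error
    for every name this returns, failing the build.
    """
    offenders = []
    for name, props in test_properties.items():
        if not any(str(props.get(key, "")).strip() for key in VERIFY_PROPERTIES):
            offenders.append(name)
    return offenders
```

The point of keeping the check as a pure function is that the same logic can then back either enforcement path (pytest plugin or sphinx extension) and be unit-tested without a running test session.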


.. needpie:: Requirements Status
:labels: not implemented, implemented but not tested, implemented and tested
:labels: not implemented, implemented but incomplete docs, fully documented
Contributor

If I read this correctly, this seems like it does something different than before, statistics-wise?

def _write_needs_json(tmp_path: Path) -> Path:
needs_json = tmp_path / "needs.json"
payload = {
"current_version": "main",
Contributor

I don't think current_version can be a non-ID (in a realistic scenario).
Though I don't think it matters.

Comment on lines +62 to +63
"partially_verifies": "",
"fully_verifies": "",
Contributor

This would not be allowed.

If a test does not have the right properties (either fully_verifies or partially_verifies & other things), the testcase need will not be generated.

To enforce traceability in CI:

1. Run tests.
2. Generate ``needs.json``.
Contributor

This is a bit flawed, as currently testcase needs are generated as external needs, and those are excluded from the needs.json.

They can be included if we change the needs.json build filter, which is possible, so external needs will be in there too.
It's set like this in conf.py, or probably in the score_metamodel extension:

needs_builder_filter = '(is_external==True and need_type==testcase)'

The exact syntax we would have to figure out, but something like this.
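For reference, a hedged sketch of what that conf.py change could look like. sphinx-needs does provide a `needs_builder_filter` option, but the exact variable names available inside the filter string (e.g. `is_external`, `type`) and the resulting default behaviour would need to be verified against the sphinx-needs documentation before relying on this:

```python
# conf.py -- illustrative sketch only; filter syntax to be verified
# Keep the repo's own needs in needs.json, and additionally include
# externally generated testcase needs.
needs_builder_filter = 'is_external == False or type == "testcase"'
```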

| `bazel run //:docs` | Builds documentation |
| `bazel run //:docs_check` | Verifies documentation correctness |
| `bazel run //:docs_combo` | Builds combined documentation with all external dependencies included |
| `bazel run //scripts_bazel:traceability_coverage -- --needs-json bazel-bin/needs_json/needs.json --min-req-code 100 --min-req-test 100 --min-req-fully-linked 100 --min-tests-linked 100 --fail-on-broken-test-refs` | Calculates requirement/test traceability percentages and fails if thresholds are not met |
Contributor

These commands are intended to work in any score repo. So it should be:

Suggested change
| `bazel run //scripts_bazel:traceability_coverage -- --needs-json bazel-bin/needs_json/needs.json --min-req-code 100 --min-req-test 100 --min-req-fully-linked 100 --min-tests-linked 100 --fail-on-broken-test-refs` | Calculates requirement/test traceability percentages and fails if thresholds are not met |
| `bazel run @score_docs_as_code//scripts_bazel:traceability_coverage -- --needs-json bazel-bin/needs_json/needs.json --min-req-code 100 --min-req-test 100 --min-req-fully-linked 100 --min-tests-linked 100 --fail-on-broken-test-refs` | Calculates requirement/test traceability percentages and fails if thresholds are not met |

.. code-block:: bash

bazel test //...
bazel build //:needs_json
Contributor

Makes no sense to me to run this build manually. The coverage check can simply depend on it and Bazel takes care of the execution.


.. code-block:: bash

bazel test //...
Contributor

We need a way to "build" a test report and then a coverage check can depend on that.

bazel test //...
bazel build //:needs_json
bazel run //scripts_bazel:traceability_coverage -- \
--needs-json bazel-bin/needs_json/_build/needs/needs.json \
Contributor

Manually using files in bazel-bin is a red flag. Use Bazel dependencies instead.
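As a hedged illustration of that suggestion (the target name `traceability_coverage_check` and the exact rule wiring are hypothetical, not the repo's actual setup), the coverage script could declare the needs.json target as a data dependency and let Bazel resolve the file path, instead of hardcoding a bazel-bin location:

```starlark
# BUILD -- illustrative sketch only
py_binary(
    name = "traceability_coverage_check",
    srcs = ["traceability_coverage.py"],
    data = ["//:needs_json"],  # Bazel builds needs.json before this target runs
    args = [
        "--needs-json",
        "$(rlocationpath //:needs_json)",  # path resolved by Bazel at build time
    ],
)
```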

# SPDX-License-Identifier: Apache-2.0
# *******************************************************************************

"""Compute requirement and test traceability coverage from sphinx-needs output."""
Contributor

@a-zw Apr 16, 2026

I believe this script, a CI quality gate, (and related files) should actually be in tooling and not in docs-as-code.

Comment thread .gitignore

# bug: This file is created in repo root on test discovery.
/consumer_test.log
.clwb
Contributor


What are clwb files?

@MaximilianSoerenPollak
Contributor

Also a general thing.

If there is any 'substantial' or partial generation by AI, please add the disclaimer in the header.
According to 'eclipse', something along the lines of this should be fine:

# ╓                                                          ╖
# ║ Some portions generated by <AI Model Name>               ║
# ╙                                                          ╜

Model name meaning something like 'GPT', 'Copilot', 'Claude', etc.

Co-authored-by: Maximilian Sören Pollak <maximilian.pollak@qorix.com>
Signed-off-by: Frank Scholter Peres(MBTI) <145544737+FScholPer@users.noreply.github.com>
@FScholPer
Contributor Author

Also a general thing.

If there is any 'substantial' or partial generation by AI, please add the disclaimer in the header. According to 'eclipse', something along the lines of this should be fine:

# ╓                                                          ╖
# ║ Some portions generated by <AI Model Name>               ║
# ╙                                                          ╜

Model name meaning something like 'GPT', 'Copilot', 'Claude', etc.

I will add. But we should think about using tools like https://github.com/git-ai-project/git-ai in general

@MaximilianSoerenPollak
Contributor

I will add. But we should think about using tools like https://github.com/git-ai-project/git-ai in general

Might be worth a look to see if we could use that. I can bring it up in infra.

@MaximilianSoerenPollak
Contributor

@FScholPer I would propose that I take the ideas & proposals you have made in this PR and re-implement them into Docs-As-Code so they fit better architecturally and make sense with how testcases etc. actually behave & are created inside the DaC frameworks that deal with them.

I fear it would be a lot more work to merge this PR and then rebuild / change it quite heavily later when we want to integrate it.

Do you think this would make sense and is a good way to move forward with this issue?

@AlexanderLanin @a-zw please also advise here on what you think would be a good approach.


Development

Successfully merging this pull request may close these issues.

Improvement: Introduce traceability coverage checker and CI reporting mode

4 participants