
Show child process cpu usage in dtop #1880

Open

aclauer wants to merge 6 commits into dev from andrew/feat/dtop-subprocess-cpu-usage

Conversation


@aclauer aclauer commented Apr 18, 2026

Problem

dtop only shows CPU usage for the Python workers spawned by DimOS; any native subprocesses spawned by those workers are missing from the CPU statistics. This PR also adds a --log flag to record dtop statistics to a file and a dtop-plot tool to generate plots of CPU usage from those logs.

Closes DIM-XXX

Solution

Read the PIDs of any spawned child processes and include their CPU usage in a drop-down under the main worker.

dtop

Breaking Changes

None

How to Test

dimos --dtop --replay --replay-db=go2_bigoffice run unitree-go2

and

dtop

When dimos spawns the viewer, it will show up as a subprocess of the rerun bridge worker.

Contributor License Agreement

  • I have read and approved the CLA.

@aclauer aclauer changed the title Initial subprocess display Show child process cpu usage in dtop Apr 18, 2026
Comment on lines +130 to +142
try:
    proc = _get_process(pid)
    for child in proc.children(recursive=False):
        child_proc = _get_process(child.pid)
        try:
            name = child_proc.name()
            cpu = child_proc.cpu_percent(interval=None)
            result.append(ChildProcessStats(pid=child.pid, name=name, cpu_percent=cpu))
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            pass
except (psutil.NoSuchProcess, psutil.AccessDenied):
    pass
return result

Presumably _get_process is what raises (psutil.NoSuchProcess, psutil.AccessDenied), so surround just that function call with a try/except. Nesting the try/excepts is confusing.

Comment thread dimos/utils/cli/dtop.py Outdated
parser.add_argument(
    "--log",
    nargs="?",
    const=f"dtop_{time.strftime('%Y%m%d_%H%M%S')}.jsonl",

So it's automatically git-ignored.

Suggested change

-    const=f"dtop_{time.strftime('%Y%m%d_%H%M%S')}.jsonl",
+    const=f"dtop_{time.strftime('%Y%m%d_%H%M%S')}.ignore.jsonl",

@aclauer aclauer marked this pull request as ready for review April 24, 2026 21:13
@aclauer aclauer requested a review from paul-nechifor April 24, 2026 21:13
greptile-apps Bot commented Apr 24, 2026

Greptile Summary

This PR extends dtop to collect and display CPU usage for direct child processes of each Python worker, adds a --log flag to write stats to a JSONL file, and introduces a new dtop-plot CLI tool to visualize those logs with matplotlib. The core stats/monitor changes are clean; all P2 findings are in the new logging and display code.

Confidence Score: 4/5

Safe to merge; all findings are non-blocking P2 quality/robustness issues.

No P0 or P1 issues found. The four P2 findings (unflushed log writes, file handle leak in init, pid variable shadowing, missing KeyError guard in dtop-plot) are style/robustness concerns that don't affect core dtop functionality.

dimos/utils/cli/dtop.py — log file flush and handle lifecycle; dimos/utils/cli/dtop_plot.py — robustness of _load against malformed lines.

Important Files Changed

| Filename | Overview |
| --- | --- |
| dimos/core/resource_monitor/stats.py | Adds the ChildProcessStats dataclass and collect_children_stats function; extends WorkerStats with a children field. Uses the existing _get_process cache and cpu_percent(interval=None) pattern correctly. |
| dimos/core/resource_monitor/monitor.py | Calls collect_children_stats and adds children CPU to the parent's cpu_percent before constructing WorkerStats. Logic is correct. |
| dimos/utils/cli/dtop.py | Adds child process display rows, JSON logging via --log, and replaces manual sys.argv parsing with argparse. Three P2 issues: log file never flushed, file handle can leak on init failure, and pid shadowed by child loop. |
| dimos/utils/cli/dtop_plot.py | New dtop-plot CLI tool that loads JSONL logs and renders metric plots via matplotlib. One P2: msg[_COORDINATOR] raises KeyError on truncated/malformed log lines. |
| pyproject.toml | Registers the new dtop-plot console script entry point. |

Sequence Diagram

sequenceDiagram
    participant SM as StatsMonitor (daemon thread)
    participant PS as psutil
    participant RL as ResourceLogger (LCM)
    participant DT as dtop TUI
    participant LF as JSONL log file

    SM->>PS: collect_process_stats(worker_pid)
    SM->>PS: collect_children_stats(worker_pid)
    PS-->>SM: ChildProcessStats[]
    SM->>SM: add children cpu_percent to parent
    SM->>RL: log_stats(coordinator, [WorkerStats+children])
    RL-->>DT: LCM message (topic /dimos/resource_stats)
    DT->>LF: write JSON line (--log)
    DT->>DT: _refresh(): render worker rows + child rows
    note over LF: dtop-plot reads JSONL → matplotlib plot

Reviews (1): Last reviewed commit: "Fixes"

Comment thread dimos/utils/cli/dtop.py
self._latest = msg
self._last_msg_time = time.monotonic()
if self._log_file:
    self._log_file.write(json.dumps({"ts": time.time(), **msg}) + "\n")

P2 Log file not flushed between writes

Each _on_msg call writes a line to _log_file but never calls flush(). Because Python's file I/O is buffered by default, lines written near a crash or SIGKILL will silently stay in the OS/Python buffer and never reach disk. Adding a flush() after the write ensures each message is durable.

Suggested change

-    self._log_file.write(json.dumps({"ts": time.time(), **msg}) + "\n")
+    self._log_file.write(json.dumps({"ts": time.time(), **msg}) + "\n")
+    self._log_file.flush()
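An alternative to an explicit flush() per message is to open the log line-buffered, so every newline-terminated write reaches the OS automatically. This is a sketch with hypothetical file and variable names, not the PR's code; note that line buffering protects against a process crash but not power loss (that would need os.fsync):

```python
import json
import os
import tempfile
import time

# Sketch: buffering=1 puts a text-mode file in line-buffered mode, so each
# write ending in "\n" is flushed to the OS without an explicit flush() call.
log_path = os.path.join(tempfile.mkdtemp(), "dtop_demo.ignore.jsonl")
log_file = open(log_path, "a", buffering=1)  # line-buffered text mode

msg = {"coordinator": {"cpu_percent": 12.5}}
log_file.write(json.dumps({"ts": time.time(), **msg}) + "\n")

# Even with log_file still open, the line is already visible to other readers.
with open(log_path) as f:
    lines = f.readlines()
print(len(lines))  # 1
log_file.close()
```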

Comment thread dimos/utils/cli/dtop.py
) -> None:
    super().__init__()
    self._topic_name = topic_name
    self._log_file = open(log_path, "a") if log_path else None

P2 File handle leak if __init__ raises after open()

_log_file is opened before autoconf, PickleLCM(), and subscribe(). If any of those subsequent calls throw, on_unmount is never called and the file handle is leaked. A try/except (or opening the file later, e.g. in on_mount) would prevent this.

Comment thread dimos/utils/cli/dtop.py
parts.append(Rule(title=title, style=border_style))
parts.extend(self._make_lines(d, stale, ranges, self._cpu_history[role]))
for child in d.get("children", []):
    pid = child.get("pid", 0)

P2 pid variable shadowed by inner loop

The outer for tuple unpacks pid as a string (worker pid for display), but this inner assignment overwrites it with an integer child pid. The outer pid is rebound at the start of each outer iteration so there's no runtime bug, but the shadowing is confusing and could easily introduce a bug if code is added between the inner loop and the next outer iteration.

Suggested change

-    pid = child.get("pid", 0)
+    child_pid = child.get("pid", 0)
+    if child_pid not in self._child_cpu_history:
+        self._child_cpu_history[child_pid] = deque(maxlen=_SPARK_WIDTH * 2)
+    if not stale:
+        self._child_cpu_history[child_pid].append(child.get("cpu_percent", 0.0))
+    parts.append(self._make_child_line(child, stale, self._child_cpu_history[child_pid]))

Comment thread dimos/utils/cli/dtop_plot.py
rows = []
for _, msg in raw.iterrows():
    ts = msg["ts"]
    rows.append({"ts": ts, "role": _COORDINATOR, **msg[_COORDINATOR]})

P2 KeyError on malformed log lines

msg[_COORDINATOR] raises KeyError if any line in the JSONL file is missing the "coordinator" key (e.g., a truncated line written during an unclean shutdown). Wrapping the row processing in a try/except KeyError and skipping bad rows would make the tool more robust.

Suggested change

-    rows.append({"ts": ts, "role": _COORDINATOR, **msg[_COORDINATOR]})
+    try:
+        ts = msg["ts"]
+        rows.append({"ts": ts, "role": _COORDINATOR, **msg[_COORDINATOR]})
+    except (KeyError, TypeError):
+        continue
