
feat(pt): support spin virial (Rebased and updated version of #4545 by @iProzd.) #5156

Draft

OutisLi wants to merge 5 commits into deepmodeling:master from OutisLi:pr-4545

Conversation

Collaborator

OutisLi commented Jan 15, 2026

Summary by CodeRabbit

Release Notes

  • New Features

    • Added virial (stress tensor) calculation support for spin-enabled models
    • Introduced virial loss metrics (RMSE_v, MAE_v) for spin model training
  • Bug Fixes

    • Enabled virial data propagation through PyTorch and TensorFlow backends
    • Fixed virial output handling in C API for spin models
  • Documentation

    • Updated training documentation to include virial data format specifications for spin models

Copilot AI review requested due to automatic review settings January 15, 2026 11:13
dosubot bot added the new feature label Jan 15, 2026
Contributor

Copilot AI left a comment


Pull request overview

This pull request adds support for spin virial calculations in the PyTorch backend by implementing virial corrections for spin models. The changes enable the computation of virial tensors when dealing with spin systems by introducing coordinate corrections that account for the virtual atoms used in spin representations.

Changes:

  • Added virial correction mechanism for spin models through coordinate corrections
  • Updated C/C++ API to enable virial output for spin models
  • Extended test coverage to include spin virial validation
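
To make the correction mechanism concrete, here is a minimal, self-contained sketch of the underlying idea (illustrative tensor names and shapes, not the PR's actual code): the correction is zero for real atoms and -spin_dist for the virtual atoms, and its outer product with dE/dr gives a 9-component virial correction.

import torch

nloc = 4
coord = torch.rand(1, nloc, 3, dtype=torch.float64)          # real-atom positions
spin_dist = torch.rand(1, nloc, 3, dtype=torch.float64)      # virtual-atom offsets
dE_dcoord = torch.rand(1, 2 * nloc, 3, dtype=torch.float64)  # dE/dr for real + virtual atoms

# zero correction for real atoms, -spin_dist for the virtual atoms
coord_corr = torch.cat([torch.zeros_like(coord), -spin_dist], dim=1)  # (1, 2*nloc, 3)

# per-frame virial correction: sum over atoms of the outer product coord_corr x dE/dr
virial_corr = torch.einsum("fni,fnj->fij", coord_corr, dE_dcoord).reshape(1, 9)
print(virial_corr.shape)  # torch.Size([1, 9])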

Reviewed changes

Copilot reviewed 14 out of 14 changed files in this pull request and generated no comments.

Show a summary per file

  • deepmd/pt/model/model/spin_model.py: Core implementation of spin virial correction through coordinate corrections returned by process_spin_input methods
  • deepmd/pt/model/model/transform_output.py: Added extended_coord_corr parameter to apply virial corrections
  • deepmd/pt/model/model/make_model.py: Propagated coord_corr_for_virial parameter through forward methods
  • deepmd/pt/loss/ener_spin.py: Added virial loss computation for spin models
  • source/api_cc/src/DeepSpinPT.cc: Uncommented virial computation code to enable spin virial output
  • source/api_c/src/c_api.cc: Uncommented virial assignment code
  • source/api_c/include/deepmd.hpp: Uncommented virial indexing loops
  • source/tests/universal/pt/model/test_model.py: Enabled spin virial testing for the PT backend
  • source/tests/universal/common/cases/model/utils.py: Updated test logic to conditionally test spin virial
  • source/tests/pt/model/test_autodiff.py: Added spin virial test class and updated test to include spin
  • source/tests/pt/model/test_ener_spin_model.py: Updated test to handle new return value from process_spin_input
  • deepmd/pt/model/network/utils.py: Added Optional import (unused in shown diff)
  • deepmd/pt/model/descriptor/repformers.py: Code formatting changes (no functional change)
  • deepmd/pt/model/descriptor/repflow_layer.py: Code formatting changes (no functional change)
  • source/tests/pt/model/test_nosel.py: New test file for DPA3 descriptor with dynamic selection
Comments suppressed due to low confidence (2)

deepmd/pt/model/model/spin_model.py:56

  • The return type annotation is incorrect. The method now returns three values (coord_spin, atype_spin, coord_corr) on line 69, but the annotation only specifies two return values. Update to tuple[torch.Tensor, torch.Tensor, torch.Tensor].
    def process_spin_input(
        self, coord: torch.Tensor, atype: torch.Tensor, spin: torch.Tensor
    ) -> tuple[torch.Tensor, torch.Tensor]:

deepmd/pt/model/model/spin_model.py:78

  • The return type annotation is incorrect. The method now returns five values on lines 111-117 (extended_coord_updated, extended_atype_updated, nlist_updated, mapping_updated, extended_coord_corr), but the annotation only specifies four return values. Update to tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor | None, torch.Tensor].
    def process_spin_input_lower(
        self,
        extended_coord: torch.Tensor,
        extended_atype: torch.Tensor,
        extended_spin: torch.Tensor,
        nlist: torch.Tensor,
        mapping: torch.Tensor | None = None,
    ) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor | None]:
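
For reference, a sketch of the corrected signatures (annotations only; the bodies are stubbed, and the real methods live in deepmd/pt/model/model/spin_model.py):

import torch


def process_spin_input(
    self, coord: torch.Tensor, atype: torch.Tensor, spin: torch.Tensor
) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
    """Stub illustrating the three-value return annotation."""
    ...


def process_spin_input_lower(
    self,
    extended_coord: torch.Tensor,
    extended_atype: torch.Tensor,
    extended_spin: torch.Tensor,
    nlist: torch.Tensor,
    mapping: torch.Tensor | None = None,
) -> tuple[
    torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor | None, torch.Tensor
]:
    """Stub illustrating the five-value return annotation."""
    ...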


Contributor

coderabbitai bot commented Jan 15, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

This PR introduces virial (stress tensor) support for spin-enabled molecular dynamics models across PyTorch and TensorFlow frameworks. Changes include: coordinate corrections for virial calculations, virial loss computation, conditional virial output handling in C/C++ APIs, and comprehensive test coverage validating virial predictions.

Changes

  • PyTorch Spin Model Virial Support (deepmd/pt/loss/ener_spin.py, deepmd/pt/model/model/spin_model.py): Added virial loss computation in EnergySpinLoss and extended process_spin_input/process_spin_input_lower to compute and return coordinate corrections for virial calculations.
  • PyTorch Model Architecture (deepmd/pt/model/model/make_model.py, deepmd/pt/model/model/transform_output.py): Added coord_corr_for_virial parameter propagation through the model forward pass and applied coordinate correction to derivative computation when available.
  • C/C++ API Virial Handling (source/api_c/include/deepmd.hpp, source/api_c/src/c_api.cc, source/api_cc/src/DeepSpinPT.cc, source/api_cc/src/DeepSpinTF.cc): Enabled virial data population in the C API and added conditional virial/atom_virial output handling in the PyTorch and TensorFlow C++ APIs based on output availability.
  • TensorFlow Spin Support (deepmd/tf/descriptor/se_a.py): Added spin-aware coordinate difference computation (natoms_match, natoms_not_match) for descriptor calculations in spin-enabled configurations.
  • PyTorch Test Updates (source/tests/pt/model/test_autodiff.py, source/tests/pt/model/test_ener_spin_model.py, source/tests/pt/model/test_nosel.py): Added finite-difference virial validation functions, updated spin model tests to unpack coordinate corrections, and added a descriptor consistency test with no selection reduction.
  • C++ Test Updates (source/api_cc/tests/test_deeppot_dpa_pt_spin.cc, source/api_cc/tests/test_deeppot_tf_spin.cc, source/api_cc/tests/CMakeLists.txt): Added virial output validation, updated expected velocity/virial data structures, and set the MLIR environment variable for test execution.
  • TensorFlow Test Updates (source/tests/tf/test_deeppot_spin.py, source/tests/tf/test_init_frz_model_spin.json, source/tests/tf/test_init_frz_model_spin.py, source/tests/tf/test_model_spin.json, source/tests/tf/test_model_spin.py): Updated expected virial data values, added an rmse_v assertion to frozen model tests, and updated loss configuration preferences for virial.
  • LAMMPS Test Updates (source/lmp/tests/test_lammps_spin.py, source/lmp/tests/test_lammps_spin_nopbc.py, source/lmp/tests/test_lammps_spin_pt.py, source/tests/universal/common/cases/model/utils.py, source/tests/universal/pt/model/test_model.py): Added expected velocity arrays, implemented conditional virial population and validation, extended model deviation tests with velocity metrics, and enabled the virial flag for spin configurations.
  • Documentation (doc/model/train-energy-spin.md): Removed the unsupported-virial note and added virial.npy files to the spin data format specifications for both TensorFlow and PyTorch frameworks.

Sequence Diagram

sequenceDiagram
    participant Input as Input Processing
    participant Model as Spin Model
    participant Backbone as Backbone Model
    participant Transform as Output Transform
    participant Loss as Loss Computation
    
    Input->>Input: process_spin_input: Extract coord_corr<br/>(zeros + (-spin_dist))
    Input-->>Model: Returns: coord_updated,<br/>atype_updated, coord_corr
    
    Model->>Backbone: forward_common(coord, atype,<br/>coord_corr_for_virial)
    
    Backbone->>Transform: fit_output_to_model_output<br/>(extended_coord_corr)
    Transform->>Transform: Compute dc correction:<br/>dc += extended_coord_corr @ dc
    Transform-->>Backbone: Returns: corrected derivatives
    
    Backbone-->>Model: Returns: energy, virial,<br/>energy derivatives
    
    Model->>Loss: Forward pass with<br/>predicted virial
    Loss->>Loss: Compute l2_virial_loss<br/>= MSE(label_virial, pred_virial)
    Loss->>Loss: total_loss += pref_v *<br/>l2_virial_loss
    Loss-->>Loss: Compute rmse_v, mae_v
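
The loss-related steps at the end of the diagram translate roughly into the following self-contained sketch (toy tensors; atom_norm and pref_v are illustrative values rather than the configured ones):

import torch

nframes = 2
label_virial = torch.rand(nframes, 9, dtype=torch.float64)
pred_virial = torch.rand(nframes, 9, dtype=torch.float64)
atom_norm = 1.0 / 6.0  # 1 / natoms for a toy 6-atom frame
pref_v = 1.0           # virial prefactor

diff_v = label_virial - pred_virial
l2_virial_loss = torch.mean(torch.square(diff_v))
loss_contrib = atom_norm * pref_v * l2_virial_loss  # added to the total loss
rmse_v = l2_virial_loss.sqrt() * atom_norm          # reported metric
mae_v = torch.mean(torch.abs(diff_v)) * atom_norm   # reported metric
print(float(loss_contrib), float(rmse_v), float(mae_v))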

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

Suggested reviewers

  • iProzd
  • njzjz
  • wanghan-iapcm
🚥 Pre-merge checks | ✅ 3 | ❌ 1

❌ Failed checks (1 warning)
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 20.78%, which is insufficient; the required threshold is 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (3 passed)
  • Description Check (✅ Passed): Check skipped - CodeRabbit's high-level summary is enabled.
  • Title Check (✅ Passed): The title clearly describes the main feature being added: spin virial support in the PyTorch implementation, providing specific context about what was implemented.
  • Merge Conflict Detection (✅ Passed): No merge conflicts detected when merging into master.




Contributor

coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
source/api_c/include/deepmd.hpp (1)

2695-2711: Consider explicitly documenting/handling spin atom_virial (still disabled).

This overload still resizes atom_virial[i] but does not populate it (block remains commented). If this is intentionally unsupported for spin models, consider documenting it in the API comment (or explicitly zeroing/clearing) to avoid consumers assuming it’s meaningful.

deepmd/pt/model/model/spin_model.py (1)

71-78: Update return type annotation.

The return type annotation still declares a 4-tuple, but the function now returns 5 values (including extended_coord_corr).

Proposed fix
     def process_spin_input_lower(
         self,
         extended_coord: torch.Tensor,
         extended_atype: torch.Tensor,
         extended_spin: torch.Tensor,
         nlist: torch.Tensor,
         mapping: torch.Tensor | None = None,
-    ) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor | None]:
+    ) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor | None, torch.Tensor]:
🤖 Fix all issues with AI agents
In `@deepmd/pt/model/descriptor/repformers.py`:
- Around line 435-437: The assertion uses an undefined attribute self.n_dim in
DescrptBlockRepformers; change the check to use the correct embedding size
attribute self.g1_dim: after slicing extended_atype_embd into atype_embd
(variable names extended_atype_embd and atype_embd), update the assertion that
compares atype_embd.shape to expect [nframes, nloc, self.g1_dim] instead of
self.n_dim so it matches the class's defined embedding dimension.
♻️ Duplicate comments (1)
source/api_cc/src/DeepSpinPT.cc (1)

414-436: Same "virial" key-availability risk in standalone module.forward(...) path.

Same comment as the forward_lower path: if "virial" isn’t guaranteed, guard and error clearly.

🧹 Nitpick comments (7)
source/tests/pt/model/test_nosel.py (4)

31-31: Module-level dtype is unused and shadowed.

This variable is reassigned inside the test loop at line 66, making this module-level assignment dead code. Consider removing it to avoid confusion.

Suggested fix
-dtype = env.GLOBAL_PT_FLOAT_PRECISION
-
-
 class TestDescrptDPA3Nosel(unittest.TestCase, TestCaseSingleFrameWithNlist):

42-42: Prefix unused unpacked variables with underscore.

nf and nloc are unpacked but never used. Per Python convention, prefix them with _ to indicate they're intentionally unused.

Suggested fix
-        nf, nloc, nnei = self.nlist.shape
+        _nf, _nloc, nnei = self.nlist.shape

105-117: Mutating shared repflow object is fragile.

After passing repflow to dd0, mutating repflow.use_dynamic_sel = True before creating dd1 relies on DescrptDPA3 copying values during __init__ rather than holding a reference. This pattern is fragile and could break if the descriptor's implementation changes.

Consider creating separate RepFlowArgs instances for clarity and robustness:

Suggested approach
             # dpa3 new impl
             dd0 = DescrptDPA3(
                 self.nt,
                 repflow=repflow,
                 # kwargs for descriptor
                 exclude_types=[],
                 precision=prec,
                 use_econf_tebd=ect,
                 type_map=["O", "H"] if ect else None,
                 seed=GLOBAL_SEED,
             ).to(env.DEVICE)

-            repflow.use_dynamic_sel = True
+            repflow_dynamic = RepFlowArgs(
+                n_dim=20,
+                e_dim=10,
+                a_dim=10,
+                nlayers=3,
+                e_rcut=self.rcut,
+                e_rcut_smth=self.rcut_smth,
+                e_sel=nnei,
+                a_rcut=self.rcut - 0.1,
+                a_rcut_smth=self.rcut_smth,
+                a_sel=nnei,
+                a_compress_rate=acr,
+                n_multi_edge_message=nme,
+                axis_neuron=4,
+                update_angle=ua,
+                update_style=rus,
+                update_residual_init=ruri,
+                optim_update=optim,
+                smooth_edge_update=True,
+                sel_reduce_factor=1.0,
+                use_dynamic_sel=True,
+            )

             # dpa3 new impl
             dd1 = DescrptDPA3(
                 self.nt,
-                repflow=repflow,
+                repflow=repflow_dynamic,
                 # kwargs for descriptor

Alternatively, use copy.deepcopy(repflow) and then set the attribute on the copy.


143-205: Consider removing or documenting the commented-out test.

This large block of commented-out code for test_jit lacks an explanation for why it's disabled. If it's a work-in-progress, add a TODO comment explaining the intent and when it should be enabled. If it's no longer needed, consider removing it to reduce code clutter.

deepmd/pt/model/network/utils.py (1)

2-4: Unused Optional import.

The Optional import is added but not used in this file. The existing code uses PEP 604 union syntax (int | None on line 14) rather than Optional[int]. Consider removing this unused import unless it's needed for future changes in this PR.

🧹 Suggested fix
 # SPDX-License-Identifier: LGPL-3.0-or-later
-from typing import (
-    Optional,
-)
-
 import torch
source/api_cc/src/DeepSpinPT.cc (1)

252-274: Guard missing "virial" key (avoid runtime outputs.at() throw).

If a TorchScript spin model doesn’t emit "virial" (older checkpoints / config-dependent outputs), outputs.at("virial") will throw. Consider checking presence and raising a clearer error.

Proposed hardening (illustrative — please verify c10::Dict API)
-  c10::IValue virial_ = outputs.at("virial");
+  // NOTE: verify the correct c10::Dict presence-check API for your libtorch version.
+  // The goal is to fail with a clearer message than an unhandled `at()` exception.
+  if (!outputs.contains("virial")) {
+    throw deepmd::deepmd_exception(
+        "Spin model output dict is missing key 'virial' (model may not support virial).");
+  }
+  c10::IValue virial_ = outputs.at("virial");
deepmd/pt/loss/ener_spin.py (1)

282-297: Make virial loss shape/dtype handling more robust.

Right now it assumes label["virial"] is already (-1, 9) and that dtypes won’t conflict with the in-place loss += .... Reshaping label virial and casting the increment avoids brittle failures.

Proposed patch
         if self.has_v and "virial" in model_pred and "virial" in label:
             find_virial = label.get("find_virial", 0.0)
             pref_v = pref_v * find_virial
-            diff_v = label["virial"] - model_pred["virial"].reshape(-1, 9)
+            virial_label = label["virial"].reshape(-1, 9)
+            virial_pred = model_pred["virial"].reshape(-1, 9)
+            diff_v = virial_label - virial_pred
             l2_virial_loss = torch.mean(torch.square(diff_v))
             if not self.inference:
                 more_loss["l2_virial_loss"] = self.display_if_exist(
                     l2_virial_loss.detach(), find_virial
                 )
-            loss += atom_norm * (pref_v * l2_virial_loss)
+            loss += (atom_norm * (pref_v * l2_virial_loss)).to(GLOBAL_PT_FLOAT_PRECISION)
             rmse_v = l2_virial_loss.sqrt() * atom_norm
             more_loss["rmse_v"] = self.display_if_exist(rmse_v.detach(), find_virial)
             if mae:
                 mae_v = torch.mean(torch.abs(diff_v)) * atom_norm
                 more_loss["mae_v"] = self.display_if_exist(mae_v.detach(), find_virial)
📜 Review details

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9553e6e and ac0ebb9.

📒 Files selected for processing (15)
  • deepmd/pt/loss/ener_spin.py
  • deepmd/pt/model/descriptor/repflow_layer.py
  • deepmd/pt/model/descriptor/repformers.py
  • deepmd/pt/model/model/make_model.py
  • deepmd/pt/model/model/spin_model.py
  • deepmd/pt/model/model/transform_output.py
  • deepmd/pt/model/network/utils.py
  • source/api_c/include/deepmd.hpp
  • source/api_c/src/c_api.cc
  • source/api_cc/src/DeepSpinPT.cc
  • source/tests/pt/model/test_autodiff.py
  • source/tests/pt/model/test_ener_spin_model.py
  • source/tests/pt/model/test_nosel.py
  • source/tests/universal/common/cases/model/utils.py
  • source/tests/universal/pt/model/test_model.py
🧰 Additional context used
🧠 Learnings (4)
📚 Learning: 2024-10-08T15:32:11.479Z
Learnt from: 1azyking
Repo: deepmodeling/deepmd-kit PR: 4169
File: deepmd/pt/loss/ener_hess.py:341-348
Timestamp: 2024-10-08T15:32:11.479Z
Learning: In `deepmd/pt/loss/ener_hess.py`, the `label` uses the key `"atom_ener"` intentionally to maintain consistency with the forked version.

Applied to files:

  • deepmd/pt/loss/ener_spin.py
📚 Learning: 2024-10-08T15:32:11.479Z
Learnt from: njzjz
Repo: deepmodeling/deepmd-kit PR: 4160
File: deepmd/dpmodel/utils/env_mat.py:52-64
Timestamp: 2024-10-08T15:32:11.479Z
Learning: Negative indices in `nlist` are properly handled by masking later in the computation, so they do not cause issues in indexing operations.

Applied to files:

  • deepmd/pt/model/descriptor/repflow_layer.py
📚 Learning: 2024-10-08T15:32:11.479Z
Learnt from: njzjz
Repo: deepmodeling/deepmd-kit PR: 4144
File: source/api_cc/tests/test_deeppot_dpa_pt.cc:166-246
Timestamp: 2024-10-08T15:32:11.479Z
Learning: Refactoring between test classes `TestInferDeepPotDpaPt` and `TestInferDeepPotDpaPtNopbc` is addressed in PR `#3905`.

Applied to files:

  • source/tests/pt/model/test_nosel.py
  • source/tests/universal/pt/model/test_model.py
📚 Learning: 2025-12-12T13:40:14.334Z
Learnt from: CR
Repo: deepmodeling/deepmd-kit PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-12T13:40:14.334Z
Learning: Run core tests with `pytest source/tests/tf/test_dp_test.py::TestDPTestEner::test_1frame -v` to validate basic functionality

Applied to files:

  • source/tests/pt/model/test_nosel.py
🧬 Code graph analysis (6)
source/api_c/src/c_api.cc (1)
data/raw/copy_raw.py (1)
  • copy (11-71)
deepmd/pt/loss/ener_spin.py (3)
deepmd/driver.py (1)
  • label (42-75)
deepmd/pt/loss/loss.py (1)
  • display_if_exist (44-54)
deepmd/entrypoints/test.py (1)
  • mae (247-260)
deepmd/pt/model/descriptor/repflow_layer.py (2)
deepmd/pt/model/network/utils.py (1)
  • aggregate (10-50)
deepmd/pt/model/descriptor/repformer_layer.py (2)
  • list_update_res_residual (1292-1309)
  • list_update (1312-1322)
source/tests/pt/model/test_nosel.py (1)
deepmd/dpmodel/descriptor/dpa3.py (1)
  • RepFlowArgs (59-250)
deepmd/pt/model/model/spin_model.py (2)
deepmd/dpmodel/model/spin_model.py (2)
  • concat_switch_virtual (190-208)
  • process_spin_input (43-53)
deepmd/pt/utils/spin.py (1)
  • concat_switch_virtual (6-30)
source/tests/pt/model/test_ener_spin_model.py (1)
deepmd/pt/model/model/spin_model.py (1)
  • process_spin_input (54-69)
🪛 Ruff (0.14.11)
source/tests/pt/model/test_nosel.py

42-42: Unpacked variable nf is never used

Prefix it with an underscore or any other dummy variable pattern

(RUF059)


42-42: Unpacked variable nloc is never used

Prefix it with an underscore or any other dummy variable pattern

(RUF059)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (42)
  • GitHub Check: Agent
  • GitHub Check: CodeQL analysis (python)
  • GitHub Check: Test Python (12, 3.13)
  • GitHub Check: Build C++ (clang, clang)
  • GitHub Check: Test Python (4, 3.13)
  • GitHub Check: Test Python (12, 3.10)
  • GitHub Check: Test Python (2, 3.13)
  • GitHub Check: Build C++ (cpu, cpu)
  • GitHub Check: Test Python (10, 3.13)
  • GitHub Check: Test Python (5, 3.13)
  • GitHub Check: Build C++ (rocm, rocm)
  • GitHub Check: Test Python (10, 3.10)
  • GitHub Check: Test Python (11, 3.13)
  • GitHub Check: Test Python (6, 3.13)
  • GitHub Check: Test Python (3, 3.10)
  • GitHub Check: Test Python (9, 3.10)
  • GitHub Check: Test Python (8, 3.13)
  • GitHub Check: Test Python (7, 3.10)
  • GitHub Check: Test Python (6, 3.10)
  • GitHub Check: Test Python (1, 3.10)
  • GitHub Check: Test Python (8, 3.10)
  • GitHub Check: Build C++ (cuda120, cuda)
  • GitHub Check: Test Python (11, 3.10)
  • GitHub Check: Test Python (3, 3.13)
  • GitHub Check: Test Python (1, 3.13)
  • GitHub Check: Test Python (9, 3.13)
  • GitHub Check: Test Python (5, 3.10)
  • GitHub Check: Build C library (2.18, libdeepmd_c.tar.gz)
  • GitHub Check: Test Python (7, 3.13)
  • GitHub Check: Test Python (2, 3.10)
  • GitHub Check: Test Python (4, 3.10)
  • GitHub Check: Build wheels for cp311-macosx_x86_64
  • GitHub Check: Build wheels for cp311-macosx_arm64
  • GitHub Check: Build wheels for cp310-manylinux_aarch64
  • GitHub Check: Build wheels for cp311-win_amd64
  • GitHub Check: Build wheels for cp311-manylinux_x86_64
  • GitHub Check: Analyze (python)
  • GitHub Check: Analyze (c-cpp)
  • GitHub Check: Test C++ (false, false, false, true)
  • GitHub Check: Test C++ (true, false, false, true)
  • GitHub Check: Test C++ (true, true, true, false)
  • GitHub Check: Test C++ (false, true, true, false)
🔇 Additional comments (23)
deepmd/pt/model/model/transform_output.py (1)

152-206: LGTM! Virial correction logic is well-integrated.

The extended_coord_corr parameter is properly gated with a None check and only applied when vdef.c_differentiable is True. The tensor operations compute the virial correction by combining the force derivative with the coordinate correction.

deepmd/pt/model/descriptor/repflow_layer.py (1)

1-1243: LGTM - Formatting-only changes.

All modifications in this file are style/formatting changes that collapse multi-line expressions into single lines. No semantic or functional changes are introduced. The logic remains identical to the previous implementation.

source/tests/universal/pt/model/test_model.py (1)

747-748: LGTM - Appropriate test flag for spin virial support.

The test_spin_virial = True flag enables virial testing specifically for the PyTorch spin energy model. The accompanying comment clearly documents that this flag can be removed once other backends implement spin virial support, which is good for future maintainability.

source/api_c/include/deepmd.hpp (1)

2594-2607: Virial reshape now consistent with other outputs (good).

Copying virial_flat into per-model virial[i] aligns this path with energy/force/force_mag and avoids surprising “all zeros” virials in the model-deviation API.

source/api_c/src/c_api.cc (1)

865-869: Virial now actually returned from DP_DeepSpinModelDeviCompute_variant (good).

This brings the virial behavior in line with force/force_mag flattening and fixes the previously “silently empty” virial output when requested.

source/tests/pt/model/test_ener_spin_model.py (2)

118-120: LGTM!

The test correctly adapts to the updated process_spin_input signature that now returns a third element (coord_corr). Using _ to discard the unused value is appropriate here since this test focuses on coordinate and type transformations rather than virial corrections.


172-180: LGTM!

The test correctly handles the extended return signature of process_spin_input_lower, which now returns five values including extended_coord_corr. The underscore appropriately discards the virial correction tensor that isn't relevant to this particular test case.

source/tests/universal/common/cases/model/utils.py (3)

918-921: LGTM!

The test_spin_virial flag provides a clean mechanism to incrementally enable virial testing for spin models. The condition if not test_spin or test_spin_virial correctly maintains backward compatibility while allowing virial tests to run for spin configurations when the flag is explicitly set.


931-933: LGTM!

Correctly includes spin in the virial calculation input when test_spin is enabled. This ensures the virial finite difference test properly exercises the spin-virial code path.


952-954: LGTM!

Consistently passes spin to the model when test_spin is enabled for the actual virial computation, matching the finite difference setup above.

source/tests/pt/model/test_autodiff.py (3)

144-154: LGTM!

The VirialTest class is properly extended to support spin models. The spin tensor generation and test_keys logic mirror the pattern already established in ForceTest, ensuring consistent behavior across both test types.


166-167: LGTM!

The spin tensor is correctly passed to eval_model in the virial inference function, enabling proper virial computation for spin-enabled models.


261-268: LGTM!

The new TestEnergyModelSpinSeAVirial class properly mirrors TestEnergyModelSpinSeAForce, completing the autodiff test coverage for spin virial functionality. The test_spin = True flag correctly enables spin-aware virial testing.

deepmd/pt/model/model/make_model.py (3)

141-142: LGTM!

The new coord_corr_for_virial parameter is properly added as an optional argument with None default, maintaining backward compatibility with existing callers.


190-196: LGTM!

The coordinate correction is correctly extended to the neighbor list region using torch.gather with the mapping tensor. The dtype conversion to cc.dtype ensures consistency with the coordinate precision. This follows the same extension pattern used elsewhere in the codebase.
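
A minimal sketch of that gather-based extension (shapes and names are illustrative, not the exact make_model.py code):

import torch

nf, nloc, nall = 1, 3, 5
cc = torch.rand(nf, nloc * 2, 3, dtype=torch.float64)  # per-local-atom correction (real + virtual)
mapping = torch.randint(0, nloc * 2, (nf, nall * 2))    # extended index -> local index

# replicate the local correction onto the extended atoms via the mapping
index = mapping.unsqueeze(-1).expand(-1, -1, 3)
extended_cc = torch.gather(cc, 1, index).to(cc.dtype)   # (nf, nall*2, 3)
print(extended_cc.shape)  # torch.Size([1, 10, 3])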


263-264: LGTM!

The extended_coord_corr parameter is properly propagated through the lower interface with appropriate documentation. The parameter is correctly forwarded to fit_output_to_model_output where it will be used for virial correction.

Also applies to: 290-291, 324-324

deepmd/pt/model/model/spin_model.py (7)

62-69: LGTM!

The virial correction computation is correctly implemented. The coord_corr tensor with [zeros_like(coord), -spin_dist] properly represents that real atoms have zero correction while virtual atoms need -spin_dist correction to account for their artificial displacement in virial calculations.


89-100: LGTM!

The extended coordinate correction is correctly computed using concat_switch_virtual with torch.zeros_like(extended_coord) for real atoms and -extended_spin_dist for virtual atoms, maintaining consistency with process_spin_input.


419-432: LGTM!

The forward_common method correctly unpacks the new 3-tuple return from process_spin_input and passes coord_corr_for_virial to the backbone model's forward_common method.


473-495: LGTM!

The forward_common_lower method correctly unpacks the new 5-tuple return from process_spin_input_lower and passes extended_coord_corr_for_virial to the backbone model's forward_common_lower.


567-572: LGTM!

The translated_output_def properly includes virial and atom_virial output definitions when do_grad_c("energy") is true, enabling virial outputs for spin models.


600-604: LGTM!

The forward method correctly outputs virial and conditionally atom_virial based on do_grad_c and do_atomic_virial flags, consistent with non-spin energy models.


640-645: LGTM!

The forward_lower method correctly outputs virial and conditionally extended_virial (matching the naming convention for extended outputs in lower interfaces).


codecov bot commented Jan 16, 2026

Codecov Report

❌ Patch coverage is 97.53086% with 2 lines in your changes missing coverage. Please review.
✅ Project coverage is 82.31%. Comparing base (4f182bc) to head (2739439).

Files with missing lines (patch % / lines missing):
  • deepmd/pt/loss/ener_spin.py: 84.61% patch coverage, 2 lines missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master    #5156      +/-   ##
==========================================
+ Coverage   82.07%   82.31%   +0.24%     
==========================================
  Files         732      604     -128     
  Lines       73974    53960   -20014     
  Branches     3615      978    -2637     
==========================================
- Hits        60711    44419   -16292     
+ Misses      12100     8978    -3122     
+ Partials     1163      563     -600     

☔ View full report in Codecov by Sentry.

Contributor

coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@source/api_cc/tests/test_deeppot_dpa_pt_spin.cc`:
- Around line 193-195: The test is indexing atom_vir (and expected_atom_v)
without verifying their lengths; add size assertions before the loop that
compares elements: check that atom_vir.size() == natoms*9 and
expected_atom_v.size() == natoms*9 (or the expected length expression used in
this test) so the subsequent for-loop comparing fabs(atom_vir[ii] -
expected_atom_v[ii]) is safe; apply the same pre-checks wherever the test later
iterates over atom_vir (the second comparison block that mirrors this loop).
🧹 Nitpick comments (1)
source/api_cc/tests/test_deeppot_dpa_pt_spin.cc (1)

46-49: Include per‑atom virial (av) in the regen snippet.

Keeps the comment aligned with the new av expectations so values can be regenerated consistently.

📝 Suggested tweak
-// print(f"{e.ravel()=} {f.ravel()=} {v.ravel()=} {fm.ravel()=}
-// {ae.ravel()=}")
+// print(f"{e.ravel()=} {f.ravel()=} {v.ravel()=} {av.ravel()=} {fm.ravel()=}
+// {ae.ravel()=}")
@@
-// print(f"{e.ravel()=} {f.ravel()=} {v.ravel()=} {fm.ravel()=}
-// {ae.ravel()=}")
+// print(f"{e.ravel()=} {f.ravel()=} {v.ravel()=} {av.ravel()=} {fm.ravel()=}
+// {ae.ravel()=}")

Also applies to: 223-227

📜 Review details

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f261d19 and 71cc203.

📒 Files selected for processing (1)
  • source/api_cc/tests/test_deeppot_dpa_pt_spin.cc
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2024-10-08T15:32:11.479Z
Learnt from: njzjz
Repo: deepmodeling/deepmd-kit PR: 4144
File: source/api_cc/tests/test_deeppot_dpa_pt.cc:166-246
Timestamp: 2024-10-08T15:32:11.479Z
Learning: Refactoring between test classes `TestInferDeepPotDpaPt` and `TestInferDeepPotDpaPtNopbc` is addressed in PR `#3905`.

Applied to files:

  • source/api_cc/tests/test_deeppot_dpa_pt_spin.cc
🧬 Code graph analysis (1)
source/api_cc/tests/test_deeppot_dpa_pt_spin.cc (2)
source/api_cc/tests/test_deeppot_a.cc (2)
  • ener (150-156)
  • ener (150-154)
source/api_cc/tests/test_deeppot_pt.cc (2)
  • ener (132-138)
  • ener (132-136)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (40)
  • GitHub Check: Test Python (9, 3.13)
  • GitHub Check: Test Python (11, 3.13)
  • GitHub Check: Test Python (3, 3.13)
  • GitHub Check: Test Python (10, 3.10)
  • GitHub Check: Test Python (1, 3.13)
  • GitHub Check: Test Python (12, 3.10)
  • GitHub Check: Test Python (9, 3.10)
  • GitHub Check: Test Python (8, 3.13)
  • GitHub Check: Test Python (2, 3.13)
  • GitHub Check: Test Python (3, 3.10)
  • GitHub Check: Test Python (4, 3.13)
  • GitHub Check: Test Python (7, 3.10)
  • GitHub Check: Test Python (4, 3.10)
  • GitHub Check: Test Python (6, 3.10)
  • GitHub Check: Test Python (11, 3.10)
  • GitHub Check: Test Python (10, 3.13)
  • GitHub Check: Test Python (12, 3.13)
  • GitHub Check: Test Python (7, 3.13)
  • GitHub Check: Test Python (6, 3.13)
  • GitHub Check: Test Python (2, 3.10)
  • GitHub Check: Test Python (8, 3.10)
  • GitHub Check: Test Python (5, 3.13)
  • GitHub Check: Test Python (5, 3.10)
  • GitHub Check: Test Python (1, 3.10)
  • GitHub Check: Analyze (python)
  • GitHub Check: Analyze (c-cpp)
  • GitHub Check: Build C++ (cpu, cpu)
  • GitHub Check: Build C++ (cuda120, cuda)
  • GitHub Check: Build C++ (clang, clang)
  • GitHub Check: Build C++ (rocm, rocm)
  • GitHub Check: Build wheels for cp311-manylinux_x86_64
  • GitHub Check: Build wheels for cp311-macosx_x86_64
  • GitHub Check: Test C++ (false, true, true, false)
  • GitHub Check: Build wheels for cp311-win_amd64
  • GitHub Check: Build wheels for cp310-manylinux_aarch64
  • GitHub Check: Build wheels for cp311-macosx_arm64
  • GitHub Check: Build C library (2.18, libdeepmd_c.tar.gz)
  • GitHub Check: Test C++ (true, true, true, false)
  • GitHub Check: Test C++ (true, false, false, true)
  • GitHub Check: Test C++ (false, false, false, true)


Contributor

coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@source/api_cc/tests/test_deeppot_dpa_pt_spin.cc`:
- Around line 54-60: The test's expected_f vector is missing one force component
(contains 17 values instead of natoms*3 == 18) causing the SetUp assertion to
fail; regenerate the full 18-value expected_f using the Python snippet in the
test comments (or the same calculation used to produce the other values),
replace the existing std::vector<VALUETYPE> expected_f in
test_deeppot_dpa_pt_spin.cc with the complete 18-element list, and ensure the
final element(s) correspond to the same ordering and precision used elsewhere in
the test so EXPECT_EQ(natoms * 3, expected_f.size()) passes.
🧹 Nitpick comments (1)
source/api_cc/tests/test_deeppot_dpa_pt_spin.cc (1)

115-119: Add expected_atom_v size assertion for consistency with TestInferDeepSpinDpaPtNopbc.

The TestInferDeepSpinDpaPtNopbc::SetUp validates both expected_tot_v.size() and expected_atom_v.size() (lines 297-298), but this class only validates expected_tot_v.size(). Add the missing check for consistency and early error detection.

🔧 Suggested fix
     EXPECT_EQ(9, expected_tot_v.size());
+    EXPECT_EQ(natoms * 9, expected_atom_v.size());
     expected_tot_e = 0.;
📜 Review details

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 71cc203 and 8d9f6e0.

📒 Files selected for processing (3)
  • source/api_cc/src/DeepSpinPT.cc
  • source/api_cc/tests/test_deeppot_dpa_pt_spin.cc
  • source/tests/pt/model/test_autodiff.py
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2024-09-19T04:25:12.408Z
Learnt from: njzjz
Repo: deepmodeling/deepmd-kit PR: 4144
File: source/api_cc/tests/test_deeppot_dpa_pt.cc:166-246
Timestamp: 2024-09-19T04:25:12.408Z
Learning: Refactoring between test classes `TestInferDeepPotDpaPt` and `TestInferDeepPotDpaPtNopbc` is addressed in PR `#3905`.

Applied to files:

  • source/api_cc/tests/test_deeppot_dpa_pt_spin.cc
🧬 Code graph analysis (3)
source/api_cc/src/DeepSpinPT.cc (2)
source/api_c/include/deepmd.hpp (2)
  • select_map (3508-3522)
  • select_map (3508-3511)
source/api_cc/src/common.cc (12)
  • select_map (935-961)
  • select_map (935-941)
  • select_map (964-982)
  • select_map (964-970)
  • select_map (1038-1044)
  • select_map (1046-1053)
  • select_map (1077-1083)
  • select_map (1085-1092)
  • select_map (1116-1122)
  • select_map (1124-1131)
  • select_map (1154-1161)
  • select_map (1163-1170)
source/api_cc/tests/test_deeppot_dpa_pt_spin.cc (2)
source/api_cc/tests/test_deeppot_a.cc (2)
  • ener (150-156)
  • ener (150-154)
source/api_cc/tests/test_deeppot_pt.cc (2)
  • ener (132-138)
  • ener (132-136)
source/tests/pt/model/test_autodiff.py (2)
source/tests/universal/common/cases/model/utils.py (1)
  • stretch_box (838-843)
source/tests/pt/common.py (1)
  • eval_model (48-307)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (40)
  • GitHub Check: Test Python (4, 3.13)
  • GitHub Check: Test Python (10, 3.10)
  • GitHub Check: Test Python (8, 3.10)
  • GitHub Check: Test Python (12, 3.10)
  • GitHub Check: Test Python (1, 3.10)
  • GitHub Check: Test Python (4, 3.10)
  • GitHub Check: Test Python (12, 3.13)
  • GitHub Check: Test Python (5, 3.10)
  • GitHub Check: Test Python (10, 3.13)
  • GitHub Check: Test Python (1, 3.13)
  • GitHub Check: Test Python (11, 3.10)
  • GitHub Check: Test Python (2, 3.13)
  • GitHub Check: Test Python (7, 3.13)
  • GitHub Check: Test Python (8, 3.13)
  • GitHub Check: Test Python (6, 3.13)
  • GitHub Check: Test Python (5, 3.13)
  • GitHub Check: Test Python (11, 3.13)
  • GitHub Check: Test Python (9, 3.10)
  • GitHub Check: Test Python (3, 3.10)
  • GitHub Check: Test Python (9, 3.13)
  • GitHub Check: Test Python (7, 3.10)
  • GitHub Check: Test Python (2, 3.10)
  • GitHub Check: Test Python (6, 3.10)
  • GitHub Check: Test Python (3, 3.13)
  • GitHub Check: Build C++ (cuda120, cuda)
  • GitHub Check: Build C++ (rocm, rocm)
  • GitHub Check: Build C++ (cpu, cpu)
  • GitHub Check: Build C++ (clang, clang)
  • GitHub Check: Build wheels for cp311-manylinux_x86_64
  • GitHub Check: Build wheels for cp311-macosx_x86_64
  • GitHub Check: Build wheels for cp311-macosx_arm64
  • GitHub Check: Build wheels for cp311-win_amd64
  • GitHub Check: Build wheels for cp310-manylinux_aarch64
  • GitHub Check: Analyze (c-cpp)
  • GitHub Check: Analyze (python)
  • GitHub Check: Test C++ (true, false, false, true)
  • GitHub Check: Test C++ (false, false, false, true)
  • GitHub Check: Test C++ (true, true, true, false)
  • GitHub Check: Test C++ (false, true, true, false)
  • GitHub Check: Build C library (2.18, libdeepmd_c.tar.gz)
🔇 Additional comments (14)
source/api_cc/src/DeepSpinPT.cc (4)

254-278: LGTM! Conditional virial handling follows correct patterns.

The check-then-access pattern using outputs.contains() before outputs.at() is safe, and clearing the vector when virial is absent prevents stale data issues.


300-313: LGTM! Extended virial handling with proper atom mapping.

Using "extended_virial" key is correct for the extended-atom compute path, and select_map with stride 9 properly handles the 3×3 per-atom virial tensor mapping back to original indices.


424-447: LGTM! Virial handling in standalone compute path.

The conditional virial extraction follows the same safe pattern as the first compute method.


456-466: LGTM! Correct use of "atom_virial" key for non-extended compute path.

Using "atom_virial" (instead of "extended_virial") is correct here since this compute method doesn't involve ghost/extended atoms and doesn't require index remapping via select_map.

source/api_cc/tests/test_deeppot_dpa_pt_spin.cc (6)

151-157: Conditional virial handling pattern looks good.

The early return when virial.empty() is a reasonable approach for incremental feature enablement. The size assertion before the comparison loop protects against out-of-bounds access.


188-204: Virial and atomic virial validation looks correct.

The size assertions at lines 191 and 201 properly guard against out-of-bounds access before the comparison loops. The early return pattern allows graceful handling when virial data is not yet available.


238-281: Expected value arrays are correctly sized.

All expected arrays (expected_e, expected_f, expected_fm, expected_tot_v, expected_atom_v) have the correct element counts for 6 atoms.


297-298: SetUp validation is complete.

Both expected_tot_v and expected_atom_v size checks are present, which is the correct pattern.


423-429: Virial handling in LMP nlist test is consistent.

The same conditional pattern is applied correctly here.


468-484: Atomic virial handling is consistent with other tests.

The size assertions at lines 471 and 481 properly guard the comparison loops.

source/tests/pt/model/test_autodiff.py (4)

60-90: Solid finite-difference helper for cell derivatives.
Clear central-difference implementation with good shape documentation.
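
As an aside, a minimal sketch of the central-difference idea behind such a helper, using a toy analytic energy in place of a real model (names and tolerances are illustrative, not the test's actual helper):

import numpy as np


def toy_energy(cell: np.ndarray) -> float:
    # toy scalar function of the 3x3 cell, standing in for a model's energy
    return float(np.sum(cell * cell))


def num_dE_dcell(cell: np.ndarray, eps: float = 1e-4) -> np.ndarray:
    # central-difference derivative of the energy w.r.t. each cell component
    grad = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            cell_p = cell.copy()
            cell_m = cell.copy()
            cell_p[i, j] += eps
            cell_m[i, j] -= eps
            grad[i, j] = (toy_energy(cell_p) - toy_energy(cell_m)) / (2.0 * eps)
    return grad


cell = np.diag([3.0, 4.0, 5.0]) + 0.1 * np.ones((3, 3))
assert np.allclose(num_dE_dcell(cell), 2.0 * cell, atol=1e-6)  # analytic dE/dcell = 2 * cell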


296-302: Spin virial coverage added cleanly.
The setup mirrors existing spin tests and is easy to follow.


304-347: Shear-based finite-difference virial test looks solid.
Good coverage for spin-enabled virial via cell perturbations.


177-200: No action needed—eval_model already properly guards spin inputs by model capability.

The eval_model function determines whether to pass spins to the model based on the model's has_spin attribute (line 133-135), not on whether the spins parameter is provided. Even if spins is passed for a non-spin model, the spin kwarg will not be added to the model call due to the guard on line 175-176 (if has_spin: input_dict["spin"] = batch_spin). The suggested conditional guard is redundant.

Likely an incorrect or invalid review comment.
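
A tiny sketch of the guard pattern being described (illustrative names; the actual logic lives in source/tests/pt/common.py):

def build_model_input(coord, atype, spin, has_spin: bool) -> dict:
    # spins are only forwarded when the model itself supports them
    input_dict = {"coord": coord, "atype": atype}
    if has_spin:
        input_dict["spin"] = spin
    return input_dict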


Contributor

coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@source/api_cc/tests/test_deeppot_dpa_pt_spin.cc`:
- Around line 112-116: Add a size guard for expected_atom_v before
cpu_build_nlist_atomic is called: check that expected_atom_v.size() equals
natoms * 3 (same pattern used for expected_f/expected_fm) to prevent
out-of-bounds indexing later in the test; insert an EXPECT_EQ(natoms * 3,
expected_atom_v.size()) immediately after natoms is set (near where
expected_f/expected_fm checks occur) so the nopbc-like assertion ensures safe
indexing in cpu_build_nlist_atomic.

njzjz linked an issue Jan 20, 2026 that may be closed by this pull request

Member

njzjz left a comment


The LAMMPS tests can be recovered if they have passed

def test_pair_deepmd_virial(lammps) -> None:
    lammps.pair_style(f"deepspin {pb_file.resolve()}")
    lammps.pair_coeff("* *")
    lammps.compute("peatom all pe/atom pair")
    lammps.compute("pressure all pressure NULL pair")
    lammps.compute("virial all centroid/stress/atom NULL pair")
    lammps.variable("eatom atom c_peatom")
    # for ii in range(9):
    #     jj = [0, 4, 8, 3, 6, 7, 1, 2, 5][ii]
    #     lammps.variable(f"pressure{jj} equal c_pressure[{ii+1}]")
    # for ii in range(9):
    #     jj = [0, 4, 8, 3, 6, 7, 1, 2, 5][ii]
    #     lammps.variable(f"virial{jj} atom c_virial[{ii+1}]")
    # lammps.dump(
    #     "1 all custom 1 dump id " + " ".join([f"v_virial{ii}" for ii in range(9)])
    # )
    lammps.run(0)
    assert lammps.eval("pe") == pytest.approx(expected_e)
    for ii in range(4):
        assert lammps.atoms[ii].force == pytest.approx(
            expected_f[lammps.atoms[ii].id - 1]
        )
    idx_map = lammps.lmp.numpy.extract_atom("id")[: coord.shape[0]] - 1
    assert np.array(lammps.variables["eatom"].value) == pytest.approx(
        expected_ae[idx_map]
    )
    # vol = box[1] * box[3] * box[5]
    # for ii in range(6):
    #     jj = [0, 4, 8, 3, 6, 7, 1, 2, 5][ii]
    #     assert np.array(
    #         lammps.variables[f"pressure{jj}"].value
    #     ) / constants.nktv2p == pytest.approx(
    #         -expected_v[idx_map, jj].sum(axis=0) / vol
    #     )
    # for ii in range(9):
    #     assert np.array(
    #         lammps.variables[f"virial{ii}"].value
    #     ) / constants.nktv2p == pytest.approx(expected_v[idx_map, ii])

Collaborator

iProzd commented Jan 21, 2026

We might want to mark this PR as a draft until the UT and TF modifications are finished.

iProzd marked this pull request as draft January 21, 2026 11:37
OutisLi marked this pull request as ready for review January 23, 2026 09:53

Contributor

coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
source/lmp/tests/test_lammps_spin.py (1)

283-311: Restrict pressure variable loop to 6 components; compute pressure only outputs 6 components, not 9.

The compute pressure command outputs a global vector of 6 components ([1]=Pxx, [2]=Pyy, [3]=Pzz, [4]=Pxy, [5]=Pxz, [6]=Pyz), so accessing indices 7–9 via c_pressure[7], c_pressure[8], c_pressure[9] is invalid and will cause LAMMPS to error.

The virial loop (lines 286–288) correctly uses range(9) since compute centroid/stress/atom outputs 9 components. The pressure loop (lines 283–285) must be changed to range(6):

Fix for pressure loop
-    for ii in range(9):
-        jj = [0, 4, 8, 3, 6, 7, 1, 2, 5][ii]
-        lammps.variable(f"pressure{jj} equal c_pressure[{ii + 1}]")
+    for ii in range(6):
+        jj = [0, 4, 8, 3, 6, 7][ii]
+        lammps.variable(f"pressure{jj} equal c_pressure[{ii + 1}]")
🤖 Fix all issues with AI agents
In `@deepmd/tf/descriptor/se_a.py`:
- Around line 1399-1429: The current natoms_not_match implementation misuses
tf.unique_with_counts on ghost_atype which returns counts only for present types
and leads to misaligned or out-of-range indexing when some types have zero
ghosts; replace the unique_with_counts approach with a fixed-length per-type
count (length self.ntypes) — e.g. compute ghost_natoms as
tf.math.bincount(ghost_atype, minlength=self.ntypes) or by summing boolean
equality masks per type — then build ghost_natoms_index = tf.concat([[0],
tf.cumsum(ghost_natoms)], axis=0) and proceed to slice coord using ghost_natoms
and ghost_natoms_index (keeping references to natoms_not_match, natoms_match,
self.ntypes, self.spin.use_spin, ghost_atype, ghost_natoms, ghost_natoms_index
to locate changes).
🧹 Nitpick comments (3)
source/lmp/tests/test_lammps_spin_nopbc.py (2)

74-191: Consider centralizing large expected_v fixtures.

These large inline arrays are duplicated across spin test files; extracting to a shared fixture or data file would reduce maintenance drift.


295-301: Avoid hard‑coding the atom count in velocity deviation.

Using the actual atom count makes this test resilient to future fixture changes.

♻️ Proposed fix
-    expected_md_v = (
-        np.std([np.sum(expected_v[:], axis=0), np.sum(expected_v2[:], axis=0)], axis=0)
-        / 4
-    )
+    atom_count = coord.shape[0]
+    expected_md_v = (
+        np.std([np.sum(expected_v, axis=0), np.sum(expected_v2, axis=0)], axis=0)
+        / atom_count
+    )
source/lmp/tests/test_lammps_spin.py (1)

335-341: Avoid hard‑coding the atom count in velocity deviation.

Use the actual atom count so the test stays correct if fixtures change.

♻️ Proposed fix
-    expected_md_v = (
-        np.std([np.sum(expected_v[:], axis=0), np.sum(expected_v2[:], axis=0)], axis=0)
-        / 4
-    )
+    atom_count = coord.shape[0]
+    expected_md_v = (
+        np.std([np.sum(expected_v, axis=0), np.sum(expected_v2, axis=0)], axis=0)
+        / atom_count
+    )

Collaborator

iProzd left a comment


doc/model/train-energy-spin.md should be updated

Member

njzjz commented Jan 26, 2026

The LAMMPS tests can be recovered if they have passed

def test_pair_deepmd_virial(lammps) -> None:
    lammps.pair_style(f"deepspin {pb_file.resolve()}")
    lammps.pair_coeff("* *")
    lammps.compute("peatom all pe/atom pair")
    lammps.compute("pressure all pressure NULL pair")
    lammps.compute("virial all centroid/stress/atom NULL pair")
    lammps.variable("eatom atom c_peatom")
    # for ii in range(9):
    #     jj = [0, 4, 8, 3, 6, 7, 1, 2, 5][ii]
    #     lammps.variable(f"pressure{jj} equal c_pressure[{ii+1}]")
    # for ii in range(9):
    #     jj = [0, 4, 8, 3, 6, 7, 1, 2, 5][ii]
    #     lammps.variable(f"virial{jj} atom c_virial[{ii+1}]")
    # lammps.dump(
    #     "1 all custom 1 dump id " + " ".join([f"v_virial{ii}" for ii in range(9)])
    # )
    lammps.run(0)
    assert lammps.eval("pe") == pytest.approx(expected_e)
    for ii in range(4):
        assert lammps.atoms[ii].force == pytest.approx(
            expected_f[lammps.atoms[ii].id - 1]
        )
    idx_map = lammps.lmp.numpy.extract_atom("id")[: coord.shape[0]] - 1
    assert np.array(lammps.variables["eatom"].value) == pytest.approx(
        expected_ae[idx_map]
    )
    # vol = box[1] * box[3] * box[5]
    # for ii in range(6):
    #     jj = [0, 4, 8, 3, 6, 7, 1, 2, 5][ii]
    #     assert np.array(
    #         lammps.variables[f"pressure{jj}"].value
    #     ) / constants.nktv2p == pytest.approx(
    #         -expected_v[idx_map, jj].sum(axis=0) / vol
    #     )
    # for ii in range(9):
    #     assert np.array(
    #         lammps.variables[f"virial{ii}"].value
    #     ) / constants.nktv2p == pytest.approx(expected_v[idx_map, ii])

The file I commented on hasn't been updated.

Contributor

coderabbitai bot left a comment


Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
source/api_cc/src/DeepSpinTF.cc (1)

811-828: Critical bug: Multi-frame data loss in virial copy loop

The copy loop (lines 812–828) only handles frame 0. Although datom_virial_ is allocated for nframes * nall * 9 elements and select_map() populates datom_virial_tmp for all frames (line 804), the loop accesses only indices for frame 0:

  • Reads: datom_virial_tmp[new_idx * 9 + dd]
  • Writes: datom_virial_[ii * 9 + dd]

Frames 1 through nframes–1 remain zero-initialized, causing data loss for multi-frame backward passes. The loop must iterate over frames with proper index offsets, following the pattern used elsewhere in the file (lines 179–189):

for (int ff = 0; ff < nframes; ++ff) {
  for (int ii = 0; ii < nall; ++ii) {
    int new_idx = new_idx_map[ii];
    // ... existing logic with frame offset: 
    // datom_virial_[ff * nall * 9 + ii * 9 + dd] = datom_virial_tmp[ff * ... * 9 + new_idx * 9 + dd];
  }
}

The same issue affects dforce_ and dforce_mag_ assignments (lines 813–820).

🤖 Fix all issues with AI agents
In `@log.lammps`:
- Around line 1-23: Remove the accidental LAMMPS log file (log.lammps) from the
commit, stop tracking it, and add a rule to .gitignore to prevent future commits
of similar log files; specifically, delete log.lammps from the PR/commit, run
git rm --cached if it's already tracked so it is no longer versioned, and add an
entry (e.g., patterns matching *.lammps, *.log, or the specific filename) to
.gitignore to exclude subsequent local logs like the one showing "ERROR:
Unrecognized pair style 'deepmd'" and local paths such as
"/home/outisli/Research/dpmd/...".

In `@source/api_cc/tests/test_deeppot_dpa_pt_spin.cc`:
- Around line 152-158: The test currently silently returns when virial (or
atom_vir) is empty which allows missing outputs to pass despite
expected_tot_v/expected_atom_v being populated; update each block (the one using
virial, atom_vir, expected_tot_v, expected_atom_v, EPSILON) to assert
non‑emptiness before comparing (e.g., ASSERT_FALSE(virial.empty()) or
ASSERT_FALSE(atom_vir.empty()) with a clear message) or explicitly call
GTEST_SKIP() when virial support is unavailable, then proceed with the
EXPECT_EQ/EXPECT_LT comparisons; apply this pattern to all similar occurrences
listed in the comment.

In `@source/api_cc/tests/test_deeppot_tf_spin.cc`:
- Around line 382-399: The test still assumes atom_vir length is natoms * 9 but
expected_v now contains (natoms + 2) * 9 entries; update the assertions/loops to
validate the extended layout. Specifically, change checks referencing atom_vir
and the loop bounds (the EXPECT_EQ/for loops that use natoms * 9 and the final
loop validating atom_vir) to use (natoms + 2) * 9 (or split expected_v into a
real-atom slice and a virtual-atom slice and assert each separately) so that
atom_vir, expected_v and their comparisons align; adjust any related EXPECT_EQ
for virial sizing if needed to reflect the new layout in the test functions
around atom_vir, expected_v, natoms, and virial.

In `@source/lmp/tests/test_lammps_spin.py`:
- Around line 308-311: The test is using the loop index ii to access
lammps.variables[f"virial{ii}"] but the variables were defined with the mapped
index (virial{jj}), causing a mismatch against expected_v[idx_map, ii]; change
the access so the virial name uses the mapped index (e.g.,
lammps.variables[f"virial{idx_map[ii]}"] or iterate using jj from idx_map) and
keep the comparison against the corresponding expected_v entry to ensure names
and indices align (references: lammps.variables, virial{jj}, ii, idx_map,
expected_v).
♻️ Duplicate comments (2)
source/tests/pt/model/test_nosel.py (1)

1-205: Reviewer requested this file be removed.

A previous reviewer (iProzd) indicated that this file should be removed. Please confirm whether this test file should be included in this PR or removed as requested.


deepmd/tf/descriptor/se_a.py (1)

1399-1429: Fix ghost type counting to avoid misaligned indices.

tf.unique_with_counts returns counts only for types that actually appear in ghost_atype, and in appearance order rather than type order. When some types have zero ghost atoms or ghosts are not sorted by type, indexing ghost_natoms[i] by type id will produce incorrect results or cause out-of-range errors.

Use tf.math.bincount with a fixed length to ensure proper per-type counts:

🔧 Suggested fix
     def natoms_not_match(self, coord, natoms, atype):
         diff_coord_loc = self.natoms_match(coord, natoms)
         diff_coord_ghost = []
         aatype = atype[0, :]
         ghost_atype = aatype[natoms[0] :]
-        _, _, ghost_natoms = tf.unique_with_counts(ghost_atype)
+        ghost_natoms = tf.math.bincount(
+            ghost_atype,
+            minlength=self.ntypes,
+            maxlength=self.ntypes,
+            dtype=natoms.dtype,
+        )
         ghost_natoms_index = tf.concat([[0], tf.cumsum(ghost_natoms)], axis=0)
         ghost_natoms_index += natoms[0]

Note: The same bug pattern exists in deepmd/tf/model/ener.py:natoms_not_match (around line 453-454) and should be fixed there as well.

🧹 Nitpick comments (3)
source/tests/pt/model/test_nosel.py (3)

42-42: Prefix unused variables with underscore.

nf and nloc are unpacked but never used. Use underscore prefix to indicate they're intentionally unused.

Suggested fix
-        nf, nloc, nnei = self.nlist.shape
+        _nf, _nloc, nnei = self.nlist.shape

31-31: Unused module-level dtype variable.

This module-level dtype is shadowed by the loop variable on line 66 and is never actually used. Consider removing it.

Suggested fix
-dtype = env.GLOBAL_PT_FLOAT_PRECISION
-
-
 class TestDescrptDPA3Nosel(unittest.TestCase, TestCaseSingleFrameWithNlist):

143-205: Remove or enable commented-out test code.

Large blocks of commented-out code should either be removed or enabled. If this test_jit method is intended for future implementation, consider tracking it via an issue or a TODO comment rather than leaving the full implementation commented out.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@deepmd/pt/model/model/transform_output.py`:
- Around line 196-201: The current dr.squeeze(-2) in the transform_output logic
can only remove the vdef dimension when it equals 1 and will break for
vector/tensor outputs (e.g., dipole with vdef.shape=[3]); update the code around
dr, extended_coord_corr, dc_corr and dc so that you either (a) assert the
feature dimension is 1 before squeezing (e.g., check dr.shape[-2] == 1) or (b)
reshape/expand dr explicitly to align with extended_coord_corr (use
view/unsqueeze/expand to produce the correct [..., 1, v]×[..., v, 9] matmul
shapes), and add a clear assertion/error message if the shapes are
incompatible; ensure dc_corr is computed with the correct broadcasting and that
dc = dc + dc_corr works for both scalar and non-scalar vdef outputs.
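
For reference, a minimal sketch of option (a) is shown below; the tensor name
dr and the [..., v, 3] layout are assumptions taken from this comment rather
than from the actual source, and the guard only makes the scalar-output
assumption explicit:

import torch

def squeeze_scalar_deriv(dr: torch.Tensor) -> torch.Tensor:
    # dr is assumed to hold a per-atom derivative of shape [..., v, 3],
    # where v is the output feature dimension (vdef).
    if dr.shape[-2] != 1:
        raise NotImplementedError(
            "spin virial correction sketch assumes a scalar output (v == 1), "
            f"got v = {dr.shape[-2]}"
        )
    # Dropping the feature dimension is only safe after the check above.
    return dr.squeeze(-2)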

In `@deepmd/tf/descriptor/se_a.py`:
- Line 1421: The line aatype = atype[0, :] discards the ghost-type info of
every frame except the first; either iterate per frame or enforce homogeneity.
Change the code to operate on the full atype array (not just atype[0]), or add
a runtime assertion that all rows of atype are identical before using aatype,
and update any downstream uses (e.g., diff_coord_ghost slicing) to accept a
per-frame atype or the validated single-row aatype; the relevant code can be
located by searching for the variables atype, aatype and diff_coord_ghost, and
the per-frame loop or equality assertion belongs there.
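
A minimal sketch of the suggested homogeneity check follows, assuming atype is
a [nframes, nall] integer tensor as in the surrounding TF code; the function
name and message text are illustrative only:

import tensorflow as tf

def first_frame_atype_checked(atype: tf.Tensor) -> tf.Tensor:
    # Compare every frame's atype row against the first frame's row.
    all_frames_equal = tf.reduce_all(tf.equal(atype, atype[:1, :]))
    with tf.control_dependencies(
        [
            tf.debugging.Assert(
                all_frames_equal,
                ["natoms_not_match assumes identical atype across all frames"],
            )
        ]
    ):
        # Only hand back the first frame's types once the check is attached.
        return tf.identity(atype[0, :])
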
🧹 Nitpick comments (8)
source/tests/pt/model/test_nosel.py (3)

42-42: Unused unpacked variables nf and nloc.

Only nnei is used from the unpacked shape. Prefix unused variables with _ to satisfy linters.

Suggested fix
-        nf, nloc, nnei = self.nlist.shape
+        _nf, _nloc, nnei = self.nlist.shape

105-117: In-place mutation of repflow shared between dd0 and dd1 is fragile.

repflow is the same object passed to dd0 on line 94. Mutating use_dynamic_sel in-place at line 105 before constructing dd1 is safe only if DescrptDPA3 copies the args eagerly at construction time. If it holds a reference (or if future refactors introduce lazy reads), dd0's behavior could silently change, invalidating the test.

Consider creating a separate RepFlowArgs for dd1 or using copy.deepcopy:

Suggested fix
+            import copy
+            repflow_dyn = copy.deepcopy(repflow)
+            repflow_dyn.use_dynamic_sel = True
-            repflow.use_dynamic_sel = True

             # dpa3 new impl
             dd1 = DescrptDPA3(
                 self.nt,
-                repflow=repflow,
+                repflow=repflow_dyn,

143-205: Large commented-out test_jit block.

Consider removing this or converting it to a skipped test (@unittest.skip("TODO: enable JIT test")) so it's tracked and discoverable rather than being silent dead code.
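
If the skip route is taken, a placeholder along these lines (class body
simplified, reason text illustrative) keeps the test visible in reports instead
of hiding it in comments:

import unittest

class TestDescrptDPA3Nosel(unittest.TestCase):  # simplified stand-in for the real class
    @unittest.skip("TODO: enable JIT test for DPA3 with dynamic sel")
    def test_jit(self) -> None:
        ...  # body stays out of the file until JIT support is ready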

source/lmp/tests/test_lammps_spin_nopbc.py (1)

295-301: Unnecessary [:] slice — inconsistent with the rest of the file.

Line 296 uses expected_v[:] and expected_v2[:], while line 354 in the new test_pair_deepmd_model_devi_atomic_relative_v test (and the reference tests in test_lammps_spin.py) uses expected_v and expected_v2 without the slice. The [:] is a no-op here and can be removed for consistency.

Suggested fix
     expected_md_v = (
-        np.std([np.sum(expected_v[:], axis=0), np.sum(expected_v2[:], axis=0)], axis=0)
+        np.std([np.sum(expected_v, axis=0), np.sum(expected_v2, axis=0)], axis=0)
         / 4
     )
source/api_cc/tests/test_deeppot_dpa_pt_spin.cc (1)

261-276: Nopbc expected_atom_v uses fewer significant digits than the PBC counterpart.

PBC fixture (line 86) uses scientific notation with ~8 significant figures (e.g., 1.56445561e-03), while this nopbc fixture uses fixed notation with ~6 significant figures (e.g., 0.00223849). Both are adequate for the EPSILON of 1e-7, but aligning precision style would improve consistency and reduce risk of borderline failures.

source/lmp/tests/test_lammps_spin.py (2)

302-307: Volume calculation relies on box origin being zero.

vol = box[1] * box[3] * box[5] computes xhi * yhi * zhi, which equals the box volume only because xlo = ylo = zlo = 0. This is fine for the fixed test data used here; the implicit assumption is just worth noting.
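
For clarity, a form that does not rely on a zero origin could be used instead;
the [xlo, xhi, ylo, yhi, zlo, zhi] layout of box is inferred from the indices
used above:

def box_volume(box) -> float:
    # Orthogonal box volume from bound pairs; assumes no tilt factors.
    return (box[1] - box[0]) * (box[3] - box[2]) * (box[5] - box[4])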


336-342: Hardcoded atom count 4 in virial normalization.

The divisor 4 on lines 338 and 395 is the number of real atoms. Consider using coord.shape[0] or a named constant (e.g., natoms = coord.shape[0]) for clarity and to avoid silent breakage if the test data changes.

source/tests/pt/model/test_ener_spin_model.py (1)

114-180: Consider adding assertions for the new coord_corr and extended_coord_corr return values.

test_input_output_process verifies every other output of process_spin_input and process_spin_input_lower, but the new virial correction tensors are discarded with _. Adding a basic shape/value check (e.g., verifying that coord_corr[:, :nloc] is zeros and coord_corr[:, nloc:] equals -spin_dist) would strengthen coverage of the new code path.
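
A small helper along these lines would cover the new path; the expected
semantics (zeros for real atoms, minus the spin displacement for virtual atoms)
and the spin_dist name are taken from this comment rather than verified against
the implementation:

import torch

def check_coord_corr(coord_corr: torch.Tensor, spin_dist: torch.Tensor, nloc: int) -> None:
    # Real atoms should carry no coordinate correction ...
    torch.testing.assert_close(
        coord_corr[:, :nloc], torch.zeros_like(coord_corr[:, :nloc])
    )
    # ... while virtual (spin) atoms are shifted back by the spin displacement.
    torch.testing.assert_close(coord_corr[:, nloc:], -spin_dist)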

@OutisLi OutisLi marked this pull request as draft February 13, 2026 09:01
iProzd and others added 5 commits February 14, 2026 12:33
- Remove accidentally committed log.lammps file
- Fix virial validation to assert non-empty instead of silently skipping
- Fix atom_vir sizing to use (natoms + 2) * 9 layout consistently
- Fix virial variable name/index mismatch in LAMMPS spin test

Co-authored-by: factory-droid[bot] <138933559+factory-droid[bot]@users.noreply.github.com>
Add runtime check to ensure all frames have the same atype vector,
preventing silent errors when ghost-atom layouts differ across frames.


Development

Successfully merging this pull request may close these issues.

[Feature Request] pt: support virial for the spin model

3 participants