feat(pt): support spin virial (Rebased and updated version of #4545 by @iProzd.)#5156
OutisLi wants to merge 5 commits into deepmodeling:master.
Conversation
Pull request overview
This pull request adds support for spin virial calculations in the PyTorch backend by implementing virial corrections for spin models. The changes enable the computation of virial tensors when dealing with spin systems by introducing coordinate corrections that account for the virtual atoms used in spin representations.
Changes:
- Added virial correction mechanism for spin models through coordinate corrections
- Updated C/C++ API to enable virial output for spin models
- Extended test coverage to include spin virial validation
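To make the mechanism concrete, here is a minimal, self-contained sketch of how a per-atom coordinate correction enters a virial computed from forces. It only illustrates the idea, not the PR's implementation; the names (`virial_with_correction`, `coord_corr`, `spin_dist`) are made up for the example. Real atoms carry a zero correction, while virtual (spin) atoms are shifted back by the spin displacement before the position-force outer product is accumulated.

```python
import torch


def virial_with_correction(coords, forces, coord_corr):
    """Toy virial: W = -sum_i (r_i + dr_i) outer f_i, flattened to 9 components.

    coords, forces, coord_corr: (nframes, natoms, 3) tensors;
    coord_corr is zero for real atoms and -spin_dist for virtual atoms.
    """
    corrected = coords + coord_corr  # shift virtual atoms back onto their real atoms
    # sum over atoms: (nf, na, 3) x (nf, na, 3) -> (nf, 3, 3)
    virial = -torch.einsum("fia,fib->fab", corrected, forces)
    return virial.reshape(-1, 9)


# tiny smoke test with zero correction (pure real atoms)
coords = torch.rand(1, 4, 3)
forces = torch.rand(1, 4, 3)
coord_corr = torch.zeros_like(coords)
print(virial_with_correction(coords, forces, coord_corr).shape)  # torch.Size([1, 9])
```

In the PR itself the correction is applied inside the derivative pipeline (see the transform_output changes below) rather than by recomputing the virial from scratch; the sketch only shows why a virtual-atom displacement needs compensating.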
Reviewed changes
Copilot reviewed 14 out of 14 changed files in this pull request and generated no comments.
Show a summary per file
| File | Description |
|---|---|
| deepmd/pt/model/model/spin_model.py | Core implementation of spin virial correction through coordinate corrections returned by process_spin_input methods |
| deepmd/pt/model/model/transform_output.py | Added extended_coord_corr parameter to apply virial corrections |
| deepmd/pt/model/model/make_model.py | Propagated coord_corr_for_virial parameter through forward methods |
| deepmd/pt/loss/ener_spin.py | Added virial loss computation for spin models |
| source/api_cc/src/DeepSpinPT.cc | Uncommented virial computation code to enable spin virial output |
| source/api_c/src/c_api.cc | Uncommented virial assignment code |
| source/api_c/include/deepmd.hpp | Uncommented virial indexing loops |
| source/tests/universal/pt/model/test_model.py | Enabled spin virial testing for PT backend |
| source/tests/universal/common/cases/model/utils.py | Updated test logic to conditionally test spin virial |
| source/tests/pt/model/test_autodiff.py | Added spin virial test class and updated test to include spin |
| source/tests/pt/model/test_ener_spin_model.py | Updated test to handle new return value from process_spin_input |
| deepmd/pt/model/network/utils.py | Added Optional import (unused in shown diff) |
| deepmd/pt/model/descriptor/repformers.py | Code formatting changes (no functional change) |
| deepmd/pt/model/descriptor/repflow_layer.py | Code formatting changes (no functional change) |
| source/tests/pt/model/test_nosel.py | New test file for DPA3 descriptor with dynamic selection |
Comments suppressed due to low confidence (2)
deepmd/pt/model/model/spin_model.py:56
- The return type annotation is incorrect. The method now returns three values (coord_spin, atype_spin, coord_corr) on line 69, but the annotation only specifies two return values. Update to `tuple[torch.Tensor, torch.Tensor, torch.Tensor]`.
def process_spin_input(
self, coord: torch.Tensor, atype: torch.Tensor, spin: torch.Tensor
) -> tuple[torch.Tensor, torch.Tensor]:
deepmd/pt/model/model/spin_model.py:78
- The return type annotation is incorrect. The method now returns five values on lines 111-117 (extended_coord_updated, extended_atype_updated, nlist_updated, mapping_updated, extended_coord_corr), but the annotation only specifies four return values. Update to `tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor | None, torch.Tensor]`.
def process_spin_input_lower(
self,
extended_coord: torch.Tensor,
extended_atype: torch.Tensor,
extended_spin: torch.Tensor,
nlist: torch.Tensor,
mapping: torch.Tensor | None = None,
) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor | None]:
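For clarity, the corrected signatures that these two comments call for would look like the following sketch (bodies elided; this mirrors the suggested annotations, not the file's full code):

```python
import torch


class SpinModel:
    def process_spin_input(
        self, coord: torch.Tensor, atype: torch.Tensor, spin: torch.Tensor
    ) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
        ...  # returns (coord_spin, atype_spin, coord_corr)

    def process_spin_input_lower(
        self,
        extended_coord: torch.Tensor,
        extended_atype: torch.Tensor,
        extended_spin: torch.Tensor,
        nlist: torch.Tensor,
        mapping: torch.Tensor | None = None,
    ) -> tuple[
        torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor | None, torch.Tensor
    ]:
        ...  # returns (..., extended_coord_corr)
```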
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
📝 Walkthrough

This PR introduces virial (stress tensor) support for spin-enabled molecular dynamics models across PyTorch and TensorFlow frameworks. Changes include: coordinate corrections for virial calculations, virial loss computation, conditional virial output handling in C/C++ APIs, and comprehensive test coverage validating virial predictions.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Input as Input Processing
    participant Model as Spin Model
    participant Backbone as Backbone Model
    participant Transform as Output Transform
    participant Loss as Loss Computation
    Input->>Input: process_spin_input: Extract coord_corr<br/>(zeros + (-spin_dist))
    Input-->>Model: Returns: coord_updated,<br/>atype_updated, coord_corr
    Model->>Backbone: forward_common(coord, atype,<br/>coord_corr_for_virial)
    Backbone->>Transform: fit_output_to_model_output<br/>(extended_coord_corr)
    Transform->>Transform: Compute dc correction:<br/>dc += extended_coord_corr @ dc
    Transform-->>Backbone: Returns: corrected derivatives
    Backbone-->>Model: Returns: energy, virial,<br/>energy derivatives
    Model->>Loss: Forward pass with<br/>predicted virial
    Loss->>Loss: Compute l2_virial_loss<br/>= MSE(label_virial, pred_virial)
    Loss->>Loss: total_loss += pref_v *<br/>l2_virial_loss
    Loss-->>Loss: Compute rmse_v, mae_v
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: 3 passed, 1 failed (1 warning).
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
source/api_c/include/deepmd.hpp (1)
2695-2711: Consider explicitly documenting/handling spin `atom_virial` (still disabled). This overload still resizes `atom_virial[i]` but does not populate it (the block remains commented). If this is intentionally unsupported for spin models, consider documenting it in the API comment (or explicitly zeroing/clearing) to avoid consumers assuming it's meaningful.

deepmd/pt/model/model/spin_model.py (1)

71-78: Update return type annotation. The return type annotation still declares a 4-tuple, but the function now returns 5 values (including `extended_coord_corr`).

Proposed fix:

```diff
 def process_spin_input_lower(
     self,
     extended_coord: torch.Tensor,
     extended_atype: torch.Tensor,
     extended_spin: torch.Tensor,
     nlist: torch.Tensor,
     mapping: torch.Tensor | None = None,
-) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor | None]:
+) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor | None, torch.Tensor]:
```
🤖 Fix all issues with AI agents
In `@deepmd/pt/model/descriptor/repformers.py`:
- Around line 435-437: The assertion uses an undefined attribute self.n_dim in
DescrptBlockRepformers; change the check to use the correct embedding size
attribute self.g1_dim: after slicing extended_atype_embd into atype_embd
(variable names extended_atype_embd and atype_embd), update the assertion that
compares atype_embd.shape to expect [nframes, nloc, self.g1_dim] instead of
self.n_dim so it matches the class's defined embedding dimension.
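A self-contained toy of the corrected check, using stand-in values; `g1_dim` plays the role of `self.g1_dim` and the tensor is a placeholder, not the descriptor's real input:

```python
import torch

# Stand-in values; in DescrptBlockRepformers these come from the class and its inputs.
nframes, nloc, nghost, g1_dim = 2, 5, 3, 8

extended_atype_embd = torch.zeros(nframes, nloc + nghost, g1_dim)
atype_embd = extended_atype_embd[:, :nloc, :]

# The suggested assertion: compare against the defined embedding size (g1_dim),
# not an undefined n_dim attribute.
assert list(atype_embd.shape) == [nframes, nloc, g1_dim]
```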
♻️ Duplicate comments (1)
source/api_cc/src/DeepSpinPT.cc (1)
414-436: Same `"virial"` key-availability risk in the standalone `module.forward(...)` path. Same comment as the `forward_lower` path: if `"virial"` isn't guaranteed, guard and error clearly.
🧹 Nitpick comments (7)
source/tests/pt/model/test_nosel.py (4)
31-31: Module-level `dtype` is unused and shadowed. This variable is reassigned inside the test loop at line 66, making this module-level assignment dead code. Consider removing it to avoid confusion.

Suggested fix:

```diff
-dtype = env.GLOBAL_PT_FLOAT_PRECISION
-
-
 class TestDescrptDPA3Nosel(unittest.TestCase, TestCaseSingleFrameWithNlist):
```
42-42: Prefix unused unpacked variables with an underscore. `nf` and `nloc` are unpacked but never used. Per Python convention, prefix them with `_` to indicate they're intentionally unused.

Suggested fix:

```diff
-        nf, nloc, nnei = self.nlist.shape
+        _nf, _nloc, nnei = self.nlist.shape
```
105-117: Mutating the shared `repflow` object is fragile. After passing `repflow` to `dd0`, mutating `repflow.use_dynamic_sel = True` before creating `dd1` relies on `DescrptDPA3` copying values during `__init__` rather than holding a reference. This pattern is fragile and could break if the descriptor's implementation changes. Consider creating separate `RepFlowArgs` instances for clarity and robustness.

Suggested approach:

```diff
         # dpa3 new impl
         dd0 = DescrptDPA3(
             self.nt,
             repflow=repflow,
             # kwargs for descriptor
             exclude_types=[],
             precision=prec,
             use_econf_tebd=ect,
             type_map=["O", "H"] if ect else None,
             seed=GLOBAL_SEED,
         ).to(env.DEVICE)
-        repflow.use_dynamic_sel = True
+        repflow_dynamic = RepFlowArgs(
+            n_dim=20,
+            e_dim=10,
+            a_dim=10,
+            nlayers=3,
+            e_rcut=self.rcut,
+            e_rcut_smth=self.rcut_smth,
+            e_sel=nnei,
+            a_rcut=self.rcut - 0.1,
+            a_rcut_smth=self.rcut_smth,
+            a_sel=nnei,
+            a_compress_rate=acr,
+            n_multi_edge_message=nme,
+            axis_neuron=4,
+            update_angle=ua,
+            update_style=rus,
+            update_residual_init=ruri,
+            optim_update=optim,
+            smooth_edge_update=True,
+            sel_reduce_factor=1.0,
+            use_dynamic_sel=True,
+        )
         # dpa3 new impl
         dd1 = DescrptDPA3(
             self.nt,
-            repflow=repflow,
+            repflow=repflow_dynamic,
             # kwargs for descriptor
```

Alternatively, use `copy.deepcopy(repflow)` and then set the attribute on the copy.
143-205: Consider removing or documenting the commented-out test. This large block of commented-out code for `test_jit` lacks an explanation for why it's disabled. If it's a work-in-progress, add a `TODO` comment explaining the intent and when it should be enabled. If it's no longer needed, consider removing it to reduce code clutter.

deepmd/pt/model/network/utils.py (1)
2-4: Unused `Optional` import. The `Optional` import is added but not used in this file. The existing code uses PEP 604 union syntax (`int | None` on line 14) rather than `Optional[int]`. Consider removing this unused import unless it's needed for future changes in this PR.

🧹 Suggested fix:

```diff
 # SPDX-License-Identifier: LGPL-3.0-or-later
-from typing import (
-    Optional,
-)
-
 import torch
```

source/api_cc/src/DeepSpinPT.cc (1)
252-274: Guard missing `"virial"` key (avoid a runtime `outputs.at()` throw). If a TorchScript spin model doesn't emit `"virial"` (older checkpoints / config-dependent outputs), `outputs.at("virial")` will throw. Consider checking presence and raising a clearer error.

Proposed hardening (illustrative — please verify the c10::Dict API):

```diff
-  c10::IValue virial_ = outputs.at("virial");
+  // NOTE: verify the correct c10::Dict presence-check API for your libtorch version.
+  // The goal is to fail with a clearer message than an unhandled `at()` exception.
+  if (!outputs.contains("virial")) {
+    throw deepmd::deepmd_exception(
+        "Spin model output dict is missing key 'virial' (model may not support virial).");
+  }
+  c10::IValue virial_ = outputs.at("virial");
```

deepmd/pt/loss/ener_spin.py (1)
282-297: Make virial loss shape/dtype handling more robust. Right now it assumes `label["virial"]` is already `(-1, 9)` and that dtypes won't conflict with the in-place `loss += ...`. Reshaping the label virial and casting the increment avoids brittle failures.

Proposed patch:

```diff
         if self.has_v and "virial" in model_pred and "virial" in label:
             find_virial = label.get("find_virial", 0.0)
             pref_v = pref_v * find_virial
-            diff_v = label["virial"] - model_pred["virial"].reshape(-1, 9)
+            virial_label = label["virial"].reshape(-1, 9)
+            virial_pred = model_pred["virial"].reshape(-1, 9)
+            diff_v = virial_label - virial_pred
             l2_virial_loss = torch.mean(torch.square(diff_v))
             if not self.inference:
                 more_loss["l2_virial_loss"] = self.display_if_exist(
                     l2_virial_loss.detach(), find_virial
                 )
-            loss += atom_norm * (pref_v * l2_virial_loss)
+            loss += (atom_norm * (pref_v * l2_virial_loss)).to(GLOBAL_PT_FLOAT_PRECISION)
             rmse_v = l2_virial_loss.sqrt() * atom_norm
             more_loss["rmse_v"] = self.display_if_exist(rmse_v.detach(), find_virial)
             if mae:
                 mae_v = torch.mean(torch.abs(diff_v)) * atom_norm
                 more_loss["mae_v"] = self.display_if_exist(mae_v.detach(), find_virial)
```
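For reference, the same loss terms as a standalone toy function (names and the `atom_norm` / `pref_v` prefactors are illustrative, not the exact `ener_spin.py` code):

```python
import torch


def virial_loss_terms(label_virial, pred_virial, atom_norm, pref_v):
    """Toy spin virial loss: MSE over the 9 virial components per frame."""
    diff_v = label_virial.reshape(-1, 9) - pred_virial.reshape(-1, 9)
    l2_virial_loss = torch.mean(torch.square(diff_v))
    loss = atom_norm * (pref_v * l2_virial_loss)
    rmse_v = l2_virial_loss.sqrt() * atom_norm
    mae_v = torch.mean(torch.abs(diff_v)) * atom_norm
    return loss, rmse_v, mae_v


loss, rmse_v, mae_v = virial_loss_terms(
    torch.rand(2, 9), torch.rand(2, 3, 3), atom_norm=1.0 / 6, pref_v=1.0
)
print(float(loss), float(rmse_v), float(mae_v))
```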
📜 Review details
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (15)
deepmd/pt/loss/ener_spin.py
deepmd/pt/model/descriptor/repflow_layer.py
deepmd/pt/model/descriptor/repformers.py
deepmd/pt/model/model/make_model.py
deepmd/pt/model/model/spin_model.py
deepmd/pt/model/model/transform_output.py
deepmd/pt/model/network/utils.py
source/api_c/include/deepmd.hpp
source/api_c/src/c_api.cc
source/api_cc/src/DeepSpinPT.cc
source/tests/pt/model/test_autodiff.py
source/tests/pt/model/test_ener_spin_model.py
source/tests/pt/model/test_nosel.py
source/tests/universal/common/cases/model/utils.py
source/tests/universal/pt/model/test_model.py
🧰 Additional context used
🧠 Learnings (4)
📚 Learning: 2024-10-08T15:32:11.479Z
Learnt from: 1azyking
Repo: deepmodeling/deepmd-kit PR: 4169
File: deepmd/pt/loss/ener_hess.py:341-348
Timestamp: 2024-10-08T15:32:11.479Z
Learning: In `deepmd/pt/loss/ener_hess.py`, the `label` uses the key `"atom_ener"` intentionally to maintain consistency with the forked version.
Applied to files:
deepmd/pt/loss/ener_spin.py
📚 Learning: 2024-10-08T15:32:11.479Z
Learnt from: njzjz
Repo: deepmodeling/deepmd-kit PR: 4160
File: deepmd/dpmodel/utils/env_mat.py:52-64
Timestamp: 2024-10-08T15:32:11.479Z
Learning: Negative indices in `nlist` are properly handled by masking later in the computation, so they do not cause issues in indexing operations.
Applied to files:
deepmd/pt/model/descriptor/repflow_layer.py
📚 Learning: 2024-10-08T15:32:11.479Z
Learnt from: njzjz
Repo: deepmodeling/deepmd-kit PR: 4144
File: source/api_cc/tests/test_deeppot_dpa_pt.cc:166-246
Timestamp: 2024-10-08T15:32:11.479Z
Learning: Refactoring between test classes `TestInferDeepPotDpaPt` and `TestInferDeepPotDpaPtNopbc` is addressed in PR `#3905`.
Applied to files:
source/tests/pt/model/test_nosel.py
source/tests/universal/pt/model/test_model.py
📚 Learning: 2025-12-12T13:40:14.334Z
Learnt from: CR
Repo: deepmodeling/deepmd-kit PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-12T13:40:14.334Z
Learning: Run core tests with `pytest source/tests/tf/test_dp_test.py::TestDPTestEner::test_1frame -v` to validate basic functionality
Applied to files:
source/tests/pt/model/test_nosel.py
🧬 Code graph analysis (6)
source/api_c/src/c_api.cc (1)
data/raw/copy_raw.py (1)
copy(11-71)
deepmd/pt/loss/ener_spin.py (3)
deepmd/driver.py (1)
label(42-75)
deepmd/pt/loss/loss.py (1)
display_if_exist(44-54)
deepmd/entrypoints/test.py (1)
mae(247-260)
deepmd/pt/model/descriptor/repflow_layer.py (2)
deepmd/pt/model/network/utils.py (1)
aggregate(10-50)
deepmd/pt/model/descriptor/repformer_layer.py (2)
list_update_res_residual(1292-1309)
list_update(1312-1322)
source/tests/pt/model/test_nosel.py (1)
deepmd/dpmodel/descriptor/dpa3.py (1)
RepFlowArgs(59-250)
deepmd/pt/model/model/spin_model.py (2)
deepmd/dpmodel/model/spin_model.py (2)
concat_switch_virtual(190-208)
process_spin_input(43-53)
deepmd/pt/utils/spin.py (1)
concat_switch_virtual(6-30)
source/tests/pt/model/test_ener_spin_model.py (1)
deepmd/pt/model/model/spin_model.py (1)
process_spin_input(54-69)
🪛 Ruff (0.14.11)
source/tests/pt/model/test_nosel.py
42-42: Unpacked variable nf is never used
Prefix it with an underscore or any other dummy variable pattern
(RUF059)
42-42: Unpacked variable nloc is never used
Prefix it with an underscore or any other dummy variable pattern
(RUF059)
🔇 Additional comments (23)
deepmd/pt/model/model/transform_output.py (1)
152-206: LGTM! Virial correction logic is well-integrated. The `extended_coord_corr` parameter is properly gated with a None check and only applied when `vdef.c_differentiable` is True. The tensor operations compute the virial correction by combining the force derivative with the coordinate correction.

deepmd/pt/model/descriptor/repflow_layer.py (1)

1-1243: LGTM - Formatting-only changes. All modifications in this file are style/formatting changes that collapse multi-line expressions into single lines. No semantic or functional changes are introduced. The logic remains identical to the previous implementation.

source/tests/universal/pt/model/test_model.py (1)

747-748: LGTM - Appropriate test flag for spin virial support. The `test_spin_virial = True` flag enables virial testing specifically for the PyTorch spin energy model. The accompanying comment clearly documents that this flag can be removed once other backends implement spin virial support, which is good for future maintainability.

source/api_c/include/deepmd.hpp (1)

2594-2607: Virial reshape now consistent with other outputs (good). Copying `virial_flat` into per-model `virial[i]` aligns this path with energy/force/force_mag and avoids surprising "all zeros" virials in the model-deviation API.

source/api_c/src/c_api.cc (1)

865-869: Virial now actually returned from `DP_DeepSpinModelDeviCompute_variant` (good). This brings the virial behavior in line with force/force_mag flattening and fixes the previously "silently empty" virial output when requested.
source/tests/pt/model/test_ener_spin_model.py (2)
118-120: LGTM! The test correctly adapts to the updated `process_spin_input` signature that now returns a third element (coord_corr). Using `_` to discard the unused value is appropriate here since this test focuses on coordinate and type transformations rather than virial corrections.

172-180: LGTM! The test correctly handles the extended return signature of `process_spin_input_lower`, which now returns five values including `extended_coord_corr`. The underscore appropriately discards the virial correction tensor that isn't relevant to this particular test case.

source/tests/universal/common/cases/model/utils.py (3)

918-921: LGTM! The `test_spin_virial` flag provides a clean mechanism to incrementally enable virial testing for spin models. The condition `if not test_spin or test_spin_virial` correctly maintains backward compatibility while allowing virial tests to run for spin configurations when the flag is explicitly set.

931-933: LGTM! Correctly includes spin in the virial calculation input when `test_spin` is enabled. This ensures the virial finite-difference test properly exercises the spin-virial code path.

952-954: LGTM! Consistently passes spin to the model when `test_spin` is enabled for the actual virial computation, matching the finite-difference setup above.

source/tests/pt/model/test_autodiff.py (3)

144-154: LGTM! The `VirialTest` class is properly extended to support spin models. The spin tensor generation and `test_keys` logic mirror the pattern already established in `ForceTest`, ensuring consistent behavior across both test types.

166-167: LGTM! The spin tensor is correctly passed to `eval_model` in the virial inference function, enabling proper virial computation for spin-enabled models.

261-268: LGTM! The new `TestEnergyModelSpinSeAVirial` class properly mirrors `TestEnergyModelSpinSeAForce`, completing the autodiff test coverage for spin virial functionality. The `test_spin = True` flag correctly enables spin-aware virial testing.

deepmd/pt/model/model/make_model.py (3)
141-142: LGTM! The new `coord_corr_for_virial` parameter is properly added as an optional argument with `None` default, maintaining backward compatibility with existing callers.

190-196: LGTM! The coordinate correction is correctly extended to the neighbor list region using `torch.gather` with the mapping tensor. The dtype conversion to `cc.dtype` ensures consistency with the coordinate precision. This follows the same extension pattern used elsewhere in the codebase.
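A small self-contained sketch of that gather-based extension pattern (shapes and names are illustrative, not the `make_model.py` code):

```python
import torch

nf, nloc, nall = 1, 4, 7
coord_corr = torch.rand(nf, nloc, 3)          # per-local-atom correction
mapping = torch.randint(0, nloc, (nf, nall))  # extended index -> local index

# Expand the mapping to gather all 3 components per atom, mirroring how
# extended coordinates are usually built from local ones.
index = mapping.unsqueeze(-1).expand(nf, nall, 3)
extended_coord_corr = torch.gather(coord_corr, dim=1, index=index)
print(extended_coord_corr.shape)  # torch.Size([1, 7, 3])
```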
263-264: LGTM! The `extended_coord_corr` parameter is properly propagated through the lower interface with appropriate documentation. The parameter is correctly forwarded to `fit_output_to_model_output` where it will be used for virial correction.

Also applies to: 290-291, 324-324
deepmd/pt/model/model/spin_model.py (7)
62-69: LGTM! The virial correction computation is correctly implemented. The `coord_corr` tensor with `[zeros_like(coord), -spin_dist]` properly represents that real atoms have zero correction while virtual atoms need a `-spin_dist` correction to account for their artificial displacement in virial calculations.
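A toy sketch of that layout, assuming real atoms come first and virtual (spin) atoms follow; `spin_dist` stands in for the virtual-atom displacement:

```python
import torch

nf, nloc = 1, 3
coord = torch.rand(nf, nloc, 3)
spin_dist = torch.rand(nf, nloc, 3)  # displacement of each virtual atom from its real atom

# Real atoms need no correction; virtual atoms are shifted back by -spin_dist.
coord_corr = torch.cat([torch.zeros_like(coord), -spin_dist], dim=1)
print(coord_corr.shape)  # torch.Size([1, 6, 3])
```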
89-100: LGTM! The extended coordinate correction is correctly computed using `concat_switch_virtual` with `torch.zeros_like(extended_coord)` for real atoms and `-extended_spin_dist` for virtual atoms, maintaining consistency with `process_spin_input`.

419-432: LGTM! The `forward_common` method correctly unpacks the new 3-tuple return from `process_spin_input` and passes `coord_corr_for_virial` to the backbone model's `forward_common` method.

473-495: LGTM! The `forward_common_lower` method correctly unpacks the new 5-tuple return from `process_spin_input_lower` and passes `extended_coord_corr_for_virial` to the backbone model's `forward_common_lower`.

567-572: LGTM! The `translated_output_def` properly includes virial and atom_virial output definitions when `do_grad_c("energy")` is true, enabling virial outputs for spin models.

600-604: LGTM! The forward method correctly outputs virial and conditionally atom_virial based on the `do_grad_c` and `do_atomic_virial` flags, consistent with non-spin energy models.

640-645: LGTM! The `forward_lower` method correctly outputs virial and conditionally `extended_virial` (matching the naming convention for extended outputs in lower interfaces).
Codecov Report: ❌ Patch coverage is

Additional details and impacted files:

```
@@            Coverage Diff             @@
##           master    #5156      +/-   ##
==========================================
+ Coverage   82.07%   82.31%   +0.24%
==========================================
  Files         732      604     -128
  Lines       73974    53960   -20014
  Branches     3615      978    -2637
==========================================
- Hits        60711    44419   -16292
+ Misses      12100     8978    -3122
+ Partials     1163      563     -600
```

☔ View full report in Codecov by Sentry.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@source/api_cc/tests/test_deeppot_dpa_pt_spin.cc`:
- Around line 193-195: The test is indexing atom_vir (and expected_atom_v)
without verifying their lengths; add size assertions before the loop that
compares elements: check that atom_vir.size() == natoms*9 and
expected_atom_v.size() == natoms*9 (or the expected length expression used in
this test) so the subsequent for-loop comparing fabs(atom_vir[ii] -
expected_atom_v[ii]) is safe; apply the same pre-checks wherever the test later
iterates over atom_vir (the second comparison block that mirrors this loop).
🧹 Nitpick comments (1)
source/api_cc/tests/test_deeppot_dpa_pt_spin.cc (1)
46-49: Include per-atom virial (av) in the regen snippet. Keeps the comment aligned with the new `av` expectations so values can be regenerated consistently.

📝 Suggested tweak:

```diff
-// print(f"{e.ravel()=} {f.ravel()=} {v.ravel()=} {fm.ravel()=}
-// {ae.ravel()=}")
+// print(f"{e.ravel()=} {f.ravel()=} {v.ravel()=} {av.ravel()=} {fm.ravel()=}
+// {ae.ravel()=}")
@@
-// print(f"{e.ravel()=} {f.ravel()=} {v.ravel()=} {fm.ravel()=}
-// {ae.ravel()=}")
+// print(f"{e.ravel()=} {f.ravel()=} {v.ravel()=} {av.ravel()=} {fm.ravel()=}
+// {ae.ravel()=}")
```

Also applies to: 223-227
📜 Review details
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
source/api_cc/tests/test_deeppot_dpa_pt_spin.cc
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2024-10-08T15:32:11.479Z
Learnt from: njzjz
Repo: deepmodeling/deepmd-kit PR: 4144
File: source/api_cc/tests/test_deeppot_dpa_pt.cc:166-246
Timestamp: 2024-10-08T15:32:11.479Z
Learning: Refactoring between test classes `TestInferDeepPotDpaPt` and `TestInferDeepPotDpaPtNopbc` is addressed in PR `#3905`.
Applied to files:
source/api_cc/tests/test_deeppot_dpa_pt_spin.cc
🧬 Code graph analysis (1)
source/api_cc/tests/test_deeppot_dpa_pt_spin.cc (2)
source/api_cc/tests/test_deeppot_a.cc (2)
ener(150-156), ener(150-154)
source/api_cc/tests/test_deeppot_pt.cc (2)
ener(132-138), ener(132-136)
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@source/api_cc/tests/test_deeppot_dpa_pt_spin.cc`:
- Around line 54-60: The test's expected_f vector is missing one force component
(contains 17 values instead of natoms*3 == 18) causing the SetUp assertion to
fail; regenerate the full 18-value expected_f using the Python snippet in the
test comments (or the same calculation used to produce the other values),
replace the existing std::vector<VALUETYPE> expected_f in
test_deeppot_dpa_pt_spin.cc with the complete 18-element list, and ensure the
final element(s) correspond to the same ordering and precision used elsewhere in
the test so EXPECT_EQ(natoms * 3, expected_f.size()) passes.
🧹 Nitpick comments (1)
source/api_cc/tests/test_deeppot_dpa_pt_spin.cc (1)
115-119: Add an `expected_atom_v` size assertion for consistency with `TestInferDeepSpinDpaPtNopbc`. `TestInferDeepSpinDpaPtNopbc::SetUp` validates both `expected_tot_v.size()` and `expected_atom_v.size()` (lines 297-298), but this class only validates `expected_tot_v.size()`. Add the missing check for consistency and early error detection.

🔧 Suggested fix:

```diff
   EXPECT_EQ(9, expected_tot_v.size());
+  EXPECT_EQ(natoms * 9, expected_atom_v.size());
   expected_tot_e = 0.;
```
📜 Review details
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
source/api_cc/src/DeepSpinPT.cc
source/api_cc/tests/test_deeppot_dpa_pt_spin.cc
source/tests/pt/model/test_autodiff.py
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2024-09-19T04:25:12.408Z
Learnt from: njzjz
Repo: deepmodeling/deepmd-kit PR: 4144
File: source/api_cc/tests/test_deeppot_dpa_pt.cc:166-246
Timestamp: 2024-09-19T04:25:12.408Z
Learning: Refactoring between test classes `TestInferDeepPotDpaPt` and `TestInferDeepPotDpaPtNopbc` is addressed in PR `#3905`.
Applied to files:
source/api_cc/tests/test_deeppot_dpa_pt_spin.cc
🧬 Code graph analysis (3)
source/api_cc/src/DeepSpinPT.cc (2)
source/api_c/include/deepmd.hpp (2)
select_map(3508-3522), select_map(3508-3511)
source/api_cc/src/common.cc (12)
select_map(935-961), select_map(935-941), select_map(964-982), select_map(964-970), select_map(1038-1044), select_map(1046-1053), select_map(1077-1083), select_map(1085-1092), select_map(1116-1122), select_map(1124-1131), select_map(1154-1161), select_map(1163-1170)
source/api_cc/tests/test_deeppot_dpa_pt_spin.cc (2)
source/api_cc/tests/test_deeppot_a.cc (2)
ener(150-156), ener(150-154)
source/api_cc/tests/test_deeppot_pt.cc (2)
ener(132-138), ener(132-136)
source/tests/pt/model/test_autodiff.py (2)
source/tests/universal/common/cases/model/utils.py (1)
stretch_box(838-843)
source/tests/pt/common.py (1)
eval_model(48-307)
🔇 Additional comments (14)
source/api_cc/src/DeepSpinPT.cc (4)
254-278: LGTM! Conditional virial handling follows correct patterns. The check-then-access pattern using `outputs.contains()` before `outputs.at()` is safe, and clearing the vector when virial is absent prevents stale data issues.

300-313: LGTM! Extended virial handling with proper atom mapping. Using the `"extended_virial"` key is correct for the extended-atom compute path, and `select_map` with stride 9 properly handles the 3×3 per-atom virial tensor mapping back to original indices.

424-447: LGTM! Virial handling in the standalone compute path. The conditional virial extraction follows the same safe pattern as the first compute method.

456-466: LGTM! Correct use of the `"atom_virial"` key for the non-extended compute path. Using `"atom_virial"` (instead of `"extended_virial"`) is correct here since this compute method doesn't involve ghost/extended atoms and doesn't require index remapping via `select_map`.

source/api_cc/tests/test_deeppot_dpa_pt_spin.cc (6)
151-157: Conditional virial handling pattern looks good. The early return when `virial.empty()` is a reasonable approach for incremental feature enablement. The size assertion before the comparison loop protects against out-of-bounds access.

188-204: Virial and atomic virial validation looks correct. The size assertions at lines 191 and 201 properly guard against out-of-bounds access before the comparison loops. The early return pattern allows graceful handling when virial data is not yet available.

238-281: Expected value arrays are correctly sized. All expected arrays (`expected_e`, `expected_f`, `expected_fm`, `expected_tot_v`, `expected_atom_v`) have the correct element counts for 6 atoms.

297-298: SetUp validation is complete. Both `expected_tot_v` and `expected_atom_v` size checks are present, which is the correct pattern.

423-429: Virial handling in the LMP nlist test is consistent. The same conditional pattern is applied correctly here.

468-484: Atomic virial handling is consistent with other tests. The size assertions at lines 471 and 481 properly guard the comparison loops.
source/tests/pt/model/test_autodiff.py (4)
60-90: Solid finite-difference helper for cell derivatives.
Clear central-difference implementation with good shape documentation.
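For readers unfamiliar with the pattern, a generic central-difference sketch for cell derivatives (independent of the helper in this test; the energy function and step size are placeholders):

```python
import numpy as np


def num_cell_deriv(energy_fn, cell, eps=1e-4):
    """d energy / d cell[i, j] by central differences; cell is a (3, 3) array."""
    deriv = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            cp, cm = cell.copy(), cell.copy()
            cp[i, j] += eps
            cm[i, j] -= eps
            deriv[i, j] = (energy_fn(cp) - energy_fn(cm)) / (2.0 * eps)
    return deriv


toy_energy = lambda c: np.linalg.det(c)  # toy energy: the cell volume
print(num_cell_deriv(toy_energy, np.eye(3)))
```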
296-302: Spin virial coverage added cleanly.
The setup mirrors existing spin tests and is easy to follow.
304-347: Shear-based finite-difference virial test looks solid.
Good coverage for spin-enabled virial via cell perturbations.
177-200: No action needed; `eval_model` already properly guards spin inputs by model capability. The `eval_model` function determines whether to pass `spins` to the model based on the model's `has_spin` attribute (lines 133-135), not on whether the `spins` parameter is provided. Even if `spins` is passed for a non-spin model, the spin kwarg will not be added to the model call due to the guard on lines 175-176 (`if has_spin: input_dict["spin"] = batch_spin`). The suggested conditional guard is redundant.

Likely an incorrect or invalid review comment.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@source/api_cc/tests/test_deeppot_dpa_pt_spin.cc`:
- Around line 112-116: Add a size guard for expected_atom_v before
cpu_build_nlist_atomic is called: check that expected_atom_v.size() equals
natoms * 3 (same pattern used for expected_f/expected_fm) to prevent
out-of-bounds indexing later in the test; insert an EXPECT_EQ(natoms * 3,
expected_atom_v.size()) immediately after natoms is set (near where
expected_f/expected_fm checks occur) so the nopbc-like assertion ensures safe
indexing in cpu_build_nlist_atomic.
njzjz left a comment
The LAMMPS tests can be recovered if they have passed: deepmd-kit/source/lmp/tests/test_lammps_spin_pt.py, lines 153 to 190 in 6c91e99.
We might mark this PR as draft before the modifications in UT and TF are finished.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
source/lmp/tests/test_lammps_spin.py (1)
283-311: Restrict the pressure variable loop to 6 components; `compute pressure` only outputs 6 components, not 9. The `compute pressure` command outputs a global vector of 6 components ([1]=Pxx, [2]=Pyy, [3]=Pzz, [4]=Pxy, [5]=Pxz, [6]=Pyz), so accessing indices 7–9 via `c_pressure[7]`, `c_pressure[8]`, `c_pressure[9]` is invalid and will cause LAMMPS to error.

The virial loop (lines 286–288) correctly uses `range(9)` since `compute centroid/stress/atom` outputs 9 components. The pressure loop (lines 283–285) must be changed to `range(6)`.

Fix for pressure loop:

```diff
-    for ii in range(9):
-        jj = [0, 4, 8, 3, 6, 7, 1, 2, 5][ii]
-        lammps.variable(f"pressure{jj} equal c_pressure[{ii + 1}]")
+    for ii in range(6):
+        jj = [0, 4, 8, 3, 6, 7][ii]
+        lammps.variable(f"pressure{jj} equal c_pressure[{ii + 1}]")
```
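For reference, a small sketch of how the 6 global pressure components listed above map onto a symmetric 3×3 tensor (plain NumPy, independent of the test code):

```python
import numpy as np


def voigt6_to_tensor(p):
    """p = [Pxx, Pyy, Pzz, Pxy, Pxz, Pyz] -> symmetric 3x3 pressure tensor."""
    pxx, pyy, pzz, pxy, pxz, pyz = p
    return np.array(
        [
            [pxx, pxy, pxz],
            [pxy, pyy, pyz],
            [pxz, pyz, pzz],
        ]
    )


print(voigt6_to_tensor([1.0, 2.0, 3.0, 0.1, 0.2, 0.3]))
```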
🤖 Fix all issues with AI agents
In `@deepmd/tf/descriptor/se_a.py`:
- Around line 1399-1429: The current natoms_not_match implementation misuses
tf.unique_with_counts on ghost_atype which returns counts only for present types
and leads to misaligned or out-of-range indexing when some types have zero
ghosts; replace the unique_with_counts approach with a fixed-length per-type
count (length self.ntypes) — e.g. compute ghost_natoms as
tf.math.bincount(ghost_atype, minlength=self.ntypes) or by summing boolean
equality masks per type — then build ghost_natoms_index = tf.concat([[0],
tf.cumsum(ghost_natoms)], axis=0) and proceed to slice coord using ghost_natoms
and ghost_natoms_index (keeping references to natoms_not_match, natoms_match,
self.ntypes, self.spin.use_spin, ghost_atype, ghost_natoms, ghost_natoms_index
to locate changes).
🧹 Nitpick comments (3)
source/lmp/tests/test_lammps_spin_nopbc.py (2)
74-191: Consider centralizing large `expected_v` fixtures. These large inline arrays are duplicated across spin test files; extracting them to a shared fixture or data file would reduce maintenance drift.
295-301: Avoid hard-coding the atom count in the virial model deviation. Using the actual atom count makes this test resilient to future fixture changes.

♻️ Proposed fix:

```diff
-    expected_md_v = (
-        np.std([np.sum(expected_v[:], axis=0), np.sum(expected_v2[:], axis=0)], axis=0)
-        / 4
-    )
+    atom_count = coord.shape[0]
+    expected_md_v = (
+        np.std([np.sum(expected_v, axis=0), np.sum(expected_v2, axis=0)], axis=0)
+        / atom_count
+    )
```

source/lmp/tests/test_lammps_spin.py (1)
335-341: Avoid hard-coding the atom count in the virial model deviation. Use the actual atom count so the test stays correct if fixtures change.

♻️ Proposed fix:

```diff
-    expected_md_v = (
-        np.std([np.sum(expected_v[:], axis=0), np.sum(expected_v2[:], axis=0)], axis=0)
-        / 4
-    )
+    atom_count = coord.shape[0]
+    expected_md_v = (
+        np.std([np.sum(expected_v, axis=0), np.sum(expected_v2, axis=0)], axis=0)
+        / atom_count
+    )
```
iProzd left a comment
doc/model/train-energy-spin.md should be updated
The file on which I commented hasn't been updated.
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
source/api_cc/src/DeepSpinTF.cc (1)
811-828: Critical bug: multi-frame data loss in the virial copy loop. The copy loop (lines 812–828) only handles frame 0. Although `datom_virial_` is allocated for `nframes * nall * 9` elements and `select_map()` populates `datom_virial_tmp` for all frames (line 804), the loop accesses only indices for frame 0:

- Reads: `datom_virial_tmp[new_idx * 9 + dd]`
- Writes: `datom_virial_[ii * 9 + dd]`

Frames 1 through nframes–1 remain zero-initialized, causing data loss for multi-frame backward passes. The loop must iterate over frames with proper index offsets, following the pattern used elsewhere in the file (lines 179–189):

```cpp
for (int ff = 0; ff < nframes; ++ff) {
  for (int ii = 0; ii < nall; ++ii) {
    int new_idx = new_idx_map[ii];
    // ... existing logic with frame offset:
    // datom_virial_[ff * nall * 9 + ii * 9 + dd] =
    //     datom_virial_tmp[ff * ... * 9 + new_idx * 9 + dd];
  }
}
```

The same issue affects the `dforce_` and `dforce_mag_` assignments (lines 813–820).
🤖 Fix all issues with AI agents
In `@log.lammps`:
- Around line 1-23: Remove the accidental LAMMPS log file (log.lammps) from the
commit, stop tracking it, and add a rule to .gitignore to prevent future commits
of similar log files; specifically, delete log.lammps from the PR/commit, run
git rm --cached if it's already tracked so it is no longer versioned, and add an
entry (e.g., patterns matching *.lammps, *.log, or the specific filename) to
.gitignore to exclude subsequent local logs like the one showing "ERROR:
Unrecognized pair style 'deepmd'" and local paths such as
"/home/outisli/Research/dpmd/...".
In `@source/api_cc/tests/test_deeppot_dpa_pt_spin.cc`:
- Around line 152-158: The test currently silently returns when virial (or
atom_vir) is empty which allows missing outputs to pass despite
expected_tot_v/expected_atom_v being populated; update each block (the one using
virial, atom_vir, expected_tot_v, expected_atom_v, EPSILON) to assert
non‑emptiness before comparing (e.g., ASSERT_FALSE(virial.empty()) or
ASSERT_FALSE(atom_vir.empty()) with a clear message) or explicitly call
GTEST_SKIP() when virial support is unavailable, then proceed with the
EXPECT_EQ/EXPECT_LT comparisons; apply this pattern to all similar occurrences
listed in the comment.
In `@source/api_cc/tests/test_deeppot_tf_spin.cc`:
- Around line 382-399: The test still assumes atom_vir length is natoms * 9 but
expected_v now contains (natoms + 2) * 9 entries; update the assertions/loops to
validate the extended layout. Specifically, change checks referencing atom_vir
and the loop bounds (the EXPECT_EQ/for loops that use natoms * 9 and the final
loop validating atom_vir) to use (natoms + 2) * 9 (or split expected_v into a
real-atom slice and a virtual-atom slice and assert each separately) so that
atom_vir, expected_v and their comparisons align; adjust any related EXPECT_EQ
for virial sizing if needed to reflect the new layout in the test functions
around atom_vir, expected_v, natoms, and virial.
In `@source/lmp/tests/test_lammps_spin.py`:
- Around line 308-311: The test is using the loop index ii to access
lammps.variables[f"virial{ii}"] but the variables were defined with the mapped
index (virial{jj}), causing a mismatch against expected_v[idx_map, ii]; change
the access so the virial name uses the mapped index (e.g.,
lammps.variables[f"virial{idx_map[ii]}"] or iterate using jj from idx_map) and
keep the comparison against the corresponding expected_v entry to ensure names
and indices align (references: lammps.variables, virial{jj}, ii, idx_map,
expected_v).
♻️ Duplicate comments (2)
source/tests/pt/model/test_nosel.py (1)
1-205: Reviewer requested this file be removed. A previous reviewer (iProzd) indicated that this file should be removed. Please confirm whether this test file should be included in this PR or removed as requested.
deepmd/tf/descriptor/se_a.py (1)
1399-1429: Fix ghost type counting to avoid misaligned indices. `tf.unique_with_counts` returns counts only for types that actually appear in `ghost_atype`, and in appearance order rather than type order. When some types have zero ghost atoms or ghosts are not sorted by type, indexing `ghost_natoms[i]` by type id will produce incorrect results or cause out-of-range errors.

Use `tf.math.bincount` with a fixed length to ensure proper per-type counts.

🔧 Suggested fix:

```diff
     def natoms_not_match(self, coord, natoms, atype):
         diff_coord_loc = self.natoms_match(coord, natoms)
         diff_coord_ghost = []
         aatype = atype[0, :]
         ghost_atype = aatype[natoms[0] :]
-        _, _, ghost_natoms = tf.unique_with_counts(ghost_atype)
+        ghost_natoms = tf.math.bincount(
+            ghost_atype,
+            minlength=self.ntypes,
+            maxlength=self.ntypes,
+            dtype=natoms.dtype,
+        )
         ghost_natoms_index = tf.concat([[0], tf.cumsum(ghost_natoms)], axis=0)
         ghost_natoms_index += natoms[0]
```
deepmd/tf/model/ener.py:natoms_not_match(around line 453-454) and should be fixed there as well.
🧹 Nitpick comments (3)
source/tests/pt/model/test_nosel.py (3)
42-42: Prefix unused variables with an underscore. `nf` and `nloc` are unpacked but never used. Use an underscore prefix to indicate they're intentionally unused.

Suggested fix:

```diff
-        nf, nloc, nnei = self.nlist.shape
+        _nf, _nloc, nnei = self.nlist.shape
```
31-31: Unused module-level `dtype` variable. This module-level `dtype` is shadowed by the loop variable on line 66 and is never actually used. Consider removing it.

Suggested fix:

```diff
-dtype = env.GLOBAL_PT_FLOAT_PRECISION
-
-
 class TestDescrptDPA3Nosel(unittest.TestCase, TestCaseSingleFrameWithNlist):
```
143-205: Remove or enable commented-out test code. Large blocks of commented-out code should either be removed or enabled. If this `test_jit` method is intended for future implementation, consider tracking it via an issue or a TODO comment rather than leaving the full implementation commented out.
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@deepmd/pt/model/model/transform_output.py`:
- Around line 196-201: The current dr.squeeze(-2) in the transform_output logic
can remove the vdef dimension only when it equals 1 and will break for
vector/tensor outputs (e.g., dipole with vdef.shape=[3]); update the code around
dr, extended_coord_corr, dc_corr and dc so you either (a) assert the feature
dimension is 1 before squeezing (e.g., check dr.shape[-2]==1) or (b)
reshape/expand dr explicitly to align with extended_coord_corr (use
view/unsqueeze/expand to produce the correct [..., 1, v]×[..., v, 9] matmul
shapes) and add a clear assertion/error message if shapes are incompatible;
ensure dc_corr is computed with the correct broadcasting and that dc = dc +
dc_corr works for both scalar and non-scalar vdef outputs.
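A toy illustration of option (a) above, with made-up shapes standing in for `dr` and `extended_coord_corr`; the einsum is only meant to show the shape bookkeeping, not the file's actual correction formula:

```python
import torch

nf, nall, v = 1, 5, 1                    # v is the per-output feature dimension
dr = torch.rand(nf, nall, v, 3)          # derivative w.r.t. extended coordinates
extended_coord_corr = torch.rand(nf, nall, 3)

# Only squeeze when the feature dimension really is 1; otherwise the tensors
# need an explicit reshape/expand before they can be combined.
assert dr.shape[-2] == 1, f"expected scalar output, got feature dim {dr.shape[-2]}"
dc_corr = torch.einsum("fia,fib->fiab", dr.squeeze(-2), extended_coord_corr)
print(dc_corr.shape)  # torch.Size([1, 5, 3, 3])
```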
In `@deepmd/tf/descriptor/se_a.py`:
- Line 1421: The line aatype = atype[0, :] discards all but the first frame's
ghost-type info—either iterate per-frame or enforce homogeneity; change the code
to operate on the full atype array (not just atype[0]) or add a runtime
assertion that all rows of atype are identical before using aatype, and update
any downstream uses (e.g., diff_coord_ghost slicing) to accept per-frame atype
or the validated single-row aatype; locate the usage by searching for the
variables atype, aatype and diff_coord_ghost and implement the per-frame loop or
the equality assertion there.
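A minimal TensorFlow sketch of the homogeneity check described above (eager-style for brevity; the shape of `atype` is illustrative and the graph/session wiring of the TF backend is omitted):

```python
import tensorflow as tf

atype = tf.constant([[0, 1, 1, 0], [0, 1, 1, 0]])  # (nframes, nall)

# Assert every frame carries the same type vector before reusing atype[0].
check = tf.debugging.Assert(
    tf.reduce_all(tf.equal(atype, atype[0:1, :])),
    ["atype differs across frames; ghost-atom layout must be identical"],
)
with tf.control_dependencies([check]):
    aatype = tf.identity(atype[0, :])
print(aatype)
```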
🧹 Nitpick comments (8)
source/tests/pt/model/test_nosel.py (3)
42-42: Unused unpacked variables `nf` and `nloc`. Only `nnei` is used from the unpacked shape. Prefix the unused variables with `_` to satisfy linters.

Suggested fix:

```diff
-        nf, nloc, nnei = self.nlist.shape
+        _nf, _nloc, nnei = self.nlist.shape
```
105-117: In-place mutation of `repflow` shared between `dd0` and `dd1` is fragile. `repflow` is the same object passed to `dd0` on line 94. Mutating `use_dynamic_sel` in-place at line 105 before constructing `dd1` is safe only if `DescrptDPA3` copies the args eagerly at construction time. If it holds a reference (or if future refactors introduce lazy reads), `dd0`'s behavior could silently change, invalidating the test. Consider creating a separate `RepFlowArgs` for `dd1` or using `copy.deepcopy`.

Suggested fix:

```diff
+        import copy
+
+        repflow_dyn = copy.deepcopy(repflow)
+        repflow_dyn.use_dynamic_sel = True
-        repflow.use_dynamic_sel = True
         # dpa3 new impl
         dd1 = DescrptDPA3(
             self.nt,
-            repflow=repflow,
+            repflow=repflow_dyn,
```
+ import copy + repflow_dyn = copy.deepcopy(repflow) + repflow_dyn.use_dynamic_sel = True - repflow.use_dynamic_sel = True # dpa3 new impl dd1 = DescrptDPA3( self.nt, - repflow=repflow, + repflow=repflow_dyn,
143-205: Large commented-outtest_jitblock.Consider removing this or converting it to a skipped test (
@unittest.skip("TODO: enable JIT test")) so it's tracked and discoverable rather than being silent dead code.source/lmp/tests/test_lammps_spin_nopbc.py (1)
295-301: Unnecessary[:]slice — inconsistent with the rest of the file.Line 296 uses
expected_v[:]andexpected_v2[:], while line 354 in the newtest_pair_deepmd_model_devi_atomic_relative_vtest (and the reference tests intest_lammps_spin.py) usesexpected_v/expected_v2without the slice. The[:]is a no-op here and can be removed for consistency.Suggested fix
expected_md_v = ( - np.std([np.sum(expected_v[:], axis=0), np.sum(expected_v2[:], axis=0)], axis=0) + np.std([np.sum(expected_v, axis=0), np.sum(expected_v2, axis=0)], axis=0) / 4 )source/api_cc/tests/test_deeppot_dpa_pt_spin.cc (1)
261-276: Nopbcexpected_atom_vuses fewer significant digits than the PBC counterpart.PBC fixture (line 86) uses scientific notation with ~8 significant figures (e.g.,
1.56445561e-03), while this nopbc fixture uses fixed notation with ~6 significant figures (e.g.,0.00223849). Both are adequate for theEPSILONof 1e-7, but aligning precision style would improve consistency and reduce risk of borderline failures.source/lmp/tests/test_lammps_spin.py (2)
302-307: Volume calculation relies on box origin being zero.
vol = box[1] * box[3] * box[5]computesxhi * yhi * zhi, which equals the box volume only becausexlo = ylo = zlo = 0. This is fine for this fixed test data, just noting the implicit assumption.
336-342: Hardcoded atom count4in virial normalization.The divisor
4on lines 338 and 395 is the number of real atoms. Consider usingcoord.shape[0]or a named constant (e.g.,natoms = coord.shape[0]) for clarity and to avoid silent breakage if the test data changes.source/tests/pt/model/test_ener_spin_model.py (1)
114-180: Consider adding assertions for the newcoord_corrandextended_coord_corrreturn values.
test_input_output_processverifies every other output ofprocess_spin_inputandprocess_spin_input_lower, but the new virial correction tensors are discarded with_. Adding a basic shape/value check (e.g., verifying thatcoord_corr[:, :nloc]is zeros andcoord_corr[:, nloc:]equals-spin_dist) would strengthen coverage of the new code path.
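A sketch of the kind of check meant here, using illustrative local names (`coord`, `spin`, `virtual_scale`) rather than the test's actual fixtures, and assuming the hypothetical layout of real atoms first, virtual atoms second:

```python
import torch

nf, nloc = 1, 4
coord = torch.rand(nf, nloc, 3)
spin = torch.rand(nf, nloc, 3)
virtual_scale = 0.3                  # placeholder for the per-type spin scale
spin_dist = virtual_scale * spin

# Hypothetical layout: real atoms first, then virtual atoms.
coord_corr = torch.cat([torch.zeros_like(coord), -spin_dist], dim=1)

torch.testing.assert_close(coord_corr[:, :nloc], torch.zeros_like(coord))
torch.testing.assert_close(coord_corr[:, nloc:], -spin_dist)
```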
- Remove accidentally committed log.lammps file
- Fix virial validation to assert non-empty instead of silently skipping
- Fix atom_vir sizing to use (natoms + 2) * 9 layout consistently
- Fix virial variable name/index mismatch in LAMMPS spin test

Co-authored-by: factory-droid[bot] <138933559+factory-droid[bot]@users.noreply.github.com>
Add runtime check to ensure all frames have the same atype vector, preventing silent errors when ghost-atom layouts differ across frames.
Summary by CodeRabbit
Release Notes
New Features
Bug Fixes
Documentation