Summary
CEBRA.load(...) fails with RuntimeError: No CUDA GPUs are available when loading a checkpoint that was saved on a CUDA device from an environment where no GPU is available (e.g. CPU-only machine or notebook, or PyTorch built without CUDA).
Environment
- OS: macOS 15.2 (Darwin 25.2.0, arm64)
- CEBRA version: 0.6.0
- PyTorch version: 2.10.0
- Device: CPU-only (CUDA not available)
Steps to Reproduce
- Train and save a CEBRA model on a machine with CUDA:

```python
import cebra
import numpy as np

X = np.random.uniform(0, 1, (100, 5))
model = cebra.CEBRA(max_iterations=10, device='cuda').fit(X)
model.save('checkpoint.pt')
```
- On a CPU-only machine (or with CUDA unavailable), load the model:

```python
from cebra import CEBRA

model = CEBRA.load('checkpoint.pt')
```
- Observe the error:

```
RuntimeError: No CUDA GPUs are available
```
Root Cause
- The checkpoint stores the training device in `state['device_']` (e.g., `'cuda'` or `'cuda:0'`)
- `_load_cebra_with_sklearn_backend()` calls `.to(state['device_'])` on the model, criterion, and solver
- This triggers PyTorch CUDA initialization; if no GPU is available, it raises the error
- `map_location=torch.device('cpu')` in `torch.load()` only affects tensor loading, not subsequent `.to()` calls
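The last point can be seen in isolation with plain `torch` (a minimal sketch; the dict below is a stand-in for a CEBRA checkpoint, not its actual format):

```python
import io
import torch

# Save a checkpoint-like dict that records the training device as a string.
buf = io.BytesIO()
torch.save({'weights': torch.ones(3), 'device_': 'cuda:0'}, buf)
buf.seek(0)

# map_location remaps the *stored tensors* to CPU at load time...
state = torch.load(buf, map_location=torch.device('cpu'))
assert state['weights'].device.type == 'cpu'

# ...but the saved device string is untouched, so a later
# model.to(state['device_']) still tries to initialize CUDA.
assert state['device_'] == 'cuda:0'
```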
Expected Behavior
Loading should succeed on CPU-only environments: the model should gracefully fall back to CPU when CUDA is not available, regardless of the device used when saving.
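One way to express this fallback, as a hypothetical helper (`resolve_device` is not part of the CEBRA API, just a sketch of the intended behavior):

```python
import torch

def resolve_device(saved_device: str) -> str:
    """Return the checkpoint's device if usable here, else fall back to CPU."""
    if saved_device.startswith('cuda') and not torch.cuda.is_available():
        # Graceful fallback instead of RuntimeError: No CUDA GPUs are available
        return 'cpu'
    return saved_device
```

Applying this to `state['device_']` before any `.to(...)` call would make loading device-agnostic while preserving CUDA placement when a GPU is present.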
Related
A fix for this issue has been prepared and will be submitted as a PR shortly.