Hello, I have a 2 GB GPU and it's not enough for training the model; I get a CUDA out of memory error every time (when running model.lr_find()). Is there any way to force PyTorch to use only the CPU? For some reason I can't clone the default Python environment either to update the ArcGIS API and see whether I'd get the error in other versions. I'm using ArcGIS API for Python 1.8.3.
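For context, the failure happens in a workflow roughly like this (the data path, batch size, and model type are placeholders, not the exact ones I use):

from arcgis.learn import prepare_data, UnetClassifier

# Prepare training data exported from ArcGIS (placeholder path and batch size)
data = prepare_data(r'C:\data\training_samples', batch_size=8)
model = UnetClassifier(data)
model.lr_find()  # raises CUDA out of memory on the 2 GB GPU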
Try this:
import torch
# Monkey-patch the availability check so every caller sees "no GPU"
torch.cuda.is_available = lambda: False
# The usual device selection now falls back to CPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
It's definitely using the CPU on my system, as shown in the screenshot.
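If the patch doesn't cover every code path (anything that calls torch.cuda directly can still try the GPU), another common trick is to hide the GPU from the process before torch is imported. A minimal sketch, assuming nothing has imported torch yet; the CUDA_VISIBLE_DEVICES variable is standard CUDA behavior, not specific to the ArcGIS API:

import os
# Hide all CUDA devices from this process; must be set before torch is imported
os.environ['CUDA_VISIBLE_DEVICES'] = ''

import torch
print(torch.cuda.is_available())  # False: no CUDA device is visible
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(device)  # cpu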
BTW, I am also getting an error trying to update the Python API, details here - is it the same for you?
Unable to install/upgrade via conda: InvalidSpecError: Invalid spec: >=
Obviously I had searched and tried the usual solutions before, and none of them worked; that's why I posted my question here. For instance, I tried

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

on its own, without overriding torch.cuda.is_available.
Thank you so much! It worked.
I tried upgrading packages on another laptop using a cloned environment and it worked, but on this laptop I couldn't even clone the default environment.
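For reference, the usual clone-and-upgrade sequence from the Python Command Prompt looks like this (a sketch, assuming the default arcgispro-py3 environment; arcgis-clone is a placeholder name):

# Clone the default ArcGIS Pro environment
conda create --name arcgis-clone --clone arcgispro-py3
# Switch to the clone, then upgrade the ArcGIS API for Python from Esri's channel
conda activate arcgis-clone
conda upgrade -c esri arcgis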