How to force PyTorch to use the CPU instead of the GPU?

04-13-2021 10:40 PM
MaryamBarzegar
New Contributor III

Hello, I have a 2GB GPU and it's not enough for training the model; I get a CUDA out of memory error every time (when running model.lr_find()). Is there any way to force PyTorch to use only the CPU? For some reason I also can't clone the default Python environment, so I can't update the ArcGIS API to see whether I'd get the error in other versions or not. I'm using ArcGIS API 1.8.3.

3 Replies
MaryamBarzegar
New Contributor III

Obviously I've done that before; none of those solutions worked, which is why I posted my question here. For instance, I tried

device = torch.device("cpu")

and

os.environ["CUDA_VISIBLE_DEVICES"] = ""

but neither worked.
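
Spelled out, those two attempts usually look something like the sketch below (plain PyTorch, nothing ArcGIS-specific; the tensor is just for illustration). One caveat: CUDA_VISIBLE_DEVICES generally only hides the GPU if it is set before torch initializes CUDA, so setting it mid-session may have no effect.

import os

# Hide all GPUs from CUDA; this must happen before torch creates its CUDA context.
os.environ["CUDA_VISIBLE_DEVICES"] = ""

import torch

# Target the CPU explicitly and move data/models there.
device = torch.device("cpu")
x = torch.randn(4, 3).to(device)
print(x.device)  # cpu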
Tim_McGinnes
Occasional Contributor III
Accepted Solution

Try this:

import torch

# Override torch's availability check so anything built on top of it falls back to the CPU.
torch.cuda.is_available = lambda: False
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')  # now resolves to 'cpu'

It's definitely using the CPU on my system, as shown in the screenshot.
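
As a quick sanity check you can run something like this (a minimal sketch, assuming nothing beyond torch itself):

import torch

# Patch the availability check, then confirm device selection falls back to the CPU.
torch.cuda.is_available = lambda: False
print(torch.cuda.is_available())  # False
print(torch.device('cuda' if torch.cuda.is_available() else 'cpu'))  # cpu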

 

BTW, I am also getting an error trying to update the Python API, details here - is it the same for you?

Unable to install\upgrade via conda: InvalidSpecError: Invalid spec: >= 

MaryamBarzegar
New Contributor III

Thank you so much! It worked.

I tried upgrading packages on another laptop using a cloned environment and it worked, but on this laptop I couldn't even clone the default environment.
