I'm using ArcGIS Pro version 2.9. We have installed six GPUs in one machine, but when I train a deep learning model there is no field to specify multiple GPU IDs; only one ID can be entered. The tool also seems to select which GPU it uses at random. Is this normal?
I'm not using Python; I'm just using Image Analyst to run the deep learning tools.
Any help is appreciated.
GPU ID (Environment setting)—ArcGIS Pro | Documentation
Does the tool you are using support the GPU ID environment setting?
Yes. The deep learning tools have Environment parameters that let you specify the processor type, but there is only one slot for the GPU ID. I'm wondering whether you can enter comma-separated IDs, as you can in Python.
Since I can't get this to work in Image Analyst, I'm switching to arcgis.learn using the Jupyter Notebook interface in ArcGIS Pro.
Does anybody have sample Python code for training a model on multiple GPUs with data parallelism?
@JadedEarth did you see the sample code here? I'm not sure when it was posted relative to your question, but I believe it was available last year: Train arcgis.learn models on multiple GPUs | ArcGIS API for Python
Further research: I really need to use Python code for now; not only that, I need distributed data parallelism to be able to use multiple GPUs. This seems to be quite complicated. I'm still researching and will test whatever I find in the literature. This is not a solution, but I'll end this thread here.
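For anyone who lands here with the same question: a minimal, generic PyTorch DistributedDataParallel sketch (not Esri code; the model and data are stand-ins) looks like the following. Launch it with torchrun --nproc_per_node=6 train_ddp.py for six GPUs.

```python
# Minimal DistributedDataParallel sketch: one process per GPU,
# data sharded by DistributedSampler, gradients all-reduced by DDP.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset, DistributedSampler

def main():
    dist.init_process_group("nccl")            # one process per GPU
    local_rank = int(os.environ["LOCAL_RANK"]) # set by torchrun
    torch.cuda.set_device(local_rank)

    # Random data standing in for exported training chips.
    x = torch.randn(1024, 32)
    y = torch.randint(0, 2, (1024,))
    dataset = TensorDataset(x, y)
    sampler = DistributedSampler(dataset)      # shards data across ranks
    loader = DataLoader(dataset, batch_size=64, sampler=sampler)

    model = torch.nn.Linear(32, 2).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.CrossEntropyLoss()

    for epoch in range(5):
        sampler.set_epoch(epoch)               # reshuffle each epoch
        for xb, yb in loader:
            xb, yb = xb.cuda(local_rank), yb.cuda(local_rank)
            opt.zero_grad()
            loss = loss_fn(model(xb), yb)
            loss.backward()                    # DDP syncs gradients here
            opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```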
In the upcoming v3.2, the Train Deep Learning tool is planned to use all available GPUs on a single machine by default when the Model Type is set to one of the following:
To use a specific GPU, you can use the GPU ID environment setting.
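From Python, the same setting can be applied before running the tool; a minimal sketch using arcpy's processorType and gpuId environment settings (the values here are placeholders):

```python
# Minimal sketch: pin geoprocessing tool execution to one GPU
# via the GPU ID environment setting before calling the tool.
import arcpy

arcpy.env.processorType = "GPU"  # run supported tools on the GPU
arcpy.env.gpuId = 0              # use only the GPU with ID 0
```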
Between 2.8 and 3.1, the tool could use multiple GPUs by default, but the feature was not certified and thus not documented. You could use the API: https://developers.arcgis.com/python/guide/utilize-multiple-gpus-to-train-model/ if ArcGIS Pro's current support does not meet your needs.
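As a rough illustration of that route, a minimal arcgis.learn sketch (paths, model class, and hyperparameters are placeholders; per the linked guide, training uses the GPUs that are visible to the process, and CUDA_VISIBLE_DEVICES, set before any imports, controls which ones those are):

```python
# Minimal sketch of multi-GPU training through arcgis.learn.
# Choose the visible GPUs before torch or arcgis are imported.
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1,2,3,4,5"  # all six GPUs

from arcgis.learn import prepare_data, UnetClassifier

# Placeholder paths and hyperparameters.
data = prepare_data(r"C:\data\training_chips", batch_size=16)
model = UnetClassifier(data)
model.fit(epochs=10)
model.save(r"C:\models\multi_gpu_model")
```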
We hope to support multiple GPUs on a single machine for inferencing as well in upcoming releases.
A side note: we have a dedicated ArcGIS Image Analyst community, and you might get a response sooner there.
Cheers!
Pavan Yadav | Product Engineer - Imagery and AI
Esri | 380 New York St | Redlands, CA 92373 | USA
https://www.linkedin.com/in/pavan-yadav-1846606/
I need the HED edge detection model; it's the only one that works for field boundary extraction. Is it not going to be included in this update? If not, then I'm still out of luck. I've tried all the other models you mentioned for my work, but only the HED model comes up with reasonable (not perfect) results.
I do hope you include HED among the models with multiple-GPU support, since both USDA and EPA really need this.
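In the meantime, the Python API route mentioned above covers HED as well; a minimal sketch (HEDEdgeDetector is the arcgis.learn class; paths and hyperparameters are placeholders):

```python
# Minimal sketch: training the HED edge detection model through
# arcgis.learn, following the same pattern as the other model classes.
from arcgis.learn import prepare_data, HEDEdgeDetector

data = prepare_data(r"C:\data\field_boundary_chips", batch_size=8)
model = HEDEdgeDetector(data)
model.fit(epochs=20)
model.save(r"C:\models\hed_field_boundaries")
```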
I appreciate your assistance.