<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Train deep learning model in multiple GPUs for Python API in ArcGIS Image Analyst Questions</title>
    <link>https://community.esri.com/t5/arcgis-image-analyst-questions/train-deep-learning-model-in-multiple-gpus-for/m-p/1281665#M451</link>
    <description>&lt;P&gt;Contact Technical Support.&amp;nbsp; Esri staff doesn't normally follow Community to answer technical support issues.&lt;/P&gt;</description>
    <pubDate>Sun, 23 Apr 2023 01:53:30 GMT</pubDate>
    <dc:creator>DanPatterson</dc:creator>
    <dc:date>2023-04-23T01:53:30Z</dc:date>
    <item>
      <title>Train deep learning model in multiple GPUs for Python API</title>
      <link>https://community.esri.com/t5/arcgis-image-analyst-questions/train-deep-learning-model-in-multiple-gpus-for/m-p/1233511#M389</link>
      <description>&lt;P&gt;I'm using ArcGIS Pro v2.9.2 with an Image Analyst license.&amp;nbsp; I've been trying to train a MaskRCNN model using multiple GPUs on a single machine, but I can't seem to find any sample code.&amp;nbsp; Most examples cover distributed training (multiple machines, each with multiple GPUs).&lt;/P&gt;&lt;P&gt;Would someone show me code for training a model on a single machine with multiple GPUs in the Python window of ArcGIS Pro?&amp;nbsp; I have my own training data generated using Image Analyst's Export Training Data tool.&lt;/P&gt;&lt;P&gt;Appreciate any help.&lt;/P&gt;</description>
      <pubDate>Sat, 19 Nov 2022 11:03:43 GMT</pubDate>
      <guid>https://community.esri.com/t5/arcgis-image-analyst-questions/train-deep-learning-model-in-multiple-gpus-for/m-p/1233511#M389</guid>
      <dc:creator>JadedEarth</dc:creator>
      <dc:date>2022-11-19T11:03:43Z</dc:date>
    </item>
    <item>
      <title>Re: Train deep learning model in multiple GPUs for Python API</title>
      <link>https://community.esri.com/t5/arcgis-image-analyst-questions/train-deep-learning-model-in-multiple-gpus-for/m-p/1233567#M390</link>
      <description>&lt;P&gt;I will move this thread to&amp;nbsp;&lt;STRONG&gt;Imagery and Remote Sensing Questions&lt;/STRONG&gt;,&lt;/P&gt;&lt;P&gt;since your previous, closed thread went unresolved.&amp;nbsp; Perhaps someone here from the Imagery team will have a solution.&lt;/P&gt;</description>
      <pubDate>Sun, 20 Nov 2022 02:41:58 GMT</pubDate>
      <guid>https://community.esri.com/t5/arcgis-image-analyst-questions/train-deep-learning-model-in-multiple-gpus-for/m-p/1233567#M390</guid>
      <dc:creator>DanPatterson</dc:creator>
      <dc:date>2022-11-20T02:41:58Z</dc:date>
    </item>
    <item>
      <title>Re: Train deep learning model in multiple GPUs for Python API</title>
      <link>https://community.esri.com/t5/arcgis-image-analyst-questions/train-deep-learning-model-in-multiple-gpus-for/m-p/1280392#M446</link>
      <description>&lt;P&gt;I can't find where this thread went in Imagery and Remote Sensing Questions.&amp;nbsp; However, I have a follow-up regarding using multiple GPUs in a single machine.&lt;/P&gt;&lt;P&gt;I'm using ArcGIS Pro version 3.1.4 with 7 GPUs.&amp;nbsp; I have an ArcGIS Pro Online, named-instance license and an Image Analyst license.&lt;/P&gt;&lt;P&gt;The "Train Deep Learning Model" Image Analyst tool (aka the tool.script.execute.py script) starts with this snippet:&lt;/P&gt;&lt;P&gt;if arcpy.env.processorType == "GPU" and torch.cuda.is_available() and arcpy.env.gpuId:&lt;BR /&gt;&amp;nbsp; &amp;nbsp; # use specific gpu if gpuId is specified, use all available gpus if no gpuID is specified&lt;BR /&gt;elif not arcpy.env.processorType:&lt;BR /&gt;&amp;nbsp; &amp;nbsp; # use all available gpus if processor type is not specified (default), gpuID is ignored in this case&lt;BR /&gt;&amp;nbsp; &amp;nbsp; arcgis.env._processorType = "GPU"&lt;BR /&gt;else:&lt;BR /&gt;&amp;nbsp; &amp;nbsp; arcgis.env._processorType = arcpy.env.processorType&lt;/P&gt;&lt;P&gt;It seems the intent was that, if I set Processor = "GPU" and leave GPU ID blank, all available GPUs on my machine will be used.&amp;nbsp; Here are the results of my trial CPU/GPU settings:&lt;/P&gt;&lt;P&gt;Machine 1: 2 CPUs @ 28 cores each (56 logical processors); 7 GPUs (1 RTX A6000, ID=4; 6 RTX A4000, ID=0,1,2,3,5,6)&lt;/P&gt;&lt;P&gt;Settings 1:&lt;BR /&gt;&amp;nbsp; &amp;nbsp; Processor = Blank&lt;BR /&gt;&amp;nbsp; &amp;nbsp; GPU ID = Blank&lt;BR /&gt;&amp;nbsp; &amp;nbsp; Actual CPUs used = 100% of logical processors (56 cores)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; Actual GPUs used = 1 (RTX A6000)&lt;/P&gt;&lt;P&gt;Settings 2:&lt;BR /&gt;&amp;nbsp; &amp;nbsp; Processor = GPU&lt;BR /&gt;&amp;nbsp; &amp;nbsp; GPU ID = Blank&lt;BR /&gt;&amp;nbsp; &amp;nbsp; Actual CPUs used = 100% of logical processors (56 cores)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; Actual GPUs used = 1 (RTX A6000)&lt;/P&gt;&lt;P&gt;Settings 3:&lt;BR /&gt;&amp;nbsp; &amp;nbsp; Processor = GPU&lt;BR /&gt;&amp;nbsp; &amp;nbsp; GPU ID = 4&lt;BR /&gt;&amp;nbsp; &amp;nbsp; Actual CPUs used = 100% of logical processors (56 cores)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; Actual GPUs used = 1, ID=1 (RTX A4000; not the specified ID)&lt;/P&gt;&lt;P&gt;Settings 4:&lt;BR /&gt;&amp;nbsp; &amp;nbsp; Processor = CPU&lt;BR /&gt;&amp;nbsp; &amp;nbsp; GPU ID = Blank&lt;BR /&gt;&amp;nbsp; &amp;nbsp; Actual CPUs used = 100% of logical processors (56 cores)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; Actual GPUs used = None&lt;/P&gt;&lt;P&gt;Settings 5:&lt;BR /&gt;&amp;nbsp; &amp;nbsp; Processor = CPU&lt;BR /&gt;&amp;nbsp; &amp;nbsp; GPU ID = 4&lt;BR /&gt;&amp;nbsp; &amp;nbsp; Actual CPUs used = 100% of logical processors (56 cores)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; Actual GPUs used = None&lt;/P&gt;&lt;P&gt;Can someone at Esri check why this is?&amp;nbsp; Is this a bug, or is this the intended behavior for ArcGIS Pro Online licenses?&lt;/P&gt;&lt;P&gt;Appreciate any help.&lt;/P&gt;</description>
      <pubDate>Wed, 19 Apr 2023 19:29:18 GMT</pubDate>
      <guid>https://community.esri.com/t5/arcgis-image-analyst-questions/train-deep-learning-model-in-multiple-gpus-for/m-p/1280392#M446</guid>
      <dc:creator>JadedEarth</dc:creator>
      <dc:date>2023-04-19T19:29:18Z</dc:date>
    </item>
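    <!-- The branching quoted in the post above can be sketched as standalone Python with stubbed environment objects, so the intended selection behavior can be exercised outside ArcGIS. The `Env` class and `select_processor` function below are illustrative stand-ins, not the actual arcpy/arcgis API, and the first branch's body (elided in the forum post) is inferred from its comment. -->

```python
class Env:
    """Illustrative stand-in for arcpy.env (not the real API)."""

    def __init__(self, processorType=None, gpuId=None):
        self.processorType = processorType  # "GPU", "CPU", or None (tool default)
        self.gpuId = gpuId                  # GPU index as a string, or None


def select_processor(env, cuda_available=True):
    """Return (processor_type, gpu_id) following the branching quoted above.

    The first branch's body was elided in the forum post; its comment says it
    uses the specific GPU the user asked for, so that is what this sketch assumes.
    """
    if env.processorType == "GPU" and cuda_available and env.gpuId:
        # use the specific gpu, since gpuId is specified
        return "GPU", int(env.gpuId)
    elif not env.processorType:
        # processor type not specified (default): use GPU, ignore gpuId
        return "GPU", None
    else:
        # pass the user's explicit choice (e.g. "CPU") through unchanged
        return env.processorType, None
```

    <!-- Under these stubs, Settings 3 in the post (Processor = GPU, GPU ID = 4) maps to select_processor(Env("GPU", "4")), which lands in the first branch; Settings 1 (both blank) falls into the default branch that ignores gpuId entirely. -->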
    <item>
      <title>Re: Train deep learning model in multiple GPUs for Python API</title>
      <link>https://community.esri.com/t5/arcgis-image-analyst-questions/train-deep-learning-model-in-multiple-gpus-for/m-p/1281615#M450</link>
      <description>&lt;P&gt;Is anybody there?&amp;nbsp; How can I get any technical help?&amp;nbsp; I feel like I'm talking to myself here.&amp;nbsp; Do I have to wait a year before someone from Esri replies?&amp;nbsp; And why is there only a "Reply" button instead of a "Post" button?&lt;/P&gt;</description>
      <pubDate>Sat, 22 Apr 2023 10:29:13 GMT</pubDate>
      <guid>https://community.esri.com/t5/arcgis-image-analyst-questions/train-deep-learning-model-in-multiple-gpus-for/m-p/1281615#M450</guid>
      <dc:creator>JadedEarth</dc:creator>
      <dc:date>2023-04-22T10:29:13Z</dc:date>
    </item>
    <item>
      <title>Re: Train deep learning model in multiple GPUs for Python API</title>
      <link>https://community.esri.com/t5/arcgis-image-analyst-questions/train-deep-learning-model-in-multiple-gpus-for/m-p/1281665#M451</link>
      <description>&lt;P&gt;Contact Technical Support.&amp;nbsp; Esri staff doesn't normally follow Community to answer technical support issues.&lt;/P&gt;</description>
      <pubDate>Sun, 23 Apr 2023 01:53:30 GMT</pubDate>
      <guid>https://community.esri.com/t5/arcgis-image-analyst-questions/train-deep-learning-model-in-multiple-gpus-for/m-p/1281665#M451</guid>
      <dc:creator>DanPatterson</dc:creator>
      <dc:date>2023-04-23T01:53:30Z</dc:date>
    </item>
    <item>
      <title>Re: Train deep learning model in multiple GPUs for Python API</title>
      <link>https://community.esri.com/t5/arcgis-image-analyst-questions/train-deep-learning-model-in-multiple-gpus-for/m-p/1281809#M452</link>
      <description>&lt;P&gt;They just don't make it easy.&amp;nbsp; I submitted a request, which means it goes to HQ in DC and hopefully I get a response after a week or two.&amp;nbsp; Sigh...&lt;/P&gt;</description>
      <pubDate>Mon, 24 Apr 2023 13:20:47 GMT</pubDate>
      <guid>https://community.esri.com/t5/arcgis-image-analyst-questions/train-deep-learning-model-in-multiple-gpus-for/m-p/1281809#M452</guid>
      <dc:creator>JadedEarth</dc:creator>
      <dc:date>2023-04-24T13:20:47Z</dc:date>
    </item>
    <item>
      <title>Re: Train deep learning model in multiple GPUs for Python API</title>
      <link>https://community.esri.com/t5/arcgis-image-analyst-questions/train-deep-learning-model-in-multiple-gpus-for/m-p/1292534#M467</link>
      <description>&lt;P&gt;Finally got through to Esri Tech Support, after some twists and turns.&amp;nbsp; It turns out this might be a bug in the system.&amp;nbsp; They've placed it on their to-do list but couldn't tell me whether the fix will be included in the next patch.&amp;nbsp; The issue exists in versions 2.9, 2.9.5, and 3.1.&lt;/P&gt;&lt;P&gt;Just so you know.&lt;/P&gt;</description>
      <pubDate>Wed, 24 May 2023 13:42:11 GMT</pubDate>
      <guid>https://community.esri.com/t5/arcgis-image-analyst-questions/train-deep-learning-model-in-multiple-gpus-for/m-p/1292534#M467</guid>
      <dc:creator>JadedEarth</dc:creator>
      <dc:date>2023-05-24T13:42:11Z</dc:date>
    </item>
    <item>
      <title>Re: Train deep learning model in multiple GPUs for Python API</title>
      <link>https://community.esri.com/t5/arcgis-image-analyst-questions/train-deep-learning-model-in-multiple-gpus-for/m-p/1331714#M489</link>
      <description>&lt;P&gt;Please see my response on&amp;nbsp;&lt;A href="https://community.esri.com/t5/arcgis-pro-questions/train-deep-learning-model-using-multiple-gpus-on/m-p/1221102" target="_blank" rel="noopener"&gt;https://community.esri.com/t5/arcgis-pro-questions/train-deep-learning-model-using-multiple-gpus-on/m-p/1221102&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Cheers!&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Pavan Yadav | Product Engineer - Imagery and AI&lt;BR /&gt;Esri | 380 New York Street | Redlands, CA 92373 | USA&lt;BR /&gt;&lt;BR /&gt;&lt;A href="https://www.linkedin.com/in/pavan-yadav-1846606/" target="_blank" rel="noopener"&gt;https://www.linkedin.com/in/pavan-yadav-1846606/&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 22 Sep 2023 22:15:18 GMT</pubDate>
      <guid>https://community.esri.com/t5/arcgis-image-analyst-questions/train-deep-learning-model-in-multiple-gpus-for/m-p/1331714#M489</guid>
      <dc:creator>PavanYadav</dc:creator>
      <dc:date>2023-09-22T22:15:18Z</dc:date>
    </item>
  </channel>
</rss>