<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Deep Learning error using the 2nd GPU. Bug? in ArcGIS API for Python Questions</title>
    <link>https://community.esri.com/t5/arcgis-api-for-python-questions/deep-learning-error-using-the-2nd-gpu-bug/m-p/1005952#M5260</link>
    <description>&lt;P&gt;OK, replying to myself:&lt;/P&gt;&lt;P&gt;There seems to be a problem when you specify the GPU number in Pro while the environment variable &lt;SPAN&gt;CUDA_VISIBLE_DEVICES&lt;/SPAN&gt; is set.&lt;/P&gt;&lt;P&gt;Some time ago I used the Viewshed2 tool, which uses CUDA to speed up visibility calculations with the full geodetic solution. To harness the power of one of my GPUs, I set the environment variable &lt;SPAN&gt;CUDA_VISIBLE_DEVICES&lt;/SPAN&gt;=0.&lt;/P&gt;&lt;P&gt;This environment variable seems to override whatever GPU you indicate in the Pro GUI, so if you have several GPUs on your system, always leave the GPU number in Pro set to 0 and use the &lt;SPAN&gt;CUDA_VISIBLE_DEVICES&lt;/SPAN&gt; variable to point to the GPU you want to use. At least this has solved the issue for me.&lt;/P&gt;&lt;P&gt;Before this, I double-checked that my CUDA drivers were properly installed, that the bandwidth to my GPU was healthy, and so on. This is all quite complex, but it works.&lt;/P&gt;</description>
    <pubDate>Fri, 04 Dec 2020 16:37:25 GMT</pubDate>
    <dc:creator>MrG</dc:creator>
    <dc:date>2020-12-04T16:37:25Z</dc:date>
    <item>
      <title>Deep Learning error using the 2nd GPU. Bug?</title>
      <link>https://community.esri.com/t5/arcgis-api-for-python-questions/deep-learning-error-using-the-2nd-gpu-bug/m-p/1005594#M5255</link>
      <description>&lt;P&gt;Hi everyone.&lt;/P&gt;&lt;P&gt;I have 2 GPUs on my system: #0, configured for display, and #1, a high-end NVIDIA Titan RTX for AI processing.&lt;/P&gt;&lt;P&gt;I am running ArcGIS Pro 2.6 with the Deep Learning frameworks installed using the installer provided by Esri here:&amp;nbsp;&lt;A href="https://github.com/Esri/deep-learning-frameworks" target="_self"&gt;https://github.com/Esri/deep-learning-frameworks/&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&lt;U&gt;&lt;STRONG&gt;I can train a model and detect objects using GPU #0, so it works with the Windows default GPU&lt;/STRONG&gt;&lt;/U&gt;. That said, when I ask Pro to detect objects using GPU #1, it says:&lt;/P&gt;&lt;P&gt;&lt;FONT face="courier new,courier" color="#FF0000"&gt;&lt;STRONG&gt;&lt;U&gt;ERROR 999999&lt;/U&gt;:&lt;/STRONG&gt; Something unexpected caused the tool to fail. Contact Esri Technical Support (&lt;A href="http://esriurl.com/support" target="_blank" rel="noopener"&gt;http://esriurl.com/support&lt;/A&gt;) to Report a Bug, and refer to the error help for potential solutions or workarounds.&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier" color="#FF0000"&gt;Parallel processing job timed out [Failed to generate table]&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier" color="#FF0000"&gt;Failed to execute (DetectObjectsUsingDeepLearning).&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;I have my GPUs configured as follows:&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="MrG_1-1606758495325.png" style="width: 550px;"&gt;&lt;img src="https://community.esri.com/t5/image/serverpage/image-id/1190i39802ED8202277FE/image-dimensions/550x223?v=v2" width="550" height="223" role="button" title="MrG_1-1606758495325.png" alt="MrG_1-1606758495325.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;As you can see, both GPUs are in WDDM mode and both have ECC mode disabled. I have also tried setting the #1 Titan RTX to TCC mode instead of WDDM:&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="MrG_0-1606758406283.png" style="width: 553px;"&gt;&lt;img src="https://community.esri.com/t5/image/serverpage/image-id/1189i93BCAFDF280C3543/image-dimensions/553x221?v=v2" width="553" height="221" role="button" title="MrG_0-1606758406283.png" alt="MrG_0-1606758406283.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;With this last configuration, I get the same error.&lt;/P&gt;&lt;P&gt;Any ideas about what I should check in the configuration of the second GPU? Is there currently any limitation in the software that prevents processing on GPU #1 or above?&lt;/P&gt;&lt;P&gt;Can anyone confirm that there are no limitations to running processes on other GPUs? Or do I have a misconfiguration on my side?&lt;/P&gt;&lt;P&gt;Any help will be appreciated.&lt;/P&gt;&lt;P&gt;Best regards.&lt;/P&gt;</description>
      <pubDate>Mon, 30 Nov 2020 19:25:35 GMT</pubDate>
      <guid>https://community.esri.com/t5/arcgis-api-for-python-questions/deep-learning-error-using-the-2nd-gpu-bug/m-p/1005594#M5255</guid>
      <dc:creator>MrG</dc:creator>
      <dc:date>2020-11-30T19:25:35Z</dc:date>
    </item>
    <item>
      <title>Re: Deep Learning error using the 2nd GPU. Bug?</title>
      <link>https://community.esri.com/t5/arcgis-api-for-python-questions/deep-learning-error-using-the-2nd-gpu-bug/m-p/1005952#M5260</link>
      <description>&lt;P&gt;OK, replying to myself:&lt;/P&gt;&lt;P&gt;There seems to be a problem when you specify the GPU number in Pro while the environment variable &lt;SPAN&gt;CUDA_VISIBLE_DEVICES&lt;/SPAN&gt; is set.&lt;/P&gt;&lt;P&gt;Some time ago I used the Viewshed2 tool, which uses CUDA to speed up visibility calculations with the full geodetic solution. To harness the power of one of my GPUs, I set the environment variable &lt;SPAN&gt;CUDA_VISIBLE_DEVICES&lt;/SPAN&gt;=0.&lt;/P&gt;&lt;P&gt;This environment variable seems to override whatever GPU you indicate in the Pro GUI, so if you have several GPUs on your system, always leave the GPU number in Pro set to 0 and use the &lt;SPAN&gt;CUDA_VISIBLE_DEVICES&lt;/SPAN&gt; variable to point to the GPU you want to use. At least this has solved the issue for me.&lt;/P&gt;&lt;P&gt;Before this, I double-checked that my CUDA drivers were properly installed, that the bandwidth to my GPU was healthy, and so on. This is all quite complex, but it works.&lt;/P&gt;</description>
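      The renumbering MrG describes can be sketched as a small mapping function. This is a hypothetical illustration of the standard CUDA_VISIBLE_DEVICES semantics, not ArcGIS or CUDA source: CUDA enumerates only the listed physical GPUs, in the order given, and renumbers them from logical index 0.

      ```python
      def logical_to_physical(visible, physical_count):
          """Map each logical CUDA device index to its physical GPU index.

          `visible` mimics the CUDA_VISIBLE_DEVICES value (None if unset);
          `physical_count` is the number of GPUs installed in the machine.
          """
          if visible is None or visible == "":
              # Variable unset: all physical GPUs are visible, in order.
              return list(range(physical_count))
          mapping = []
          for token in visible.split(","):
              index = int(token)
              if index in range(physical_count):
                  mapping.append(index)
              else:
                  break  # CUDA stops enumerating at the first invalid entry
          return mapping

      # With CUDA_VISIBLE_DEVICES=0 on a two-GPU machine, only physical GPU #0
      # is visible, so asking a tool for GPU 1 finds no such device:
      print(logical_to_physical("0", 2))   # [0]
      # Pointing the variable at the second GPU makes it logical device 0,
      # which is why the GPU number in the Pro GUI must stay at 0:
      print(logical_to_physical("1", 2))   # [1]
      print(logical_to_physical(None, 2))  # [0, 1]
      ```

      In other words, once the variable is set, the tool's GPU parameter indexes into the already-filtered list, so "GPU 1" no longer exists unless two or more devices are listed.
      
      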
      <pubDate>Fri, 04 Dec 2020 16:37:25 GMT</pubDate>
      <guid>https://community.esri.com/t5/arcgis-api-for-python-questions/deep-learning-error-using-the-2nd-gpu-bug/m-p/1005952#M5260</guid>
      <dc:creator>MrG</dc:creator>
      <dc:date>2020-12-04T16:37:25Z</dc:date>
    </item>
  </channel>
</rss>

