<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic RasterProcessingGPU service cannot release GPU memory in ArcGIS API for Python Questions</title>
    <link>https://community.esri.com/t5/arcgis-api-for-python-questions/rasterprocessinggpu-service-cannot-release-gpu/m-p/1591565#M11195</link>
    <description>&lt;P&gt;I used the ArcGIS API for Python to run deep learning inference tasks. When one task succeeded and I proceeded to the next, the RasterAnalysisTools job failed, and the job messages reported that GPU memory was already occupied.&lt;/P&gt;&lt;P&gt;I found that the RasterProcessingGPU instance does not release GPU memory.&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="FrankLee1_0-1741073988105.png" style="width: 400px;"&gt;&lt;img src="https://community.esri.com/t5/image/serverpage/image-id/126829iBEF80B264F14335A/image-size/medium?v=v2&amp;amp;px=400" role="button" title="FrankLee1_0-1741073988105.png" alt="FrankLee1_0-1741073988105.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;I tried a few workarounds, but neither is optimal:&lt;/P&gt;&lt;P&gt;1. Shorten the RasterProcessingGPU service recycling interval.&lt;/P&gt;&lt;P&gt;2. Restart the RasterProcessingGPU service after each execution of the script.&lt;/P&gt;&lt;P&gt;My question is: can a service instance release GPU memory automatically?&lt;/P&gt;</description>
    <pubDate>Wed, 05 Mar 2025 08:40:15 GMT</pubDate>
    <dc:creator>FrankLee1</dc:creator>
    <dc:date>2025-03-05T08:40:15Z</dc:date>
    <item>
      <title>RasterProcessingGPU service cannot release GPU memory</title>
      <link>https://community.esri.com/t5/arcgis-api-for-python-questions/rasterprocessinggpu-service-cannot-release-gpu/m-p/1591565#M11195</link>
      <description>&lt;P&gt;I used the ArcGIS API for Python to run deep learning inference tasks. When one task succeeded and I proceeded to the next, the RasterAnalysisTools job failed, and the job messages reported that GPU memory was already occupied.&lt;/P&gt;&lt;P&gt;I found that the RasterProcessingGPU instance does not release GPU memory.&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="FrankLee1_0-1741073988105.png" style="width: 400px;"&gt;&lt;img src="https://community.esri.com/t5/image/serverpage/image-id/126829iBEF80B264F14335A/image-size/medium?v=v2&amp;amp;px=400" role="button" title="FrankLee1_0-1741073988105.png" alt="FrankLee1_0-1741073988105.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;I tried a few workarounds, but neither is optimal:&lt;/P&gt;&lt;P&gt;1. Shorten the RasterProcessingGPU service recycling interval.&lt;/P&gt;&lt;P&gt;2. Restart the RasterProcessingGPU service after each execution of the script.&lt;/P&gt;&lt;P&gt;My question is: can a service instance release GPU memory automatically?&lt;/P&gt;</description>
      <pubDate>Wed, 05 Mar 2025 08:40:15 GMT</pubDate>
      <guid>https://community.esri.com/t5/arcgis-api-for-python-questions/rasterprocessinggpu-service-cannot-release-gpu/m-p/1591565#M11195</guid>
      <dc:creator>FrankLee1</dc:creator>
      <dc:date>2025-03-05T08:40:15Z</dc:date>
    </item>
  </channel>
</rss>