<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Arcpy Multiprocessor issues in Python Questions</title>
    <link>https://community.esri.com/t5/python-questions/arcpy-multiprocessor-issues/m-p/645925#M50362</link>
    <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;SPAN&gt;Hi,&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;I am currently writing an arcpy script that churns through some large datasets and extracts data; to make this quicker I figured it might be worth using something like &lt;/SPAN&gt;&lt;A href="http://www.parallelpython.com/"&gt;Parallel Python&lt;/A&gt;&lt;SPAN&gt; or the inbuilt &lt;/SPAN&gt;&lt;A href="http://docs.python.org/library/multiprocessing.html"&gt;Multiprocessing library&lt;/A&gt;&lt;SPAN&gt;. Ideally I require the script to be run from within Arc (well... there seems to be an inescapable Schema Lock problem using direct iterators within the script, which can be circumvented by using modelbuilder iterators that run the script once per iteration, but that is another story).&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;However, I can get a script by either method to run fine outside of Arc (still calling arcpy), but running from within Arc the processing job gets part way through before failing:&lt;/SPAN&gt;&lt;BR /&gt;&lt;UL&gt; &lt;BR /&gt; &lt;LI&gt;With Parallel Python it gets to the point where job_server = pp.Server() is defined, then opens another instance of ArcMap if the script is being run from ArcMap, or Catalog if running from Catalog. From Catalog the script sits waiting (progress bar doing its swishy thing, but no processing occurring) until this new instance is closed, at which point the script fails with the following: &lt;SPAN style="color:red;"&gt;&amp;lt;class 'struct.error'&amp;gt;: unpack requires a string argument of length 8&lt;/SPAN&gt;&lt;/LI&gt; &lt;BR /&gt;Failed to execute (scriptPP).&amp;nbsp; &lt;BR /&gt;When running from ArcMap, the new instance initially presents an error dialog stating "Could not open the specified file"; after clicking OK it then proceeds as above (same error as from Catalog, only occurs after closing ArcMap). 
This usually causes AppROT to fail as well.&amp;nbsp; &lt;BR /&gt; &lt;BR /&gt; &lt;BR /&gt; &lt;LI&gt;With Multiprocessing it gets as far as actually running the module, and again attempts to open a new instance of whichever program it was started from. If running from ArcMap it opens an instance for each process initiated, each of which starts with an error dialog stating "Could not find file: from multiprocessing.forking import main; main().mxd"; upon clicking OK the instance opens as normal. Upon closing all new instances, the initial ArcMap becomes unresponsive (sometimes the processing dialog is still there, swishing away, sometimes it has disappeared), although there is no error message.&lt;/LI&gt; &lt;BR /&gt;Running from Catalog, it brings up an error dialog stating that "C:\Program Files (x86)\ArcGIS\Desktop10.0\Bin\from multiprocessing.forking import main; main() was not found." Once for each instance. Then the progress dialog and ArcCatalog both become unresponsive...&amp;nbsp;&amp;nbsp; &lt;BR /&gt; &lt;BR /&gt;&lt;/UL&gt;&lt;BR /&gt;&lt;SPAN&gt;I have created a simple python script for Multiprocessing which can reproduce the error; it runs fine if run directly, but causes the errors stated above when run from Arc, see below. Is anyone else familiar with these issues? Can they be confirmed on other setups? It appears that Arc is somehow trying to open parts of the modules that the processes are using, but I am not sure why. 
Any help would be greatly appreciated!&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BLOCKQUOTE class="jive-quote"&gt; import arcpy&amp;nbsp; &lt;BR /&gt;import multiprocessing&amp;nbsp; &lt;BR /&gt; &lt;BR /&gt;def simpleFunction(arg):&amp;nbsp; &lt;BR /&gt; return arg*3&amp;nbsp; &lt;BR /&gt; &lt;BR /&gt;if __name__ == "__main__":&amp;nbsp; &lt;BR /&gt; arcpy.AddMessage(" Multiprocessing test...")&amp;nbsp; &lt;BR /&gt; jobs = []&amp;nbsp; &lt;BR /&gt; pool = multiprocessing.Pool()&amp;nbsp; &lt;BR /&gt; for i in range(10):&amp;nbsp; &lt;BR /&gt; job = pool.apply_async(simpleFunction, (i,))&amp;nbsp; &lt;BR /&gt; jobs.append(job)&amp;nbsp; &lt;BR /&gt; &lt;BR /&gt; for job in jobs: # collect results from the job server&amp;nbsp; &lt;BR /&gt; print job.get()&amp;nbsp; &lt;BR /&gt; &lt;BR /&gt; del job, jobs&amp;nbsp; &lt;BR /&gt; arcpy.AddMessage(" Complete")&amp;nbsp; &lt;BR /&gt; print "Complete"&lt;/BLOCKQUOTE&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;ArcGIS Desktop 10.0 sp2, ArcInfo Licence running on Windows 7 with Core i7 and 8GB RAM.&lt;/SPAN&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
    <pubDate>Tue, 28 Jun 2011 05:02:11 GMT</pubDate>
    <dc:creator>StacyRendall1</dc:creator>
    <dc:date>2011-06-28T05:02:11Z</dc:date>
    <item>
      <title>Arcpy Multiprocessor issues</title>
      <link>https://community.esri.com/t5/python-questions/arcpy-multiprocessor-issues/m-p/645925#M50362</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;SPAN&gt;Hi,&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;I am currently writing an arcpy script that churns through some large datasets and extracts data; to make this quicker I figured it might be worth using something like &lt;/SPAN&gt;&lt;A href="http://www.parallelpython.com/"&gt;Parallel Python&lt;/A&gt;&lt;SPAN&gt; or the inbuilt &lt;/SPAN&gt;&lt;A href="http://docs.python.org/library/multiprocessing.html"&gt;Multiprocessing library&lt;/A&gt;&lt;SPAN&gt;. Ideally I require the script to be run from within Arc (well... there seems to be an inescapable Schema Lock problem using direct iterators within the script, which can be circumvented by using modelbuilder iterators that run the script once per iteration, but that is another story).&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;However, I can get a script by either method to run fine outside of Arc (still calling arcpy), but running from within Arc the processing job gets part way through before failing:&lt;/SPAN&gt;&lt;BR /&gt;&lt;UL&gt; &lt;BR /&gt; &lt;LI&gt;With Parallel Python it gets to the point where job_server = pp.Server() is defined, then opens another instance of ArcMap if the script is being run from ArcMap, or Catalog if running from Catalog. From Catalog the script sits waiting (progress bar doing its swishy thing, but no processing occurring) until this new instance is closed, at which point the script fails with the following: &lt;SPAN style="color:red;"&gt;&amp;lt;class 'struct.error'&amp;gt;: unpack requires a string argument of length 8&lt;/SPAN&gt;&lt;/LI&gt; &lt;BR /&gt;Failed to execute (scriptPP).&amp;nbsp; &lt;BR /&gt;When running from ArcMap, the new instance initially presents an error dialog stating "Could not open the specified file"; after clicking OK it then proceeds as above (same error as from Catalog, only occurs after closing ArcMap). 
This usually causes AppROT to fail as well.&amp;nbsp; &lt;BR /&gt; &lt;BR /&gt; &lt;BR /&gt; &lt;LI&gt;With Multiprocessing it gets as far as actually running the module, and again attempts to open a new instance of whichever program it was started from. If running from ArcMap it opens an instance for each process initiated, each of which starts with an error dialog stating "Could not find file: from multiprocessing.forking import main; main().mxd"; upon clicking OK the instance opens as normal. Upon closing all new instances, the initial ArcMap becomes unresponsive (sometimes the processing dialog is still there, swishing away, sometimes it has disappeared), although there is no error message.&lt;/LI&gt; &lt;BR /&gt;Running from Catalog, it brings up an error dialog stating that "C:\Program Files (x86)\ArcGIS\Desktop10.0\Bin\from multiprocessing.forking import main; main() was not found." Once for each instance. Then the progress dialog and ArcCatalog both become unresponsive...&amp;nbsp;&amp;nbsp; &lt;BR /&gt; &lt;BR /&gt;&lt;/UL&gt;&lt;BR /&gt;&lt;SPAN&gt;I have created a simple python script for Multiprocessing which can reproduce the error; it runs fine if run directly, but causes the errors stated above when run from Arc, see below. Is anyone else familiar with these issues? Can they be confirmed on other setups? It appears that Arc is somehow trying to open parts of the modules that the processes are using, but I am not sure why. 
Any help would be greatly appreciated!&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BLOCKQUOTE class="jive-quote"&gt; import arcpy&amp;nbsp; &lt;BR /&gt;import multiprocessing&amp;nbsp; &lt;BR /&gt; &lt;BR /&gt;def simpleFunction(arg):&amp;nbsp; &lt;BR /&gt; return arg*3&amp;nbsp; &lt;BR /&gt; &lt;BR /&gt;if __name__ == "__main__":&amp;nbsp; &lt;BR /&gt; arcpy.AddMessage(" Multiprocessing test...")&amp;nbsp; &lt;BR /&gt; jobs = []&amp;nbsp; &lt;BR /&gt; pool = multiprocessing.Pool()&amp;nbsp; &lt;BR /&gt; for i in range(10):&amp;nbsp; &lt;BR /&gt; job = pool.apply_async(simpleFunction, (i,))&amp;nbsp; &lt;BR /&gt; jobs.append(job)&amp;nbsp; &lt;BR /&gt; &lt;BR /&gt; for job in jobs: # collect results from the job server&amp;nbsp; &lt;BR /&gt; print job.get()&amp;nbsp; &lt;BR /&gt; &lt;BR /&gt; del job, jobs&amp;nbsp; &lt;BR /&gt; arcpy.AddMessage(" Complete")&amp;nbsp; &lt;BR /&gt; print "Complete"&lt;/BLOCKQUOTE&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;ArcGIS Desktop 10.0 sp2, ArcInfo Licence running on Windows 7 with Core i7 and 8GB RAM.&lt;/SPAN&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Tue, 28 Jun 2011 05:02:11 GMT</pubDate>
      <guid>https://community.esri.com/t5/python-questions/arcpy-multiprocessor-issues/m-p/645925#M50362</guid>
      <dc:creator>StacyRendall1</dc:creator>
      <dc:date>2011-06-28T05:02:11Z</dc:date>
    </item>
    <item>
      <title>Re: Arcpy Multiprocessor issues</title>
      <link>https://community.esri.com/t5/python-questions/arcpy-multiprocessor-issues/m-p/645926#M50363</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;SPAN&gt;You'll need to run your script out of process (there's a checkbox in the script tool settings dialog) to get it to work. ArcMap sets up a runtime environment that may not be compatible with the way that multiprocessing bootstraps &lt;/SPAN&gt;&lt;SPAN style="font-style:italic;"&gt;its&lt;/SPAN&gt;&lt;SPAN&gt; runtime.&lt;/SPAN&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Tue, 28 Jun 2011 19:36:23 GMT</pubDate>
      <guid>https://community.esri.com/t5/python-questions/arcpy-multiprocessor-issues/m-p/645926#M50363</guid>
      <dc:creator>JasonScheirer</dc:creator>
      <dc:date>2011-06-28T19:36:23Z</dc:date>
    </item>
    <item>
      <title>Re: Arcpy Multiprocessor issues</title>
      <link>https://community.esri.com/t5/python-questions/arcpy-multiprocessor-issues/m-p/645927#M50364</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;SPAN&gt;Cool, thanks.&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;Just one note on that method; the default action when clicking on a Python file (*.py) within Windows must be set to run it (not edit it).&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;Also, can you add something to the documentation explaining this?&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;Regards&lt;/SPAN&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Tue, 28 Jun 2011 20:41:41 GMT</pubDate>
      <guid>https://community.esri.com/t5/python-questions/arcpy-multiprocessor-issues/m-p/645927#M50364</guid>
      <dc:creator>StacyRendall1</dc:creator>
      <dc:date>2011-06-28T20:41:41Z</dc:date>
    </item>
    <item>
      <title>Re: Arcpy Multiprocessor issues</title>
      <link>https://community.esri.com/t5/python-questions/arcpy-multiprocessor-issues/m-p/645928#M50365</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;SPAN&gt;Seeing that Arcpy isn't a native Python object, would there be any benefit (or would it even work) to send arcpy.Whatever commands to Parallel Python, Pickle, etc.?&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;The only way I've ever gotten a "parallel" process to actually work with gp/arcpy objects is using os.spawnv and/or subprocess.&lt;/SPAN&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Wed, 29 Jun 2011 16:20:44 GMT</pubDate>
      <guid>https://community.esri.com/t5/python-questions/arcpy-multiprocessor-issues/m-p/645928#M50365</guid>
      <dc:creator>ChrisSnyder</dc:creator>
      <dc:date>2011-06-29T16:20:44Z</dc:date>
    </item>
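The os.spawnv/subprocess route mentioned above can be sketched in plain Python. The post includes no code, so everything below is a hypothetical stand-in: the tripling worker and the argument passing are illustrative only, and a real geoprocessing tool would instead launch a worker .py script that imports arcpy itself and processes one chunk of data. The key idea is that each child is a fresh interpreter with its own clean runtime, sidestepping ArcMap's embedded-Python environment.

```python
# Minimal sketch (assumptions: the "-c" worker stands in for a real
# arcpy-importing worker script; results come back via stdout).
import subprocess
import sys

def run_in_subprocesses(values):
    """Start one child interpreter per value and collect each child's stdout."""
    procs = []
    for v in values:
        # A real tool would run [sys.executable, "worker.py", feature_class, str(v)].
        p = subprocess.Popen(
            [sys.executable, "-c",
             "import sys; print(int(sys.argv[1]) * 3)", str(v)],
            stdout=subprocess.PIPE,
        )
        procs.append(p)
    results = []
    for p in procs:  # wait for each child in turn and parse its output
        out, _ = p.communicate()
        results.append(int(out.decode().strip()))
    return results
```

Because the children are independent processes, nothing needs to be pickled; the cost is that all inputs and outputs must pass through command-line arguments, files, or pipes.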
    <item>
      <title>Re: Arcpy Multiprocessor issues</title>
      <link>https://community.esri.com/t5/python-questions/arcpy-multiprocessor-issues/m-p/645929#M50366</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;BLOCKQUOTE class="jive-quote"&gt;Seeing that Arcpy isn't a native Python object, would there be any benefit (or would it even work) to send arcpy.Whatever commands to Parallel Python, Pickle, etc.?&lt;BR /&gt;&lt;BR /&gt;The only way I've ever gotten a "parallel" process to actually work with gp/arcpy objects is using os.spawnv and/or subprocess.&lt;/BLOCKQUOTE&gt;&lt;BR /&gt;&lt;SPAN&gt; &lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;I doubt that sending an arcpy.Whatever command to Multiprocessing or Parallel Python would work... Haven't tried it though.&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;In this case I am able to split the data beforehand, then define a python module that does the processing on the section of data that is passed to it; it's these things that run in parallel. Let me know if you want more info - I can prepare some sample code for you - there's not much around on the internet for this kind of thing!&lt;/SPAN&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 30 Jun 2011 05:49:08 GMT</pubDate>
      <guid>https://community.esri.com/t5/python-questions/arcpy-multiprocessor-issues/m-p/645929#M50366</guid>
      <dc:creator>StacyRendall1</dc:creator>
      <dc:date>2011-06-30T05:49:08Z</dc:date>
    </item>
    <item>
      <title>Re: Arcpy Multiprocessor issues</title>
      <link>https://community.esri.com/t5/python-questions/arcpy-multiprocessor-issues/m-p/645930#M50367</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;SPAN&gt;Correct, many objects in arcpy/arcgisscripting are not pickleable, so sending them directly across the wire in multiprocessing will not work. There are workarounds, however. If you are sending rows around, you can use tuples with the column values you need, and if you are using geometries you can use the __geo_interface__ attribute to send and the AsShape function to convert back to the geometry object in your function.&lt;/SPAN&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 30 Jun 2011 14:40:47 GMT</pubDate>
      <guid>https://community.esri.com/t5/python-questions/arcpy-multiprocessor-issues/m-p/645930#M50367</guid>
      <dc:creator>JasonScheirer</dc:creator>
      <dc:date>2011-06-30T14:40:47Z</dc:date>
    </item>
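A minimal sketch of the workaround described above, with arcpy itself assumed rather than imported: the dict below is merely shaped like a polygon's __geo_interface__, and arcpy.AsShape appears only in a comment. The point the post makes is that plain tuples and dicts survive pickling (which is what multiprocessing does to everything it sends between processes) while cursor rows and geometry objects do not.

```python
# Sketch of the "send plain data, not arcpy objects" pattern (assumption:
# on the receiving side, arcpy.AsShape(d["geometry"]) would rebuild the
# geometry; arcpy is not imported here).
import pickle

def to_portable(geo_interface_dict, row_values):
    """Bundle a geometry's plain-dict form with the column values you need."""
    return {"geometry": geo_interface_dict, "attributes": tuple(row_values)}

# A dict shaped like a polygon's __geo_interface__ (GeoJSON-style):
poly = {"type": "Polygon",
        "coordinates": [[(0.0, 0.0), (10.0, 0.0), (10.0, 10.0), (0.0, 0.0)]]}

payload = to_portable(poly, [3, 42.5])   # e.g. type and weighting columns
wire = pickle.dumps(payload)             # what multiprocessing does internally
restored = pickle.loads(wire)            # round-trips cleanly, unlike a Row object
```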
    <item>
      <title>Re: Arcpy Multiprocessor issues</title>
      <link>https://community.esri.com/t5/python-questions/arcpy-multiprocessor-issues/m-p/645931#M50368</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;BLOCKQUOTE class="jive-quote"&gt;In this case I am able to split the data beforehand, then define a python module that does the processing on the section of data that is passed to it; it's these things that run in parallel. Let me know if you want more info - I can prepare a sample code for you - there's not much around on the internet for this kind of thing!&lt;/BLOCKQUOTE&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;Stacy, I'd love to see some samples. Occasionally, I have the need to write large number-crunching stuff in Python (traversing large arrays or dictionary-type structures), and I'd love to explore ways to speed up the processing. You're right, it's pretty hard to get practical examples of using some of this stuff!&lt;/SPAN&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 30 Jun 2011 16:39:23 GMT</pubDate>
      <guid>https://community.esri.com/t5/python-questions/arcpy-multiprocessor-issues/m-p/645931#M50368</guid>
      <dc:creator>ChrisSnyder</dc:creator>
      <dc:date>2011-06-30T16:39:23Z</dc:date>
    </item>
    <item>
      <title>Re: Arcpy Multiprocessor issues</title>
      <link>https://community.esri.com/t5/python-questions/arcpy-multiprocessor-issues/m-p/645932#M50369</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;BLOCKQUOTE class="jive-quote"&gt;Stacy, I'd love to see some samples. Occasionally, I have the need to write large number-crunching stuff in Python (traversing large arrays or dictionary-type structures), and I'd love to explore ways to speed up the processing. You're right, it's pretty hard to get practical examples of using some of this stuff!&lt;/BLOCKQUOTE&gt;&lt;BR /&gt;&lt;SPAN&gt; &lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;Hi Chris - I'm working on this in my free time; hope to get something up before next week...&lt;/SPAN&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Mon, 04 Jul 2011 23:36:19 GMT</pubDate>
      <guid>https://community.esri.com/t5/python-questions/arcpy-multiprocessor-issues/m-p/645932#M50369</guid>
      <dc:creator>StacyRendall1</dc:creator>
      <dc:date>2011-07-04T23:36:19Z</dc:date>
    </item>
    <item>
      <title>Re: Arcpy Multiprocessor issues</title>
      <link>https://community.esri.com/t5/python-questions/arcpy-multiprocessor-issues/m-p/645933#M50370</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;SPAN&gt;Chris,&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;The following is a simple example of an Arcpy script that uses Multiprocessing (or Parallel Python). I have created a blog, &lt;/SPAN&gt;&lt;A href="http://pythongisandstuff.wordpress.com/"&gt;pythongisandstuff.wordpress.com/&lt;/A&gt;&lt;SPAN&gt;, which goes into greater detail of the method for developing normal scripts into scripts suitable for Multiprocessing, and has the files used in this example available for download.&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;There are a few requirements before you can make an Arcpy process suitable for use with Python's Multiprocessing library:&lt;/SPAN&gt;&lt;BR /&gt;&lt;OL&gt;&lt;BR /&gt;&lt;LI&gt;The most calculation-intensive (time consuming) part of the code must be able to be made into a Python 'module' and parallelised; which will be described in the following posts.&lt;/LI&gt;&lt;BR /&gt;&lt;LI&gt;Once it is made into a module, there must be no issues with data access - each invocation of the module should either write to a different output database (Arc locks the entire *.gdb in use, not just the feature class being accessed) or pass data back for it to be written only at a later stage.&lt;/LI&gt;&lt;BR /&gt;&lt;/OL&gt;&lt;BR /&gt;&lt;SPAN&gt;To explain using Multiprocessing with Python, I will set out a hypothetical example. The objective of the example is to identify the number and type, and accumulate a weight value, of all the Polygons within a certain distance of some Point features. The Polygon feature class has &lt;/SPAN&gt;&lt;SPAN style="font-style:italic;"&gt;'polyType'&lt;/SPAN&gt;&lt;SPAN&gt; and &lt;/SPAN&gt;&lt;SPAN style="font-style:italic;"&gt;'polyWeight'&lt;/SPAN&gt;&lt;SPAN&gt; attributes. This is just a simplified example that I thought up for explaining Multiprocessing - 
sorry if there is a better method than what follows for actually doing this!&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;Both Multiprocessing and Parallel Python use a Python list to which jobs are appended, and from which the result is then extracted. Each job is processed in turn by the first available worker (CPU/core). So, if there are six jobs in the list on a system with two CPUs, the first two jobs will start near simultaneously, then as each CPU finishes its task and becomes free it will start working on the next job in the list. Both methods can use all available CPUs on the system, or they can be limited to a predefined number (see below), or they may be limited by the method itself - in this example, as there are only 6 Polygon types, only 6 CPUs would be used.&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;STRONG style="font-style: italic;"&gt;See next post for code...&lt;/STRONG&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;On my computer, the non-parallel scripts posted in &lt;/SPAN&gt;&lt;A href="http://pythongisandstuff.wordpress.com/2011/07/06/using-arcpy-with-multiprocessing-part-2"&gt;Part 2&lt;/A&gt;&lt;SPAN&gt; of my blog all take about 300 seconds to execute if run from the command line, and about 400 seconds if run as a tool within ArcMap or ArcCatalog. The parallelised script above takes about 170 seconds to execute if run from the command line, and 340 seconds if run as a tool within ArcMap or ArcCatalog. It seems that the first set of processes through the queue (i.e. two if two CPUs are being used, four if four CPUs are being used) are delayed for some reason, but latter processes run a lot faster.&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;Feel free to ask questions or point out any improvements!&lt;/SPAN&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Mon, 11 Jul 2011 09:09:14 GMT</pubDate>
      <guid>https://community.esri.com/t5/python-questions/arcpy-multiprocessor-issues/m-p/645933#M50370</guid>
      <dc:creator>StacyRendall1</dc:creator>
      <dc:date>2011-07-11T09:09:14Z</dc:date>
    </item>
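The job-list behaviour described in the post can be sketched with the standard multiprocessing library alone. The per-section worker below is a hypothetical stand-in for the arcpy processing module (it just sums a chunk of numbers), and the cap of two workers mirrors the Pool(2) example the post mentions:

```python
# Sketch of the job-list pattern: jobs are appended to a list, each is picked
# up by the first free worker, and the pool size can be capped. The worker is
# a stand-in for a real arcpy processing module (assumption, not the post's code).
import multiprocessing

def process_section(section):
    """Stand-in for the per-section processing module: sum one data chunk."""
    return sum(section)

def run_pool(sections, workers=2):
    pool = multiprocessing.Pool(workers)   # cap the pool, like Pool(2) / pp.Server(2)
    jobs = [pool.apply_async(process_section, (s,)) for s in sections]
    results = [job.get() for job in jobs]  # .get() blocks until that job finishes
    pool.close()
    pool.join()
    return results

if __name__ == "__main__":
    # six jobs, two workers: the first two start together, the rest queue up
    data = [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10], [11, 12]]
    print(run_pool(data))  # [3, 7, 11, 15, 19, 23]
```

Results come back in submission order regardless of which worker finished first, because `.get()` is called on each job in the order the jobs were appended.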
    <item>
      <title>Re: Arcpy Multiprocessor issues</title>
      <link>https://community.esri.com/t5/python-questions/arcpy-multiprocessor-issues/m-p/645934#M50371</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;SPAN&gt;GDB files used in this code are attached to this post; see my blog &lt;/SPAN&gt;&lt;A href="http://pythongisandstuff.wordpress.com/" rel="nofollow noopener noreferrer" target="_blank"&gt;pythongisandstuff.wordpress.com&lt;/A&gt;&lt;SPAN&gt;, which goes into greater detail about the methods for developing normal scripts into scripts suitable for Multiprocessing...&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;PRE class="lia-code-sample line-numbers language-none"&gt;'''
Step 4 of 4: Example of using Multiprocessing/Parallel Python with Arcpy...

Can be run either:
 1. from the command line/a Python IDE (adjust paths to feature classes, as necessary)
 2. as a Script tool within ArcGIS (ensure 'Run Ptyhon script in Process' is NOT checked when importing)
 
 The Parallel Python library must be installed before it can be used.
'''

import arcpy
import multiprocessing
import time
try:
 import pp
 forceMP = False
except ImportError:
 forceMP = True

def performCalculation(points_fC, polygons_fC, searchDist, typeList, calcPlatform_input=None):
 '''performCalculation(pointsFeatureClass, polygonsFeatureClass, searchDistance, typeList, calcPlatform)
 
 All inputs are specific.
 
 calcPlatform is either 'mp', to use the inbuilt Multiprocessing library, or 'pp', to use Parallel Python (if it exists, otherwise defaults to mp)
 
 '''
&amp;nbsp; 
 # ---------------------------------------------------------------------------
 ## Pre-calculation checks
 # ---------------------------------------------------------------------------
 
 # Check calculation platform is valid, or resort to default...
 defaultCalcTpye = 'mp'
 calcTypeExplainDict = {'pp':'Parallel Python', 'mp':'Multiprocessing'}
 if forceMP: # unable to import Parallel Python library (it has to be installed seperately)
&amp;nbsp; arcpy.AddMessage("&amp;nbsp;&amp;nbsp; WARNING: Cannot load Parallel Python library, forcing Multiprocessing library...")
&amp;nbsp; calcPlatform = 'mp'
 elif (calcPlatform_input not in ['mp', 'pp']) or (calcPlatform_input == None): # Invalid/no input, resort to default
&amp;nbsp; arcpy.AddMessage("&amp;nbsp;&amp;nbsp; WARNING: Input calculation platform '%s' invalid; should be either 'mp' or 'pp'. Defaulting to %s..." % (calcPlatform_input, calcTypeExplainDict[defaultCalcTpye]))
&amp;nbsp; calcPlatform = defaultCalcTpye
 else:
&amp;nbsp; calcPlatform = calcPlatform_input


 # ---------------------------------------------------------------------------
 ## Data extraction (parallel execution)
 # ---------------------------------------------------------------------------
 
 searchDist = int(searchDist) # convert search distance to integer...

 # check all datasets are OK; if not, provide some useful output and terminate
 valid_points = arcpy.Exists(points_fC)
 arcpy.AddMessage("Points Feature Class: "+points_fC)
 valid_polygons = arcpy.Exists(polygons_fC)
 arcpy.AddMessage("Polygons Feature Class: "+polygons_fC)
 dataCheck&amp;nbsp; = valid_points &amp;amp; valid_polygons
 
 if not dataCheck:
&amp;nbsp; if not valid_points:
&amp;nbsp;&amp;nbsp; arcpy.AddError("Points database or feature class, %s,&amp;nbsp; is invalid..." % (points_fC))
&amp;nbsp; if not valid_polygons:
&amp;nbsp;&amp;nbsp; arcpy.AddError("Polygons database or feature class, %s,&amp;nbsp; is invalid..." % (polygons_fC))
&amp;nbsp;&amp;nbsp; 
 else: # Inputs are OK, start calculation...
 
&amp;nbsp; for type in typeList: # add fields to Points file
&amp;nbsp;&amp;nbsp; arcpy.AddField_management(points_fC, "polygon_type%s_Sum" % type, "DOUBLE")
&amp;nbsp;&amp;nbsp; arcpy.CalculateField_management(points_fC, "polygon_type%s_Sum" % type, 0, "PYTHON")
&amp;nbsp;&amp;nbsp; arcpy.AddField_management(points_fC, "polygon_type%s_Count" % type, "DOUBLE")
&amp;nbsp;&amp;nbsp; arcpy.CalculateField_management(points_fC, "polygon_type%s_Count" % type, 0, "PYTHON")
&amp;nbsp;&amp;nbsp; arcpy.AddMessage("&amp;nbsp;&amp;nbsp;&amp;nbsp; Added polygon_type%s_Sum and polygon_type%s_Count fields to Points." % (type,type))

&amp;nbsp; pointsDataDict = {} # dictionary: pointsDataDict[pointID][Type]=[sum of Polygon type weightings, count of Ploygons of type]
&amp;nbsp; jobs = [] # jobs are added to the list, then processed in parallel by the available workers (CPUs)

&amp;nbsp; if calcPlatform == 'mp':
&amp;nbsp;&amp;nbsp; arcpy.AddMessage("&amp;nbsp;&amp;nbsp;&amp;nbsp; Utilising Python Multiprocessing library:")
&amp;nbsp;&amp;nbsp; pool = multiprocessing.Pool() # initate the processing pool - use all available resources
#&amp;nbsp;&amp;nbsp; pool = multiprocessing.Pool(2) # Example: limit the processing pool to 2 CPUs...
&amp;nbsp;&amp;nbsp; for type in typeList: # For this specific example
&amp;nbsp;&amp;nbsp;&amp;nbsp; arcpy.AddMessage("&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Passing %s to processing pool..." % type)
&amp;nbsp;&amp;nbsp;&amp;nbsp; jobs.append(pool.apply_async(findPolygons, (points_fC, polygons_fC, type, searchDist))) # send jobs to be processed

&amp;nbsp;&amp;nbsp; for job in jobs: # collect results from the job server (waits until all the processing is complete)
&amp;nbsp;&amp;nbsp;&amp;nbsp; if len(pointsDataDict.keys()) == 0: # first output becomes the new dictionary
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; pointsDataDict.update(job.get())
&amp;nbsp;&amp;nbsp;&amp;nbsp; else:
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; for key in job.get(): # later outputs should be added per key...
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; pointsDataDict[key].update(job.get()[key])
&amp;nbsp;&amp;nbsp; del jobs

&amp;nbsp; elif calcPlatform == 'pp':
&amp;nbsp;&amp;nbsp; ppservers=()
&amp;nbsp;&amp;nbsp; job_server = pp.Server(ppservers=ppservers) # initate the job server - use all available resources
#&amp;nbsp;&amp;nbsp; job_server = pp.Server(2, ppservers=ppservers) # Example: limit the processing pool to 2 CPUs...
&amp;nbsp;&amp;nbsp; arcpy.AddMessage("&amp;nbsp;&amp;nbsp;&amp;nbsp; Utilising Parallel Python library:")
&amp;nbsp;&amp;nbsp; for type in typeList: # For this specific example, it would only initate three processes anyway...
&amp;nbsp;&amp;nbsp;&amp;nbsp; arcpy.AddMessage("&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Passing %s to processing pool..." % type)&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 
&amp;nbsp;&amp;nbsp;&amp;nbsp; jobs.append(job_server.submit(findPolygons, (points_fC, polygons_fC, type, searchDist), (), ("arcpy",))) # send jobs to be processed

&amp;nbsp;&amp;nbsp; for job in jobs: # collect results from the job server (waits until all the processing is complete)
&amp;nbsp;&amp;nbsp;&amp;nbsp; if len(pointsDataDict.keys()) == 0: # first output becomes the new dictioanry
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; pointsDataDict.update(job())
&amp;nbsp;&amp;nbsp;&amp;nbsp; else:
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; for key in job():
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; pointsDataDict[key].update(job()[key]) # later outputs should be added per key...
&amp;nbsp;&amp;nbsp; del jobs
&amp;nbsp;&amp;nbsp; 
&amp;nbsp; # ---------------------------------------------------------------------------
&amp;nbsp; ## Writing data back to points file
&amp;nbsp; # ---------------------------------------------------------------------------

&amp;nbsp; arcpy.AddMessage("&amp;nbsp;&amp;nbsp;&amp;nbsp; Values extracted; writing results to Points...")
&amp;nbsp; pointsRowList = arcpy.UpdateCursor(points_fC)
&amp;nbsp; for pointsRow in pointsRowList: # write the values for every point
&amp;nbsp;&amp;nbsp; pointID = pointsRow.getValue("PointID")
&amp;nbsp;&amp;nbsp; for type in typeList:
&amp;nbsp;&amp;nbsp;&amp;nbsp; pointsRow.setValue("polygon_type%s_Sum" % type, pointsRow.getValue("polygon_type%s_Sum" % type) + pointsDataDict[pointID][type][0])
&amp;nbsp;&amp;nbsp;&amp;nbsp; pointsRow.setValue("polygon_type%s_Count" % type, pointsRow.getValue("polygon_type%s_Count" % type) + pointsDataDict[pointID][type][1])
&amp;nbsp;&amp;nbsp;&amp;nbsp; 
&amp;nbsp;&amp;nbsp; pointsRowList.updateRow(pointsRow)
&amp;nbsp;&amp;nbsp; del pointsRow
&amp;nbsp;&amp;nbsp; 
&amp;nbsp; del pointsRowList
&amp;nbsp; # just make sure any locks are cleared...
&amp;nbsp; del calcPlatform, calcPlatform_input, calcTypeExplainDict, dataCheck, defaultCalcTpye, job, key, pointID, pointsDataDict, points_fC, polygons_fC, type, valid_points, valid_polygons, searchDist, typeList


def findPolygons(points_FC, polygons_FC, Type, search_dist):
 funcTempDict = {}
 arcpy.MakeFeatureLayer_management(polygons_FC, '%s_%s' % (polygons_FC, Type))
 arcpy.MakeFeatureLayer_management(points_FC, '%s_%s' % (points_FC, Type))
 pointsRowList = arcpy.SearchCursor('%s_%s' % (points_FC, Type))
 for pointRow in pointsRowList: # for every origin
&amp;nbsp; pointID = pointRow.getValue("PointID")

&amp;nbsp; try:
&amp;nbsp;&amp;nbsp; funcTempDict[pointID][Type] = [0,0]
&amp;nbsp; except KeyError: # first time within row
&amp;nbsp;&amp;nbsp; funcTempDict[pointID] = {}
&amp;nbsp;&amp;nbsp; funcTempDict[pointID][Type] = [0,0]

&amp;nbsp; arcpy.SelectLayerByAttribute_management('%s_%s' % (points_FC, Type), 'NEW_SELECTION', '"PointID" = \'%s\'' % pointID)
&amp;nbsp; arcpy.SelectLayerByLocation_management('%s_%s' % (polygons_FC, Type), 'INTERSECT', '%s_%s' % (points_FC, Type), search_dist, 'NEW_SELECTION')
&amp;nbsp; arcpy.SelectLayerByAttribute_management('%s_%s' % (polygons_FC, Type), 'SUBSET_SELECTION', '"polyType" = %s' % Type)
&amp;nbsp; polygonsRowList = arcpy.SearchCursor('%s_%s' % (polygons_FC, Type))
&amp;nbsp; for polygonsRow in polygonsRowList:
&amp;nbsp;&amp;nbsp; funcTempDict[pointID][Type][0] += polygonsRow.getValue("polyWeighting")
&amp;nbsp;&amp;nbsp; funcTempDict[pointID][Type][1] += 1
 
 return funcTempDict


if __name__ == '__main__':
 # Read the parameter values:
 #&amp;nbsp; 1: points feature class (string to location, e.g. c:\GIS\Data\points.gdb\points01)
 #&amp;nbsp; 2: polygons feature class (string to location)
 #&amp;nbsp; 3: search distance for polygons, integer, e.g. 500
 #&amp;nbsp; 4: calculation type ('mp' to use Multiprocessing library, 'pp' to use Parallel Python library (if available, otherwise defaults to mp))
 pointsFC = arcpy.GetParameterAsText(0) # required
 polygonsFC = arcpy.GetParameterAsText(1) # required
 search_Distance = arcpy.GetParameterAsText(2) # required
 calcType = arcpy.GetParameterAsText(3) # optional (will default to MP)

 t_start = time.clock()
 
 type_list = [1,2,3,4,5,6] # polygon types to search for...
 
 # run calculation
 if '' in [pointsFC, polygonsFC, search_Distance]: # if not running from Arc, the input parameters all come out as ''
&amp;nbsp; pointsFC = "c:\\Work\\GIS\\Data\\Points.gdb\\points01"
&amp;nbsp; polygonsFC = "c:\\Work\\GIS\\Data\\Polygons.gdb\\polygons01"
&amp;nbsp; search_Distance = 1000
&amp;nbsp; performCalculation(pointsFC, polygonsFC, search_Distance, type_list)
 else: 
&amp;nbsp; performCalculation(pointsFC, polygonsFC, search_Distance, type_list, calcType)
 
 arcpy.AddMessage("&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; ... complete in %s seconds." % (time.clock() - t_start))
&lt;/PRE&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Sun, 12 Dec 2021 03:25:03 GMT</pubDate>
      <guid>https://community.esri.com/t5/python-questions/arcpy-multiprocessor-issues/m-p/645934#M50371</guid>
      <dc:creator>StacyRendall1</dc:creator>
      <dc:date>2021-12-12T03:25:03Z</dc:date>
    </item>
    <item>
      <title>Re: Arcpy Multiprocessor issues</title>
      <link>https://community.esri.com/t5/python-questions/arcpy-multiprocessor-issues/m-p/645935#M50372</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;SPAN&gt;Wow - Very nice! I'd have to take a while to digest all that...&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;I agree, there is a significant amount of overhead splitting things up, managing the separate processes (multiprocess seems to make it easier), and then squishing it all back together. Not for the faint hearted...&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;If you are interested, here's my stone age attempt at parallel processing using os.spawnv (this code is almost 5 years old now!). I have a large process that I run every 3 months or so for my bosses, just a big overlay of our land base (riparian areas, forest inventory, harvest areas, habitat concerns, etc.) that we use for reporting, forest modeling, etc. Originally, due to the size of all the layers involved, the union wouldn't run at all, so I create a "tile" feature class (composed of ~ 40 little rectangular polygons, and then would run the overlay for each tile - one at a time. Then I figured, hey, I could run many tiles at once (but just had to write some code to regulate the timing of it all). Instatnt paralell process! Anyway, it's pretty stone age (uses .txt files to comunicate!), but it works. The "run_union_slave" script does the actual geoprocessing, this "master" script below just manages the overall parallel process. &lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;PRE class="lia-code-sample line-numbers language-none"&gt;# Description
# -----------
# This master script regulates the execution of the script run_union_slave.py (for each tile)
# Author: Chris Snyder, WA Department of Natural Resources, chris.snyder(at)wadnr.gov

import sys, string, os, shutil, time, traceback, glob
try:
&amp;nbsp;&amp;nbsp;&amp;nbsp; #Defines some functions
&amp;nbsp;&amp;nbsp;&amp;nbsp; def showPyMessage():
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; try:
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; print &amp;gt;&amp;gt; open(logFile, 'a'), str(time.ctime()) + " - " + str(message)
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; print str(time.ctime()) + " - " + str(message)
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; except:
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; pass 
&amp;nbsp;&amp;nbsp;&amp;nbsp; def launchProcess(tileNumber):
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; global message
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; global numberOfProcessors
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; message = "Processing tile #" + str(tileNumber); showPyMessage()
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; #Added this to address bug with indexTiles.shp used in the overlay tools in v9.2+ (can't run more than 1 overlay process at once)
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; newTempDir = r"C:\temp\gptmpenvr_" + time.strftime('%Y%m%d%H%M%S')
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; os.mkdir(newTempDir)
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; os.environ["TEMP"] = newTempDir
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; os.environ["TMP"] = newTempDir
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; message = "Set TEMP and TMP variables to " + newTempDir; showPyMessage()
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; #End bug fix...
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; parameterList = [pythonExePath, slaveScript, root, str(tileNumber), yyyymmdd]
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; if tileNumber &amp;lt;= indxPolyFeatureCount:
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; if tileNumber == indxPolyFeatureCount or numberOfProcessors == 1:
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; message = "Waiting for tile #" + str(indxPolyFeatureCount) + " to finish..."; showPyMessage()
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; os.spawnv(os.P_WAIT, pythonExePath, parameterList)
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; else:
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; os.spawnv(os.P_NOWAIT, pythonExePath, parameterList)
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; time.sleep(1); message = "Waiting a few seconds"; showPyMessage()
&amp;nbsp;&amp;nbsp;&amp;nbsp; 
&amp;nbsp;&amp;nbsp;&amp;nbsp; #Specifies the root variable, makes the logFile variable, and does some error checking...
&amp;nbsp;&amp;nbsp;&amp;nbsp; dateTimeStamp = time.strftime('%Y%m%d%H%M%S')
&amp;nbsp;&amp;nbsp;&amp;nbsp; root = sys.argv[1] #r"E:\1junk\overlay_20060531"
&amp;nbsp;&amp;nbsp;&amp;nbsp; yyyymmdd = sys.argv[2] #this is passed from the master script since the tiles may be processed over a 2+ day period
&amp;nbsp;&amp;nbsp;&amp;nbsp; if os.path.exists(root)== False:
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; print "Specified root directory: " + root + " does not exist... Bailing out!"
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; sys.exit()
&amp;nbsp;&amp;nbsp;&amp;nbsp; scriptName = sys.argv[0].split("\\")[len(sys.argv[0].split("\\")) - 1][0:-3] #Gets the name of the script without the .py extension&amp;nbsp; 
&amp;nbsp;&amp;nbsp;&amp;nbsp; logFile = root + "\\log_files\\" + scriptName + "_" + dateTimeStamp[:8] + ".log" #Creates the logFile variable
&amp;nbsp;&amp;nbsp;&amp;nbsp; if os.path.exists(root + "\\log_files") == False: #Makes sure log_files exists
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; os.mkdir(root + "\\log_files")
&amp;nbsp;&amp;nbsp;&amp;nbsp; if os.path.exists(logFile)== True:
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; os.remove(logFile)
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; message = "Deleting log file with the same name and datestamp... Recreating " + logFile; showPyMessage()
&amp;nbsp;&amp;nbsp;&amp;nbsp; workspaceDir = root + "\\index_tiles"
&amp;nbsp;&amp;nbsp;&amp;nbsp; if os.path.exists(workspaceDir)== True:
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; message = "Overlay directory: " + workspaceDir + " already exist... Deleting and recreating " + workspaceDir; showPyMessage()
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; shutil.rmtree(workspaceDir)
&amp;nbsp;&amp;nbsp;&amp;nbsp; else:
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; message = "Creating index tiles directory: " + workspaceDir; showPyMessage()
&amp;nbsp;&amp;nbsp;&amp;nbsp; os.mkdir(workspaceDir)
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 
&amp;nbsp;&amp;nbsp;&amp;nbsp; #Process: Finds python.exe
&amp;nbsp;&amp;nbsp;&amp;nbsp; pythonExePath = ""
&amp;nbsp;&amp;nbsp;&amp;nbsp; for path in sys.path:
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; if os.path.exists(os.path.join(path, "python.exe")) == True:
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; pythonExePath = os.path.join(path, "python.exe")
&amp;nbsp;&amp;nbsp;&amp;nbsp; if pythonExePath != "":
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; message = "Python.exe file located at " + pythonExePath; showPyMessage()
&amp;nbsp;&amp;nbsp;&amp;nbsp; else:
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; message = "ERROR: Python executable not found! Exiting script...."; showPyError(); sys.exit()

&amp;nbsp;&amp;nbsp;&amp;nbsp; #Determines the number of processors on the machine...
&amp;nbsp;&amp;nbsp;&amp;nbsp; numberOfProcessors = int(os.environ.get("NUMBER_OF_PROCESSORS"))
&amp;nbsp;&amp;nbsp;&amp;nbsp; maxNumberOfProcessors = 3 #The maximum number of processors you want to use
&amp;nbsp;&amp;nbsp;&amp;nbsp; if numberOfProcessors &amp;gt; maxNumberOfProcessors:
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; numberOfProcessors = maxNumberOfProcessors

&amp;nbsp;&amp;nbsp;&amp;nbsp; #Determines the number of tiles in the index layer (tile numbers are assumed to be sequential)
&amp;nbsp;&amp;nbsp;&amp;nbsp; #Note: I didn't use gp.getcount so that this script wouldn't use the gp at all (and eat up more memory)
&amp;nbsp;&amp;nbsp;&amp;nbsp; indxPolyCountFile = glob.glob(root + "\\log_files\\therearethismanytiles_*.txt")
&amp;nbsp;&amp;nbsp;&amp;nbsp; if len(indxPolyCountFile) == 0:
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; message = "Can't find index polygon count .txt file in " + root + "\\log_files" + "! Exiting script..."; showPyMessage()
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; sys.exit()
&amp;nbsp;&amp;nbsp;&amp;nbsp; indxPolyFeatureCount = int(indxPolyCountFile[0].split("_")[-1][0:-4])
&amp;nbsp;&amp;nbsp;&amp;nbsp; if indxPolyFeatureCount == 0:
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; message = "Index polygon count is 0! Exiting script..."; showPyMessage()
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; sys.exit()
&amp;nbsp;&amp;nbsp;&amp;nbsp; 
&amp;nbsp;&amp;nbsp;&amp;nbsp; tileNumber = 1
&amp;nbsp;&amp;nbsp;&amp;nbsp; numberOfProcesses = 0
&amp;nbsp;&amp;nbsp;&amp;nbsp; slaveScript = r"\\Snarf\am\div_lm\ds\gis\ldo\current_scripts\run_union_slave_v93.py"

&amp;nbsp;&amp;nbsp;&amp;nbsp; tilesThatAreProcessingList = []
&amp;nbsp;&amp;nbsp;&amp;nbsp; tilesThatFinishedList = []
&amp;nbsp;&amp;nbsp;&amp;nbsp; tilesThatBombedList = []
&amp;nbsp;&amp;nbsp;&amp;nbsp; 
&amp;nbsp;&amp;nbsp;&amp;nbsp; #Process: This while loop initially launches &amp;lt;numberOfProcessors&amp;gt; instances of the slave script&amp;nbsp; 
&amp;nbsp;&amp;nbsp;&amp;nbsp; while numberOfProcesses &amp;lt; numberOfProcessors:
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; numberOfProcesses = numberOfProcesses + 1
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; tilesThatAreProcessingList.append(tileNumber)
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; launchProcess(tileNumber)
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; tileNumber = tileNumber + 1
&amp;nbsp;&amp;nbsp;&amp;nbsp; #Process: This while loop checks for tiles that finished or bombed, if there are, new slave scripts are launched
&amp;nbsp;&amp;nbsp;&amp;nbsp; while len(tilesThatBombedList) + len(tilesThatFinishedList) &amp;lt; indxPolyFeatureCount:
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; isDoneList = glob.glob(root + "\\log_files\\" + slaveScript.split("\\")[len(slaveScript.split("\\")) - 1][0:-3] + "_isalldone_*") #makes a list of the .txt file created by the slave script
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; doneBombedList = glob.glob(root + "\\log_files\\" + slaveScript.split("\\")[len(slaveScript.split("\\")) - 1][0:-3] + "_bombed_*")
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; if len(isDoneList) == 0 and len(doneBombedList) == 0: #if there are no .txt files, wait for x number of seconds
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; time.sleep(5)
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; else: #else if there are .txt files...
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; if len(isDoneList) &amp;gt; 0: #if there are tiles that are "_isalldone_"
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; for isDoneItem in isDoneList: #for each .txt file indicating completion...
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; tileThatIsDone = int(isDoneItem[isDoneItem.rfind("_") + 1:isDoneItem.rfind(".")])
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; os.remove(root + "\\log_files\\" + slaveScript.split("\\")[len(slaveScript.split("\\")) - 1][0:-3] + "_isalldone_" + str(tileThatIsDone) + ".txt")
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; message = "Tile #" + str(tileThatIsDone) + " is done!!!"; showPyMessage()
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; tilesThatFinishedList.append(tileThatIsDone)
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; tilesThatAreProcessingList.remove(tileThatIsDone)
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; tilesThatAreProcessingList.append(tileNumber)
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; launchProcess(tileNumber)
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; tileNumber = tileNumber + 1
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; time.sleep(5)
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; if len(doneBombedList) &amp;gt; 0: #if there are tiles that "_bombed_"&amp;nbsp;&amp;nbsp; 
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; for doneBombedItem in doneBombedList: #for each .txt file indicating failure...
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; tileThatBombed = int(doneBombedItem[doneBombedItem.rfind("_") + 1:doneBombedItem.rfind(".")])
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; os.remove(root + "\\log_files\\" + slaveScript.split("\\")[len(slaveScript.split("\\")) - 1][0:-3] + "_bombed_" + str(tileThatBombed) + ".txt")
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; message = "Tile #" + str(tileThatBombed) + " bombed!!!"; showPyMessage()
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; tilesThatBombedList.append(tileThatBombed)
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; tilesThatAreProcessingList.remove(tileThatBombed)
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; tilesThatAreProcessingList.append(tileNumber)
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; launchProcess(tileNumber)
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; tileNumber = tileNumber + 1
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; time.sleep(5)
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; message = "Tiles that are currently processing: " + str(tilesThatAreProcessingList); showPyMessage()
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; message = "Tiles that are done: " + str(tilesThatFinishedList); showPyMessage()
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; message = "Tiles that bombed: " + str(tilesThatBombedList); showPyMessage()
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; time.sleep(5)

&amp;nbsp;&amp;nbsp;&amp;nbsp; if len(tilesThatBombedList) &amp;gt; 0:
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; message = "ERROR - these tiles failed to process: " + str(tilesThatBombedList); showPyMessage()
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; sys.exit()
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 
&amp;nbsp;&amp;nbsp;&amp;nbsp; message = "ALL DONE!"; showPyMessage()
&amp;nbsp;&amp;nbsp;&amp;nbsp; print &amp;gt;&amp;gt; open(root + "\\log_files\\" + scriptName + "_isalldone.txt", 'a'), scriptName + "_isalldone.txt"
except:
&amp;nbsp;&amp;nbsp;&amp;nbsp; message = "\n*** LAST GEOPROCESSOR MESSAGE (may not be source of the error)***"; showPyMessage()
&amp;nbsp;&amp;nbsp;&amp;nbsp; message = "\n*** PYTHON ERRORS *** "; showPyMessage()
&amp;nbsp;&amp;nbsp;&amp;nbsp; message = "Python Traceback Info: " + traceback.format_tb(sys.exc_info()[2])[0]; showPyMessage()
&amp;nbsp;&amp;nbsp;&amp;nbsp; message = "Python Error Info: " +&amp;nbsp; str(sys.exc_type)+ ": " + str(sys.exc_value) + "\n"; showPyMessage()&lt;/PRE&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Sun, 12 Dec 2021 03:25:06 GMT</pubDate>
      <guid>https://community.esri.com/t5/python-questions/arcpy-multiprocessor-issues/m-p/645935#M50372</guid>
      <dc:creator>ChrisSnyder</dc:creator>
      <dc:date>2021-12-12T03:25:06Z</dc:date>
    </item>
    <item>
      <title>Re: Arcpy Multiprocessor issues</title>
      <link>https://community.esri.com/t5/python-questions/arcpy-multiprocessor-issues/m-p/645936#M50373</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;SPAN&gt;Thanks for sharing your examples and experiences. &lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;I have attempted to parallelize one of my scripts using the multiprocessing module. I split the data beforehand, concrete, I have a separate data/results folder for every parallel process / core. I also have a python module that does the parallel part. This parallelprocessing-module contains a couple of different arcpy-functions and also cursors.&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;Now here�??s the issue: if I use only one core&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;pool = multiprocessing.Pool(1) &lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;then everything works fine all the time. Well, this is not parallel processing�?�&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;As soon as I apply more than one core, the processing is not stable any more. E.g. using&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;pool = multiprocessing.Pool(2)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;I get errors in maybe 30% of the runs (I run the code with exactly the same data and exactly the same settings). The errors cannot be reproduced but are always thrown by arcpy-functions. One of the errors thrown today: 010088 Invalid input geodataset (Layer, Tin, etc.).&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;The more cores I use, the more often errors occur.&amp;nbsp; Sometimes, Python even crashes (�??python.exe has stopped working�?�). Using 4 cores, I get errors in essentially every run. &lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;Has anybody run into similar problems? My parallel parts do never access the same data, so I�??m a little confused. Thanks for your help. Chris&lt;/SPAN&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 23 Dec 2011 04:05:00 GMT</pubDate>
      <guid>https://community.esri.com/t5/python-questions/arcpy-multiprocessor-issues/m-p/645936#M50373</guid>
      <dc:creator>ChristianKienholz</dc:creator>
      <dc:date>2011-12-23T04:05:00Z</dc:date>
    </item>
    <item>
      <title>Re: Arcpy Multiprocessor issues</title>
      <link>https://community.esri.com/t5/python-questions/arcpy-multiprocessor-issues/m-p/645937#M50374</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;BLOCKQUOTE class="jive-quote"&gt;As soon as I apply more than one core, the processing is not stable any more&lt;/BLOCKQUOTE&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;Are you sure you aren't running out of memory? I have notice that a lot of the geoprocessing tools in ArcGIS v10 are far more "memory hungry" than in v9.3 (SelectByLocation and SpatialJoin being some examples). Which is probably why these tools in v10 appear to run faster...&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;When an arcpy seasoned python.exe process runs out of memory while running a geoprocessing task, it tends to thow a rather obscure and unexpected errors in tools that otherwise (when run by themselves) run just fine.&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;I think many times these seemingly random failures are actually just a reflection of if/when the individual processes have gone over their memory limitations.&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;I haven't tried it myself, but if you have gobs of memory available, you can configure ArcGIS v10.0 to use more memory than the conventional 2.1 GB (which is actually more like 1.4 GB with overhead) - up to ~4GB orf RAM if running a 64-bit OS. An excellent post on the topic: &lt;/SPAN&gt;&lt;A href="http://forumsstg.arcgis.com/threads/26863-Memory-amp-CPU-settings-for-ArcGIS-10?p=150304&amp;amp;viewfull=1#post150304"&gt;http://forumsstg.arcgis.com/threads/26863-Memory-amp-CPU-settings-for-ArcGIS-10?p=150304&amp;amp;viewfull=1#post150304&lt;/A&gt;&lt;SPAN&gt;, which I am probably going to get around to trying this week.&lt;/SPAN&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Tue, 28 Feb 2012 16:25:32 GMT</pubDate>
      <guid>https://community.esri.com/t5/python-questions/arcpy-multiprocessor-issues/m-p/645937#M50374</guid>
      <dc:creator>ChrisSnyder</dc:creator>
      <dc:date>2012-02-28T16:25:32Z</dc:date>
    </item>
    <item>
      <title>Re: Arcpy Multiprocessor issues</title>
      <link>https://community.esri.com/t5/python-questions/arcpy-multiprocessor-issues/m-p/645938#M50375</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;SPAN&gt;I finally got around to rewriting a psudo-parallel processing framework that uses the subprocess module instead of the old fashioned os.spawnv module. I couldn't get the multiprocessing module to work well for me, so I wrote my own code to do the same sort of thing: Regulate parallel processes. Note that I am not claiming this code is superior to the Multiprocessing module in any way (quite the oposite). However, since I wrote it, I undertand it, and that is priceless for me! I hope that others might find it usefull. I tried to provide usefull comments in the code.&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;This is an updated (and frankly MUCH better) version of the code I had posted here:&lt;/SPAN&gt;&lt;A href="http://forums.arcgis.com/threads/33602-Arcpy-Multiprocessor-issues?p=116611&amp;amp;viewfull=1#post116611" rel="nofollow noopener noreferrer" target="_blank"&gt;http://forums.arcgis.com/threads/33602-Arcpy-Multiprocessor-issues?p=116611&amp;amp;viewfull=1#post116611&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;Note the example parent and child scripts below don't use any ESRI objects, but they certainly could. In the parent script, all the subprocesses are conveniently tracked in a dictionary consisting of:&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;jobDict[jobId] = [applicationExePath, childScriptPath,[list of input varibles], status {"NOT_STARTED", "IN_PROGRESS", "SUCCEEDED", "FAILED"}, subprocess.Popen object]&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN style="text-decoration:underline;"&gt;&lt;STRONG&gt;If you can make this code better in some way or have an idea to make it better, please post it!&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;PARENT SCRIPT EXAMPLE:&lt;/SPAN&gt;&lt;BR /&gt;&lt;PRE class="lia-code-sample line-numbers language-none"&gt;#Import some modules
import os, random, subprocess, sys, time 

#Process: Define a function that will launch the processes and update the job dictionary accordingly
def launchProcess(jobId):
&amp;nbsp;&amp;nbsp;&amp;nbsp; global jobDict #make this a global so the function can read it
&amp;nbsp;&amp;nbsp;&amp;nbsp; inputVar1 = jobDict[jobId][2][0] #Input variables are being read from the job dictionary
&amp;nbsp;&amp;nbsp;&amp;nbsp; inputVar2 = jobDict[jobId][2][1]
&amp;nbsp;&amp;nbsp;&amp;nbsp; jobDict[jobId][4] = subprocess.Popen([jobDict[jobId][0], jobDict[jobId][1], str(inputVar1), str(inputVar2)], shell=False, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
&amp;nbsp;&amp;nbsp;&amp;nbsp; jobDict[jobId][3] = "IN_PROGRESS" #Indicate the job is 'IN_PROGRESS'

#Determine how many processes to run concurrently
numberOfProcessorsToUse = 3 #The number of processors you (the user) wants to use
if numberOfProcessorsToUse &amp;gt; int(os.environ.get("NUMBER_OF_PROCESSORS")):
&amp;nbsp;&amp;nbsp;&amp;nbsp; numberOfProcessorsToUse = int(os.environ.get("NUMBER_OF_PROCESSORS"))

#Populate a "job dictionary" to keep track of all the subprocess jobs and all their various inputs and outputs
childScriptPath = r"C:\csny490\simple_subprocess_child.py"
jobDict = {}
for jobId in range(1, 21): #this loop is just showing how you can populate the jobId's and their input variables - in this example there will be 20 jobs
&amp;nbsp;&amp;nbsp;&amp;nbsp; inputVar1 = random.randrange(1,6) #this variable tells each subprocess how many seconds it will "sleep" for - between 1 and 5 seconds
&amp;nbsp;&amp;nbsp;&amp;nbsp; inputVar2 = random.randrange(2,9) #in the child process script, if inputVar2 &amp;gt; 6 it will throw an exception, and the subprocess will be 'FAILED' - I simply did this to create the possibility of failed subprocesses
&amp;nbsp;&amp;nbsp;&amp;nbsp; #Format of jobDict[jobId] = [applicationExePath, childScriptPath, [list of input variables], status {"NOT_STARTED", "IN_PROGRESS", "SUCCEEDED", "FAILED"}, subprocess.Popen object]
&amp;nbsp;&amp;nbsp;&amp;nbsp; jobDict[jobId] = [os.path.join(sys.prefix, "python.exe"), childScriptPath, [inputVar1, inputVar2], "NOT_STARTED", None]

#Process: Kick off some processes, monitor the processes, and start new processes as others finish
kickOffFlag = False #Indicate the process kick off has not yet occured
while len([i for i in jobDict if jobDict[i][3] in ("SUCCEEDED","FAILED")]) &amp;lt; len(jobDict):
&amp;nbsp;&amp;nbsp;&amp;nbsp; if kickOffFlag == False:
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; while len([i for i in jobDict if jobDict[i][3] != 'NOT_STARTED']) &amp;lt; numberOfProcessorsToUse:
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; launchProcess([i for i in jobDict if jobDict[i][3] == 'NOT_STARTED'][0]) #Feed the appropriate jobId to the launchProcess() function 
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; kickOffFlag = True #Set the flag as True once we have done the initial kickoff
&amp;nbsp;&amp;nbsp;&amp;nbsp; for jobId in [i for i in jobDict if jobDict[i][3] == 'IN_PROGRESS' and jobDict[i][4].poll() != None]: #if a subprocess is listed as 'IN_PROGRESS' and poll() returns non-None (i.e. finished, but success or failure unknown)
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; if jobDict[jobId][4].returncode == 0: #return code of 0 indicates success (no sys.exit(1) command encountered in the child process)
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; jobDict[jobId][3] = "SUCCEEDED"
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; if jobDict[jobId][4].returncode &amp;gt; 0: #return code of 1 (or another integer value) indicates failure (a sys.exit(1) command was encountered in the child process)
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; jobDict[jobId][3] = "FAILED"
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; if len([i for i in jobDict if jobDict&lt;I&gt;[3] == 'NOT_STARTED']) &amp;gt; 0: #if there are still jobs = 'NOT_STARTED', launch the next one in line
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; launchProcess([i for i in jobDict if jobDict&lt;I&gt;[3] == 'NOT_STARTED'][0])
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; print "--------------------------------------"
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; for jobId in jobDict:
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; print "Job ID " + str(jobId) + " = " + str(jobDict[jobId][3])&lt;/I&gt;&lt;/I&gt;&lt;/I&gt;&lt;/I&gt;&lt;/I&gt;&lt;/I&gt;&lt;/I&gt;&lt;/PRE&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; &lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; &lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;CHILD SCRIPT EXAMPLE:&lt;/SPAN&gt;&lt;BR /&gt;&lt;PRE class="lia-code-sample line-numbers language-none"&gt;try:
&amp;nbsp;&amp;nbsp;&amp;nbsp; import sys, time
&amp;nbsp;&amp;nbsp;&amp;nbsp; var1 = int(sys.argv[1])
&amp;nbsp;&amp;nbsp;&amp;nbsp; var2 = int(sys.argv[2])
&amp;nbsp;&amp;nbsp;&amp;nbsp; print "Nothing to do, so sleeping for " + str(var1) + " seconds..."
&amp;nbsp;&amp;nbsp;&amp;nbsp; time.sleep(var1)
&amp;nbsp;&amp;nbsp;&amp;nbsp; if var2 &amp;gt; 6:
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; test = "oh crap" + 1 #concatenating a string and an int here is intended to throw an exception on purpose to demonstrate a failure
&amp;nbsp;&amp;nbsp;&amp;nbsp; print "Epic Success!"
&amp;nbsp;&amp;nbsp;&amp;nbsp; #Note: If script runs through without error, return code will be, by default, 0 - indicating success
except:
&amp;nbsp;&amp;nbsp;&amp;nbsp; print "Epic Fail!" #Note: You can gather/parse all these print messages in the parent scrip using jobDict[jobId][4].communicate()
&amp;nbsp;&amp;nbsp;&amp;nbsp; sys.exit(1)&amp;nbsp; #Note: If script fails, return code is 1 - indicating failure - you can set the return code to be &amp;gt; 0 if you want...&lt;/PRE&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
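The launch/poll/relaunch pattern in the parent script above can be sketched compactly in modern Python 3. This is a minimal illustration, not the original arcpy job: `run_jobs` and `max_workers` are invented names, and the child commands are stand-ins built from `sys.executable -c` rather than real geoprocessing scripts.

```python
# Sketch of the parent-script pattern: keep at most max_workers child
# processes alive, poll for completion, and record SUCCEEDED/FAILED
# from each return code. Names here are illustrative, not the original's.
import subprocess
import sys
import time

def run_jobs(commands, max_workers=2):
    # jobDict analogue: id maps to command, status, and Popen handle
    jobs = {i: {"cmd": cmd, "status": "NOT_STARTED", "proc": None}
            for i, cmd in enumerate(commands)}
    while any(j["status"] in ("NOT_STARTED", "IN_PROGRESS") for j in jobs.values()):
        running = [j for j in jobs.values() if j["status"] == "IN_PROGRESS"]
        # Launch queued jobs while there is spare capacity
        for j in jobs.values():
            if j["status"] == "NOT_STARTED" and max_workers > len(running):
                # No PIPEs here: avoids the stdout-buffer hang noted later in the thread
                j["proc"] = subprocess.Popen(j["cmd"])
                j["status"] = "IN_PROGRESS"
                running.append(j)
        # Harvest finished jobs: poll() returns None while a child is still running
        for j in running:
            if j["proc"].poll() is not None:
                j["status"] = "SUCCEEDED" if j["proc"].returncode == 0 else "FAILED"
        time.sleep(0.1)
    return {i: jobs[i]["status"] for i in jobs}

if __name__ == "__main__":
    cmds = [[sys.executable, "-c", "import sys; sys.exit(0)"],
            [sys.executable, "-c", "import sys; sys.exit(1)"]]
    print(run_jobs(cmds))
```

As in the original, a zero return code marks success and any nonzero code (e.g. from `sys.exit(1)` in the child) marks failure.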
      <pubDate>Sun, 12 Dec 2021 03:25:09 GMT</pubDate>
      <guid>https://community.esri.com/t5/python-questions/arcpy-multiprocessor-issues/m-p/645938#M50375</guid>
      <dc:creator>ChrisSnyder</dc:creator>
      <dc:date>2021-12-12T03:25:09Z</dc:date>
    </item>
    <item>
      <title>Re: Arcpy Multiprocessor issues</title>
      <link>https://community.esri.com/t5/python-questions/arcpy-multiprocessor-issues/m-p/645939#M50376</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;SPAN&gt;If anyone uses any of that code above, I found a bit of a bug..&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;Basically if your child subprocess is quite chatty (like a bunch of logging to a text file), the subprocess.stdout buffer in the memory can fill up and cause the child subprocess to hang forever! If stdout is sent to PIPE and is more than 65536 characters, the subprocess will hang. &lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;After pulling my hair out for a few hours, I found the answer here: &lt;/SPAN&gt;&lt;A href="http://thraxil.org/users/anders/posts/2008/03/13/Subprocess-Hanging-PIPE-is-your-enemy/"&gt;http://thraxil.org/users/anders/posts/2008/03/13/Subprocess-Hanging-PIPE-is-your-enemy/&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;Some work arounds:&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;--------------------&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;1. Don't have a chatty subproces script (stdout memory buffer is 65536 characters or so, which can fill up fast).&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;2. Don't use stdout = PIPE if you don't care about stdout messages. This is what I did in my case, because all my parent script needs is the subprocess .poll() and .returncode&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;So for example, I did this:&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;subprocess.Popen([jobDict[jobId][0], jobDict[jobId][1], str(inputVar1), str(inputVar2)], shell=False)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;instead of this:&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;subprocess.Popen([jobDict[jobId][0], jobDict[jobId][1], str(inputVar1), str(inputVar2)], shell=False, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;3. Send stdout to an actual file on disk instead of memory... all the "filelike" stdout methods (like .readlines) are the same anyway! 
File or memory - doesn't really matter.&lt;/SPAN&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
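Workaround 3 above can be sketched like this in Python 3. `run_with_logfile` is a hypothetical helper name, and the "chatty" child is simulated with one very large print; the point is that a real file on disk, unlike a PIPE, has no 64 KB buffer to fill.

```python
# Send the child's stdout to a real file on disk so no bounded pipe
# buffer can fill up and deadlock the child; read the log back afterwards.
import os
import subprocess
import sys
import tempfile

def run_with_logfile(cmd):
    # delete=False so the file can be reopened by name (also needed on Windows)
    with tempfile.NamedTemporaryFile(mode="w+", suffix=".log", delete=False) as log:
        proc = subprocess.Popen(cmd, stdout=log, stderr=subprocess.STDOUT)
        proc.wait()  # safe: output drains to disk, not to a 64 KB pipe
    with open(log.name) as f:
        output = f.read()
    os.unlink(log.name)
    return proc.returncode, output

if __name__ == "__main__":
    # Well past the 65536-character pipe limit; completes fine via a file
    code, out = run_with_logfile([sys.executable, "-c", "print('x' * 200000)"])
    print(code, len(out))
```

With `stdout=subprocess.PIPE` and a plain `wait()`, the same child would hang once the pipe buffer filled; with a file it finishes normally.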
      <pubDate>Fri, 09 Mar 2012 17:20:48 GMT</pubDate>
      <guid>https://community.esri.com/t5/python-questions/arcpy-multiprocessor-issues/m-p/645939#M50376</guid>
      <dc:creator>ChrisSnyder</dc:creator>
      <dc:date>2012-03-09T17:20:48Z</dc:date>
    </item>
    <item>
      <title>Re: Arcpy Multiprocessor issues</title>
      <link>https://community.esri.com/t5/python-questions/arcpy-multiprocessor-issues/m-p/645940#M50377</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;SPAN&gt;Regarding the original poster's comment (yes I realize this is dated) about not being able to use multiprocessing from inside arc programs.&amp;nbsp; I too had a similar problem, and managed to get something working.&amp;nbsp; &lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;Under windows, python does not (cannot?) make use of a fork() type api which would make the whole setup for multiprocess much more straight forward.&amp;nbsp; Instead, python first pickles the Process (multiprocessing.process.Process) object (which contains a function reference and arguments) and certain parts of the sys module.&amp;nbsp; Then the process executable (assumed to be a python interpreter) and is invoked with '-c "from multiprocessing.fork import main; main" --multiprocessing-fork' to start the boot strap process.&amp;nbsp;&amp;nbsp;&amp;nbsp; Obviously, this is a problem because for an in-process tool the process executable is NOT python, it is arcmap.exe or arccatalog.exe.&amp;nbsp;&amp;nbsp; Fortunately, the multiprocessing API provides a work around:&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;PRE class="lia-code-sample line-numbers language-none"&gt;
PYTHON_EXE = os.path.join(sys.exec_prefix, 'pythonw.exe') #if you use python.exe you get a command window
multiprocessing.set_executable(PYTHON_EXE)
&lt;/PRE&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;However, this would only work with certain scripts under arcmap/catalog.&amp;nbsp; Notably, the script would crash if I tried to access arcpy.env.* and arcpy.env.keys() was empty.&amp;nbsp; So something was up with arcpy.&amp;nbsp; The remainder of the multiprocessing bootstrap is fairly "dumb", it just loads stuff from pipes, restores parts of the sys, unpickles and goes.&amp;nbsp; Generally modules are not "pickled", in fact if you make a simple empty module, and run pickle.dumps(module) on it:&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;PRE class="lia-code-sample line-numbers language-none"&gt;
TypeError: can't pickle module objects
&lt;/PRE&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;Modules will, however, be included in the pickle as a reference that says "load this module on unpickle."&amp;nbsp; This means that every referenced module must be "findable" via the current sys.path.&amp;nbsp; As paths were not the issue, there had to be a problem with the initialization of the arcpy module.&amp;nbsp; There are six &lt;/SPAN&gt;&lt;STRONG&gt;OS&lt;/STRONG&gt;&lt;SPAN&gt; environment variables (from os.environ) that affect the initialization of arcpy, and they all start with "GP_".&amp;nbsp; &lt;/SPAN&gt;&lt;BR /&gt;&lt;UL&gt;&lt;BR /&gt;&lt;LI&gt;GP_ENVPID_PATH&lt;/LI&gt;&lt;BR /&gt;&lt;LI&gt;GP_ERRPID_PATH&lt;/LI&gt;&lt;BR /&gt;&lt;LI&gt;GP_LICENSE_CODE&lt;/LI&gt;&lt;BR /&gt;&lt;LI&gt;GP_PIPE_NAME&lt;/LI&gt;&lt;BR /&gt;&lt;LI&gt;GP_PROXY_OPS&lt;/LI&gt;&lt;BR /&gt;&lt;LI&gt;GP_VALIDATE_SCRIPT&lt;/LI&gt;&lt;BR /&gt;&lt;/UL&gt;&lt;SPAN&gt;These are used to pass information to out-of-process tools. For in-process tools they are blank, and in command-line scripts, they don't exist.&amp;nbsp; The problem is that by default, the child process will inherit the OS environment variables of the parent.&amp;nbsp; If multiprocessing is used from an in-process script tool, these environment variables will exist and be empty.&amp;nbsp; Hence, arcpy will think it is in-process, when it really isn't.&amp;nbsp; An out-of-process tool really is out of process, though I'm not sure whether the inherited environment will be the arc environments at the time of subprocess creation or OOP tool creation.&amp;nbsp; &lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;Finally, if an alternative OS environment omitting these variables is passed to the API function that creates a sub-process, it will work as if run as a command-line script. But.. 
I think this also means you might be checking out another license, and I can't seem to get arcpy to populate these variables unless I run an OOP script, and then they only exist for the OOP tool, so I would suspect that if a license is checked out, one would be checking out a license for each process.&amp;nbsp; But maybe not.&amp;nbsp; &lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;There would be two ways to possibly remove these variables.&amp;nbsp; One might be able to just delete them from os.environ and then use multiprocessing, which might not be smart as it may affect the parent process.&amp;nbsp; The second possibility would be to make a copy of os.environ, remove the variables, and pass that copy to the new process.&amp;nbsp; However, some of the multiprocessing objects need to be modified.&amp;nbsp; &lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;[INDENT]multiprocessing.process's Process object would need to be sub-classed to override start() so it can use a custom Popen object.&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;multiprocessing.forking's Popen object would need to be sub-classed to do something like&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;PRE class="lia-code-sample line-numbers language-none"&gt;
env = dict(os.environ)
for x in list(env.iterkeys()):
&amp;nbsp;&amp;nbsp;&amp;nbsp; if x.startswith('GP_'):
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; del env[x]

hp, ht, pid, tid = _subprocess.CreateProcess(
&amp;nbsp;&amp;nbsp;&amp;nbsp; _python_exe, cmd, None, None, 1, 0, env, None, None
)
&lt;/PRE&gt;&lt;SPAN&gt; &lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;[/INDENT]&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;in the __init__(), noting that the call to CreateProcess was changed to take the env variable.&amp;nbsp; If one could emulate the out-of-process setup for a tool, this would be the same place to do it instead of deleting the GP_* OS environment variables. This was all in 10.0sp5 under arccatalog.&amp;nbsp; If you are using process pools, I suspect you'll need to sub-class the pool object to make use of the custom process object.&amp;nbsp; &lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;Oddly, if you stay away from certain things (notably trying to set/get arc environments), you can probably use multiprocessing without any modifications.&amp;nbsp; And it looks like some tricks with arcpy.Exists(...) might help too, but for my purposes, they didn't seem sufficient.&lt;/SPAN&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
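The "copy os.environ, drop the GP_* variables, pass the copy to the child" idea from the post above can be sketched with plain subprocess rather than a sub-classed multiprocessing Popen. `scrubbed_env` is an invented name, and setting GP_PIPE_NAME here merely simulates the empty marker variable an in-process tool would leave behind; the GP_* names themselves come from the post.

```python
# Build a copy of the parent environment with every GP_* variable
# removed, so a child interpreter initializes arcpy as if it were a
# plain command-line script instead of an in-process tool.
import os
import subprocess
import sys

def scrubbed_env():
    env = dict(os.environ)
    for key in list(env):
        if key.startswith("GP_"):
            del env[key]  # drop the in-process geoprocessing markers
    return env

if __name__ == "__main__":
    os.environ["GP_PIPE_NAME"] = ""  # simulate running inside an in-process tool
    child = [sys.executable, "-c",
             "import os; print('GP_PIPE_NAME' in os.environ)"]
    # The child sees the scrubbed copy, not the parent's GP_* variables
    result = subprocess.run(child, env=scrubbed_env(),
                            capture_output=True, text=True)
    print(result.stdout.strip())
```

Because only a copy is modified, the parent process keeps its own environment intact, which is the safer of the two options the post describes.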
      <pubDate>Sun, 12 Dec 2021 03:25:12 GMT</pubDate>
      <guid>https://community.esri.com/t5/python-questions/arcpy-multiprocessor-issues/m-p/645940#M50377</guid>
      <dc:creator>PhilipThiem</dc:creator>
      <dc:date>2021-12-12T03:25:12Z</dc:date>
    </item>
  </channel>
</rss>

