<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Pool from multiprocessing issues in Python Questions</title>
    <link>https://community.esri.com/t5/python-questions/pool-from-multiprocessing-issues/m-p/720823#M55797</link>
    <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;BLOCKQUOTE class="jive-quote"&gt;I received Memory Error after the total footprint of all four processes had grown to about 4 GB&lt;/BLOCKQUOTE&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;Questions:&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;1) Are these individual processes that are uniquely discernible in the Task Manager (for example, 4 individual python.exe (or ???) processes)? Or a single instance?&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;2) If the latter, I assume you are using the large address aware setting to get near 4 GB (and a 64-bit OS, correct)? If so, have you noticed any drawbacks from using it?&lt;/SPAN&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
    <pubDate>Fri, 29 Jul 2011 14:32:43 GMT</pubDate>
    <dc:creator>ChrisSnyder</dc:creator>
    <dc:date>2011-07-29T14:32:43Z</dc:date>
    <item>
      <title>Pool from multiprocessing issues</title>
      <link>https://community.esri.com/t5/python-questions/pool-from-multiprocessing-issues/m-p/720816#M55790</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;SPAN&gt;I am trying to use Pool from the multiprocessing module to speed up some operations (def worker(d)) that happen once for each layer in the MXD. This one is hard-coded to D:\TEMP\Untitled4.mxd. It runs, but only one at a time. I can see it start the pool, but only one worker is being used. Any help would be great. I am running it as an ArcToolbox tool in ArcMap and have unchecked "run as process". Like I said, it runs, but only one at a time...&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;import arcpy&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;import os&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;import multiprocessing&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;def worker(d):&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; # buffer layer by the below values&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; bfs = [101, 200, 201, 400, 401, 750, 751, 1001,&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 1500, 1501, 2000, 2001, 2500]&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; for bf in bfs:&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Output = os.path.basename(d)[:-4] + "_Buffer" + str(bf) + ".shp"&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; print "Buffering " + d + " at " + str(bf) + " Feet"&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; if arcpy.Exists(d):&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; arcpy.Buffer_analysis(d, "D:\\Temp\\" + Output, str(bf) + " Feet")&lt;/SPAN&gt;&lt;BR 
/&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; arcpy.Project_management("D:\\Temp\\" + Output, "D:\\Temp\\Test\\" + Output, "C:\Program Files (x86)\ArcGIS\Desktop10.0\Coordinate Systems\Geographic Coordinate Systems\North America\NAD 1983.prj")&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; arcpy.Delete_management("D:\\Temp\\" + Output)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; else:&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; print "No Data"&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;if __name__ == '__main__':&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; &lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp; #Sets MXD&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; mxd = arcpy.mapping.MapDocument("D:\TEMP\Untitled4.mxd")&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; #mxd = arcpy.mapping.MapDocument("CURRENT")&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; #set some environments needed to get the correct outputs&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; arcpy.env.overwriteOutput = True&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; arcpy.env.workspace&amp;nbsp; = "D:\TEMP\Test"&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; arcpy.env.outputCoordinateSystem = "C:\Program Files (x86)\ArcGIS\Desktop10.0\Coordinate Systems\Projected Coordinate Systems\UTM\NAD 1983\NAD 1983 UTM Zone 16N.prj"&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; # of processors to use set for max minus 1&lt;/SPAN&gt;&lt;BR 
/&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; prc = int(os.environ["NUMBER_OF_PROCESSORS"]) - 1&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; # Create and start a pool of worker processes&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; pool = multiprocessing.Pool(prc)&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; # Gets all layer in the Current MXD&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; lyrs = arcpy.mapping.ListLayers(mxd)&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; #Loops through every layer and gets source data name and path&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; for lyr in lyrs:&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; d = lyr.dataSource&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; print "Passing " + d + " to processing pool"&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; arcpy.AddMessage("Passing " + d + " to processing pool")&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; pool.apply_async(worker(d))&lt;/SPAN&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Tue, 26 Jul 2011 23:48:26 GMT</pubDate>
      <guid>https://community.esri.com/t5/python-questions/pool-from-multiprocessing-issues/m-p/720816#M55790</guid>
      <dc:creator>BurtMcAlpine</dc:creator>
      <dc:date>2011-07-26T23:48:26Z</dc:date>
    </item>
    <item>
      <title>Re: Pool from multiprocessing issues</title>
      <link>https://community.esri.com/t5/python-questions/pool-from-multiprocessing-issues/m-p/720819#M55793</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;SPAN&gt;Guess there's only one sure way to tell, but... Do you guys have any ideas about how memory usage is doled out and/or recycled for multiprocessing or Parallel Python? In days of yore many of us were forced to create separate processes (a separate python.exe) using spawnv or subprocess. Primarily, this was simply because the gp object (or arcpy) had some really bad memory leak problems, and any gp-enabled application (python.exe) would gradually run out of memory when running tools in a loop. As an added bonus, you could write additional code to "manage" the separate processes as a pseudo-parallel process. These gp/arcpy memory leak issues have improved greatly, but are still a significant problem for "heavy duty" geoprocessing...&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;Anyway, I am curious whether pp or multiprocessing can somehow avoid these memory leak issues. In the pp example, I see that you hand the parallel process the same arcpy object that is instantiated in main. Do all the child processes use (and presumably add to the memory consumption of) the same arcpy instance? In the multiprocessing example, since I don't see such explicit code as in the pp example, I assume the child processes by default simply have access to all the global variables in main?&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;I am theorizing that pp and/or multiprocessing will still suffer from gradual and catastrophic gp/arcpy memory consumption, but I can't wait to try some stuff out for myself. Thanks a million for putting these examples up!&lt;/SPAN&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 28 Jul 2011 15:31:24 GMT</pubDate>
      <guid>https://community.esri.com/t5/python-questions/pool-from-multiprocessing-issues/m-p/720819#M55793</guid>
      <dc:creator>ChrisSnyder</dc:creator>
      <dc:date>2011-07-28T15:31:24Z</dc:date>
    </item>
    <item>
      <title>Re: Pool from multiprocessing issues</title>
      <link>https://community.esri.com/t5/python-questions/pool-from-multiprocessing-issues/m-p/720820#M55794</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;SPAN&gt;Chris, in my PP example I use "in_memory", which uses part of your RAM to hold data, to speed up the worker functions that run in parallel. Each "job" makes a unique "in_memory" object, so make sure to delete it before the worker function ends. As for program memory leaks, I assume they are all still there and would be a much bigger issue if the code were run as a single process; you might be able to do a two-part process to clean the memory out halfway through if you run into memory issues, but I have not even come close on my stuff.&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;The pp code (the only one I got to work) has to be run out of process, so "prc" in my example sets the number of processes to start (5 for my Intel 980X). Each process is a 32-bit Python process with its own memory allocation, just like using spawn or a subprocess. All of the jobs are then fed to the pool of 5 processes, and those same 5 processes run through the jobs until all of them are finished.&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;In my testing I have not had any memory issues. If you use "in_memory", make sure not to load really big objects (like a 1 GB feature class) into it; there are also several limitations as to what can and cannot use "in_memory". But I use it all the time and it can really speed up processing, sometimes by a factor of four.&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;The mp Pool "spawns" new processes as well, but I never could get it to work correctly; it always wanted to run everything as a single process. It would start the pool, but only feed one process. PP is free and seems to work well, so that is what I will be using.&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;Please look at &lt;/SPAN&gt;&lt;A href="http://pythongisandstuff.wordpress.c.../stacyrendall/"&gt;http://pythongisandstuff.wordpress.c.../stacyrendall/&lt;/A&gt;&lt;SPAN&gt; for some known issues with mp and PP; it is towards the bottom of the page.&lt;/SPAN&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 28 Jul 2011 17:24:34 GMT</pubDate>
      <guid>https://community.esri.com/t5/python-questions/pool-from-multiprocessing-issues/m-p/720820#M55794</guid>
      <dc:creator>BurtMcAlpine</dc:creator>
      <dc:date>2011-07-28T17:24:34Z</dc:date>
    </item>
    <item>
      <title>Re: Pool from multiprocessing issues</title>
      <link>https://community.esri.com/t5/python-questions/pool-from-multiprocessing-issues/m-p/720821#M55795</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;SPAN&gt;Burt,&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;It is intriguing that Multiprocessing isn't working for you; can you confirm that it doesn't raise an error, but just runs one process at a time? I use the Multiprocessing library exclusively now, as PP can't seem to handle extensions, but I have never had any problems with it...&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;I don't know much about the specifics of memory use, but I have learned a few things (all with regard to the Multiprocessing library, though they may shed some light on PP also). The child processes take a job from the job pool, complete it, take the next available job, and so on. However, they seem to keep hold of some (not all) of the memory they were using; if you watch the processes over time, the memory footprint of each will grow. Running some models involving Network Analyst with 4 child processes and a worker pool of initially 56 items, I received a MemoryError after the total footprint of all four processes had grown to about 4 GB, about halfway through the pool. This was nowhere near the limits of the computer I was using. My best guess was that all the memory was still owned by the main process, which was thus hitting its 32-bit limit, but the numbers don't really add up to support that theory...&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;The easy way around this isn't available until Python 2.7, where you can set a &lt;/SPAN&gt;&lt;SPAN style="font-style:italic;"&gt;maxtasksperchild&lt;/SPAN&gt;&lt;SPAN&gt; value; although it doesn't seem to work very well, requiring more complicated code which either hangs or runs quite slowly.&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;The hard way around is to recreate the pool after every &lt;/SPAN&gt;&lt;SPAN style="font-style:italic;"&gt;n&lt;/SPAN&gt;&lt;SPAN&gt; tasks per child, depending on how big the operations being performed by the workers are... Obviously it isn't pretty, but it works just fine. I have used a number of computers and a mix of Windows 7 and XP while testing all this out, and it seems that sometimes, on some computers, the first parallel processes through the pool are a lot slower than the later ones, by at least 4 to 7x. If this is happening on a computer, you want &lt;/SPAN&gt;&lt;SPAN style="font-style:italic;"&gt;n&lt;/SPAN&gt;&lt;SPAN&gt; as high as possible, to minimise the number of your tasks that are first through the pool. If not, you might as well recreate the pool after each parallel process has run once.&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;Let me know if you want more info, and I'll try to get an example up on my blog sooner rather than later!&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;Cheers.&lt;/SPAN&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 28 Jul 2011 21:11:55 GMT</pubDate>
      <guid>https://community.esri.com/t5/python-questions/pool-from-multiprocessing-issues/m-p/720821#M55795</guid>
      <dc:creator>StacyRendall1</dc:creator>
      <dc:date>2011-07-28T21:11:55Z</dc:date>
    </item>
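The pool-recreation and maxtasksperchild workarounds described above can be sketched with plain multiprocessing and no arcpy at all. This is an illustrative sketch, assuming Python 2.7 or later (where Pool gained the maxtasksperchild argument); the squaring worker stands in for the real geoprocessing task, and run_jobs and its parameter names are mine, not from the thread:

```python
import multiprocessing

def worker(n):
    # stand-in for a heavy geoprocessing task
    return n * n

def run_jobs(values, procs=2, tasks_per_child=4):
    # maxtasksperchild (added in Python 2.7) retires each child process
    # after it has completed the given number of tasks and replaces it
    # with a fresh one, releasing whatever memory the child had accumulated
    pool = multiprocessing.Pool(processes=procs,
                                maxtasksperchild=tasks_per_child)
    try:
        jobs = [pool.apply_async(worker, (v,)) for v in values]
        return [j.get() for j in jobs]
    finally:
        pool.close()
        pool.join()
```

With maxtasksperchild available, the pool handles the child recycling itself, so the manual "recreate the pool every n tasks" loop is only needed on Python 2.6.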
    <item>
      <title>Re: Pool from multiprocessing issues</title>
      <link>https://community.esri.com/t5/python-questions/pool-from-multiprocessing-issues/m-p/720822#M55796</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;SPAN&gt;Stacy, correct: the mp code you modified runs the same as the code I originally posted. That is the problem I was having originally: it runs, but only uses one child (one process), so it is really non-parallel. It makes the pool, but only uses one worker to process all of the jobs. I just don't know what is wrong. Is there some setting for using mp that I am missing? I would like to use mp as it seems to be better supported, in that I can use arcpy.AddError and arcpy.AddMessage. If you make an MXD with a handful of shapefiles (it has to be shapefiles for this example) and save it, then point mxd in the code to that MXD, you can test it. You will also need to adjust the workspace, as I am using "D:\Temp". Does it run in parallel on your PC?&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;In case it matters... I am running Win7 64-bit, an Intel processor, and 12 GB of RAM. ArcGIS 10 SP2 with Python 2.6 (as it comes with ArcGIS); most of the patches after SP2 are installed as well.&lt;/SPAN&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 29 Jul 2011 13:52:57 GMT</pubDate>
      <guid>https://community.esri.com/t5/python-questions/pool-from-multiprocessing-issues/m-p/720822#M55796</guid>
      <dc:creator>BurtMcAlpine</dc:creator>
      <dc:date>2011-07-29T13:52:57Z</dc:date>
    </item>
    <item>
      <title>Re: Pool from multiprocessing issues</title>
      <link>https://community.esri.com/t5/python-questions/pool-from-multiprocessing-issues/m-p/720823#M55797</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;BLOCKQUOTE class="jive-quote"&gt;I received Memory Error after the total footprint of all four processes had grown to about 4 GB&lt;/BLOCKQUOTE&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;Questions:&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;1) Are these individual processes that are uniquely discernible in the Task Manager (for example, 4 individual python.exe (or ???) processes)? Or a single instance?&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;2) If the latter, I assume you are using the large address aware setting to get near 4 GB (and a 64-bit OS, correct)? If so, have you noticed any drawbacks from using it?&lt;/SPAN&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 29 Jul 2011 14:32:43 GMT</pubDate>
      <guid>https://community.esri.com/t5/python-questions/pool-from-multiprocessing-issues/m-p/720823#M55797</guid>
      <dc:creator>ChrisSnyder</dc:creator>
      <dc:date>2011-07-29T14:32:43Z</dc:date>
    </item>
    <item>
      <title>Re: Pool from multiprocessing issues</title>
      <link>https://community.esri.com/t5/python-questions/pool-from-multiprocessing-issues/m-p/720824#M55798</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;SPAN&gt;1) Each worker in the pool starts its own python.exe process, and you can see them in the Task Manager.&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;2) I am not using large address aware. Can you confirm that Python 2.6 has that capability? If so, can you point me in that direction?&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;I do know that ArcGIS 10 uses large address aware by default, and that in the next release or two (ArcGIS 10.1 or 11 SP1 is the estimate) ArcGIS will go to a 64-bit GP environment (not the entire application, just the geoprocessing environment); I was told that at the ESRI UC 2011 by an ESRI employee.&lt;/SPAN&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 29 Jul 2011 15:53:01 GMT</pubDate>
      <guid>https://community.esri.com/t5/python-questions/pool-from-multiprocessing-issues/m-p/720824#M55798</guid>
      <dc:creator>BurtMcAlpine</dc:creator>
      <dc:date>2011-07-29T15:53:01Z</dc:date>
    </item>
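Burt's point 1 is easy to verify from Python itself. A minimal sketch, assuming nothing beyond the standard library: each pool worker reports its own process ID, and the resulting set of IDs never includes the parent (the function names here are mine, for illustration):

```python
import multiprocessing
import os

def report(_):
    # runs inside a pool worker; each worker is its own python process,
    # so each has its own pid (and its own memory space)
    return os.getpid()

def worker_pids(procs=3, jobs=12):
    # collect the distinct pids that handled the jobs
    pool = multiprocessing.Pool(procs)
    try:
        return set(pool.map(report, range(jobs)))
    finally:
        pool.close()
        pool.join()
```

On Windows each of those pids shows up as a separate python.exe in the Task Manager, alongside the parent process.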
    <item>
      <title>Re: Pool from multiprocessing issues</title>
      <link>https://community.esri.com/t5/python-questions/pool-from-multiprocessing-issues/m-p/720825#M55799</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;SPAN&gt;Large Address Aware: I just assumed it was, but... for the 32-bit versions of Python 2.x (3.x too?), I guess not!&lt;/SPAN&gt;&lt;BR /&gt;&lt;A href="http://bugs.python.org/issue1449496"&gt;http://bugs.python.org/issue1449496&lt;/A&gt;&lt;BR /&gt;&lt;SPAN&gt;Looks like Jason S. from ESRI has tried hard to make it so, but...&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;The non-ESRI people in the forum link said something to the effect of "why don't you just have your wrapper program (aka ArcGIS) be large address aware". So maybe when run via ArcWhatever, Python would "act large address aware"?? Dunno...&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BLOCKQUOTE class="jive-quote"&gt;"ArcGIS will go to 64-bit GP environment (not the entire application just the Geoprocessing environment) I was told that at the ESRI UC 2011 by an ESRI employee"&lt;/BLOCKQUOTE&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;Wow! I had "heard" that only Server was going to be native 64-bit, but whatever... The geoprocessing stuff is the only thing I really care about where I think gobs of memory would actually do something cool, so... Cool!&lt;/SPAN&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 29 Jul 2011 17:51:07 GMT</pubDate>
      <guid>https://community.esri.com/t5/python-questions/pool-from-multiprocessing-issues/m-p/720825#M55799</guid>
      <dc:creator>ChrisSnyder</dc:creator>
      <dc:date>2011-07-29T17:51:07Z</dc:date>
    </item>
    <item>
      <title>Re: Pool from multiprocessing issues</title>
      <link>https://community.esri.com/t5/python-questions/pool-from-multiprocessing-issues/m-p/720826#M55800</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;SPAN&gt;Large Address Aware: On second look, maybe there is another flag you can set just for Python that would make it so... It looked to me like it just wasn't linked to the global flag... That said, I don't have my stuff set up for it; I have just heard of others using it, so I was curious what others had experienced. I've never really seen exact ESRI-specific "how to" instructions for making all things ESRI GIS large address aware. Among other things, I would find the added memory headroom very useful for the in_memory workspace as well as numpy-array type things.&lt;/SPAN&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 29 Jul 2011 18:03:48 GMT</pubDate>
      <guid>https://community.esri.com/t5/python-questions/pool-from-multiprocessing-issues/m-p/720826#M55800</guid>
      <dc:creator>ChrisSnyder</dc:creator>
      <dc:date>2011-07-29T18:03:48Z</dc:date>
    </item>
    <item>
      <title>Re: Pool from multiprocessing issues</title>
      <link>https://community.esri.com/t5/python-questions/pool-from-multiprocessing-issues/m-p/720827#M55801</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;SPAN&gt;Burt, I found out why your Multiprocessing code didn't work: the line passing the worker to the pool was incorrect (sorry I didn't pick up on it earlier!). You had:&lt;/SPAN&gt;&lt;BR /&gt;&lt;PRE class="lia-code-sample line-numbers language-none"&gt;
&amp;nbsp; # add worker to job server
&amp;nbsp; jobs.append(pool.apply_async(worker(d)))
&lt;/PRE&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;It should be:&lt;/SPAN&gt;&lt;BR /&gt;&lt;PRE class="lia-code-sample line-numbers language-none"&gt;
&amp;nbsp; # add worker to job server
&amp;nbsp; jobs.append(pool.apply_async(worker, (d,)))
&lt;/PRE&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;The function and the arguments are passed separately. The comma after the d is there because the arguments must be passed as a tuple (if there is more than one argument, you don't need the trailing comma)... It is a little weird that it worked at all!&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;Chris, in answer to your questions:&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;1) Yes, they are individual processes in the task manager. If you start a pool with six workers, and all of them are being utilised, there will be seven instances of python.exe in the task manager: the six children and the parent process.&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;2) I had never heard of Large Address Aware. Yes, I was running that on Windows 7 64-bit. Interestingly, I have had 64-bit Python code use more than 10 GB of memory at one stage (not arcpy related though); it would be nice to use that much with arcpy!&lt;/SPAN&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Sun, 12 Dec 2021 06:50:28 GMT</pubDate>
      <guid>https://community.esri.com/t5/python-questions/pool-from-multiprocessing-issues/m-p/720827#M55801</guid>
      <dc:creator>StacyRendall1</dc:creator>
      <dc:date>2021-12-12T06:50:28Z</dc:date>
    </item>
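The corrected call convention can be demonstrated without arcpy. A minimal sketch (the function and variable names are mine) showing pool.apply_async(worker, (d,)) with the result-string pattern from the thread's example:

```python
import multiprocessing

def worker(d):
    # mirrors the "Layer complete" result string used in the thread
    return "Layer complete: %s" % d

def submit(layers):
    pool = multiprocessing.Pool(2)
    try:
        # pass the callable and its argument tuple separately;
        # pool.apply_async(worker(d)) would call worker(d) in the PARENT
        # process and hand its return value to apply_async, so nothing
        # would actually run in the pool
        jobs = [pool.apply_async(worker, (d,)) for d in layers]
        return [j.get() for j in jobs]
    finally:
        pool.close()
        pool.join()
```

The (d,) tuple is the part that trips people up: apply_async wants the arguments unevaluated, so the pool can call the function inside a child process.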
    <item>
      <title>Re: Pool from multiprocessing issues</title>
      <link>https://community.esri.com/t5/python-questions/pool-from-multiprocessing-issues/m-p/720828#M55802</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;SPAN&gt;Stacy,&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;Thanks so much. Of course, I feel like a moron now. I had the same issue with PP, but it threw an error when the argument was not a tuple. I will try that out and change my code to use mp, as it seems more arcpy-friendly.&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;Chris,&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;You can make 32-bit Python address 4 GB of memory on a 64-bit OS. You need to change the flag in python.exe. I found a tool to change it for me; the link is below. I found it in another forum and I have not tested it, so use it with caution... &lt;/SPAN&gt;&lt;A href="http://www.techpowerup.com/forums/attachment.php?attachmentid=34392&amp;amp;d=1269231650"&gt;http://www.techpowerup.com/forums/attachment.php?attachmentid=34392&amp;amp;d=1269231650&lt;/A&gt;&lt;SPAN&gt;.&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;FYI, ArcGIS is set to LargeAddressAware by default.&lt;/SPAN&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Sat, 30 Jul 2011 11:46:40 GMT</pubDate>
      <guid>https://community.esri.com/t5/python-questions/pool-from-multiprocessing-issues/m-p/720828#M55802</guid>
      <dc:creator>BurtMcAlpine</dc:creator>
      <dc:date>2011-07-30T11:46:40Z</dc:date>
    </item>
    <item>
      <title>Re: Pool from multiprocessing issues</title>
      <link>https://community.esri.com/t5/python-questions/pool-from-multiprocessing-issues/m-p/720829#M55803</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;SPAN&gt;Burt,&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN style="color:&amp;quot;Green&amp;quot;;"&gt;Large Address Aware.exe&lt;/SPAN&gt;&lt;SPAN&gt;, v2.0.4 by Lee Glasser (aka FordGT90Concept), is an excellent find! Thank you for posting the download link, but for anyone interested, here is the techPowerUp! forum thread link &lt;/SPAN&gt;&lt;A href="http://www.techpowerup.com/forums/showthread.php?t=112556" rel="nofollow"&gt;http://www.techpowerup.com/forums/showthread.php?t=112556&lt;/A&gt;&lt;SPAN&gt; where version release and configuration details are kept in the first posting.&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;I've modded a few executables using "EDITBIN.EXE /LARGEADDRESSAWARE" and "DUMPBIN.EXE", which unfortunately are only available from a Visual Studio install. But this .NET applet offering from the gaming world allows a convenient bitwise toggle &lt;/SPAN&gt;&lt;STRONG&gt;post install&lt;/STRONG&gt;&lt;SPAN&gt; for any executable. It also provides enough of a GUI to be able to manage the settings should stability issues arise.&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;Toggling the LargeAddressAware bit on 32-bit Windows executables, like python.exe and pythonw.exe, allows full 4 GB addressing for each instance under a 64-bit OS. And when the OS /3GB boot flag is set on a 32-bit OS with a full 4 GB of physical RAM, setting the LargeAddressAware flag provides an extra ~1.2 GB of address space for the instance to expand into.&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;With such a convenient tool, the question now becomes which 32-bit executables to mod, how to keep track of any interdependencies, and when to mod and when not to.&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;Stuart&lt;/SPAN&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Sat, 30 Jul 2011 18:28:20 GMT</pubDate>
      <guid>https://community.esri.com/t5/python-questions/pool-from-multiprocessing-issues/m-p/720829#M55803</guid>
      <dc:creator>V_StuartFoote</dc:creator>
      <dc:date>2011-07-30T18:28:20Z</dc:date>
    </item>
    <item>
      <title>Do-It-Yourself Large Address Aware</title>
      <link>https://community.esri.com/t5/python-questions/pool-from-multiprocessing-issues/m-p/720830#M55804</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;SPAN&gt;If you are wary of using that third-party utility, here are detailed instructions for using editbin.exe from Visual Studio 2010:&lt;/SPAN&gt;&lt;BR /&gt;&lt;A href="http://gisgeek.blogspot.com/2012/01/set-32bit-executable-largeaddressaware.html"&gt;http://gisgeek.blogspot.com/2012/01/set-32bit-executable-largeaddressaware.html&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;This page shows that editbin.exe comes with Visual C++ Express 2010, so there should be a free do-it-yourself option for the security-conscious:&lt;/SPAN&gt;&lt;BR /&gt;&lt;A href="http://msdn.microsoft.com/en-us/library/hs24szh9.aspx"&gt;http://msdn.microsoft.com/en-us/library/hs24szh9.aspx&lt;/A&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Mon, 23 Jan 2012 21:08:42 GMT</pubDate>
      <guid>https://community.esri.com/t5/python-questions/pool-from-multiprocessing-issues/m-p/720830#M55804</guid>
      <dc:creator>deleted-user-4RbHy6ryQ4a8</dc:creator>
      <dc:date>2012-01-23T21:08:42Z</dc:date>
    </item>
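For context on what these tools actually change: the LargeAddressAware switch sets the IMAGE_FILE_LARGE_ADDRESS_AWARE bit (0x0020) in the Characteristics field of the executable's PE/COFF header. Below is a small read-only checker sketch for inspecting a binary before and after modding; the function name is mine, and the offsets follow the published PE/COFF layout:

```python
import struct

IMAGE_FILE_LARGE_ADDRESS_AWARE = 0x0020  # PE COFF Characteristics bit

def is_large_address_aware(path):
    # read just enough of the file to reach the COFF header
    with open(path, "rb") as f:
        header = f.read(4096)
    if header[:2] != b"MZ":
        raise ValueError("not an MZ/PE executable")
    # e_lfanew at offset 0x3C points at the "PE\0\0" signature
    pe_offset = struct.unpack_from("<I", header, 0x3C)[0]
    if header[pe_offset:pe_offset + 4] != b"PE\x00\x00":
        raise ValueError("PE signature not found")
    # Characteristics is the last 2 bytes of the 20-byte COFF header
    characteristics = struct.unpack_from("<H", header, pe_offset + 22)[0]
    return bool(characteristics & IMAGE_FILE_LARGE_ADDRESS_AWARE)
```

Running it against a modded python.exe should return True; it never writes to the file, so it is safe to use alongside editbin or the GUI tool.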
    <item>
      <title>Re: Pool from multiprocessing issues</title>
      <link>https://community.esri.com/t5/python-questions/pool-from-multiprocessing-issues/m-p/720817#M55791</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;SPAN&gt;Burt, a few things:&lt;/SPAN&gt;&lt;BR /&gt;&lt;UL&gt; &lt;BR /&gt; &lt;LI&gt;To make your code easier to view, when you paste it, select all your code and click the # button at the top; this will present it as code, keeping your indentation.&lt;/LI&gt; &lt;BR /&gt;&lt;/UL&gt;&lt;BR /&gt;&lt;UL&gt; &lt;BR /&gt; &lt;LI&gt;Try printing prc and lyrs before the multiprocessing loop, with arcpy.AddMessage(str(prc)+' - '+str(len(lyrs))+' - '+str(lyrs)), so you can check they look like you would expect...&lt;/LI&gt; &lt;BR /&gt;&lt;/UL&gt;&lt;BR /&gt;&lt;UL&gt; &lt;BR /&gt; &lt;LI&gt;I don't really know exactly how Multiprocessing works, but I have been using it a bit. In my experience, you need to add the jobs to a job server (just a Python list) and get a result from each worker. I have modified your code to make each worker return a string, and then print a message telling you when each worker is done.&lt;/LI&gt; &lt;BR /&gt;&lt;/UL&gt;&lt;PRE class="lia-code-sample line-numbers language-none"&gt;import arcpy
import os
import multiprocessing

def worker(d):
 # buffer layer by the below values
 bfs = [101, 200, 201, 400, 401, 750, 751, 1001,
 1500, 1501, 2000, 2001, 2500]
 for bf in bfs:
&amp;nbsp; Output = os.path.basename(d)[:-4] + "_Buffer" + str(bf) + ".shp"
&amp;nbsp; print "Buffering " + d + " at " + str(bf) + " Feet"
&amp;nbsp; if arcpy.Exists(d):
&amp;nbsp;&amp;nbsp; arcpy.Buffer_analysis(d, "D:\\Temp\\" + Output, str(bf) + " Feet")
&amp;nbsp;&amp;nbsp; arcpy.Project_management("D:\\Temp\\" + Output, "D:\\Temp\\Test\\" + Output, "C:\Program Files (x86)\ArcGIS\Desktop10.0\Coordinate Systems\Geographic Coordinate Systems\North America\NAD 1983.prj")
&amp;nbsp;&amp;nbsp; arcpy.Delete_management("D:\\Temp\\" + Output)
&amp;nbsp;&amp;nbsp; 
&amp;nbsp; else:
&amp;nbsp;&amp;nbsp; print "No Data"
 
 # if it worked, return sucess
 resultString = 'Layer complete: %s' % (d) 
 
 return resultString


if __name__ == '__main__':

 #Sets MXD
 mxd = arcpy.mapping.MapDocument(r"D:\TEMP\Untitled4.mxd")
 #mxd = arcpy.mapping.MapDocument("CURRENT")

 #set some environments needed to get the correct outputs
 arcpy.env.overwriteOutput = True
 arcpy.env.workspace = r"D:\TEMP\Test"
 arcpy.env.outputCoordinateSystem = r"C:\Program Files (x86)\ArcGIS\Desktop10.0\Coordinate Systems\Projected Coordinate Systems\UTM\NAD 1983\NAD 1983 UTM Zone 16N.prj"

 # number of processors to use, set to the max minus 1
 prc = int(os.environ["NUMBER_OF_PROCESSORS"]) - 1

 # Create and start a pool of worker processes
 pool = multiprocessing.Pool(prc)

 # Gets all layers in the MXD
 lyrs = arcpy.mapping.ListLayers(mxd)

 # print the number of processors, number of layers, and the layer list
 arcpy.AddMessage(str(prc) + ' - ' + str(len(lyrs)) + ' - ' + str(lyrs))
 
 # list to hold all of the jobs
 jobs = []
 
 #Loops through every layer and gets source data name and path
 for lyr in lyrs:
   d = lyr.dataSource
   print "Passing " + d + " to processing pool"
   arcpy.AddMessage("Passing " + d + " to processing pool")

   # add worker to job list
   jobs.append(pool.apply_async(worker, (d,)))
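   # apply_async must receive the function and its argument tuple
   # separately: pool.apply_async(worker, (d,)). Passing worker(d)
   # instead calls the function immediately in the parent process,
   # so the layers would run one at a time instead of in parallel.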
 
 # get 'results' from each worker in job server
 for job in jobs:
   arcpy.AddMessage(job.get())&lt;/PRE&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;Let me know how you get on.&lt;/SPAN&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Sun, 12 Dec 2021 16:51:09 GMT</pubDate>
      <guid>https://community.esri.com/t5/python-questions/pool-from-multiprocessing-issues/m-p/720817#M55791</guid>
      <dc:creator>StacyRendall1</dc:creator>
      <dc:date>2021-12-12T16:51:09Z</dc:date>
    </item>
    <item>
      <title>Re: Pool from multiprocessing issues</title>
      <link>https://community.esri.com/t5/python-questions/pool-from-multiprocessing-issues/m-p/720818#M55792</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;SPAN&gt;Stacy,&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;Thanks for the help. I worked on this for two days off and on, as I can benefit in several ways from having a process for every layer in an MXD. I gave up on multiprocessing and used the pp (Parallel Python) library instead. I was using your project as an example &lt;/SPAN&gt;&lt;A href="http://pythongisandstuff.wordpress.com/author/stacyrendall/" rel="nofollow noopener noreferrer" target="_blank"&gt;http://pythongisandstuff.wordpress.com/author/stacyrendall/&lt;/A&gt;&lt;SPAN&gt;. I also had an install issue with ArcView, which compounded my issues. The code below, which I just finished, buffers the layers in a saved MXD. Run as a single thread it took 420 seconds; using the pp library it took 151 seconds from Python and 154 seconds as a tool, so a significant improvement. I never could get the jobs to start using multiprocessing, and now that I have some understanding of pp I think I will stick with it.&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;I tested your modification. It still runs as a single process... Thanks for helping; your web page was very useful.&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;PRE class="lia-code-sample line-numbers language-none"&gt;import arcpy
import os
import pp
import time

#forces script to run out of process no matter what
import win32com.client
gp = win32com.client.Dispatch("esriGeoprocessing.GpDispatch.1")


def wrker(d, bf):
    # pp module: everything in here gets run across multiple processes,
    # much faster than a single thread

    # DO NOT USE arcpy.AddMessage() OR arcpy.AddError() IN THIS PART OF THE SCRIPT.
    # set environments for each pp process
    arcpy.env.overwriteOutput = True
    arcpy.env.workspace = r"D:\TEMP\Test"
    arcpy.env.outputCoordinateSystem = r"C:\Program Files (x86)\ArcGIS\Desktop10.0\Coordinate Systems\Projected Coordinate Systems\UTM\NAD 1983\NAD 1983 UTM Zone 16N.prj"

    # Buffer distance in Feet
    distance = str(bf) + " Feet"

    # output file will go into the workspace set above
    Output = os.path.basename(d)[:-4] + "_Buffer" + str(bf) + ".shp"

    # no .shp at the end of an In_memory feature
    Tempfile = "In_memory\\" + Output[:-4]

    Jobdone = d + " buffered by " + distance
    Jobfail = d + " failed to buffer by " + distance
    if arcpy.Exists(d):
        arcpy.Buffer_analysis(d, Tempfile, distance)
        arcpy.Project_management(Tempfile, Output, r"C:\Program Files (x86)\ArcGIS\Desktop10.0\Coordinate Systems\Geographic Coordinate Systems\North America\NAD 1983.prj")
        arcpy.Delete_management(Tempfile)
        #print "Buffering " + d + " at " + distance
    else:
        print Jobfail

    # release the reference to the temp file; the In_memory dataset
    # itself was already removed by Delete_management above
    del Tempfile

    # data returned to the main code and used in messaging there
    return Jobdone
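
# Note on the pp calls below: job_server.submit takes the function, its
# argument tuple, a tuple of dependent functions, and a tuple of module
# names each pp worker must import, e.g.
#   job_server.submit(wrker, (d, bf), (), ("arcpy", "os", "time"))
# Calling a job object, e.g. result = job(), blocks until that job
# completes and returns the worker's return value.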

if __name__ == '__main__':

    # starts clock
    clck_st = time.clock()

    # Sets MXD; cannot use "CURRENT" for pp processing
    mxd = arcpy.mapping.MapDocument(r"D:\TEMP\Untitled4.mxd")

    # Gets all layers in the MXD
    lyrs = arcpy.mapping.ListLayers(mxd)

    # pp prep
    # number of processors to use, set to the max minus 1
    prc = int(os.environ["NUMBER_OF_PROCESSORS"]) - 1

    # using Parallel Python
    ppservers = ()

    # sets workers to prc
    job_server = pp.Server(prc, ppservers=ppservers)
    # sets workers to the max available
    #job_server = pp.Server(ppservers=ppservers)

    print "Processing all Layers in the mxd"
    arcpy.AddMessage("Processing all Layers in the mxd")

    # list to hold all of the jobs
    jobs = []

    # List of buffer values
    bfs = [101, 200, 201, 400, 401, 750, 751, 1001,
           1500, 1501, 2000, 2001, 2500]

    # Loops through every layer and gets source data name and path
    for lyr in lyrs:
        d = lyr.dataSource
        arcpy.AddMessage("Passing " + d + " to processing pool")
        for bf in bfs:
            jobs.append(job_server.submit(wrker, (d, bf), (), ("arcpy", "os", "time")))
            #wrker(d, bf)
            msg = "Passing " + d + " buffer at " + str(bf) + " Feet to processing pool"
            print msg
            arcpy.AddMessage(msg)

    # Retrieve results of all submitted jobs
    for job in jobs:
        result = job()
        print result
        arcpy.AddMessage(result)

    # script is finished
    print "Processing completed in " + str(int(time.clock() - clck_st)) + " seconds"
    arcpy.AddMessage("Processing completed in " + str(int(time.clock() - clck_st)) + " seconds")
    time.sleep(5)
&lt;/PRE&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Sun, 12 Dec 2021 06:50:25 GMT</pubDate>
      <guid>https://community.esri.com/t5/python-questions/pool-from-multiprocessing-issues/m-p/720818#M55792</guid>
      <dc:creator>BurtMcAlpine</dc:creator>
      <dc:date>2021-12-12T06:50:25Z</dc:date>
    </item>
  </channel>
</rss>

