<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Multiprocessing viewsheds (easy) in Python Questions</title>
    <link>https://community.esri.com/t5/python-questions/multiprocessing-viewsheds-easy/m-p/714248#M55406</link>
    <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;SPAN&gt;Two possible causes:&lt;/SPAN&gt;&lt;BR /&gt;&lt;OL&gt;&lt;BR /&gt;&lt;LI&gt;Something is happening with mystring that isn't what you expect, or Viewshed has some limitations on allowed paths (I don't know, so check the docs for this).&lt;/LI&gt;&lt;BR /&gt;&lt;LI&gt;You are using &lt;STRONG&gt;pool.map()&lt;/STRONG&gt;, but I think &lt;STRONG&gt;apply_async&lt;/STRONG&gt; might be better for this task, as the processes do not need to be synchronised (i.e. you don't care how long each process takes; each chunk can finish whenever). I'm not sure that this is causing your issues, but it can't hurt to try it with async.&lt;/LI&gt;&lt;BR /&gt;&lt;/OL&gt;&lt;BR /&gt;&lt;SPAN&gt;You could also try it with Parallel Python rather than Multiprocessing. Some of my arcpy workflows work only with one, others only with the other. My blog (link below) has some posts describing the use of both Multiprocessing and Parallel Python.&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;STRONG style="font-size: 3;"&gt;To check mystring&lt;/STRONG&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;Print statements don't work well in multiprocessing, so to test mystring you need to make the code run sequentially and print it out, like this:&lt;/SPAN&gt;&lt;BR /&gt;&lt;PRE class="lia-code-sample line-numbers language-none"&gt;import arcinfo, os, re, multiprocessing
import arcpy
from arcpy import env
from arcpy.sa import *

arcpy.CheckOutExtension("Spatial")
env.workspace = "C:/studium/project/scriptx"
arcpy.env.overwriteOutput = True

arcpy.CreateFolder_management("C:/studium/project", "scriptxy")

def update_shapefiles(shapefile):
&amp;nbsp;&amp;nbsp;&amp;nbsp; '''Worker function'''
&amp;nbsp;&amp;nbsp;&amp;nbsp; outViewshed = Viewshed("C:/studium/00_Master/clean/demtora_okay1", shapefile, 2, "CURVED_EARTH", 0.15)
&amp;nbsp;&amp;nbsp;&amp;nbsp; mystring = "C:/studium/project/scriptxy/"+shapefile[26:37]
&amp;nbsp;&amp;nbsp;&amp;nbsp; print mystring
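&amp;nbsp;&amp;nbsp;&amp;nbsp; # untested sketch of a sturdier way to build the path than the fixed
&amp;nbsp;&amp;nbsp;&amp;nbsp; # [26:37] slice, assuming the output should reuse the shapefile's base
&amp;nbsp;&amp;nbsp;&amp;nbsp; # name (os is already imported above):
&amp;nbsp;&amp;nbsp;&amp;nbsp; # mystring = "C:/studium/project/scriptxy/" + os.path.splitext(os.path.basename(shapefile))[0]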
&amp;nbsp;&amp;nbsp;&amp;nbsp; outViewshed.save(mystring)
# End update_shapefiles
def main():
&amp;nbsp;&amp;nbsp;&amp;nbsp; ''' Create a pool class and run the jobs.'''
&amp;nbsp;&amp;nbsp;&amp;nbsp; arcpy.env.workspace = "C:/studium/project/scriptx"
&amp;nbsp;&amp;nbsp;&amp;nbsp; print arcpy.env.workspace
&amp;nbsp;&amp;nbsp;&amp;nbsp; fcs = arcpy.ListFeatureClasses('turbine*')
&amp;nbsp;&amp;nbsp;&amp;nbsp; fc_list = [os.path.join("C:/studium/project/scriptx", fc) for fc in fcs]
&amp;nbsp;&amp;nbsp;&amp;nbsp; print fc_list
&amp;nbsp;&amp;nbsp;&amp;nbsp; for fc in fc_list:
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; update_shapefiles(fc)

&amp;nbsp;&amp;nbsp;&amp;nbsp; # End main
 
if __name__ == '__main__':
&amp;nbsp;&amp;nbsp;&amp;nbsp; main()&lt;/PRE&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;Comment out the actual calculation steps in the function and it will essentially print all the paths (without you having to wait for ages).&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;Or you can get the processes to return an output of mystring, which will work with the code below.&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;STRONG style="font-size: 3;"&gt;Asynchronous&lt;/STRONG&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;Essentially something like this:&lt;/SPAN&gt;&lt;BR /&gt;&lt;PRE class="lia-code-sample line-numbers language-none"&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; jobs = [] # job queue
&amp;nbsp;&amp;nbsp;&amp;nbsp; pool = multiprocessing.Pool()
&amp;nbsp;&amp;nbsp;&amp;nbsp; for fc in fc_list:
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; jobs.append(pool.apply_async(update_shapefiles, (fc,))) # send jobs to be processed; "(fc,)" makes a tuple of length 1
&amp;nbsp;&amp;nbsp;&amp;nbsp; for job in jobs: # collect results
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; job.get()
&amp;nbsp;&amp;nbsp;&amp;nbsp; del jobs&lt;/PRE&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;You might need to return something for job.get() to work, so you could get your function to return &lt;/SPAN&gt;&lt;STRONG&gt;mystring &lt;/STRONG&gt;&lt;SPAN&gt;and do &lt;/SPAN&gt;&lt;SPAN style="font-style:italic;"&gt;print job.get()&lt;/SPAN&gt;&lt;SPAN&gt; instead of just job.get().&lt;/SPAN&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
    <pubDate>Sun, 12 Dec 2021 06:34:54 GMT</pubDate>
    <dc:creator>StacyRendall1</dc:creator>
    <dc:date>2021-12-12T06:34:54Z</dc:date>
    <item>
      <title>Multiprocessing viewsheds (easy)</title>
      <link>https://community.esri.com/t5/python-questions/multiprocessing-viewsheds-easy/m-p/714247#M55405</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;SPAN&gt;Hi there, &lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;I have been searching the forum for this, but even though it seems the solution should be quite easy to come up with, I couldn't find one. &lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;My system: ArcGIS 10.1 SP1, Python 2.7.2, i7-4770K 3.5 GHz, 16 GB DDR3 RAM, SSD drive&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;So here is my (simple) problem: I want to take advantage of the multiprocessing module to speed up viewshed calculations.&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;In order to get to know this function I set up a folder containing 156 shapefiles, each containing one single point. Now I want to calculate viewsheds for each of these points using the multiprocessing module. It starts out fine: all cores are in use up to 100% and the right viewsheds are being calculated. But then it fails, telling me either that "viewshe_demt1" already exists or that a second person or application is using this folder. Below are the error messages and my current code. I discovered that the code is producing folders called "viewshe_demtX" in the "scriptx" folder, which I don't really understand, because I specified the folder for the output of the viewshed analysis to be "scriptx&lt;/SPAN&gt;&lt;SPAN style="color:&amp;quot;#FF0000&amp;quot;;"&gt;y&lt;/SPAN&gt;&lt;SPAN&gt;"&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;1) ExecuteError: ERROR 010429: "Error in GRID IO: CellLyrCreateInternal: Grid c:\studium\project\scriptx\viewshe_demt1 already exists."&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;2) (Couldn't find the error code): "Another person or application is accessing this directory"&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;PRE class="lia-code-sample line-numbers language-none"&gt;
import arcinfo, os, re, multiprocessing
import arcpy
from arcpy import env
from arcpy.sa import *

arcpy.CheckOutExtension("Spatial")
env.workspace = "C:/studium/project/scriptx"
arcpy.env.overwriteOutput = True

arcpy.CreateFolder_management("C:/studium/project", "scriptxy")

def update_shapefiles(shapefile):
&amp;nbsp;&amp;nbsp;&amp;nbsp; '''Worker function'''
&amp;nbsp;&amp;nbsp;&amp;nbsp; outViewshed = Viewshed("C:/studium/00_Master/clean/demtora_okay1", shapefile, 2, "CURVED_EARTH", 0.15)
&amp;nbsp;&amp;nbsp;&amp;nbsp; mystring = "C:/studium/project/scriptxy/"+shapefile[26:37]
&amp;nbsp;&amp;nbsp;&amp;nbsp; outViewshed.save(mystring)
# End update_shapefiles
def main():
&amp;nbsp;&amp;nbsp;&amp;nbsp; ''' Create a pool class and run the jobs.'''
&amp;nbsp;&amp;nbsp;&amp;nbsp; arcpy.env.workspace = "C:/studium/project/scriptx"
&amp;nbsp;&amp;nbsp;&amp;nbsp; print arcpy.env.workspace
&amp;nbsp;&amp;nbsp;&amp;nbsp; fcs = arcpy.ListFeatureClasses('turbine*')
&amp;nbsp;&amp;nbsp;&amp;nbsp; fc_list = [os.path.join("C:/studium/project/scriptx", fc) for fc in fcs]
&amp;nbsp;&amp;nbsp;&amp;nbsp; print fc_list
&amp;nbsp;&amp;nbsp;&amp;nbsp; pool = multiprocessing.Pool()
&amp;nbsp;&amp;nbsp;&amp;nbsp; pool.map(update_shapefiles, fc_list)

&amp;nbsp;&amp;nbsp;&amp;nbsp; # Synchronize the main process with the job processes to
&amp;nbsp;&amp;nbsp;&amp;nbsp; # ensure proper cleanup.
&amp;nbsp;&amp;nbsp;&amp;nbsp; pool.close()
&amp;nbsp;&amp;nbsp;&amp;nbsp; pool.join()
&amp;nbsp;&amp;nbsp;&amp;nbsp; # End main
 
if __name__ == '__main__':
&amp;nbsp;&amp;nbsp;&amp;nbsp; main()
&lt;/PRE&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;I basically used the first code example from &lt;/SPAN&gt;&lt;A href="http://blogs.esri.com/esri/arcgis/2012/09/26/distributed-processing-with-arcgis-part-1/" rel="nofollow noopener noreferrer" target="_blank"&gt;http://blogs.esri.com/esri/arcgis/2012/09/26/distributed-processing-with-arcgis-part-1/&lt;/A&gt;&lt;SPAN&gt; and modified it (obviously poorly) to produce viewsheds. My normal viewshed analysis takes about 8 hours for 156 viewsheds, and the CPU utilization never exceeds 12%. &lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;I think the problem with the new code has something to do with my file naming and is otherwise fine. &lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;Please help me get it to work properly.&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;Any advice is welcome.&lt;/SPAN&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Sun, 12 Dec 2021 06:34:51 GMT</pubDate>
      <guid>https://community.esri.com/t5/python-questions/multiprocessing-viewsheds-easy/m-p/714247#M55405</guid>
      <dc:creator>nb</dc:creator>
      <dc:date>2021-12-12T06:34:51Z</dc:date>
    </item>
    <item>
      <title>Re: Multiprocessing viewsheds (easy)</title>
      <link>https://community.esri.com/t5/python-questions/multiprocessing-viewsheds-easy/m-p/714248#M55406</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;SPAN&gt;Two possible causes:&lt;/SPAN&gt;&lt;BR /&gt;&lt;OL&gt;&lt;BR /&gt;&lt;LI&gt;Something is happening with mystring that isn't what you expect, or Viewshed has some limitations on allowed paths (I don't know, so check the docs for this).&lt;/LI&gt;&lt;BR /&gt;&lt;LI&gt;You are using &lt;STRONG&gt;pool.map()&lt;/STRONG&gt;, but I think &lt;STRONG&gt;apply_async&lt;/STRONG&gt; might be better for this task, as the processes do not need to be synchronised (i.e. you don't care how long each process takes; each chunk can finish whenever). I'm not sure that this is causing your issues, but it can't hurt to try it with async.&lt;/LI&gt;&lt;BR /&gt;&lt;/OL&gt;&lt;BR /&gt;&lt;SPAN&gt;You could also try it with Parallel Python rather than Multiprocessing. Some of my arcpy workflows work only with one, others only with the other. My blog (link below) has some posts describing the use of both Multiprocessing and Parallel Python.&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;STRONG style="font-size: 3;"&gt;To check mystring&lt;/STRONG&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;Print statements don't work well in multiprocessing, so to test mystring you need to make the code run sequentially and print it out, like this:&lt;/SPAN&gt;&lt;BR /&gt;&lt;PRE class="lia-code-sample line-numbers language-none"&gt;import arcinfo, os, re, multiprocessing
import arcpy
from arcpy import env
from arcpy.sa import *

arcpy.CheckOutExtension("Spatial")
env.workspace = "C:/studium/project/scriptx"
arcpy.env.overwriteOutput = True

arcpy.CreateFolder_management("C:/studium/project", "scriptxy")

def update_shapefiles(shapefile):
&amp;nbsp;&amp;nbsp;&amp;nbsp; '''Worker function'''
&amp;nbsp;&amp;nbsp;&amp;nbsp; outViewshed = Viewshed("C:/studium/00_Master/clean/demtora_okay1", shapefile, 2, "CURVED_EARTH", 0.15)
&amp;nbsp;&amp;nbsp;&amp;nbsp; mystring = "C:/studium/project/scriptxy/"+shapefile[26:37]
&amp;nbsp;&amp;nbsp;&amp;nbsp; print mystring
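&amp;nbsp;&amp;nbsp;&amp;nbsp; # untested sketch of a sturdier way to build the path than the fixed
&amp;nbsp;&amp;nbsp;&amp;nbsp; # [26:37] slice, assuming the output should reuse the shapefile's base
&amp;nbsp;&amp;nbsp;&amp;nbsp; # name (os is already imported above):
&amp;nbsp;&amp;nbsp;&amp;nbsp; # mystring = "C:/studium/project/scriptxy/" + os.path.splitext(os.path.basename(shapefile))[0]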
&amp;nbsp;&amp;nbsp;&amp;nbsp; outViewshed.save(mystring)
# End update_shapefiles
def main():
&amp;nbsp;&amp;nbsp;&amp;nbsp; ''' Create a pool class and run the jobs.'''
&amp;nbsp;&amp;nbsp;&amp;nbsp; arcpy.env.workspace = "C:/studium/project/scriptx"
&amp;nbsp;&amp;nbsp;&amp;nbsp; print arcpy.env.workspace
&amp;nbsp;&amp;nbsp;&amp;nbsp; fcs = arcpy.ListFeatureClasses('turbine*')
&amp;nbsp;&amp;nbsp;&amp;nbsp; fc_list = [os.path.join("C:/studium/project/scriptx", fc) for fc in fcs]
&amp;nbsp;&amp;nbsp;&amp;nbsp; print fc_list
&amp;nbsp;&amp;nbsp;&amp;nbsp; for fc in fc_list:
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; update_shapefiles(fc)

&amp;nbsp;&amp;nbsp;&amp;nbsp; # End main
 
if __name__ == '__main__':
&amp;nbsp;&amp;nbsp;&amp;nbsp; main()&lt;/PRE&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;Comment out the actual calculation steps in the function and it will essentially print all the paths (without you having to wait for ages).&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;Or you can get the processes to return an output of mystring, which will work with the code below.&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;STRONG style="font-size: 3;"&gt;Asynchronous&lt;/STRONG&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;Essentially something like this:&lt;/SPAN&gt;&lt;BR /&gt;&lt;PRE class="lia-code-sample line-numbers language-none"&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; jobs = [] # job queue
&amp;nbsp;&amp;nbsp;&amp;nbsp; pool = multiprocessing.Pool()
&amp;nbsp;&amp;nbsp;&amp;nbsp; for fc in fc_list:
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; jobs.append(pool.apply_async(update_shapefiles, (fc,))) # send jobs to be processed; "(fc,)" makes a tuple of length 1
&amp;nbsp;&amp;nbsp;&amp;nbsp; for job in jobs: # collect results
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; job.get()
&amp;nbsp;&amp;nbsp;&amp;nbsp; del jobs&lt;/PRE&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;You might need to return something for job.get() to work, so you could get your function to return &lt;/SPAN&gt;&lt;STRONG&gt;mystring &lt;/STRONG&gt;&lt;SPAN&gt;and do &lt;/SPAN&gt;&lt;SPAN style="font-style:italic;"&gt;print job.get()&lt;/SPAN&gt;&lt;SPAN&gt; instead of just job.get().&lt;/SPAN&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Sun, 12 Dec 2021 06:34:54 GMT</pubDate>
      <guid>https://community.esri.com/t5/python-questions/multiprocessing-viewsheds-easy/m-p/714248#M55406</guid>
      <dc:creator>StacyRendall1</dc:creator>
      <dc:date>2021-12-12T06:34:54Z</dc:date>
    </item>
    <item>
      <title>Re: Multiprocessing viewsheds (easy)</title>
      <link>https://community.esri.com/t5/python-questions/multiprocessing-viewsheds-easy/m-p/714249#M55407</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;SPAN&gt;First of all, thank you for your quick answer. Over the last few days I spent a lot of time researching my problem. I found a script and edited it further. The script takes my multiple point file, "splits" it up into single point features, buffers each of them to know the extent of the elevation raster (DEM) for the viewshed analysis, and creates so-called multiple ring buffers. &lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;It works fine (and is very fast) except for one mysterious problem that kept me up for the last few nights. I searched everywhere but couldn't find anything about it, but then I am not a programmer and my Python knowledge is very limited.&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;So here comes the mysterious problem: even though my multiple point file is valid and has no broken geometry or anything, not every single point file is being buffered. For some crazy reason the buffer analysis as well as the multiple ring buffer analysis skip certain points. If I manually delete these points, the error passes to the next point. I also tried to buffer and multibuffer the whole multiple point file, and it works fine. The error has to be somewhere in the script... so here is my newest version. I tried to comment it as best as I could.&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;PRE class="lia-code-sample line-numbers language-none"&gt;# -*- coding: cp1252 -*-
# Imports...
import arcpy
from arcpy.sa import *
from arcpy import env
import multiprocessing, ntpath, os, shutil, time, math
from re import split

# allow existing outputs to be overwritten
arcpy.gp.overwriteOutput = True

#create folders
arcpy.CreateFolder_management("C:/studium", "project")
arcpy.CreateFolder_management("C:/studium/project", "script0")
arcpy.CreateFolder_management("C:/studium/project", "script1")
arcpy.CreateFolder_management("C:/studium/project", "bufferRings")
arcpy.CreateFolder_management("C:/studium/project", "tmp")



def calcVS(argsCSV):
&amp;nbsp;&amp;nbsp;&amp;nbsp; '''Worker function'''
&amp;nbsp;&amp;nbsp;&amp;nbsp; import arcpy
&amp;nbsp;&amp;nbsp;&amp;nbsp; #takes the list (of 4 items) and splits them so every item can be accessed
&amp;nbsp;&amp;nbsp;&amp;nbsp; argsList = split(",", argsCSV)
&amp;nbsp;&amp;nbsp;&amp;nbsp; vsFC = argsList[0] #item 1 (path of the singlepoint-shape)
&amp;nbsp;&amp;nbsp;&amp;nbsp; elevRas = argsList[1] #item2 (path of the dem)
&amp;nbsp;&amp;nbsp;&amp;nbsp; outWS = argsList[2] #item3 (output path)
&amp;nbsp;&amp;nbsp;&amp;nbsp; NABE = float(argsList[3]) #item4 (height)

&amp;nbsp;&amp;nbsp;&amp;nbsp; head, tail = ntpath.split(vsFC) # split into head and tail

&amp;nbsp;&amp;nbsp;&amp;nbsp; #creates a path for scratch workspace 
&amp;nbsp;&amp;nbsp;&amp;nbsp; MbrPath =os.path.join("c:/studium/project/tmp", "MRB_"+ tail[:-4])
&amp;nbsp;&amp;nbsp;&amp;nbsp; #makes the directory
&amp;nbsp;&amp;nbsp;&amp;nbsp; os.mkdir(MbrPath)
&amp;nbsp;&amp;nbsp;&amp;nbsp; #sets the scratch workspace
&amp;nbsp;&amp;nbsp;&amp;nbsp; arcpy.env.scratchWorkspace = MbrPath

&amp;nbsp;&amp;nbsp;&amp;nbsp; #bufferzones will later be calculated, but I spared you the calculation
&amp;nbsp;&amp;nbsp;&amp;nbsp; Zone1= 500
&amp;nbsp;&amp;nbsp;&amp;nbsp; Zone2= 1000
&amp;nbsp;&amp;nbsp;&amp;nbsp; Zone3= 2000
&amp;nbsp;&amp;nbsp;&amp;nbsp; Zone4= 5000
&amp;nbsp;&amp;nbsp;&amp;nbsp; Zone5= 9000
&amp;nbsp;&amp;nbsp;&amp;nbsp; Zone6=15000

&amp;nbsp;&amp;nbsp;&amp;nbsp; #create multiple buffer rings for every pointfile with specific zones
&amp;nbsp;&amp;nbsp;&amp;nbsp; arcpy.MultipleRingBuffer_analysis(vsFC , "c:/studium/project/bufferRings/Rings_%s" % (tail), str(Zone1)+";"+str(Zone2)+";"+str(Zone3)+";"+str(Zone4)+";"+str(Zone5)+";"+str(Zone6) , "meters" , "", "ALL")

&amp;nbsp;&amp;nbsp;&amp;nbsp; #delete the scratch directory for the multiple buffer rings
&amp;nbsp;&amp;nbsp;&amp;nbsp; shutil.rmtree(MbrPath)

&amp;nbsp;&amp;nbsp;&amp;nbsp; #the buffering for the extent. Viewsheds have a maximum of 15 kilometers, so
&amp;nbsp;&amp;nbsp;&amp;nbsp; #the dem extent can be scaled to that extent also
&amp;nbsp;&amp;nbsp;&amp;nbsp; buffVS = os.path.join(outWS, "bu%s" % (tail))
&amp;nbsp;&amp;nbsp;&amp;nbsp; arcpy.Buffer_analysis(vsFC, buffVS, "RADIUS2")
&amp;nbsp;&amp;nbsp;&amp;nbsp; arcpy.env.extent = buffVS
&amp;nbsp;&amp;nbsp;&amp;nbsp; arcpy.Delete_management(buffVS)

&amp;nbsp;&amp;nbsp;&amp;nbsp; #the scratch workspace for the viewshed analysis
&amp;nbsp;&amp;nbsp;&amp;nbsp; scratchWS = os.path.join(outWS, tail[:-4])
&amp;nbsp;&amp;nbsp;&amp;nbsp; os.mkdir(scratchWS)
&amp;nbsp;&amp;nbsp;&amp;nbsp; arcpy.env.scratchWorkspace = scratchWS

&amp;nbsp;&amp;nbsp;&amp;nbsp; #check out the license
&amp;nbsp;&amp;nbsp;&amp;nbsp; arcpy.CheckOutExtension("Spatial")

&amp;nbsp;&amp;nbsp;&amp;nbsp; #create viewsheds
&amp;nbsp;&amp;nbsp;&amp;nbsp; outViewshed = Viewshed(elevRas, vsFC,1,"CURVED_EARTH",0.15)
&amp;nbsp;&amp;nbsp;&amp;nbsp; outViewshed.save(os.path.join(outWS, "%s.tif" % (tail[:-4])))
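&amp;nbsp;&amp;nbsp;&amp;nbsp; # note: because each worker sets its own scratch workspace just above,
&amp;nbsp;&amp;nbsp;&amp;nbsp; # the auto-generated "viewshe_..." scratch grids from parallel Viewshed
&amp;nbsp;&amp;nbsp;&amp;nbsp; # runs land in separate folders and should no longer collide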

&amp;nbsp;&amp;nbsp;&amp;nbsp; #delete second scratch workspace
&amp;nbsp;&amp;nbsp;&amp;nbsp; shutil.rmtree(scratchWS)
&amp;nbsp;&amp;nbsp;&amp;nbsp; #delete the single point file
&amp;nbsp;&amp;nbsp;&amp;nbsp; arcpy.Delete_management(vsFC)



if __name__ == "__main__":
&amp;nbsp;&amp;nbsp;&amp;nbsp; obsShp = "C:/studium/00_Master/clean/drei.shp" #my multiple point-file 
&amp;nbsp;&amp;nbsp;&amp;nbsp; uidField = "FID" # unique identifier of pointfile table
&amp;nbsp;&amp;nbsp;&amp;nbsp; Hoehe= "NABENHOEHE" #field of heights of pointfile table
&amp;nbsp;&amp;nbsp;&amp;nbsp; elevationRas = "C:/studium/00_Master/clean/demtora_okay1" #digital elevation model
&amp;nbsp;&amp;nbsp;&amp;nbsp; outputWS = "C:/studium/project/script1" 
&amp;nbsp;&amp;nbsp;&amp;nbsp; arcpy.env.workspace = outputWS #sets the workspace

&amp;nbsp;&amp;nbsp;&amp;nbsp; start = time.clock() #remember the start time for the duration calculation at the end


&amp;nbsp;&amp;nbsp;&amp;nbsp; obsList=[] #create empty list for FIDs
&amp;nbsp;&amp;nbsp;&amp;nbsp; HoehenList =[] #create empty list for heights
&amp;nbsp;&amp;nbsp;&amp;nbsp; arcpy.MakeFeatureLayer_management (obsShp, "pts")
&amp;nbsp;&amp;nbsp;&amp;nbsp; # make a temporary layer that can be searched, then search it and save
&amp;nbsp;&amp;nbsp;&amp;nbsp; # FID and GESAMTHOEH from every row in a list
&amp;nbsp;&amp;nbsp;&amp;nbsp; with arcpy.da.SearchCursor('pts',['FID','GESAMTHOEH']) as cursor:
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; for row in cursor:
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; HoehenList.append(row[1])
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; obsList.append(row[0])
&amp;nbsp;&amp;nbsp;&amp;nbsp; vsList = []#create empty list for every value that shall be passed to calcVS

&amp;nbsp;&amp;nbsp;&amp;nbsp; for obs in obsList:
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; obsFileName = "Vshed%s.shp" % (str(obs).zfill(4))
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; HIES = HoehenList[obs] #save inherent height in a variable
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; #make featureclasses for each point
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; arcpy.Select_analysis(obsShp, obsFileName, uidField+" = "+str(obs))
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; #fill list of lists for every value that shall be passed to calcVS 
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; vsList.append("%s,%s,%s,%s" % (os.path.join(outputWS, obsFileName), elevationRas, outputWS, HIES))
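&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; # untested alternative sketch: pool.map can pass tuples directly, so the
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; # CSV join here (and the re.split in calcVS) could be avoided if calcVS
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; # were changed to unpack one tuple argument:
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; # vsList.append((os.path.join(outputWS, obsFileName), elevationRas, outputWS, HIES))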
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; print vsList





&amp;nbsp;&amp;nbsp;&amp;nbsp; pool = multiprocessing.Pool()# starts multiprocessing

&amp;nbsp;&amp;nbsp;&amp;nbsp; pool.map(calcVS, vsList)
&amp;nbsp;&amp;nbsp;&amp;nbsp; # chops the iterable into a number of chunks and submits them to calcVS 

&amp;nbsp;&amp;nbsp;&amp;nbsp; pool.close()# stops sending more data
&amp;nbsp;&amp;nbsp;&amp;nbsp; pool.join()# waits for processes to finish and terminates them

&amp;nbsp;&amp;nbsp;&amp;nbsp; #calculate duration
&amp;nbsp;&amp;nbsp;&amp;nbsp; end = time.clock()
&amp;nbsp;&amp;nbsp;&amp;nbsp; duration = end - start
&amp;nbsp;&amp;nbsp;&amp;nbsp; hours, remainder = divmod(duration, 3600)
&amp;nbsp;&amp;nbsp;&amp;nbsp; minutes, seconds = divmod(remainder, 60)
&amp;nbsp;&amp;nbsp;&amp;nbsp; print "Completed in %dh:%dm:%fs" % (hours, minutes, seconds)

&amp;nbsp;&amp;nbsp;&amp;nbsp; #shows a counter before calling the next script (so the user can interfere)
&amp;nbsp;&amp;nbsp;&amp;nbsp; countdown = 5
&amp;nbsp;&amp;nbsp;&amp;nbsp; for j in range(1, countdown):
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; print str(countdown - j) + " seconds until the next script starts"
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; time.sleep(1)

&amp;nbsp;&amp;nbsp;&amp;nbsp; #here the next file would be called, but I disabled it
&amp;nbsp;&amp;nbsp;&amp;nbsp; #execfile("c:/studium/00_Master/clean/10_2.py")&lt;/PRE&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;@Stacy Rendall:&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;I tried to implement apply_async but I couldn't get it to work. All the examples I found about it on the internet suggested using the map function. But I'm starting to think my error might have something to do with it.&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;P.S.: I like your blog and had visited it even before my initial post, but it is definitely too complicated for me to understand completely.&lt;/SPAN&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Sun, 12 Dec 2021 06:34:57 GMT</pubDate>
      <guid>https://community.esri.com/t5/python-questions/multiprocessing-viewsheds-easy/m-p/714249#M55407</guid>
      <dc:creator>nb</dc:creator>
      <dc:date>2021-12-12T06:34:57Z</dc:date>
    </item>
    <item>
      <title>Re: Multiprocessing viewsheds (easy)</title>
      <link>https://community.esri.com/t5/python-questions/multiprocessing-viewsheds-easy/m-p/714250#M55408</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;BLOCKQUOTE class="jive-quote"&gt; for some crazy reason the buffer analysis as well as the multiple ring buffer analysis skip certain points. If I manually delete these points the error passes to the next point.&lt;/BLOCKQUOTE&gt;&lt;BR /&gt;&lt;SPAN&gt; &lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;If I use multiprocessing, the buffers and multiple ring buffers of single points 13, 55, 75, 99, 105, 110, and 115 of the 156 single point features are not created.&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;If I use single-core processing, i.e. the normal approach without the multiprocessing module and pool.map, then it is just points 13 and 99 that are skipped.&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;I find this so frustrating... especially because it doesn't happen if I buffer or multibuffer the whole multipoint shapefile.&lt;/SPAN&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Sat, 27 Jul 2013 20:11:55 GMT</pubDate>
      <guid>https://community.esri.com/t5/python-questions/multiprocessing-viewsheds-easy/m-p/714250#M55408</guid>
      <dc:creator>nb</dc:creator>
      <dc:date>2013-07-27T20:11:55Z</dc:date>
    </item>
    <item>
      <title>Re: Multiprocessing viewsheds (easy)</title>
      <link>https://community.esri.com/t5/python-questions/multiprocessing-viewsheds-easy/m-p/714251#M55409</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;SPAN&gt;Okay, I debugged it step by step and finally found out what's causing this strange behaviour: &lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;The multiple buffer rings (as well as the normal buffer) work fine! They buffer &lt;/SPAN&gt;&lt;STRONG&gt;all &lt;/STRONG&gt;&lt;SPAN&gt;the feature classes. When I set &lt;/SPAN&gt;&lt;BR /&gt;&lt;PRE __default_attr="plain" __jive_macro_name="code" class="jive_macro_code jive_text_macro"&gt;arcpy.env.extent = buffVS &lt;/PRE&gt;&lt;SPAN&gt; it somehow deletes my buffer feature class.&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;I checked this phenomenon 5 times to be extra sure that this is real. I edited the script back and forth, and only this code causes the error. It still cannot have anything to do with my point data, because when I don't execute the line with &lt;/SPAN&gt;&lt;BR /&gt;&lt;PRE __default_attr="plain" __jive_macro_name="code" class="jive_macro_code jive_text_macro"&gt;arcpy.env.extent = buffVS &lt;/PRE&gt;&lt;BR /&gt;&lt;SPAN&gt;it calculates the buffer files and they ALL have a valid extent. I triple-checked.&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;I'm pretty confused by this behaviour of arcpy.env.extent = path. Why does it kill my buffer feature class, and why does it work with all the other feature classes?&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;Update: it seems like every time one process (of the 8 running simultaneously) sets the environment extent to a certain extent, another process that is also running through the script uses this "global" extent and tries to create the next multiple buffer ring with it. I'm not too sure about it, because stuff like this makes my brain hurt.&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;I tried to find a solution in which not the "global" extent (env.extent) is used, but one applied directly to the raster of the viewshed analysis. But all my efforts failed so far. Any ideas?&lt;/SPAN&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Sun, 28 Jul 2013 13:39:32 GMT</pubDate>
      <guid>https://community.esri.com/t5/python-questions/multiprocessing-viewsheds-easy/m-p/714251#M55409</guid>
      <dc:creator>nb</dc:creator>
      <dc:date>2013-07-28T13:39:32Z</dc:date>
    </item>
    <item>
      <title>Re: Multiprocessing viewsheds (easy)</title>
      <link>https://community.esri.com/t5/python-questions/multiprocessing-viewsheds-easy/m-p/714252#M55410</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;BLOCKQUOTE class="jive-quote"&gt;update: well, it seems like everytime one process (from the 8 different which are running simmultainiously) sets the environment extent to a certain extent, another process that is also running through the script uses this "global" extent and tries to create the next multiple bufferring with it. I'm not too sure about it, because stuff like this makes my brain hurt.&lt;BR /&gt;I tried to find a solution in which not the "global" extent (env.extent) is used, but one directly for the raster of the viewshed analysis. But all my efforts failed so far. Any ideas?&lt;/BLOCKQUOTE&gt;&lt;BR /&gt;&lt;SPAN&gt; &lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;OK. Good job working it out! It is really useful to be able to do that.&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;Env stands for environment; the things you set in there affect all Arc tools. It is called a 'global' variable, as any running tool will check its value when it needs to. This can be quite problematic when parallelising, because if 8 worker processes are running near simultaneously the value will get changed 8 times, and you can't really guarantee what the value will be when each process accesses it later on. The same is true for the scratch workspace - this should not be set in the worker process code; if you need to, make your own temporary folders/GDBs. The moral of the story is that &lt;/SPAN&gt;&lt;STRONG&gt;you cannot reliably set &lt;SPAN style="text-decoration:underline;"&gt;any &lt;/SPAN&gt;environment variables in your worker process.&lt;/STRONG&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;The only thing you &lt;/SPAN&gt;&lt;STRONG&gt;can &lt;/STRONG&gt;&lt;SPAN&gt;do is what you were trying to do: somehow define the extent so that it applies only to the files being used by the worker process. To do this you need to understand the tools really well... I have no experience with these tools, sorry.&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;I also notice that there is a lot of code in your worker process (~23 functional lines); this is way too much. It should be stripped back to the bare minimum: only the most CPU-intensive parts and the absolutely essential components that support them. What I want you to do is run your worker process sequentially with timers around every operation. This will be a pain, but it is essential for working out what you actually need to parallelise. You can do it like this:&lt;/SPAN&gt;&lt;BR /&gt;&lt;PRE class="lia-code-sample line-numbers language-none"&gt;tS = time.clock()
### arcpy.SomeOperation()
arcpy.AddMessage('SomeOperation took: %.2f seconds' % (time.clock() - tS))&lt;/PRE&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;When you read my blog, did you work through all three posts on multiprocessing? I tried to make it describe a simple and clear process for getting a general problem to parallelise, but it could still need some work to increase clarity. I suggest you do each one in order and copy any code across by manually typing it - this really helps lock it in. If you are still getting stuck with the posts, please let me know what is confusing and I will try to improve them!&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;You also mention that apply_async doesn't work for you - can you provide any more information?&lt;/SPAN&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Sun, 12 Dec 2021 06:35:00 GMT</pubDate>
      <guid>https://community.esri.com/t5/python-questions/multiprocessing-viewsheds-easy/m-p/714252#M55410</guid>
      <dc:creator>StacyRendall1</dc:creator>
      <dc:date>2021-12-12T06:35:00Z</dc:date>
    </item>
  </channel>
</rss>

