Hi Sebastian,

That's a good idea to use the buffer geometry object instead. In my case I need to use those point buffer extents in several later scripts, so I figured it would be faster to build them once and reference those.

The block idea is a good one too. I hadn't thought of that... Maybe in v2.0... A simple way might be to cut my study area into quadrants (4 rectangles) and simply ID the point buffers that exist "entirely within" a quadrant polygon, then select them four at a time (a random selection of quad pairs from QUAD_IDs 1, 2, 3, 4) and process each quad pair at once. For my purpose I note that limiting the extent makes it run about 10 times as fast as the entire study area. So it might take some fancier algorithm design...

I really like your GRASS workaround... Do you have any example scripts that make use of that? I'd love to see them. I have noted in the past that the ESRI Cost<whatever> algorithms are only 8-direction, which leads to some not-so-natural-looking outputs: http://forums.arcgis.com/threads/44464-Problems-with-cost-path-analysis-not-shortest-distance?p=1517.... It sure would be nice if they could include a knight's-move option in the Cost* functions. Hint, hint to any ESRI people that might see this...

In my own experience I have noted some funky behavior of random crashes in SA tools... In the past I was able to "fix" the problem by using only raster inputs... For example, converting the input point feature in CostDistance or Watershed to a raster first, and then using the raster version of the point as input (yes, even if the dang point was smack in the middle of the pixel).

An interesting note: I have this new fancy computer that absolutely screams (Xeon 2687 with SSDs in RAID0). I am really excited to have this machine (and its 8, but effectively 16, cores) so that I can finally really leverage some subprocess scripting and SA functions (prior to v10 you couldn't run concurrent SA tools).
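Just to make the quadrant idea concrete, here's a rough pure-Python sketch of what I mean - all the names and extent numbers are made up, and the real version would of course read the buffer extents from the feature class and do the pairing with selections on a QUAD_ID field:

```python
# Hypothetical sketch of the quadrant idea: tag each point-buffer
# extent (xmin, ymin, xmax, ymax) with a QUAD_ID if it lies entirely
# within one quadrant, then process quadrants two at a time.
import itertools
import random

# Study area extent (xmin, ymin, xmax, ymax) - hypothetical numbers
study = (0.0, 0.0, 100.0, 100.0)
midx = (study[0] + study[2]) / 2.0
midy = (study[1] + study[3]) / 2.0

# The four quadrant rectangles, keyed by QUAD_ID 1-4
quads = {
    1: (study[0], midy, midx, study[3]),  # upper left
    2: (midx, midy, study[2], study[3]),  # upper right
    3: (study[0], study[1], midx, midy),  # lower left
    4: (midx, study[1], study[2], midy),  # lower right
}

def contains(outer, inner):
    """True if rectangle 'inner' lies entirely within 'outer'."""
    return (inner[0] >= outer[0] and inner[1] >= outer[1] and
            inner[2] <= outer[2] and inner[3] <= outer[3])

def quad_id(buffer_extent):
    """Return the QUAD_ID whose quadrant entirely contains the buffer
    extent, or None if the buffer straddles a quadrant boundary."""
    for qid, rect in quads.items():
        if contains(rect, buffer_extent):
            return qid
    return None

# Process quadrants two at a time, in random pair order
pairs = list(itertools.combinations(quads, 2))  # (1,2), (1,3), ... 6 pairs
random.shuffle(pairs)
for pair in pairs:
    pass  # e.g. select the buffers with QUAD_ID in 'pair' and run the SA tool
```

Buffers that straddle a quadrant boundary come back as None and would need their own handling (a fifth "leftovers" batch, maybe), which is part of why it might take some fancier algorithm design.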
I noted that my subprocesses (15 concurrent ones!) occasionally "collided" and would randomly crash and fail... Not necessarily CostDistance... but any SA tool. Big letdown after I thought I'd be able to do all this fancy concurrent SA processing finally. I guessed that all the separate SA processes must be *** &^%$ing STILL*** trying to write some scratch temp file somewhere (occasionally all at the same time). So after some real hair pulling, I figured out that creating some dummy TEMP directories BEFORE launching the subprocess script completely fixed the problem - *** boy was I happy to get that thing working!*** So my point: could it be that you have concurrent SA processes that are all writing to the same TEMP folder and occasionally "colliding"?

If it helps, here's the latest incarnation of my "master" script that calls the CostDistance-building script as a subprocess:

# Description
# -----------
# This script is the master script that builds all the cost distance grids
# Written for Python version: 2.7.2 and above (PythonWin)
# Written for ArcGIS version: 10.1 SP0
# Fuzzy logic score scale of 0-100, 50 being neutral
# Author: Chris Snyder, WA Department of Natural Resources, chris.snyder(at)wadnr.gov
# Date created: 4/1/2010
# Last updated: 9/27/2012, csny490
try:
    #Import system modules
    import sys, os, time, traceback, subprocess

    #Defines some functions
    def launchProcess(jobId):
        global jobDict
        tmpDirName = r"C:\Temp\gp_tmp_" + time.strftime('%Y%m%d%H%M%S')
        os.mkdir(tmpDirName)
        os.environ["TEMP"] = tmpDirName
        os.environ["TMP"] = tmpDirName
        os.environ['TEMPDIR'] = tmpDirName
        inputVar1 = jobDict[jobId][2][0]
        inputVar2 = jobDict[jobId][2][1]
        inputVar3 = jobDict[jobId][2][2]
        jobDict[jobId][4] = subprocess.Popen([jobDict[jobId][0], jobDict[jobId][1], str(inputVar1), str(inputVar2), str(inputVar3)], shell=False)
        jobDict[jobId][3] = "IN_PROGRESS" #Indicate the job is 'IN_PROGRESS'
        time.sleep(2) #Give the subprocesses (especially arcpy-enabled subprocesses) some time to catch up

    def showPyMessage():
        try:
            print >> open(logFile, 'a'), str(time.ctime()) + " - " + str(message)
            print str(time.ctime()) + " - " + str(message)
        except:
            pass

    #Specifies the root directory variable, defines the logFile variable, and does some minor error checking...
    root = r"C:\csny490\nso_model"
    if os.path.exists(root) == False:
        print "ERROR: Specified root directory " + root + " does not exist... Exiting script!"; time.sleep(3); sys.exit()
    scriptName = sys.argv[0].split("\\")[-1].split(".")[0] #Gets the name of the script without the extension
    dateTimeStamp = time.strftime('%Y%m%d%H%M%S') #in the format YYYYMMDDHHMMSS
    logFile = root + "\\" + scriptName + "_" + dateTimeStamp + ".log" #Creates the logFile variable
    if os.path.exists(logFile) == True:
        os.remove(logFile)
        message = "Deleting existing log file " + logFile + "... Recreating " + logFile; showPyMessage()

    #Process: Make sure the "cost_grids" dir exists...
    if os.path.exists(root + "\\cost_grids") == False:
        os.mkdir(root + "\\cost_grids")

    #Process: Defines some variables
    scenarioDict = {"NA": 'No Action', "LP": 'Landscape'}
    decadeList = [0,1,2,3,4,5,6,7,8,9]
    pathToScenarioDecadeGrids = root + "\\decade_scenario_grids"

    #Determine how many processes to run concurrently
    numberOfProcessorsToUse = 15 #The number of processors you (the user) want to use
    if numberOfProcessorsToUse > int(os.environ.get("NUMBER_OF_PROCESSORS")):
        numberOfProcessorsToUse = int(os.environ.get("NUMBER_OF_PROCESSORS"))

    #Process: Define the python.exe path and slaveScript path
    childProcExePath = os.path.join(sys.prefix, "python.exe")
    childScriptPath = r"\\snarf\am\div_lm\ds\gis\projects\oesf_eis_draft_second\nso_models\scripts\nso_model_scripts_v20120912\STEP5B_path_distance_slave_v101_20120912.py"

    #Process: Loop through the decades and scenarios
    jobDict = {}
    for decade in decadeList:
        for scenario in scenarioDict:
            jobDict[(scenario, decade)] = [childProcExePath, childScriptPath, [root, scenario, decade], "NOT_STARTED", None]

    #Process: Get those jobs going!
    while len([i for i in jobDict if jobDict[i][3] != 'NOT_STARTED']) < numberOfProcessorsToUse:
        launchProcess([i for i in jobDict if jobDict[i][3] == 'NOT_STARTED'][0]) #Feed the appropriate jobId to the launchProcess() function
    while len([i for i in jobDict if jobDict[i][3] in ("SUCCEEDED", "FAILED")]) < len(jobDict):
        time.sleep(10)
        for jobId in [i for i in jobDict if jobDict[i][3] == 'IN_PROGRESS' and jobDict[i][4].poll() != None]: #if a subprocess is listed as 'IN_PROGRESS' and polls as not-None (i.e. "done", but success or failure unknown)
            if jobDict[jobId][4].returncode == 0: #return code of 0 indicates success (no sys.exit(1) command encountered in the child process)
                jobDict[jobId][3] = "SUCCEEDED"
                message = "SUCCESS: " + str(jobId) + " completed successfully..."; showPyMessage()
            if jobDict[jobId][4].returncode > 0: #return code of 1 (or another non-zero integer) indicates failure (a sys.exit(1) command was encountered in the child process)
                jobDict[jobId][3] = "FAILED"
                message = "ERROR: " + str(jobId) + " failed..."; showPyMessage()
            if len([i for i in jobDict if jobDict[i][3] == 'NOT_STARTED']) > 0: #if there are still jobs = 'NOT_STARTED', launch the next one in line
                launchProcess([i for i in jobDict if jobDict[i][3] == 'NOT_STARTED'][0])

    #Process: Final check for failures - if any, exit(1)
    if len([i for i in jobDict if jobDict[i][3] == 'FAILED']) > 0:
        message = "ERROR: These jobs were an epic fail:"; showPyMessage()
        for jobId in [i for i in jobDict if jobDict[i][3] == 'FAILED']:
            message = str(jobId) + " failed..."; showPyMessage()
        sys.exit(1)
    message = "ALL DONE!"; showPyMessage()

except:
    message = "\n*** PYTHON ERRORS *** "; showPyMessage()
    message = "Python Traceback Info: " + traceback.format_tb(sys.exc_info()[2])[0]; showPyMessage()
    message = "Python Error Info: " + str(sys.exc_type) + ": " + str(sys.exc_value) + "\n"; showPyMessage()
    sys.exit(1)
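For what it's worth, the core of the TEMP-isolation trick can be written more compactly in modern Python: tempfile.mkdtemp guarantees a unique directory name even if two jobs launch within the same second (a plain timestamp name could in principle collide there), and passing env= to Popen gives each child its own environment without touching the parent's. A minimal sketch, not my actual script - the worker command and job IDs here are hypothetical:

```python
# Minimal sketch: launch each child process with TEMP/TMP pointing at
# its own private scratch directory, so concurrent processes never
# write temp files into the same folder.
import os
import subprocess
import sys
import tempfile

def launch_isolated(cmd, job_id):
    """Launch cmd with TEMP/TMP/TEMPDIR pointing at a private directory."""
    tmp_dir = tempfile.mkdtemp(prefix=f"gp_tmp_{job_id}_")  # unique by construction
    env = dict(os.environ)  # copy, so the parent environment is untouched
    env["TEMP"] = tmp_dir
    env["TMP"] = tmp_dir
    env["TEMPDIR"] = tmp_dir
    return subprocess.Popen(cmd, env=env), tmp_dir

# Hypothetical worker: just echo the TEMP it was handed
proc, tmp_dir = launch_isolated(
    [sys.executable, "-c", "import os; print(os.environ['TEMP'])"], "job1")
proc.wait()
```

You'd still want to clean the scratch directories up afterwards (shutil.rmtree once each job reports done), which my script above punts on.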