POST
For a while I had a spate of problems similar to yours, but they seem to have cleared up. I am not sure if I am just writing less lock-prone code, or if an ArcGIS patch along the way fixed something. Here are two things which might help you:

1. Using Exists, Compact, Exists on the GDB in question. This seems to force a refresh and can sometimes clear out the locks. To do it in only one line of code:

    # input GDB is testWS
    [arcpy.Exists(testWS), arcpy.Compact_management(testWS), arcpy.Exists(testWS)]

2. I wrote a function that will kill any process, except the running process, that might have the workspace open. If it is this process that has the workspace open, it repeats the step above indefinitely until the lock disappears. I mostly used this when running scripts that use multiple CPUs at the same time. It can also close ArcMap or ArcCatalog if you accidentally had them open on your workspace. It requires the os library (part of Python), plus the psutil library, which you must download, install and import. It is definitely not fool-proof: it can raise exceptions if a process is found that has the workspace open, but that process closes its handle before the code gets around to terminating it.

To use it, add this to the top of your code:

    import os
    import psutil

    def clearWSLocks(inputWS):
        '''Attempts to clear ArcGIS/arcpy locks on a workspace.

        Two methods:
          1: if ANOTHER process (e.g. ArcCatalog) has the workspace open, that process is terminated
          2: if THIS process has the workspace open, it attempts to clear locks using arcpy.Exists,
             arcpy.Compact and arcpy.Exists in sequence

        Required imports: os, psutil
        '''
        # get the process ID for this process (it is treated differently)
        thisPID = os.getpid()

        # normalise the path
        _inputWS = os.path.normpath(inputWS)

        # get a list of currently running Arc/Python processes
        p_List = []
        for p in psutil.process_iter():
            if ('Arc' in p.name) or ('python' in p.name):
                p_List.append(p.pid)

        # iterate through the processes
        for pid in p_List:
            try:
                p = psutil.Process(pid)
            except psutil.NoSuchProcess:
                p = False

            # if any have the workspace open
            if p and any(_inputWS in fl.path for fl in p.get_open_files()):
                print ' !!! Workspace open: %s' % _inputWS

                # terminate it if it is another process
                if pid != thisPID:
                    print ' !!! Terminating process: %s' % p.name
                    p.terminate()
                else:
                    print ' !!! This process has the workspace open...'
                    # if this process has the workspace open, keep trying while it is open...
                    while any(_inputWS in fl.path for fl in psutil.Process(thisPID).get_open_files()):
                        print ' !!! Trying Exists, Compact, Exists to clear locks: returned %s' % all([arcpy.Exists(_inputWS), arcpy.Compact_management(_inputWS), arcpy.Exists(_inputWS)])

        return True

then use a function call to try and clear the locks, like so:

    clearWSLocks(testWS)

If it often throws exceptions, you could do this (but it may then not clear the locks...):

    try:
        clearWSLocks(testWS)
    except:
        pass

Let me know if either of these helps! I haven't used the function for a while, and had to adapt it a bit for posting, so there may be errors in it... You can use trial and error to work out where you need these within your code.
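The "keep trying until the lock clears" loop in method 2 can be isolated as a small stdlib-only pattern, independent of arcpy and psutil. This is a sketch under my own naming (retry_until is not from the post), and it adds a retry cap so the loop cannot spin forever the way an indefinite while can; in the real function, check() would be the psutil open-files test and action() the Exists/Compact/Exists call.

```python
import time

def retry_until(check, action, attempts=5, delay=0.0):
    """Call action() repeatedly while check() is True, up to `attempts` times.

    Returns True if the condition cleared, False if attempts ran out.
    """
    for _ in range(attempts):
        if not check():
            return True  # condition already clear, nothing to do
        action()
        time.sleep(delay)
    # one final look after the last action
    return not check()
```

Used with a lock-check and a clear-locks callable, this gives the same behaviour as the post's while loop, but with a bounded number of tries.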
Posted 09-08-2013 05:20 PM

POST
Setting the value works for me. You could try printing the values before and after your assignment to confirm this:

    arcpy.AddMessage('Z: ' + arcpy.env.outputZFlag)
    arcpy.env.outputZFlag = "Disabled"
    arcpy.AddMessage('Z: ' + arcpy.env.outputZFlag)

    arcpy.AddMessage('M: ' + arcpy.env.outputMFlag)
    arcpy.env.outputMFlag = "Disabled"
    arcpy.AddMessage('M: ' + arcpy.env.outputMFlag)

It may be that the variable is being set, but some tool later on is not honoring the setting.
Posted 09-05-2013 01:58 PM

POST
Excellent! It is great that you have written out a plan; many people don't, and then get hopelessly lost... It definitely looks possible. Right now we'll get step 1 partly working, just printing to screen. Once that is good we can put the data in a useful data structure, rather than just printing it. I think James' suggestion of using Pandas is good. I haven't used it myself, but I think it works pretty well with time data and plotting functions. So there is something else you will need to look up and have a play with... Do you intend to only ever assess one meter at a time? You have a few options, but for now we may as well keep part 1 collecting all the data, and between part 1 and part 2 define the one METER NUMBER to make the graph of.

Stacy - Thank you for the posts. So far I do not have much code written, since I am still in the research and design phase of this project. I am really looking for advice, and maybe examples of different parts of the overall Python code. So far I have some great information from different people to research, and can start doing some trial and error. The only thing I do not have yet: how do I go about taking a Python script and making a "One-Click" custom button in ArcMap, so that when someone clicks on it, it runs the Python script in the background?
Posted 08-28-2013 07:08 AM

POST
I just did something similar to this yesterday. I have a test model and code that might be a good reference. I just created 2 tables with different row counts to test, and used a model parameter as input:

    import arcpy

    rowcount = int(arcpy.GetParameterAsText(0))
    if rowcount >= 1:
        arcpy.SetParameterAsText(1, "True")
        arcpy.SetParameterAsText(2, "False")
        arcpy.AddMessage("Rows Exist, updating table")
    elif rowcount == 0:
        arcpy.SetParameterAsText(1, "False")
        arcpy.SetParameterAsText(2, "True")
        arcpy.AddMessage("0 rows, updating table")
    else:
        arcpy.AddMessage("Code Didn't Work")
    del rowcount

This is close to the solution that I'm looking for. I'm "simply" attempting to iterate through the outputs of a process and, when an empty geography is created, to delete it. How did you get the outputs from this script to link back to the geoprocessing tools? I.e. I have embedded this revised script in ModelBuilder, but am now stuck on how to connect the Delete tool to the process when rowcount == 0. Thanks in advance for any assistance.
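Stripped of arcpy, the branching in the script above is a pure function of the row count, which makes it easy to see which model branch fires for a given input. A minimal sketch, with a function name of my own (the script itself pushes these strings out through SetParameterAsText as the two boolean precondition parameters):

```python
def branch_flags(rowcount):
    """Return the two output parameter values (rows_exist, no_rows)
    as the strings the script passes to SetParameterAsText."""
    if rowcount >= 1:
        return ("True", "False")   # rows exist: run the update branch
    elif rowcount == 0:
        return ("False", "True")   # empty: run the delete/cleanup branch
    else:
        return (None, None)        # negative count: neither branch fires
```

In ModelBuilder, each of these output parameters would then be set as a precondition on the tool (e.g. Delete) that should only run on that branch.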
Posted 05-30-2014 09:36 AM

POST
How do I do this if I'm trying to add an integer to each hatching value (to match up mileposts which don't start at zero) and round to the nearest tenth? If I use a hatching label expression, it won't let me also select the precision value, so I need to include the rounding I want in the hatching label expression itself, and I don't know enough Python to do this. Also, I'm using a measured polyline with one record, so I don't have any fields to base it on... except esri_measure, I think. I'm new to using measured polylines.
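A sketch of the kind of Python label expression being asked about. This is an assumption, not a confirmed answer: the FindLabel name follows ArcMap's Python label expression convention, the 42.0 offset is a placeholder for whatever milepost the line actually starts at, and the esri_measure argument stands in for the measure value passed into the expression.

```python
def FindLabel(esri_measure):
    """Shift the hatch measure by a milepost offset and round to one decimal."""
    offset = 42.0  # placeholder: the milepost value where the line starts
    return '%.1f' % (float(esri_measure) + offset)
```

The '%.1f' format both rounds to the nearest tenth and fixes the displayed precision, which is what the hatching dialog would not let you set alongside a custom expression.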
Posted 05-13-2014 03:11 PM

POST
Hello khibma, I hope you didn't spend too much time writing your answer; I have absolutely no problem with the scratch directory. The problem comes from log files that are not released when the script is executed from the toolbox. I solved it by writing to a text file instead of using the 'logging' facilities.
Posted 08-12-2013 12:14 AM

POST
That looks better. You should read the docs for ListDatasets. You are calling it with arcpy.ListDatasets("*.dwg", "CAD"), but "CAD" is not a valid feature type for this tool, so it returns a value of None. You can tell that something is fishy from this line in your test:

    >>> print '1: ', arcpy.ListDatasets("*.dwg", "CAD")
    1:  None

That means your next line, for fd in arcpy.ListDatasets("*.dwg", "CAD"):, is actually being evaluated as for fd in None:, which doesn't make sense, so it causes an error. I hope it is clear to you what is actually going on here... To fix it, you can try the code below. I have just removed the "CAD" part in the ListDatasets call; it will default to listing all files with a .dwg extension.

    # Name: ImportCADandMerge.py
    # Description: Imports and merges polylines from one workspace into a single feature class

    # Import system modules
    import arcpy
    from arcpy import env

    env.workspace = r"C:\Users\jjudycki\Desktop\Vectren\130103.00 - VEDO 2014 Groups"

    # Create a value table that will hold the input feature classes for Merge
    vTab = arcpy.ValueTable()

    print '1: ', arcpy.ListDatasets("*.dwg")

    # Step through each dataset in the list
    for fd in arcpy.ListDatasets("*.dwg"):
        print '2: ', fd
        layername = fd + "_Layer"
        # Select only the Polyline features on the drawing layer EX-PAVE
        arcpy.MakeFeatureLayer_management(fd + "/Polyline", layername, "\"Layer\" = 'EX-PAVE'")
        print '3: ', layername
        vTab.addRow(layername)

    # Merge the CAD features into one feature class
    arcpy.Merge_management(vTab, r"C:\Users\jjudycki\Desktop\Vectren\130103.00 - VEDO 2014 Groups\VEDO_2014_Drawings.gdb")
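The underlying failure mode here is plain Python, not arcpy: iterating over None raises a TypeError. A cheap defensive idiom, sketched with a helper name of my own, is to fall back to an empty sequence whenever a listing call may return None:

```python
def iterate_safely(result):
    """Yield items from result, treating a None result as an empty sequence.

    Useful when a listing function (like arcpy's List* tools) can return
    None instead of an empty list on a bad filter.
    """
    for item in (result or []):
        yield item
```

With arcpy, the same idea is just `for fd in (arcpy.ListDatasets("*.dwg") or []):`, which loops zero times instead of crashing when the listing returns None.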
Posted 08-08-2013 01:30 PM

POST
I was getting this error, "AttributeError: 'NoneType' object has no attribute 'dataFrames'", today in a script that I was running against all STARTED services in AGS (10.2.2). I'm looping through all those services, finding the MXDs and searching to see if the FGDB I need to update is in the MXD, then creating a list of the services I need to STOP so I can replace the FGDB and then restart them. This error was showing up on only one of my services (and therefore crashing). It turns out that, for some reason, that MXD must have been saved as 10.3.x, or so the error seems to indicate when I try to open it with ArcMap 10.2.2. The error occurred on lyrList = arcpy.mapping.ListLayers(mxd), so I wrapped it in a ListDataFrames loop, with no luck. Since there is no easy or reliable way to check the version of an .mxd (from several hours of searching and testing half a dozen suggestions), I found the try: block my best option, and I'll just include this service in my STOP list, just in case. I thought I would give a thumbs up on the try: solution for anyone who might see this... we do what we need to get things done sometimes. Below is a snippet of my code, still under development, for anyone interested (it will not run on its own):

    # ....
    import sys

    def myMsgs(message):
        arcpy.AddMessage(message)
        print(message)

    updateServices = []
    for startedService in myServices:
        mxdFound = False
        update = False
        #myMsgs("Reviewing service: {0}".format(startedService))
        theServicePath = os.path.join(servicesDir, startedService)
        if not arcpy.Exists(theServicePath):
            myMsgs("...we have a problem")
            sys.exit()  # a bare 'exit' is a no-op expression; sys.exit() actually stops the script
        else:
            myMsgs("Reviewing service: {0}".format(startedService))
            for root, dirs, files in os.walk(theServicePath):
                for fn in files:
                    fullPath = os.path.join(root, fn)
                    basename, extension = os.path.splitext(fn)
                    if not extension == ".mxd":
                        pass
                    else:
                        #myMsgs("   mxd file found")
                        mxd = arcpy.mapping.MapDocument(fullPath)
                        try:
                            # Do stuff with the MXD using arcpy.mapping
                            for df in arcpy.mapping.ListDataFrames(mxd):
                                lyrList = arcpy.mapping.ListLayers(mxd)
                                mxdFound = True
                                for lyr in lyrList:
                                    if lyr.isFeatureLayer and update == False:
                                        if lyr.supports("dataSource") and update == False:
                                            theSource = lyr.dataSource
                                            if updateDS[0] in theSource:
                                                update = True
                                                myMsgs("   update {0}".format(startedService))
                                                updateServices.append(startedService)
                        except AttributeError:
                            myMsgs("!!!! WARNING: This MXD has unreadable dataframes\n{} !!!!".format(mxd.filePath))
                            myMsgs("  -> will stop as a precaution....")
                            updateServices.append(startedService)  # edit... forgot this in the original post
                        del mxd

    for a in updateServices:
        myMsgs("will stop {}".format(a))
    # ....
Posted 08-11-2016 05:08 PM

POST
I have this same problem: lots of rasters using in_memory. I have even used arcpy.Delete_management to remove them when I am done, but I still get this error.
Posted 05-11-2017 12:14 PM

POST
Hi guys, this post is just to let you know that I had the courage to uninstall ArcGIS and delete the Python folder (I also deleted all the Python 2.7 software, just keeping PyScripter and the tool for Visual Studio). I redid the exercise and everything worked like a charm, so it was really a problem with having that other Python installation. Thank you so much for all your help! :D
Posted 08-13-2013 11:59 AM

POST
Update: well, it seems like every time one process (of the 8 running simultaneously) sets the environment extent, another process that is also running through the script picks up this "global" extent and tries to create the next multiple-ring buffer with it. I'm not too sure about it, because stuff like this makes my brain hurt. I tried to find a solution in which not the "global" extent (env.extent) is used, but one applied directly to the raster of the viewshed analysis. But all my efforts have failed so far. Any ideas?

OK. Good job working it out! It is really useful to be able to do that. Env stands for environment; the things you set in there affect all Arc tools. It is called a 'global' variable, as any tool that runs will check its value when needed. This can be quite problematic when parallelising problems: if 8 worker processes are running near-simultaneously, the value will get changed 8 times, and you can't really guarantee what the value will be when each process accesses it later on. The same is true for the scratch workspace; this should not be set in the worker process code. If you need to, make your own temporary folders/GDBs. The moral of the story is that you cannot reliably set any environment variables in your worker process. The only thing you can do is what you were trying to do: somehow define the extent so that it applies only to the files being used by the worker process. To do this you need to understand the tools really well, and I have no experience with these particular tools, sorry...

I also notice that there is a lot of code in your worker process (~23 functional lines); this is way too much. It should be stripped back to the bare minimum: only the most CPU-intensive parts and the absolutely essential components to support them. What I want you to do is run your worker process sequentially, with timers around every operation. This will be a pain, but it is essential for working out what is the actual thing you need to parallelise. You can do it like this:

    tS = time.clock()
    ### arcpy.SomeOperation()
    arcpy.AddMessage('SomeOperation took: %.2f seconds' % (time.clock() - tS))

When you read my blog, did you work through all three posts on multiprocessing? I tried to make them describe a simple and clear process for getting a general problem to parallelise, but they could still need some work to increase clarity. I suggest you do each one in order and copy any code across by manually typing it; this really helps lock it in. If you are still getting stuck with the posts, please let me know what is confusing and I will try to improve them! You also mention that apply_async doesn't work for you; can you provide any more information?
Posted 07-28-2013 01:44 PM

POST
Then you would think it would be correct, unless it changed between 10.1 and 10.2. Hard to say without doing a lot more digging... which, since you got it working, is probably just worth noting, I guess.
Posted 09-25-2015 11:42 AM

POST
What is this line supposed to be doing?

    arcpy.DeleteRows_management(analysis_layer + "\\Routes")

I am not sure that it is necessary, and it may cause other problems. Also, you can just access Python directly in these lines, with:

    arcpy.CalculateField_management(table_view, "Pop", "Name[:Name.find(\" \")]")
    arcpy.CalculateField_management(table_view, "PW", "Pop * Total_Length")

I am still a bit dubious as to whether or not your code will even work with multiple processes. It seems like this line:

    arcpy.UpdateAnalysisLayerAttributeParameter_na(analysis_layer, "Drop_list", "Dropped Road", current_road)

makes a change to the base analysis layer that is used by all processes, not just by this function. This would definitely cause issues, such as race conditions. I suggest you test whether it will work with multiprocessing at all by stripping everything back and running it with just something like this (I haven't tested it; the last line might need to be CopyFeatures or something else):

    def findRoadImportance(current_road, analysis_layer, table_view, OWS):
        arcpy.UpdateAnalysisLayerAttributeParameter_na(analysis_layer, "Drop_list", "Dropped Road", current_road)
        arcpy.Solve_na(analysis_layer, "SKIP", "TERMINATE", "")
        arcpy.Copy_management(analysis_layer + "\\Routes", OWS + "\\" + str(int(current_road)) + "_result.shp")

Then you can manually check that the outputs are consistent with what you might expect, i.e. that the dropped road is in fact missing from the routes. If you need to, just make a tiny test dataset with, say, four roads in it... If it turns out that there are issues, you might be able to copy the input analysis layer (and drop the roads) before the layer is passed to the function for each process.
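The slicing expression in the first CalculateField call is plain Python string handling, and it has a subtle edge case worth knowing. A sketch, with a function name of my own:

```python
def before_first_space(name):
    """Return the text before the first space, or the whole string if none.

    Mirrors the Name[:Name.find(" ")] expression, but handles the no-space
    case: find() then returns -1, and the slice [:-1] would silently drop
    the last character instead of keeping the whole string.
    """
    idx = name.find(" ")
    return name if idx == -1 else name[:idx]
```

So if any Name values are guaranteed to contain a space, the raw slice is fine; if not, the guarded version avoids a quiet one-character truncation.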
Posted 07-16-2013 04:56 PM