So I have a Python script that crawls through a parent directory and compacts all personal and file geodatabases. However, if there is a lock on any of the GDBs, the script fails. What's the best way to have the script skip over any GDBs that contain lock files? Using ESRI 10.2.2. Here is the script:
import arcpy, os, sys

# set workspace
arcpy.env.workspace = arcpy.GetParameterAsText(0)

# total size of GDBs in workspace before compact
def get_size(start_path=arcpy.env.workspace):
    total_size = 0
    for dirpath, dirnames, filenames in os.walk(start_path):
        for f in filenames:
            fp = os.path.join(dirpath, f)
            total_size += os.path.getsize(fp)
    return round((total_size / float(1048576)), 2)

print "Total Size before Compact " + str(get_size()) + "mb"
results = open("R:\\GIS\\Compact_Results\\results.txt", "w")
results.write("Total Size before Compact " + str(get_size()) + "mb" + '\n')

# loop through all folders in the workspace
for dirpath, dirnames, filenames in arcpy.da.Walk(arcpy.env.workspace, datatype="Container"):
    for dirname in dirnames:
        if ".gdb" in dirname:
            # compact all gdbs in the user-defined workspace
            arcpy.Compact_management(os.path.join(dirpath, dirname))
            print "Successfully compacted " + os.path.join(dirpath, dirname)
            results.write("Successfully compacted " + os.path.join(dirpath, dirname) + '\n')
    for dirname in dirnames:
        if ".mdb" in dirname:
            # compact all mdbs in the user-defined workspace
            arcpy.Compact_management(os.path.join(dirpath, dirname))
            print "Successfully compacted " + os.path.join(dirpath, dirname)
            results.write("Successfully compacted " + os.path.join(dirpath, dirname) + '\n')

# total size of GDBs in workspace after compact
print "Total Size after Compact " + str(get_size()) + "mb"
results.write("Total Size after Compact " + str(get_size()) + "mb" + '\n')
results.close()
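For what it's worth, the size-reporting part of the script doesn't depend on arcpy at all, so it can be pulled out and tested on its own. A minimal standalone version (the name `get_size_mb` is mine, not from the script above):

```python
import os

def get_size_mb(start_path):
    """Total size of all files under start_path, in megabytes."""
    total_size = 0
    for dirpath, dirnames, filenames in os.walk(start_path):
        for f in filenames:
            total_size += os.path.getsize(os.path.join(dirpath, f))
    # 1048576 bytes per MB, rounded to two decimals
    return round(total_size / float(1048576), 2)
```

This makes it easy to sanity-check the before/after numbers against a folder of known size.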
Thanks for taking a look and any help!
Hi Jason,
You could use a try/except statement to skip the geodatabase when the error is encountered. Ex:
# loop through all folders in the workspace
for dirpath, dirnames, filenames in arcpy.da.Walk(arcpy.env.workspace, datatype="Container"):
    for dirname in dirnames:
        if ".gdb" in dirname:
            # compact all gdbs in the user-defined workspace
            try:
                arcpy.Compact_management(os.path.join(dirpath, dirname))
                print "Successfully compacted " + os.path.join(dirpath, dirname)
                results.write("Successfully compacted " + os.path.join(dirpath, dirname) + '\n')
            except:
                print "Skipped " + os.path.join(dirpath, dirname) + " due to lock."
    for dirname in dirnames:
        if ".mdb" in dirname:
            # compact all mdbs in the user-defined workspace
            try:
                arcpy.Compact_management(os.path.join(dirpath, dirname))
                print "Successfully compacted " + os.path.join(dirpath, dirname)
                results.write("Successfully compacted " + os.path.join(dirpath, dirname) + '\n')
            except:
                print "Skipped " + os.path.join(dirpath, dirname) + " due to lock."
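The skip-and-log shape of that loop can be exercised without ArcGIS installed. Here's a rough sketch where `compact` is a stand-in I wrote for `arcpy.Compact_management` (it just fails when a `.lock` file is present), not the Esri API:

```python
import os

def compact(path):
    """Stand-in for arcpy.Compact_management: raises if a lock file is present."""
    if any(f.endswith(".lock") for f in os.listdir(path)):
        raise IOError("database is locked: " + path)

def compact_all(workspace, log):
    """Try to compact every .gdb/.mdb folder under workspace; log and skip locked ones."""
    for dirpath, dirnames, filenames in os.walk(workspace):
        for dirname in dirnames:
            if dirname.endswith((".gdb", ".mdb")):
                target = os.path.join(dirpath, dirname)
                try:
                    compact(target)
                    log.append("Successfully compacted " + target)
                except Exception:
                    log.append("Skipped " + target + " due to lock.")
```

One caveat: the bare `except` in the real script swallows every error, not just locks, so the "due to lock" message can be misleading; catching `arcpy.ExecuteError` (or logging `arcpy.GetMessages()`) would tell you what actually failed.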
Maybe use os.listdir? It depends on whether you know the lock file will always be named with a ".lock" extension, and I don't think this will work if the lock is currently being held. Also: doesn't the Compact process remove locks that have been left over? Anyway, I did a minimal test on the code below and didn't really pay attention to overall correctness; I just checked whether it deletes the .lock file I added (it does).
# loop through all folders in the workspace
for dirpath, dirnames, filenames in arcpy.da.Walk(arcpy.env.workspace, datatype="Container"):
    for file in os.listdir(dirpath):
        if str(file).find(".lock") > -1:
            print "lockfile: " + dirpath + "\\" + file
            arcpy.Delete_management(dirpath + "\\" + file)
            print "deleted: " + dirpath + "\\" + file
    for dirname in dirnames:
        if ".gdb" in dirname:
            # compact all gdbs in the user-defined workspace
            #arcpy.Compact_management(os.path.join(dirpath, dirname))
            print "Successfully compacted " + dirpath + "\\" + dirname
            #results.write("Successfully compacted " + dirpath + "\\" + dirname + '\n')
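The same os.listdir scan can also be used read-only: detect the .lock file and skip the geodatabase instead of deleting the lock. A small helper along those lines (`has_lock` is my name, not an arcpy call, and it only sees leftover lock files, not locks being actively negotiated):

```python
import os

def has_lock(gdb_path):
    """Return True if the file geodatabase folder contains any .lock files."""
    return any(name.endswith(".lock") for name in os.listdir(gdb_path))
```

You'd then guard the compact with `if not has_lock(target): ...` rather than deleting anything.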
I'll give it a shot. Unfortunately, deleting the lock files is not an option since I don't own the locks. I need to be able to skip over any GDBs/MDBs that contain locks for that reason. Thanks for the input!
Thanks! I ended up using this along with James Crandall's TestSchemaLock suggestion. Much appreciated!
Maybe the TestSchemaLock would work?
for dirpath, dirnames, filenames in arcpy.da.Walk(arcpy.env.workspace, datatype="Container"):
    for dirname in dirnames:
        # TestSchemaLock returns True when a schema lock CAN be acquired,
        # i.e. the geodatabase is free, so compact only in that case
        if ".gdb" in dirname and arcpy.TestSchemaLock(os.path.join(dirpath, dirname)):
            # compact all gdbs in the user-defined workspace
            arcpy.Compact_management(os.path.join(dirpath, dirname))
            print "Successfully compacted " + os.path.join(dirpath, dirname)
            results.write("Successfully compacted " + os.path.join(dirpath, dirname) + '\n')
James, that is a very handy arcpy method!
But it should be said that the Python way is to try it and ask for forgiveness (try/except) instead of running various tests for different potential problems up front.
This approach has the advantage that it lets you handle many possible (even unanticipated) problems with minimal code. For example, what if the compact fails for some reason other than a lock? A pre-check only covers the one case you tested for, so your script will still fail, and you'll have to write yet another check for each new special case to make your script complete.
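To make the contrast concrete, here is a side-by-side sketch of the two styles using a plain file read in place of the compact (the function names are illustrative, not from anyone's script):

```python
import os

def lbyl_read(path):
    """Look Before You Leap: pre-check, which only guards the one anticipated failure
    (and is racy: the file can vanish between the check and the open)."""
    if os.path.exists(path):
        with open(path) as fh:
            return fh.read()
    return None

def eafp_read(path):
    """Easier to Ask Forgiveness than Permission: just try it and
    handle whatever goes wrong, anticipated or not."""
    try:
        with open(path) as fh:
            return fh.read()
    except (IOError, OSError):
        return None
```

The EAFP version also catches permission errors, path-too-long errors, and anything else the LBYL check never thought to test for.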
I think the best thing to do would be to contact the support team; they can figure that out for you, or even better, provide you with some instructions.
Best regards.
Hey Jason, can you mark your question as answered? This will help others looking for a similar solution, and also let the rest of us know that you no longer need help. Thanks!