Yet another Schema Lock post...error with shapefile only

11-03-2011 11:54 AM
LornaMurison
Occasional Contributor
Hi everyone,
I have a script which creates several intermediate files saved into a temporary location.  The user is allowed to select this location.
My code loops through features 300 at a time, performs some geoprocessing operations, and attempts to add a field and populate it with the feature's area. This always works the first time through. On the second pass, though, if the intermediate files are being saved into a folder as shapefiles (rather than into a geodatabase as feature classes), I get a schema lock error when trying to add the field. I have overwrite output set to True, and several other files in the same folder are successfully overwritten before this error occurs. In fact, the very shapefile whose attribute table I am trying to edit has just been successfully overwritten when the error appears.
I know very little about schema locks, so please help 🙂
13 Replies
DanPatterson_Retired
MVP Emeritus
Could you post the script?  Or if the file is to be overwritten anyway, you could try to throw a Delete_management (see help for syntax) into the mix.
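For example (a minimal sketch, assuming the 9.3 geoprocessor object is called gp; outShape is just a placeholder name for the shapefile being overwritten):

# Delete the old output first so any lock held on it is released
if gp.Exists(outShape):
    gp.Delete_management(outShape)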
MathewCoyle
Frequent Contributor
I've found using functions solves a lot of ghost schema locking issues I've had.
LornaMurison
Occasional Contributor
Here is the script; the line that returns the error is flagged with a comment below:
# Import modules
import arcgisscripting
import datetime

# Create geoprocessor object
gp = arcgisscripting.create(9.3)

# Get Parameters
catchments = gp.GetParameterAsText(0)           # Feature Class
idField = gp.GetParameterAsText(1)              # Field
streams = gp.GetParameterAsText(2)              # Feature Class
bufferDistance = int(gp.GetParameterAsText(3))  # Long
landCover = gp.GetParameterAsText(4)            # Feature Class
outFolder = gp.GetParameterAsText (5)           # Workspace
tempWorkspace = gp.GetParameterAsText (6)       # Workspace

gp.overwriteoutput = True
gp.Workspace = tempWorkspace
gp.ScratchWorkspace = tempWorkspace

# Set-up environment variables
desc = gp.describe (catchments)
xmin = desc.extent.xmin
ymin = desc.extent.ymin
xmax = desc.extent.xmax
ymax = desc.extent.ymax
gp.XYDomain = str(xmin - (bufferDistance*2)) + " " + str(ymin - (bufferDistance*2)) + " " + str(xmax + (bufferDistance*2)) + " " + str(ymax + (bufferDistance*2))

# Create names for geoprocessing interim files based on the type of workspace selected
wsDesc = gp.Describe(tempWorkspace)
wsType = wsDesc.WorkspaceType
# If the workspace is a folder
if wsType == "FileSystem":
    ## Interim files must have the ".shp" extension
    streamsd = "streamsd.shp"
    catchStrm = "CatchStrm.shp"
    catchStrmBuff = "CatchStrmBuff.shp"
    catchStrmBuffDiss = "CatchStrmBuffDiss.shp"
    catchStrmBuffInt = "CatchStrmBuffInt.shp"
    selection = "Selection.shp"
    selectionDiss = "SelectionDiss.shp"
    selectionDissInt = "SelectionDissInt.shp"
    tableName = "Rip" + str(bufferDistance) + ".shp"
    appendTable = "AppendMe.dbf"
# If the workspace is a geodatabase
elif wsType == "LocalDatabase": 
    ## Interim files do not require an extension
    streamsd = "streamsd"
    catchStrm = "CatchStrm"
    catchStrmBuff = "CatchStrmBuff"
    catchStrmBuffDiss = "CatchStrmBuffDiss"
    catchStrmBuffInt = "CatchStrmBuffInt"
    selection = "Selection"
    selectionDiss = "SelectionDiss"
    selectionDissInt = "SelectionDissInt"
    tableName = "Rip" + str(bufferDistance)
    appendTable = "AppendMe"
    
# Dissolve streams not allowing multipart to reduce the number of features to be buffered
gp.Dissolve_management (streams, streamsd, "", "", "SINGLE_PART", "")

# Count the total number of catchments
resultCatchments = gp.GetCount_management(catchments)
countCatchments = int(resultCatchments.GetOutput(0))

# Make a feature layer from the catchments
gp.MakeFeatureLayer_management (catchments, "CatchLayer") 

# Create a search cursor to loop through the catchments 300 at a time
rows = gp.SearchCursor (catchments)
row = rows.next()
x = 0
list = []
while x < countCatchments:
    ## If there are fewer than 300 features remaining
    if countCatchments - x < 300:
        remaining = countCatchments - x
        for i in range (0,remaining,1):
            objectID = row.getvalue ("ObjectID")
            row = rows.next()
            x = x+1
    ## Otherwise
    else:
        for i in range (0,300,1):
            objectID = row.getvalue("ObjectID")
            row = rows.next()
            x = x+1
    ## Add the 300th, or last, ObjectID to the List
    list.append (objectID)
    
# Get the ObjectIDs from the list and perform selections and geoprocessing
# How many items are in the list?
items = len(list)
for i in range (0, items, 1):        
    ## If this is the first item on the list
    if i == 0:
        query = '"OBJECTID" <= ' + str(list)
        lastitem = list
    ## Otherwise
    else:
        query = '"OBJECTID" > ' + str(lastitem) + ' AND "OBJECTID" <= ' + str(list)
        lastitem = list
    gp.SelectLayerbyAttribute_management("CatchLayer", "NEW_SELECTION", query)    

    ## Get the number of catchments selected
    resultSelCatch = gp.GetCount_management("CatchLayer")
    countSelCatch = int(resultSelCatch.GetOutput(0))
    
    ## Intersect catchments and streams
    gp.Intersect_analysis ("CatchLayer;" + streamsd, catchStrm, "ALL", "", "INPUT")
    resultIntersect = gp.GetCount_management(catchStrm)
    countIntersect = int(resultIntersect.GetOutput(0))

    ## Buffer intersected catchments and streams
    gp.RepairGeometry_management (catchStrm)
    gp.Buffer_analysis (catchStrm, catchStrmBuff, bufferDistance, "FULL", "ROUND", "NONE")

    ## Dissolve the buffered streams... process may fail if this is done as part of the buffer operation
    gp.Dissolve_management (catchStrmBuff, catchStrmBuffDiss, idField, "", "MULTI_PART", "")

    ## Intersect catchments and buffer
    gp.Intersect_analysis (catchments + ";" + catchStrmBuffDiss, catchStrmBuffInt, "ALL", "", "INPUT")

    ## Select features where buffer ID and catchment ID are the same
    gp.Select_analysis (catchStrmBuffInt, selection, '"'+ idField + '"="' + idField + '_1"')

    ## Dissolve selected features based on ID, allowing multipart
    gp.Dissolve_management (selection, selectionDiss, idField, "", "MULTI_PART", "")

    ## Intersect dissolved features and land cover
    gp.Intersect_analysis (selectionDiss + ";" + landCover, selectionDissInt, "ALL", "", "INPUT")

    ## Dissolve intersected features based on ID and land cover type, allowing multipart
    gp.Dissolve_management (selectionDissInt, tableName, idField + "; SCO_Class", "", "MULTI_PART", "")

    ## Add field (Area) -- this is the line that returns the error
    gp.AddField_Management (tableName, "Area", "DOUBLE", "#", "#", "#", "", "#", "#", "")

    ## Calculate area field
    gp.CalculateField_management (tableName, "Area",  "!" + gp.describe(tableName).shapefieldname + ".AREA@SQUAREMETERS!", "PYTHON_9.3", "")    

    ## If this is the first set
    if i == 0:
        ### Export attribute table
        gp.TableToDBASE_conversion (tableName, outFolder)
    ## Otherwise
    else:
        ### Export the attribute table and append it onto the existing one
        gp.MakeTableView_management (tableName, "RipView")
        gp.CopyRows_management ("RipView", appendTable)
        gp.Append_management (appendTable, outFolder + "\\Rip" + str(bufferDistance) + ".dbf", "NO_TEST")

# Delete the Temporary Files    
gp.Delete_management (catchStrm)
gp.Delete_management (catchStrmBuff)
gp.Delete_management (catchStrmBuffInt)
gp.Delete_management (tableName)
gp.Delete_management (selection)
gp.Delete_management (selectionDiss)
#gp.Delete_management (selectionDissInt)
gp.Delete_management (streamsd)
gp.Delete_management (catchStrmBuffDiss)
gp.Delete_management (appendTable)


Thanks
DanPatterson_Retired
MVP Emeritus
I don't have 9.3, but could you make "Management" in that line lower case? I can't remember if 9.3 is case-sensitive or not.
LornaMurison
Occasional Contributor
I don't have 9.3, but could you make "Management" in that line lower case? I can't remember if 9.3 is case-sensitive or not.

Tried that; it still gives me the schema lock error.

I've found using functions solves a lot of ghost schema locking issues I've had.

I'm not sure what you mean by this...

you could try to throw a Delete_management (see help for syntax) into the mix.

Do you mean within the loop after the file is no longer needed?  I will try.
LornaMurison
Occasional Contributor
Threw in a Delete_management after the final if-else clause, and that didn't fix it either. After the code has run and given me the schema lock error, I can go into ArcMap, add the file in question, and add a field without encountering any locks.

Also tried creating a function:
def AddField (fc):
    "Adds a field to a feature class, hopefully removes any schema locking errors"
    gp.AddField_management (fc, "Area", "DOUBLE", "#", "#", "#", "", "#", "#", "")

....
## Add field (Area)
    #gp.AddField_management (tableName, "Area", "DOUBLE", "#", "#", "#", "", "#", "#", "")
    AddField(tableName)

Same error.
MathewCoyle
Frequent Contributor
Sorry, I wasn't very clear. You should try putting the previous GP tools in a function. The AddField isn't the problem; it is just encountering the lock. It is a previous tool, probably that last Dissolve, that is holding the lock.
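For example, something along these lines (just a sketch, and the function name is only for illustration; the idea is that any references the tool creates go out of scope when the function returns):

def DissolveFC(inFC, outFC, dissolveFields):
    "Run the dissolve inside a function so its references go out of scope on return"
    gp.Dissolve_management(inFC, outFC, dissolveFields, "", "MULTI_PART", "")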

Are you sure your pathnames are not too long? No other processes accessing the data?

May be redundant, but have you tried using TestSchemaLock at various times to see where that particular shapefile is being locked and not released?
http://help.arcgis.com/en/arcgisdesktop/10.0/help/index.html#//000v00000024000000
StacyRendall1
Occasional Contributor III
In a similar vein, arcpy.Exists() seems to help clear out a lot (well... it depends) of locking issues. Hopefully it works the same in 9.3 with the GP object. I.e., place:
gp.Exists(tableName)

just before the line that gives you the error. This normally outputs True or False, but in this case you don't need to worry about that; the mere fact that it queries for the file should be sufficient.
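In Lorna's script that would look something like this (a sketch, reusing her tableName variable):

gp.Exists(tableName)  # return value deliberately ignored; the lookup itself is the point
gp.AddField_management(tableName, "Area", "DOUBLE", "#", "#", "#", "", "#", "#", "")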
LornaMurison
Occasional Contributor
Thanks a lot for your help everyone,

I have tried a combination of what you suggested. I went through all my code and tested for schema locks with gp.TestSchemaLock before every geoprocess. If a lock was available, I went ahead with the process; if it was not, I printed a message saying "unable to get schema lock for file name".
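The check looked roughly like this (a sketch of what I describe above, not my exact code):

def CheckLock(dataset):
    "Print a warning if an exclusive schema lock is not available for dataset"
    if not gp.TestSchemaLock(dataset):
        print "unable to get schema lock for " + dataset
        return False
    return True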

By doing this I found that the second process I run (Make Feature Layer) will not release its schema lock, so I have been using your tips to try to release this lock. I have created a function:

# Define any functions necessary to avoid schema locks
## Create a feature layer from catchments
def MakeFL (catch):
    "Makes a feature layer from the input... hopefully a schema lock will be available for this layer"
    gp.MakeFeatureLayer_management (catch, "CatchLayer")


and used it to make a feature layer:
# Make a feature layer from the catchments
#gp.MakeFeatureLayer_management (catchments, "CatchLayer")
MakeFL (catchments)
exists = gp.exists ("CatchLayer")
print "Was 'CatchLayer' created? " + str(exists)
lockTest = gp.TestSchemaLock ("CatchLayer")
print "A schema lock is available for 'CatchLayer' immediately afer it is created: " + str(lockTest)


The line "print "Was 'CatchLayer' created? " + str(exists)" returns "True", proving that the function does actually create a layer. The line "print "A schema lock is available for 'CatchLayer' immediately after it is created: " + str(lockTest)" returns "False", showing that even though I used a function to create it, a schema lock is still not available for this layer.
I even deleted my workspace folder and re-created it to make sure that there were no files in there getting mixed up from a previous run of the code.
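One thought: could the layer itself be holding a reference to the source shapefile, so that a schema lock can never be available while "CatchLayer" exists? If so, maybe something like this (an untested sketch) would release it once the selections are finished:

# Untested sketch: delete the in-memory layer so it stops referencing the source
gp.Delete_management("CatchLayer")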

Could you please let me know if I have implemented your suggestions correctly, and if I have, do you have any others?? 😛
Thank you!