Why does the arcpy.Copy_management() tool require an exclusive schema lock?

07-07-2015 12:03 AM
SuryaKant
New Contributor III

Sometimes when I run the arcpy.Copy_management tool, it throws the exception "ERROR 000464: Cannot get exclusive schema lock. Either being edited or in use by another application. Failed to execute (Copy).". Why does the Copy tool require an exclusive schema lock?

8 Replies
DanPatterson_Retired
MVP Emeritus

Are you trying to overwrite an existing file? Is the file loaded in ArcMap? In either case, use a different filename for the output. If you want to use the same filename, remove the original from ArcMap, use Delete_management to get rid of it, then run CopyFeatures. If you are doing this manually, that is the procedure.

If you are using code, the procedure is the same. For example, the snippet below overwrites an existing file. In this case a shapefile that is not loaded in ArcMap is checked to see if it exists (output_shp is the full filename and path to the file); if it does, it is deleted, and then CopyFeatures creates its new incarnate.

...
if arcpy.Exists(output_shp):                   # overwrite any existing versions
    arcpy.Delete_management(output_shp)
arcpy.CopyFeatures_management(polygons, output_shp)
...
BlakeTerhune
MVP Regular Contributor

+1 for using incarnate!

SuryaKant
New Contributor III

Yes, I am trying to overwrite an existing file in a backup SDE. This dataset is used by many users for editing or viewing (I am not sure when). Also, the dataset is not versioned, does not have a GlobalID, and I cannot change the schema or any other parameter. For this case, I wrote a small Python script using the arcpy module that runs on a scheduler at night to copy the feature datasets. I really want to avoid this copy failure. Is there any way to avoid it? Thanks for your reply.
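One pragmatic mitigation for a scheduled job is to retry the copy a few times before giving up, since locks from interactive sessions often clear on their own. A minimal sketch (pure Python; `do_copy` is a stand-in for the real `arcpy.Copy_management` call, and the function name here is illustrative, not arcpy API):

```python
import time

def copy_with_retry(do_copy, attempts=5, delay_seconds=60):
    """Call do_copy() up to `attempts` times, sleeping between failures.

    Returns True on the first success, False if every attempt raised.
    """
    for attempt in range(attempts):
        try:
            do_copy()
            return True
        except Exception:           # e.g. ERROR 000464 schema lock
            if attempt < attempts - 1:
                time.sleep(delay_seconds)
    return False
```

In the nightly script you would pass something like `lambda: arcpy.Copy_management(src, dst)` as `do_copy`, and log or email the result so a failed night is visible the next morning.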

DanPatterson_Retired
MVP Emeritus

In addition to my other suggestion...

You can also use os.path.isfile to check for the existence of a file instead of os.path.exists, if you need to be sure it's a file.

Return True if path is an existing regular file. This follows symbolic links, so both islink() and isfile() can be true for the same path.

import os.path
os.path.isfile(fname)
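A quick self-contained demonstration of the difference (pure Python; the `output.shp` name is just an illustrative stand-in, created here as an empty file):

```python
import os.path
import tempfile

with tempfile.TemporaryDirectory() as d:
    fname = os.path.join(d, "output.shp")
    open(fname, "w").close()         # create an empty stand-in file
    print(os.path.isfile(fname))     # True: a regular file
    print(os.path.isfile(d))         # False: a directory, not a file
    print(os.path.exists(d))         # True: exists, but is not a file
```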

That will allow you to...

  • close the application that is creating the lock, ensure that the lock is gone, then run the copy, or
  • alter the filename that you are using for the output to CopyFeatures

The schema lock error should really report why there is a lock in the first place and what is holding it. It is also interesting to note that the lock file has a size of 0 bytes, and even if you copy the *.lock file to another location to examine it, there is nothing there to examine; it is an arc* process that creates it.

If you want to find out what holds the lock and you are in a Windows environment, use Ctrl+Alt+Del to bring up Task Manager and compare the number in the lock file name to the process and the username shown there, as in the image below. Note that there is nothing you can do about it except say... cool! At least I confirmed that it was ArcMap producing the lock on one of my shapefiles. Also, disabling the application C:\Program Files (x86)\ArcGIS\Desktop10.3\bin\AppROT.exe has no effect, as some threads have noted.

[Image: Lock_Files.png]
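For an unattended job, it can also help to log any *.lock files next to the data before failing. A small sketch (pure Python; the helper name is illustrative, and since lock file naming differs between shapefiles and geodatabase versions, it only reports the names rather than trying to parse process IDs out of them):

```python
import glob
import os

def find_lock_files(folder):
    """Return the names of all *.lock files directly inside `folder`."""
    pattern = os.path.join(folder, "*.lock")
    return sorted(os.path.basename(p) for p in glob.glob(pattern))
```

Logging `find_lock_files(os.path.dirname(output_shp))` when the copy fails at least tells you the next morning which lock files were present.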

XanderBakker
Esri Esteemed Contributor

For managing locks in an enterprise geodatabase, you could have a look at the following (you need to use an administrative connection to be able to use these):

arcpy.ListUsers

To see the users that currently have a connection to the EGDB:

arcpy.ListUsers("Database Connections/admin.sde")

arcpy.DisconnectUser

To disconnect a user when necessary:

arcpy.DisconnectUser("Database Connections/admin.sde", "ALL")

arcpy.AcceptConnections

To stop users from connecting to the database (be sure to set this back to True when you have finished updating the database):

arcpy.AcceptConnections("Database Connections/admin.sde", False)

XanderBakker
Esri Esteemed Contributor

Schema locks can cause severe headaches. In addition to the reasons already mentioned by Dan Patterson, there are a few more, such as:

  • writing to a geodatabase (file or enterprise) while the workspace is in use, being edited, etc.
  • when a resource is published as a service on ArcGIS for Server, the service will by default place a lock on the data source
  • when you use cursors in Python (or other code) without a with statement, the cursor and row objects have to be deleted (or fall out of scope, e.g. inside a function) to release the locks
  • processes that didn't complete correctly
  • etc.

IMHO, schema locks have caused me more problems than they have avoided. I would really like to see the possibility of configuring (like an environment setting) whether a process should place a lock on a dataset...

TinaMorgan1
Occasional Contributor II

Hm, I agree that it does not seem necessary for the Copy tool to need a schema lock. The tool simply copies a feature class: it creates a new feature class and writes the copied data into it.

I just ran a test on my end using Desktop 10.4.1, a 10.4.1 geodatabase, and SQL Server 2014 (those kinds of details from your end will help us give you more accurate information). I created a Python script containing only the Copy tool and ran it while I had a profiler trace and sdeintercept set up. I did not see any exclusive locks made. To test further, I ensured that the input feature class was being accessed: I published it as a map service and opened the base feature class in a few ArcMap sessions. I saw that there were already two shared locks on my input data (the feature class being copied). I then ran my script and it ran fine. It did not require an exclusive lock.

Does your script do anything else other than call the Copy Data Management geoprocessing tool?

SuryaKant
New Contributor III

No, nothing apart from Copy Data Management.

I have a versioned feature dataset; other users can access one of the versions of the feature class in ArcMap Desktop, and my script is set to run on a scheduler, so I only find out the next day that it failed due to a schema lock.
