Schema lock error when trying to rename and update a feature class in a File Geodatabase

07-21-2016 10:38 AM
Anish_Adhikari
Occasional Contributor

I am trying to update a parcel layer inside a file geodatabase with new parcel data. The parcel layer is joined to various SQL Server 2008 database tables and views, and several layers that use the parcel data are published as ArcGIS REST services, consumed primarily by a web application. The new parcel data has exactly the same schema as the old layer. The approach I took was to load the new parcel data into the file geodatabase containing the old parcel feature class (named "parcels") and to name the newly loaded feature class "parcelsnew". I then tried to rename the old parcel layer to "parcelsold" and the new layer I just loaded to "parcels". I can rename the new parcel layer without any issues, so it does not seem to be a folder permissions issue, but when trying to rename the old layer I get the error message "Failed to rename selected objects. Cannot acquire a schema lock because of an existing lock".

The only application using the file geodatabase that I can think of is ArcGIS Server, which seems to be holding the schema lock.

My question is: how can I remove the schema lock so that I can rename the parcel layer? I can always stop individual services on ArcGIS Server, but there are quite a few services using the file geodatabase. Any help would be greatly appreciated.

Thanks.

Software used

ArcGIS Desktop/ArcCatalog 10.3.1, ArcGIS Server 10.0, SQL Server 2008

2 Replies
NajmehServatsamarein2
New Contributor III

I have the same question. Did you solve it?

Anish_Adhikari
Occasional Contributor

@NajmehServatsamarein2 

I found that the schema lock is created both by users who have the file geodatabase open and by ArcGIS Server processes. So you will have to make sure that no users are accessing the geodatabase and also stop the ArcGIS Server services that use it. If you browse to the folder of the file geodatabase where your data is stored, you will see files with a .lock extension prefixed with a machine name, which tells you which machine created each lock.
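A quick way to see which machines are holding locks is just to list the .lock files in the geodatabase folder. This is only a sketch; the geodatabase path below is a placeholder you would replace with your own:

    import glob
    import os

    gdb_path = r"C:\Data\Parcels.gdb"  # hypothetical path to your file geodatabase

    # Each .lock file name includes the machine (and process) that created it
    for lock_file in glob.glob(os.path.join(gdb_path, "*.lock")):
        print(os.path.basename(lock_file))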

 

I have now automated this process with a Python script that first checks for a schema lock on the feature class in question. If no lock exists, it deletes the entire feature class and replaces it with the updated one. If a lock exists, it deletes only the individual features, leaving an empty feature class, and then appends the updated data to it. In theory we should always be using the second option, but in our environment we found severe performance degradation over time when using only that approach.
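In case it helps, here is a simplified sketch of that logic. The paths and feature class names are placeholders, and it assumes the updated data has already been loaded into the same geodatabase as a separate feature class:

    import os
    import arcpy

    gdb = r"C:\Data\Parcels.gdb"              # hypothetical geodatabase path
    target = os.path.join(gdb, "parcels")     # feature class used by the services
    source = os.path.join(gdb, "parcelsnew")  # feature class holding the updated data

    if arcpy.TestSchemaLock(target):
        # No schema lock: safe to drop and recreate the feature class
        arcpy.management.Delete(target)
        arcpy.management.CopyFeatures(source, target)
    else:
        # Schema lock held (e.g. by ArcGIS Server): keep the schema,
        # empty the feature class and reload the rows instead
        arcpy.management.DeleteFeatures(target)
        arcpy.management.Append(source, target, "NO_TEST")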