Shared services and persisting database locks

06-22-2021 09:13 PM
AngusHooper1
Occasional Contributor II

We have an enterprise geodatabase (AWS Aurora PostgreSQL) with a significant number of sessions and locks (10,000+), some dating back 8+ months. The affected tables and feature classes are only accessed by cached map services from a multi-machine ArcGIS Server site, and all of these map services are set to use shared instances in their pooling configuration. Theoretically, if everything maxed out, we would expect to see 200 concurrent connections to the database from these services.

We have tested these services in our dev environment using dedicated pooling and cannot replicate these persistent lock issues. 

Has anyone else noticed shared instances on a multi-machine ArcGIS Server site failing to drop their database connections? Our assumption is that when a service is recycled, it never truly drops the connection. This is somewhat supported by Aurora never showing 0 connections to the database: there is always the minimum threshold set by our shared instance pool size.
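One way to check this assumption is to query pg_stat_activity on the Aurora PostgreSQL instance and see how old each backend is. A minimal sketch, with the row-filtering helper factored out so it can run against mock data; the application names and dates below are illustrative assumptions, not our actual output:

```python
from datetime import datetime, timedelta

# SQL to run against Aurora PostgreSQL (via psql or psycopg2) to see how many
# backends each client application holds and how old the oldest one is.
SESSION_SUMMARY_SQL = """
SELECT application_name, count(*) AS sessions, min(backend_start) AS oldest
FROM pg_stat_activity
WHERE datname = current_database()
GROUP BY application_name;
"""

def stale_apps(rows, now, max_age_days=30):
    """Given (application_name, sessions, oldest_backend_start) rows,
    return the apps whose oldest connection is older than max_age_days."""
    cutoff = now - timedelta(days=max_age_days)
    return [app for app, _, oldest in rows if oldest < cutoff]

# Mock rows simulating what cursor.fetchall() might return (assumed data):
now = datetime(2021, 6, 22)
rows = [("ArcSOC.exe", 180, datetime(2020, 10, 1)),   # 8+ months old
        ("pgAdmin", 1, datetime(2021, 6, 21))]
print(stale_apps(rows, now))  # -> ['ArcSOC.exe']
```

Running SESSION_SUMMARY_SQL against the geodatabase would show whether the shared instance pool is still holding connections that predate the last service recycle.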

 

Update

We have been able to replicate the schema locks persisting after the service has been stopped in our dev environment. Only changing the number of shared instances for the ArcGIS Server site cleared the locks. 

5 Replies
by Anonymous User
Not applicable

You should be able to clear the locks from the database properties in ArcGIS Pro. Have you tried that? Do the locks reappear?

 

AngusHooper1
Occasional Contributor II

Unfortunately, the locks cannot be easily cleared: the Python tool errors out due to the sheer number of locks. It does clear some, but not all.
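One workaround we may try is disconnecting sessions in batches rather than in one huge call, so the tool doesn't fall over at this scale. A hypothetical sketch using arcpy's ListUsers/DisconnectUser; the connection path and batch size are assumptions, and only the batching helper actually runs below:

```python
# Split a list of session IDs into fixed-size chunks so each DisconnectUser
# call stays small. This helper is ours; the arcpy calls are shown commented
# out because they need a live geodatabase connection.
def batches(items, size):
    for i in range(0, len(items), size):
        yield items[i:i + size]

# Usage against the geodatabase (requires arcpy; not run here):
# import arcpy
# sde = r"C:\connections\egdb.sde"          # hypothetical admin connection
# ids = [u.ID for u in arcpy.ListUsers(sde)]
# for chunk in batches(ids, 100):
#     arcpy.DisconnectUser(sde, chunk)

print(list(batches([1, 2, 3, 4, 5], 2)))  # -> [[1, 2], [3, 4], [5]]
```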

by Anonymous User
Not applicable

Locks (including orphaned locks) can also be cleared for enterprise geodatabases directly with SQL, e.g. in SSMS.

 

AngusHooper1
Occasional Contributor II

I was wary of doing this for obvious reasons. Is there a specific table or workflow?

by Anonymous User
Not applicable

This will identify orphaned locks in your database:

--Identify SDE-owned orphaned locks
select * from sde.SDE_table_locks
where sde_id not in (select sde_id from sde.SDE_process_information)
order by sde_id;

select * from sde.SDE_layer_locks
where sde_id not in (select sde_id from sde.SDE_process_information)
order by sde_id;

select * from sde.SDE_object_locks
where sde_id not in (select sde_id from sde.SDE_process_information)
order by sde_id;

select * from sde.SDE_state_locks
where sde_id not in (select sde_id from sde.SDE_process_information)
order by sde_id;

Then take a backup of your database just in case, and use this pattern to delete the orphaned locks (repeat for each of the four lock tables):

delete from sde.SDE_state_locks
where sde_id not in (select sde_id from sde.SDE_process_information);
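To show how the delete pattern extends to all four lock tables in one transaction, here is a sketch that simulates the cleanup against in-memory SQLite mock tables. The real tables live in the geodatabase's sde schema; the table layout and mock sde_ids here are illustrative only:

```python
import sqlite3

# The four SDE lock tables the DELETE pattern applies to.
LOCK_TABLES = ["SDE_table_locks", "SDE_layer_locks",
               "SDE_object_locks", "SDE_state_locks"]

# Build mock tables in memory (the real ones sit in the sde schema).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE SDE_process_information (sde_id INTEGER)")
for t in LOCK_TABLES:
    db.execute(f"CREATE TABLE {t} (sde_id INTEGER)")

# Mock data: session 1 is live; session 99 has no process row (orphaned).
db.execute("INSERT INTO SDE_process_information VALUES (1)")
db.executemany("INSERT INTO SDE_table_locks VALUES (?)", [(1,), (99,)])
db.execute("INSERT INTO SDE_state_locks VALUES (99)")

# Delete orphaned locks from every lock table in a single transaction,
# so a failure part-way through rolls everything back.
with db:
    for t in LOCK_TABLES:
        db.execute(f"DELETE FROM {t} WHERE sde_id NOT IN "
                   "(SELECT sde_id FROM SDE_process_information)")

print(db.execute("SELECT sde_id FROM SDE_table_locks").fetchall())  # [(1,)]
print(db.execute("SELECT sde_id FROM SDE_state_locks").fetchall())  # []
```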