What could cause arcpy.TruncateTable_management(fc) to be so slow in one specific geodatabase?

07-16-2020 05:51 AM
PaulCyr1
New Contributor III

We have one small, specific Enterprise geodatabase that serves a single function, plus a test function.

The database contains only two feature datasets, each holding five feature classes (FCs).

There is no versioning, archiving is not enabled, and editor tracking is off.

The feature classes are almost always empty, but that has no bearing on how long it takes to truncate each FC.

When you execute arcpy.TruncateTable_management(fc) in your Enterprise environment, how long does it take to truncate a test FC, with or without data? What could cause this tool to run 100 times slower on one geodatabase than on another mounted on the same server?
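
For anyone who wants to reproduce the measurement, here is a minimal timing sketch; the .sde connection files and feature class path are placeholders, not our actual names:

import time
import arcpy

# Placeholder connections -- point one at the slow geodatabase, one at a healthy one.
for sde in (r"C:\connections\slow_gdb.sde", r"C:\connections\fast_gdb.sde"):
    fc = sde + r"\MyDataset\MyTestFC"  # placeholder feature class path
    start = time.time()
    arcpy.TruncateTable_management(fc)
    print("{}: truncated in {:.2f} s".format(fc, time.time() - start))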

Truncates are designed to be faster than deletes in database operations, yet that is simply not happening with the arcpy tool on this specific geodatabase. Why?

BTW, pointing ArcCatalog 10.8 at this specific geodatabase to look at features is also painfully slow; it can take a full minute after right-clicking for the Properties dialog box to appear.

What would you do or try in order to identify the bottleneck here? (And yes, a case with Esri has been opened.)
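
One thing we could try ourselves is timing Describe and TruncateTable against every FC in the slow geodatabase to localize the problem. A sketch, assuming a placeholder .sde connection, and assuming arcpy.Describe is a reasonable stand-in for the metadata read ArcCatalog does when opening Properties:

import os
import time
import arcpy

ws = r"C:\connections\slow_gdb.sde"  # placeholder connection to the slow geodatabase
arcpy.env.workspace = ws

for ds in (arcpy.ListDatasets(feature_type="Feature") or [""]):
    for fc in (arcpy.ListFeatureClasses(feature_dataset=ds or None) or []):
        fc_path = os.path.join(ws, ds, fc) if ds else os.path.join(ws, fc)
        t0 = time.time()
        arcpy.Describe(fc_path)  # roughly the read ArcCatalog performs for Properties
        t1 = time.time()
        arcpy.TruncateTable_management(fc_path)
        t2 = time.time()
        print("{}: describe {:.2f} s, truncate {:.2f} s".format(fc, t1 - t0, t2 - t1))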

1 Reply
PaulCyr1
New Contributor III

It turns out the answer to this question is:

The build-up of "geoprocessing history" baggage in each feature class in this geodatabase is what caused the slowness. Once we ran the "clear geoprocessing history" tool developed by Luke Rogers, the change has been tremendous: the truncate statements now take a second where before they were taking a minute or two. So if you have an automated Python script that runs frequently, say every hour, you should seriously consider clearing this geoprocessing history baggage, as it could be slowing your system down over time.
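
For anyone running ArcGIS Pro's arcpy (2.5 or later), a roughly equivalent cleanup can be scripted with the arcpy.metadata module's deleteContent('GPHISTORY'). This is only a sketch of that approach, not Luke Rogers' tool, and the .sde path is a placeholder:

import os
import arcpy
from arcpy import metadata as md

ws = r"C:\connections\mygdb.sde"  # placeholder connection file
arcpy.env.workspace = ws

for ds in (arcpy.ListDatasets(feature_type="Feature") or [""]):
    for fc in (arcpy.ListFeatureClasses(feature_dataset=ds or None) or []):
        fc_path = os.path.join(ws, ds, fc) if ds else os.path.join(ws, fc)
        item_md = md.Metadata(fc_path)
        item_md.deleteContent('GPHISTORY')  # strip accumulated geoprocessing history
        item_md.save()
        print("Cleared geoprocessing history on {}".format(fc))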

Almost forgot to mention: if your Python script executes the following statement before it uses any geoprocessing tools, it will not add to the geoprocessing history baggage.

arcpy.SetLogHistory(False)
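
In context, that goes at the top of the hourly script, before any tool calls (the truncate path below is a placeholder):

import arcpy

# Turn history logging off for this session, before any geoprocessing tools run,
# so frequent automated runs stop accumulating metadata baggage.
arcpy.SetLogHistory(False)

arcpy.TruncateTable_management(r"C:\connections\mygdb.sde\MyDataset\MyTestFC")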
