Sudden performance deterioration of sdebinary feature classes in a geodatabase

02-16-2016 03:01 AM
jorisfrenkel
Occasional Contributor II

I searched the forums for "sudden sde performance change" but couldn't find any post covering the subject, so out of curiosity I am dropping a line here, even though it is no longer a big problem for me.

We are in the process of migrating from the old SDE 9.3.1 with Oracle (sdebinary) to 10.3.1 with SQL Server (geometry or sdebinary, not decided yet). In the old database it sometimes happened that a feature class suddenly became extremely slow - let's say from displaying a certain area in 2 seconds to 30 seconds or more. As far as I remember, this was always noticed in the morning. The solution is always to delete the indexes and calculate new ones. Quite cumbersome, because some layers are so heavily used that it is practically impossible to remove all locks during the day, so the indexes have to be renewed at night, and all users suffer the slow performance for a whole day.

In the new test environment (SQL Server 2014) I did some comparisons between the Geometry format and Sdebinary. By the way, I had good support with this from our local Esri office, and if anyone is interested in the results, let me know and I will post them in a separate thread.

During this testing, however, I was a little surprised to find that with the Sdebinary format I quickly ran into issues similar to the ones we have in 9.3.1 on Oracle. Occasionally, the performance of a feature class suddenly degrades by a factor of 10 or more (I measured this with PerfQAnalyzer).

The remedy was the same: deleting the indexes and recalculating them. So the cause seemed to be a broken index. Note that these are feature classes that have not been edited at all.
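
For illustration, the drop/recreate remedy can be scripted with the standard ArcPy tools. This is just a minimal sketch; the connection file and feature class name are hypothetical placeholders. Note that both tools need an exclusive lock on the feature class, which is why we have to run this at night.

```python
import arcpy

# Hypothetical connection file and feature class.
fc = r"C:\connections\prod.sde\GISDB.DBO.PARCELS"

# Delete the existing spatial index...
arcpy.RemoveSpatialIndex_management(fc)

# ...and calculate a new one; passing 0 lets the tool derive a
# grid size from the data itself.
arcpy.AddSpatialIndex_management(fc, 0)
```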

For those wondering why I did not simply use the Rebuild Indexes tool: it does not work for Sdebinary, see http://desktop.arcgis.com/en/arcmap/10.3/manage-data/geodatabases/modifying-a-spatial-index.htm. I tried this first too, and I wonder why the RebuildIndexes tool does not give you some kind of warning when you use it on a format - like sdebinary - that does not support it. It even reports 'Succeeded' :-{.
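
For reference, this is roughly the call I tried (workspace and dataset names are hypothetical); on an sdebinary feature class it reports 'Succeeded' without actually rebuilding the spatial index:

```python
import arcpy

# Hypothetical .sde connection file and dataset name.
arcpy.RebuildIndexes_management(
    r"C:\connections\prod.sde",  # input workspace
    "NO_SYSTEM",                 # leave the system tables alone
    ["GISDB.DBO.PARCELS"],       # dataset(s) to process
    "ALL",                       # all indexes, not only the deltas
)
```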

However, another remedy is copying the feature class. The copy displays fast again, even without deleting and recreating the indexes. So the question became: is it really the index that is broken? The next day the problem was back, and our DBA offered to have a look at it. As an experiment, he cleared the execution plan cache of the database (I had never heard of that before, but now I have). This also solved the problem. As far as I can see, this proves that it has nothing to do with a broken or deteriorating index, but that somehow, at some point, SQL Server chooses the wrong execution plan for the query. This was just an experiment; clearing the execution plan cache is not something you want to do regularly, I learned from our DBA.
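
For those curious, what our DBA did amounts to the following; here is a rough sketch from Python via pyodbc (server and database names are hypothetical). DBCC FREEPROCCACHE flushes all cached plans on the instance and requires ALTER SERVER STATE permission, so again: an experiment, not routine maintenance.

```python
import pyodbc

# Hypothetical connection; autocommit so the DBCC runs immediately.
conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=mysqlserver;DATABASE=gisdb;"
    "Trusted_Connection=yes",
    autocommit=True,
)
# Flush every cached execution plan, forcing SQL Server to compile
# fresh plans for subsequent queries.
conn.cursor().execute("DBCC FREEPROCCACHE")
conn.close()
```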

So I wanted to share this, and I am interested to hear whether anyone else has run into this problem and how they dealt with it.

By the way, feature classes in Geometry format have so far never shown this performance degradation in our database, and they generally perform almost on par with Sdebinary, with one exception, so I am inclined to go for the Geometry format. The exception is displaying large datasets with a single-symbol legend; there, Sdebinary is significantly (2 to 3 times) faster.

Joris Frenkel,

Staatsbosbeheer, the Netherlands

6 Replies
VinceAngelo
Esri Esteemed Contributor

Optimizers are a law unto themselves. By changing between 9.3.1/Oracle and 10.3.1/SQL Server there really is nothing in common for comparison. The main issue for query plan selection is the database's determination as to whether the spatial index statistics are "fresh enough" to be trusted. "Freshness" can include both the number and relative volume of edits in a table, and a raw "time since last update" component.

You haven't given any indication of the edit frequency and editing volume on the tables. In an ideal world, indexes would be rebuilt on a regular basis. Actually, in an ideal ideal world, a table with frequent edits would be spatially defragmented on a regular basis, then the spatial index rebuilt, and the index statistics updated. In the real world, you might have to settle for updating index statistics at half the interval at which you're noticing performance degradation, then scheduling an outage when the table is badly spatially fragmented.
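
A sketch of what that might look like as a scheduled task (table names and the connection string are placeholders; set the interval from your own degradation pattern):

```python
import pyodbc

# Hypothetical business tables to maintain.
TABLES = ["dbo.PARCELS", "dbo.ROADS"]

conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=mysqlserver;DATABASE=gisdb;"
    "Trusted_Connection=yes",
    autocommit=True,
)
cur = conn.cursor()
for table in TABLES:
    # Refresh index and column statistics so the optimizer has a
    # "fresh enough" picture of the table.
    cur.execute("UPDATE STATISTICS " + table)
conn.close()
```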

- V

jorisfrenkel
Occasional Contributor II

Vince, thanks for your input.

The thing is that the feature classes that suffer from these kinds of problems are not being edited at all.

VinceAngelo
Esri Esteemed Contributor

The optimizer can choose not to trust indexes for any reason, including for being "too old".  Updating the statistics on tables not being regularly updated is still a necessary maintenance step.

- V

jorisfrenkel
Occasional Contributor II

So basically you are saying that regularly updating the statistics could prevent this kind of performance degradation, even though, once the degradation has happened, updating the statistics with Analyze is not enough to solve it; in that case I need to recalculate the index. That seems good advice to follow, especially since it is necessary to do this anyway.
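
For completeness, the Analyze step I mention is this geoprocessing call (the path is a hypothetical placeholder):

```python
import arcpy

# Hypothetical connection file and feature class.
fc = r"C:\connections\prod.sde\GISDB.DBO.PARCELS"

# Update statistics on the business and feature tables (the ADDS and
# DELETES components only matter for versioned data).
arcpy.Analyze_management(fc, ["BUSINESS", "FEATURE"])
```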

ThomasColson
MVP Frequent Contributor

I've had similar, but intermittent, issues with the SQL optimizer not being terribly smart about index choice: often it will decide that the default index created with a new feature class is not "optimal" and will choose to use no index at all! By adding a few spatial indexes with varying cell sizes and grid levels, as well as some non-clustered indexes on frequently used attributes, I can often (but not always) get around that performance hit. As always, run a trace through the Database Engine Tuning Advisor to see where you "might" gain some performance from index tuning.
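
To give a flavour of what I mean, a sketch of those extra indexes on a Geometry-storage feature class (all names, the bounding box, and the grid choices are placeholders; derive yours from your data and the tuning advisor's suggestions):

```python
import pyodbc

conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=mysqlserver;DATABASE=gisdb;"
    "Trusted_Connection=yes",
    autocommit=True,
)
cur = conn.cursor()

# An additional spatial index with a finer grid than the default one.
cur.execute("""
    CREATE SPATIAL INDEX six_parcels_fine ON dbo.PARCELS (SHAPE)
    USING GEOMETRY_GRID
    WITH (BOUNDING_BOX = (10000, 300000, 280000, 625000),
          GRIDS = (LEVEL_1 = HIGH, LEVEL_2 = HIGH,
                   LEVEL_3 = MEDIUM, LEVEL_4 = MEDIUM),
          CELLS_PER_OBJECT = 16)
""")

# A non-clustered index on a frequently queried attribute.
cur.execute(
    "CREATE NONCLUSTERED INDEX ix_parcels_owner "
    "ON dbo.PARCELS (OWNER_NAME)"
)
conn.close()
```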

jorisfrenkel
Occasional Contributor II

Thanks for your input. What spatial format are you using? (Geometry, Sdebinary, both?) What version of SQL Server?
