
Non-versioned data slowing way down

Question asked by mattlane86 on Apr 21, 2014
Latest reply on May 1, 2014 by mattlane86
I have two unversioned feature classes in a feature dataset to which our organization adds and deletes tens of thousands of records daily. One contains over 3 million points and the other over 500k lines. Each has an organizational ID field with an index and a GlobalID field with an index. We will be versioning and replicating these in the near future. They are getting slower and slower over time, to the point where just deleting a couple of records can take 8 minutes and getting a feature count with the geoprocessing tool takes 3 minutes. Processing that used to take 1 to 2 hours now takes over 12.
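For context, the daily edits and the count check are scripted roughly like the sketch below; the connection file, feature class name, field name, and where clause are placeholders, not our actual values:

```python
import time
import arcpy

# Placeholder connection file and feature class name; ours differ.
arcpy.env.workspace = r"C:\connections\gisdb.sde"
points_fc = "myschema.ObservationPoints"

# Timing the feature count -- this is the step that now takes ~3 minutes.
start = time.time()
count = int(arcpy.GetCount_management(points_fc).getOutput(0))
print("{0} rows counted in {1:.1f} s".format(count, time.time() - start))

# Deleting a handful of rows by organizational ID -- now several minutes.
start = time.time()
with arcpy.da.UpdateCursor(points_fc, ["ORG_ID"], "ORG_ID = 12345") as cursor:
    for row in cursor:
        cursor.deleteRow()
print("delete finished in {0:.1f} s".format(time.time() - start))
```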

Details:

  • SQL Server 2008 R2

  • SDE Release 101010

  • Desktop 10.1 sp1

  • Each spatial index was built with the Esri defaults (16 cells per object, MEDIUM at all grid levels); see the catalog query sketched after this list

  • Data is not in the SDE schema but in a separate one

  • There isn't a lot of other data in this database, although a single versioned feature class in a different dataset does have topology rules applied to it.
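To confirm the grid levels, cells per object, and bounding boxes on the spatial indexes, I query the SQL Server catalog views directly. A rough version of that check is below; it assumes pyodbc is available and the server/database names are placeholders (the same SELECT can be run in Management Studio):

```python
import pyodbc

# Placeholder server and database names.
conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=mysqlserver;DATABASE=gisdb;Trusted_Connection=yes"
)
cursor = conn.cursor()

# Spatial index tessellation settings: grid levels, cells per object,
# and the bounding box each index was built with.
cursor.execute("""
    SELECT OBJECT_NAME(t.object_id) AS table_name,
           i.name                   AS index_name,
           t.level_1_grid, t.level_2_grid, t.level_3_grid, t.level_4_grid,
           t.cells_per_object,
           t.bounding_box_xmin, t.bounding_box_ymin,
           t.bounding_box_xmax, t.bounding_box_ymax
    FROM sys.spatial_index_tessellations t
    JOIN sys.indexes i
      ON i.object_id = t.object_id AND i.index_id = t.index_id
""")
for row in cursor.fetchall():
    print(row)
```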


What I've tried:

  • I rebuild and reorganize all indexes in this database as needed, following Microsoft's documentation

  • Checked the bounding boxes on the spatial indexes

  • Rebuilt statistics with the Esri geoprocessing tools (the maintenance run is sketched after this list)

  • Compressed

  • Shrunk database (our DBA does this periodically)

  • Restarted the server (memory doesn't seem to be the issue)
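For reference, the Esri-side maintenance (rebuild indexes, rebuild statistics, compress) is run roughly like this minimal sketch; the connection file path is a placeholder, and in practice the compress is run as the geodatabase admin while the analyze/rebuild steps run as the data owner:

```python
import arcpy

# Placeholder: connection made as the data owner (the non-SDE schema owner).
workspace = r"C:\connections\gisdb_dataowner.sde"
arcpy.env.workspace = workspace

# Everything the data owner can see in this connection.
datasets = (arcpy.ListTables() +
            arcpy.ListFeatureClasses() +
            arcpy.ListDatasets())

# Compress first (as the geodatabase admin in practice), then rebuild
# indexes and update statistics on base, delta, and archive tables.
arcpy.Compress_management(workspace)
arcpy.RebuildIndexes_management(workspace, "NO_SYSTEM", datasets, "ALL")
arcpy.AnalyzeDatasets_management(workspace, "NO_SYSTEM", datasets,
                                 "ANALYZE_BASE", "ANALYZE_DELTA",
                                 "ANALYZE_ARCHIVE")
```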

Outcomes