
How to handle Archiving tables during schema reloads

11-02-2015 02:16 PM

I'm looking at using ESRI's archiving on our SDE geodatabase for the first time, and I was hoping some folks who've worked with it could answer a few questions about it.

1) Archiving uses the feature class's OBJECTID field to track historical records for each row in the archive table.  When I've done large schema updates in the past, I've extracted the data from SDE to a feature class, deleted the old table, loaded in a new table schema from UML or ArcGIS Diagrammer, and then reloaded the old data. When you do this, the OBJECTIDs change as the data is reloaded into SDE.  If I did this with archiving, all the OBJECTIDs on the base table of the feature class would change and break the link to all the archive table records.

     - Has anyone dealt with this and built a workaround? Or are you forced to make all schema changes with the data in place on the system, within the limits of the ESRI toolbox tools (Add Field, etc.)?  We maintain a unique ID on each feature class using Oracle sequences, so I'd rather relate the historical records to the base table using that ID, since it doesn't change when data moves around, but that is not an option when enabling archiving.
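To make the re-linking problem concrete, here is a minimal sketch (plain Python, with illustrative row data rather than real SDE tables) of the kind of workaround I have in mind: after a reload assigns new OBJECTIDs, the stable Oracle-sequence unique ID could be used to build an old-to-new OBJECTID map and rewrite the archive rows' references. The field names and row shapes are assumptions for illustration only.

```python
# Hypothetical sketch: re-link archive rows to new base-table OBJECTIDs
# after a schema reload, using a stable unique ID (e.g. an Oracle-sequence
# value) that survives the export/reload cycle. All names are illustrative.

def build_objectid_map(old_rows, new_rows):
    """Map old OBJECTID -> new OBJECTID via the stable unique ID.

    Each row is a (OBJECTID, UNIQUE_ID) pair.
    """
    old_by_uid = {uid: oid for oid, uid in old_rows}
    new_by_uid = {uid: oid for oid, uid in new_rows}
    return {old_by_uid[uid]: new_by_uid[uid]
            for uid in old_by_uid if uid in new_by_uid}

def relink_archive(archive_rows, oid_map):
    """Rewrite each archive row's OBJECTID reference to the new value."""
    return [(oid_map.get(oid, oid), attrs) for oid, attrs in archive_rows]

# (OBJECTID, UNIQUE_ID) pairs before and after a reload; the OBJECTIDs
# are reassigned but the unique IDs persist.
old_rows = [(1, "A-100"), (2, "A-101"), (3, "A-102")]
new_rows = [(7, "A-101"), (8, "A-100"), (9, "A-102")]

oid_map = build_objectid_map(old_rows, new_rows)
archive = [(1, {"status": "retired"}), (2, {"status": "retired"})]
print(relink_archive(archive, oid_map))
# [(8, {'status': 'retired'}), (7, {'status': 'retired'})]
```

Whether the archive table's OBJECTID column can actually be updated this way without breaking archiving internals is exactly the part I'm unsure about, hence the question.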

2)  Does enabling archiving create a big performance hit?  The feature classes I'd be implementing it on contain ~3 million records across 5 tables.  I read in the documentation that when archiving is first enabled, it copies every row of the base table into the archive table.  I'll have to check with my DBA about disk space for all that duplication, but I'm also concerned about archiving slowing down edits on feature classes that are already very large.
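For the disk-space conversation with my DBA, a back-of-envelope estimate like the following is probably where I'd start. The per-row size and the overhead for the archive fields (GDB_FROM_DATE, GDB_TO_DATE, GDB_ARCHIVE_OID) are assumed numbers, not measurements from our database.

```python
# Rough one-time storage cost when archiving is enabled: every base-table
# row is copied into the archive table, plus the archive tracking fields.
# avg_row_bytes and archive_overhead_bytes are illustrative assumptions.

def initial_archive_bytes(row_count, avg_row_bytes, archive_overhead_bytes=24):
    """Rough size of the freshly populated archive table, in bytes."""
    return row_count * (avg_row_bytes + archive_overhead_bytes)

# ~3 million rows spread across 5 tables, assuming ~500 bytes per row.
total = sum(initial_archive_bytes(600_000, 500) for _ in range(5))
print(f"{total / 1024**3:.1f} GiB")  # 1.5 GiB under these assumptions
```

Real numbers would come from the DBA (actual average row length, Oracle block overhead, indexes on the archive table), but even a crude estimate like this helps frame the request.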

Thanks for any help you can provide,

-Andrew Rudin
