Data Interoperability woes

04-30-2010 06:52 AM
LarryPhillips
New Contributor II
I have been trying to find a way to run my Spatial ETL tools in smaller sets and need help.  I work in a version and process 400,000+ records in a single translation, which wreaks havoc on the server and on the adds/deletes tables.  Any advice on how to run these large data sets would be greatly appreciated.
3 Replies
BruceHarold
Esri Regular Contributor
Hello Furyk

Is it feasible to reduce the number of edits by first doing change detection with the Matcher or ChangeDetector transformers, then only making the edits you need?  The idea is to not write a delete/add pair for records which do not change.
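
For what it's worth, the idea those transformers implement boils down to something like the sketch below (a minimal plain-Python sketch; the key values and attribute dictionaries are hypothetical stand-ins for your feature classes, not the transformers themselves):

# Minimal change-detection sketch: given the current target records and the
# incoming source records (both keyed by a primary key), emit only the rows
# that actually need an add, update, or delete.
def detect_changes(target, source):
    """target, source: dicts mapping key -> attribute dict."""
    adds    = {k: v for k, v in source.items() if k not in target}
    deletes = {k: v for k, v in target.items() if k not in source}
    updates = {k: v for k, v in source.items()
               if k in target and v != target[k]}
    return adds, updates, deletes

# Example: only the changed and new records produce edits.
target = {1: {"NAME": "Main St"}, 2: {"NAME": "Oak Ave"}}
source = {1: {"NAME": "Main St"}, 2: {"NAME": "Oak Avenue"}, 3: {"NAME": "Elm Ct"}}
adds, updates, deletes = detect_changes(target, source)
print(len(adds), len(updates), len(deletes))   # 1 1 0

Unchanged records (key 1 above) produce no edit at all, which is what keeps the delete/add traffic off the version.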

Regards
LarryPhillips
New Contributor II
Thank you for your reply; it points out how poor a post I made.  My problem comes from having many (400,000+) records that need updating.  If this data were in a file geodatabase it would be fine, because I can set the number of records to write per transaction.  The problem is that I'm working in a version and can't run transactions.  What I want to avoid is manually splitting the data into smaller sets and running them one after the other instead of automating the whole process in Data Interoperability (DI).
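
To illustrate the kind of batching I am after (a rough Python sketch; the chunk size and the run_translation call are hypothetical placeholders for however the ETL tool would be invoked per subset):

# Hypothetical batching sketch: split a large set of record IDs into fixed-size
# chunks so each chunk can be translated and committed separately.
def chunks(ids, size=10000):
    for start in range(0, len(ids), size):
        yield ids[start:start + size]

def run_translation(id_batch):
    # Placeholder only: here the Spatial ETL tool (or writer) would be run
    # against just the records whose IDs are in id_batch, e.g. via a WHERE clause.
    print("translating %d records" % len(id_batch))

all_ids = list(range(400000))      # stand-in for the 400,000+ record keys
for batch in chunks(all_ids):
    run_translation(batch)         # 40 smaller runs instead of one huge one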
BruceHarold
Esri Regular Contributor
Hi again

If your problem is the performance of the system with large A and D tables, set your Geodatabase writer to reconcile and post after the translation; the records will then be moved to the base table.
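
In case it is useful, the reconcile/post step can also be scripted after the translation finishes; a minimal arcpy sketch (the connection file and version names are placeholders, and the exact tool and parameter names should be checked against your release's documentation, so treat this as an assumption rather than a recipe):

# Reconcile an edit version into DEFAULT and post it, so the edits move out of
# the A/D (adds/deletes) tables and into the base tables.
# The .sde path and version names below are placeholders.
import arcpy

sde = r"C:\connections\gis_owner.sde"
arcpy.management.ReconcileVersions(
    input_database=sde,
    reconcile_mode="ALL_VERSIONS",
    target_version="sde.DEFAULT",
    edit_versions=["gis_owner.ETL_EDITS"],   # placeholder edit version name
    with_post="POST")                        # post reconciled edits to DEFAULT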

Regards