I thought I would run it once before the table is closed (which is, in my experience, when the size of the geodatabase on disk grows).
Keep in mind that the storage increase is due to fragmentation caused by updates.
The "compact" function rewrites the file, causing storage to initially grow by the
size of the compacted data before the fragmented file is deleted.
If your updates are so voluminous that a 2GB table grows to 100GB, you might want
to look at using memory-based techniques for doing the updates, and only flushing
the results when the changes have been completed.
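For example, here is a minimal sketch of that pattern against the C++ FileGDB API. The geodatabase path, the table name, the "id"/"value" fields, and the placeholder computation are all hypothetical, and the LoadOnlyMode call is just an optional bulk-insert optimization; treat this as an illustration of "compute in memory, write each row once", not as your actual workflow:

#include <FileGDBAPI.h>
#include <string>
#include <vector>

using namespace FileGDBAPI;

int main()
{
    // Do all the modelling work in ordinary in-memory containers first.
    std::vector<double> results(100000);
    for (size_t i = 0; i < results.size(); ++i)
        results[i] = static_cast<double>(i) * 0.5;   // placeholder computation

    Geodatabase gdb;
    if (OpenGeodatabase(L"c:/temp/demo.gdb", gdb) != S_OK)
        return 1;

    Table table;
    if (gdb.OpenTable(L"\\results", table) != S_OK)
        return 1;

    table.LoadOnlyMode(true);        // defer index maintenance during the bulk insert

    // One insert per finished row -- no later column updates, so no fragmentation.
    for (size_t i = 0; i < results.size(); ++i)
    {
        Row row;
        table.CreateRowObject(row);
        row.SetInteger(L"id", static_cast<int>(i));
        row.SetDouble(L"value", results[i]);
        table.Insert(row);
    }

    table.LoadOnlyMode(false);
    gdb.CloseTable(table);
    CloseGeodatabase(gdb);
    return 0;
}

Because every row is written exactly once and never updated afterwards, the file shouldn't balloon the way it does when rows are repeatedly rewritten in place.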
- V
Currently, we create all columns at the beginning. Do you think it might be worth
trying to create the columns not during initialization but directly before writing them?
And is there a method to write an array of values to a column as one block instead of
writing single values?
No.
No.
I think the root problem here is that you're using the wrong API. File geodatabase, like the SQL
databases it models, is a row-oriented framework -- row insert is dirt cheap, but column update
is ruinously expensive -- and you're trying to use it to fill a table in column-major order. I think
a spreadsheet API might be more appropriate to your use case. If you must use FGDBAPI, then
you're going to have to create one table per "column" (a join key and the column data), finish
your modelling, then open the N tables and join them on the fly to produce the final table.
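Roughly what the read-side "join on the fly" could look like, as a sketch against the C++ FileGDB API. The table names col_a/col_b, the key and value fields, and the assumption that both narrow tables were written in the same key order are mine for illustration, not anything the API imposes:

#include <FileGDBAPI.h>
#include <iostream>

using namespace FileGDBAPI;

int main()
{
    Geodatabase gdb;
    if (OpenGeodatabase(L"c:/temp/demo.gdb", gdb) != S_OK)
        return 1;

    // One narrow table per modelled "column": (key, value).
    Table tableA, tableB;
    if (gdb.OpenTable(L"\\col_a", tableA) != S_OK) return 1;
    if (gdb.OpenTable(L"\\col_b", tableB) != S_OK) return 1;

    // Full scans of both tables, recycled row objects.
    EnumRows rowsA, rowsB;
    tableA.Search(L"key, value", L"", true, rowsA);
    tableB.Search(L"key, value", L"", true, rowsB);

    // Walk the two cursors in parallel (assumes identical key order).
    Row rowA, rowB;
    while (rowsA.Next(rowA) == S_OK && rowsB.Next(rowB) == S_OK)
    {
        int32 key = 0;
        double a = 0.0, b = 0.0;
        rowA.GetInteger(L"key", key);
        rowA.GetDouble(L"value", a);
        rowB.GetDouble(L"value", b);

        // The "joined" record, assembled on the fly from the narrow tables.
        std::wcout << key << L"\t" << a << L"\t" << b << std::endl;
    }

    rowsA.Close();
    rowsB.Close();
    gdb.CloseTable(tableA);
    gdb.CloseTable(tableB);
    CloseGeodatabase(gdb);
    return 0;
}

If the narrow tables aren't guaranteed to be in the same order, you'd instead issue a Search on each remaining table with a whereClause on the join key for every row of the first one.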
- V