Six hours! :rolleyes: I hope you found the bottleneck.
That would break my personal "Cup of Coffee Rule":
"If any single process takes longer than a cup of coffee, interrupt it and find a better way".
Since you are also running out of memory, finding what is causing that will probably speed things up enormously as well.
I like TruncateTable_management. I note that there is no help in the Beta for this new tool yet, Esri ...
You don't say how many records you have, but I would expect a couple of million records to take less than half an hour.
Some suggestions to find the problem; don't give up until you can have that cup of coffee while it is still hot:
I immediately see a red flag where you open SDE, which will inevitably be across a sloooow network, or at least using slow handshaking.
Could you try loading into a file geodatabase on a local drive and then copying the result across to SDE in a single step?
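Something like this is what I mean (a minimal sketch, assuming the target feature class already exists in SDE with the same schema as the locally built data; all the paths are made up):

import arcpy

local_fc = r"C:\temp\staging.gdb\parcels_staged"       # built entirely on a local drive
sde_fc   = r"C:\connections\prod.sde\gis.DBO.parcels"  # hypothetical SDE target

# Do all the cursor work against the local file geodatabase first (no network),
# then push the finished result across to SDE in one bulk operation.
arcpy.TruncateTable_management(sde_fc)
arcpy.Append_management(local_fc, sde_fc, "NO_TEST")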
FME (or the Data Interoperability extension) might be faster?
Maybe load the shapefiles directly into a file geodatabase and then use SQL queries to do the selection and editing,
instead of doing it all with the cursor. Very wide records with many fields will be slow to load into a database. You could use MakeQueryTable to create a subset and then write the view out to SDE.
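Roughly along these lines (just a sketch: the table, field and where clause are invented, and you would need to check how MakeQueryTable expects the key field to be qualified for your data):

import arcpy

arcpy.env.workspace = r"C:\temp\staging.gdb"            # hypothetical local staging gdb
sde_fc = r"C:\connections\prod.sde\gis.DBO.roads"       # hypothetical SDE target

# Build an in-memory query table that does the subsetting in one SQL pass
arcpy.MakeQueryTable_management("roads_raw", "roads_view",
                                "USE_KEY_FIELDS", "OBJECTID", "",
                                "STATUS = 'ACTIVE'")

# Write the view straight out instead of walking every row with a cursor
arcpy.CopyFeatures_management("roads_view", sde_fc)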
Does the database have any indexes? You should drop the indexes when you truncate, otherwise every insert will trigger a re-index; rebuild them once the load is finished.
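For example (the index and field names are invented, and this assumes the index was created as a geoprocessing attribute index rather than directly in SQL):

import arcpy

sde_fc = r"C:\connections\prod.sde\gis.DBO.parcels"     # hypothetical SDE target

arcpy.RemoveIndex_management(sde_fc, ["IDX_PARCEL_ID"])   # drop before the bulk load
arcpy.TruncateTable_management(sde_fc)
# ... do the bulk load here ...
arcpy.AddIndex_management(sde_fc, ["PARCEL_ID"], "IDX_PARCEL_ID")   # rebuild once at the end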
Does it get slower with more data? Try loading just the first 10% to see whether that is proportionally faster.
Even though you are careful to trigger garbage collection with del statements, it clearly isn't working. Have a look at your memory usage in Task Manager:
if it keeps climbing, then fixing that leak is probably your answer. If you can restructure the script so the per-file work happens inside a function, that sometimes garbage collects better.
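Something like this is what I have in mind (names, fields and paths are placeholders): locals created inside the function go out of scope each time it returns, which tends to release memory more reliably than scattered del statements.

import gc
import arcpy

def load_one(shapefile, target_fc):
    # the with-blocks guarantee the cursors (and their locks) are released
    with arcpy.da.SearchCursor(shapefile, ["SHAPE@", "NAME"]) as s_cur, \
         arcpy.da.InsertCursor(target_fc, ["SHAPE@", "NAME"]) as i_cur:
        for row in s_cur:
            i_cur.insertRow(row)
    # everything local to this function dies when it returns

for shp in [r"C:\data\a.shp", r"C:\data\b.shp"]:
    load_one(shp, r"C:\temp\staging.gdb\merged")
    gc.collect()    # optional nudge between files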
You might be more successful if you could batch the transactions; the cursors are a bit simplistic here, and FME handles this better.
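A rough sketch of what I mean by batching, assuming an enterprise geodatabase target (the connection, feature class and field names are all invented):

import arcpy

sde    = r"C:\connections\prod.sde"
source = r"C:\data\parcels.shp"
target = sde + r"\gis.DBO.parcels"
BATCH  = 10000

edit = arcpy.da.Editor(sde)
edit.startEditing(False, True)        # no undo stack, multiuser mode
edit.startOperation()
with arcpy.da.SearchCursor(source, ["SHAPE@", "PARCEL_ID"]) as s_cur, \
     arcpy.da.InsertCursor(target, ["SHAPE@", "PARCEL_ID"]) as i_cur:
    for i, row in enumerate(s_cur):
        i_cur.insertRow(row)
        if (i + 1) % BATCH == 0:      # commit every BATCH rows as one edit operation
            edit.stopOperation()
            edit.startOperation()
edit.stopOperation()
edit.stopEditing(True)                # save the edits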
I find that using Python and the geoprocessing tools with more than a million records hits some sort of limit, even on my 8-CPU workstation, so I partition the work into chunks of less than 1M records and it completes in a few minutes instead of never. That applies even to aspatial SQL queries.
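For what it's worth, this is the kind of partitioning I do (the table path and chunk size are placeholders):

import arcpy

table = r"C:\temp\staging.gdb\big_table"
CHUNK = 500000                          # keep each pass well under 1M records

oid_field = arcpy.Describe(table).OIDFieldName
max_oid = max(row[0] for row in arcpy.da.SearchCursor(table, [oid_field]))

for start in range(0, max_oid + 1, CHUNK):
    where = "{0} >= {1} AND {0} < {2}".format(oid_field, start, start + CHUNK)
    arcpy.MakeFeatureLayer_management(table, "chunk_lyr", where)
    # ... run the heavy processing against "chunk_lyr" here ...
    arcpy.Delete_management("chunk_lyr")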