I am trying to copy specific datasets, including their indexes, from an Oracle ArcSDE 9.x database to a PostgreSQL geodatabase on AWS.
At present I am using ArcPy, as I have to automate the process.
Specifically, I am using Copy_management. Some datasets take a minute or two to copy, while others take over a day.
I was wondering if CopyFeatures_management would be any quicker to use. If so, does anyone have any metrics?
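For reference, the automation looks roughly like this (a minimal sketch; the connection file paths and dataset names below are made up, not my actual ones):

```python
import arcpy
import os

# Hypothetical connection files and dataset names -- adjust to your environment.
oracle_sde = r"C:\connections\oracle_sde9.sde"   # source Oracle ArcSDE 9.x
aws_pg_sde = r"C:\connections\aws_postgres.sde"  # target PostgreSQL geodatabase on AWS

for name in ["PARCELS", "ROADS"]:
    # Copy replicates the dataset along with its dependent objects,
    # which is why it is used here rather than CopyFeatures.
    arcpy.Copy_management(os.path.join(oracle_sde, name),
                          os.path.join(aws_pg_sde, name))
```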
Copying from a local client to any cloud geodatabase can take some time, and performance will be inconsistent.
What is the version of the GDB in AWS?
10.7.1.2.4 Geodatabase
I have definitely experienced the inconsistencies in performance, which is why I would like to investigate improving the speed, if at all possible.
For better performance, I would highly recommend connecting to the cloud geodatabase only from a machine in the same cloud and region.
You should think about copying to a local file geodatabase (or shapefile), zipping it, uploading it to AWS, and then copying it into the database locally on the cloud machine.
The network handles one big file much better than many small packets.
With big files it is usually much quicker.
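A rough sketch of that idea (the paths and dataset names are made up, and the upload to AWS plus the final load on the cloud machine are only indicated in comments):

```python
import arcpy
import os
import shutil

oracle_sde = r"C:\connections\oracle_sde9.sde"   # hypothetical source connection
staging_dir = r"C:\staging"
fgdb_name = "transfer.gdb"

# 1. Create a local file geodatabase to stage the data.
arcpy.CreateFileGDB_management(staging_dir, fgdb_name)
fgdb = os.path.join(staging_dir, fgdb_name)

# 2. Copy each dataset from Oracle into the local FGDB (stays on the local network).
for name in ["PARCELS", "ROADS"]:
    arcpy.Copy_management(os.path.join(oracle_sde, name),
                          os.path.join(fgdb, name))

# 3. Zip the FGDB so the network moves one large file instead of many small ones.
shutil.make_archive(os.path.join(staging_dir, "transfer"), "zip",
                    staging_dir, fgdb_name)

# 4. Upload transfer.zip to AWS (e.g. to S3 or straight to an EC2 machine),
#    unzip it there, and run the final Copy into the PostgreSQL geodatabase
#    from a machine in the same cloud and region.
```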
Copy—Data Management toolbox | Documentation does a lot more than Copy Features—Data Management toolbox | Documentation.
I would start by comparing what you need to copy and checking whether your data has any of the conditions or properties that would require Copy to be used over Copy Features.
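As a rough illustration of the difference (connection paths and dataset names are hypothetical):

```python
import arcpy
import os

src = r"C:\connections\oracle_sde9.sde"
dst = r"C:\connections\aws_postgres.sde"

# Copy: use when the dataset has dependent objects (for example relationship
# classes or other associated data) that need to come across with it.
arcpy.Copy_management(os.path.join(src, "PARCELS"),
                      os.path.join(dst, "PARCELS"))

# CopyFeatures: writes only the geometry and attributes of a single feature
# class to a new feature class, without those dependencies.
arcpy.CopyFeatures_management(os.path.join(src, "ROADS"),
                              os.path.join(dst, "ROADS"))
```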