
Migrating from Oracle to PostgreSQL 9.2

11-05-2013 01:14 PM
ESRIESRI1
New Contributor II
Hello All,

We have a huge database with several feature classes on Oracle (ArcSDE 10) that we need to migrate to PostgreSQL 9.2. I am new to PostgreSQL, and I could not find a direct way other than exporting to shapefiles and re-importing. The ESRI help describes a method using the pg_dump command; can anyone give a step-by-step procedure, or point me to the correct tutorial for executing it?

Please help me on this.
13 Replies
lauravganges
New Contributor
Regardless of table size or user (schema) size, can the migration occur through ArcCatalog?  I am thinking that moving from one database/OS to another database/OS will best be accomplished by your first suggestion (the drag/drop in ArcCatalog).  The allure of bulk loading (especially to a DBA) is easily overpowered by the fear of losing data during a migration ;o)

Thank you for your clear explanations.
VinceAngelo
Esri Esteemed Contributor
Yes, you can use Desktop to process any table, BUT if the row count is greater than 5 million, I'd recommend using multiple passes out of the table, appending the results on import.
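
For example, here is a minimal ArcPy sketch of the multi-pass idea; the connection files, table names, and row counts are hypothetical, and the OBJECTID ranges assume a dense, sequential key:

import arcpy

# Hypothetical connection files and names -- adjust to your environment
src = r"C:\connections\oracle_sde.sde\OWNER.PARCELS"
tgt_ws = r"C:\connections\postgres_sde.sde"
tgt = tgt_ws + r"\owner.parcels"

chunk = 1000000      # rows per pass
total = 12000000     # assumed source row count

for lo in range(0, total, chunk):
    where = "OBJECTID > {0} AND OBJECTID <= {1}".format(lo, lo + chunk)
    lyr = "pass_{0}".format(lo)
    arcpy.MakeFeatureLayer_management(src, lyr, where)
    if lo == 0:
        # First pass creates the target feature class in PostgreSQL
        arcpy.FeatureClassToFeatureClass_conversion(lyr, tgt_ws, "parcels")
    else:
        # Later passes append into the table the first pass created
        arcpy.Append_management(lyr, tgt, "TEST")
    arcpy.Delete_management(lyr)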

It's actually rather difficult to lose data in this sort of migration (partially because you need to come up with a validation procedure to confirm that data hasn't been lost).
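
A row-count comparison per feature class is the natural core of that validation. A minimal sketch, again with hypothetical connection files:

import arcpy

# Hypothetical source/target pairs -- list every migrated feature class
pairs = [
    (r"C:\connections\oracle_sde.sde\OWNER.PARCELS",
     r"C:\connections\postgres_sde.sde\owner.parcels"),
]

for src, dst in pairs:
    n_src = int(arcpy.GetCount_management(src).getOutput(0))
    n_dst = int(arcpy.GetCount_management(dst).getOutput(0))
    status = "OK" if n_src == n_dst else "MISMATCH"
    print("{0}: {1} -> {2} [{3}]".format(dst, n_src, n_dst, status))

Row counts alone won't catch damaged geometries, so extent checks or spot comparisons of sample features are worth adding on top.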

- V
lauravganges
New Contributor
One last question: when you refer to "bulk-loading", which ESRI tool/method are you indicating should be used?  There are several tables with more than 5 million rows.
Also, in the database world, a simple export and import of data defragments and optimizes it.  Is this true for the bulk-loading method as well?
Thank you SO much...
VinceAngelo
Esri Esteemed Contributor
"Simple bulk-loading" is a euphemism for "anything but Desktop."

A simple export/import has no impact on spatial fragmentation.  Spatial fragmentation occurs when data is loaded in other than spatial order (usually over time).  Imagine a lightning_strike table that contains all cloud-to-ground events over a continent for several years; this table is very likely to be spatially fragmented.  Spatial queries will hit the spatial index, which will quickly determine the rows, and then the database will effectively need to run a full table scan (possibly multiple times over) fetching rows that are randomly distributed across the table.  Spatial defragmentation is the process of using spatial queries or a representative proxy (e.g., county FIPS code) to extract data in other than the original order, with the goal of improving spatial query performance.

Of course, while reordering data by a spatial component will improve spatial queries, it may negatively impact ordering for other queries (e.g., strike_time), so sometimes compound ordering is necessary to preserve query performance across multiple attributes.
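
As one concrete illustration (a sketch, not a prescribed procedure): the ArcGIS Sort geoprocessing tool can rewrite a feature class along a Peano space-filling curve, so rows that are near each other on the ground land near each other on disk. The names here are hypothetical, and sorting on the Shape field may require an Advanced license:

import arcpy

# Hypothetical feature classes -- substitute your own
src = r"C:\connections\postgres_sde.sde\owner.lightning_strike"
dst = r"C:\connections\postgres_sde.sde\owner.lightning_strike_peano"

# Rewrite the table ordered along a Peano curve; for compound ordering,
# additional sort fields (e.g. strike_time) can follow the Shape field
arcpy.Sort_management(src, dst, [["Shape", "ASCENDING"]], "PEANO")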

- V