Database migration from Oracle 12c to Postgres

02-10-2023 02:06 AM
GDT-Younup
New Contributor

Hello,

We have a geodatabase in Oracle 12c that we are planning to migrate to Postgres.
Are there any guidelines on how to proceed?

Ideally, we would like to create a new SDE schema in our new Postgres database so we can restart from a clean base, and then migrate the schemas that contain our geographic data with a migration tool (ora2pg).

  • Is that possible?
  • If so, how can we link our migrated schemas to the new SDE schema?

Thanks in advance for your help.


5 Replies
George_Thompson
Esri Notable Contributor
[Accepted Solution]

The best (and safest) way I have seen this done is to use the ArcGIS tools (copy/paste) to move the data from the old Oracle schemas (the data owners) into the target schemas in the new PostgreSQL database. That workflow ensures the data is converted into the correct repository ("SDE / GDB") tables.

I have not used the ora2pg tool before, but I suspect it could run into issues with the repository tables.

If your Oracle EGDB is versioned, make sure to get all the versioned edits into the default version / base tables before copying the data over.
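
For reference, a scripted equivalent of that copy workflow with arcpy might look like the sketch below (the connection-file paths are hypothetical placeholders, not anything from this thread):

    # Minimal sketch: copy feature classes from an Oracle geodatabase
    # connection to a PostgreSQL one. "oracle.sde" and "postgres.sde"
    # are hypothetical connection files.
    import os
    import arcpy

    arcpy.env.workspace = r"C:\connections\oracle.sde"
    target = r"C:\connections\postgres.sde"

    for fc in arcpy.ListFeatureClasses():
        # Copy loads the data and registers it in the target
        # geodatabase's repository ("SDE / GDB") tables. Note that
        # fully qualified names (db.owner.name) may need adjusting
        # between the two platforms.
        arcpy.management.Copy(fc, os.path.join(target, fc))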

--- George T.
VinceAngelo
Esri Esteemed Contributor

The sde schema shouldn't be involved at all in any cross-DBMS geodatabase migration; at first, only the table owner(s) are.

I actually prefer to use SQL to define the tables *exactly* as desired, then Append from the source (or even use FeatureClassToFeatureClass to generate a near clone), and populate the contents of the new table(s) via INSERT INTO newtableN(columnlist) SELECT columnlist FROM temptableN.

This latter approach allows the flexibility to restructure the table, or to optimize its physical order via an ORDER BY (or both).
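
In sketch form, the SQL side of that pattern could look like this (table, column, and connection names are placeholders; here run through psycopg2, which is just one way to execute the SQL):

    # Hypothetical sketch: define the target table exactly as desired,
    # then load it from the staging near-clone with INSERT ... SELECT.
    import psycopg2

    conn = psycopg2.connect("dbname=gis user=dataowner")  # placeholder DSN
    cur = conn.cursor()

    # Define the new table exactly as desired (simplified; a real
    # feature class also needs its geometry column, e.g. PostGIS geometry).
    cur.execute("""
        CREATE TABLE newtable1 (
            objectid integer NOT NULL,
            name     text,
            shape    geometry
        )
    """)

    # Populate it, restructuring columns and optimizing physical
    # order via ORDER BY as part of the same statement.
    cur.execute("""
        INSERT INTO newtable1 (objectid, name, shape)
        SELECT objectid, name, shape
        FROM temptable1
        ORDER BY objectid
    """)
    conn.commit()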

Of course, any versioned edits would need to be reconciled and posted before starting the transfer, and the feature classes should only be registered as versioned in the target database after confirming a successful transfer.

I recently did this with 120 million rows across a score of tables, optimizing the partitioning and adding timestamp columns for insert and update. I used Python to transfer the data, and added a parallel table with a SHA-1 hash to fingerprint the new contents, then compared the hash with a hash of the source data to confirm successful transfer.
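
The fingerprinting idea, roughly, in Python (a simplified sketch using the standard hashlib module; the essential point is that both sides must serialize and order rows identically):

    # Sketch of per-row SHA-1 fingerprinting with hashlib.
    import hashlib

    def row_sha1(row):
        # Serialize the row's values deterministically, then hash.
        data = "|".join("" if v is None else str(v) for v in row)
        return hashlib.sha1(data.encode("utf-8")).hexdigest()

    def table_sha1(rows):
        # Combine the ordered per-row digests into one table
        # fingerprint; rows must come back in the same ORDER BY
        # on both the source and target databases.
        h = hashlib.sha1()
        for row in rows:
            h.update(row_sha1(row).encode("utf-8"))
        return h.hexdigest()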

- V

yockee
Frequent Contributor

Hi Vince,

I do not quite get this statement: "...and populate the contents of the new table(s) via INSERT INTO newtableN(columnlist) SELECT columnlist FROM temptableN." Did you mean: create an empty FC first and then load the data with an INSERT INTO query?

 

Thanks

VinceAngelo
Esri Esteemed Contributor

Yes, I mean create an empty table (not yet registered), and populate it via SQL (then register it).
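
In sketch form (the connection path and table name are placeholders):

    # After creating and loading the table with plain SQL, register
    # it with the geodatabase so it participates in the GDB
    # repository. Register With Geodatabase is the standard tool
    # for making an existing table visible to the geodatabase.
    import arcpy

    table = r"C:\connections\postgres.sde\gis.dataowner.newtable1"
    arcpy.management.RegisterWithGeodatabase(table)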

- V

yockee
Frequent Contributor

Alright, thanks Vince.

And how about "...added a parallel table with a SHA-1 hash to fingerprint the new contents, then compared the hash with a hash of the source data to confirm successful transfer."? Could you point me to an article on how to do the SHA-1 hashing process?

Thanks
