
Migration of Oracle 11g DB from Solaris UNIX to RHEL 6.4

09-10-2013 01:43 PM
ToddMcNeil
Emerging Contributor
This might be a long post, but I have a project that involves moving three database environments (development, test, and production) from Solaris UNIX to RHEL 6.4 and upgrading the Oracle 11g database from 11.2.0.1.0 to 11.2.0.3.  Oracle is no longer releasing patches for 11.2.0.1, so our IT department wants to move to a version that is currently supported by Oracle and to take advantage of Linux.  Each of our current environments has one Windows 2003 R2 application server running ArcGIS Server 10, while the database server has ArcSDE 10 installed.  As part of the migration, the database environment will also be moved to ArcSDE 10 SP 3.

We are currently testing the migration of our test environment first, since it gets relatively little use (only during testing and application/database releases) and IT can keep it running in parallel while we test the new environment.

I wanted to outline the steps that our IT support staff have taken before explaining the few issues we've encountered.  I had already opened an incident with Esri tech support because our first stab at migrating the database went badly and we had to start over.  Tech support provided the following list of steps to help us through it.

1) Install Oracle 11g 11.2.0.3 on RHEL 6.4 box.
2) Install ArcSDE 10 with SP 3 patch.
3) Create SDE admin account with same credentials as our current environment.
4) Import the SDE schema using impdp with logging on.
5) Import remaining schemas using impdp with logging on.
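For reference, the import ordering in steps 4 and 5 can be sketched as follows.  This is only a sketch: the schema names (other than SDE), dump file names, and the Oracle directory object are hypothetical, and the impdp command lines are assembled but never executed here.  The key point is that the SDE schema is imported first, so its types and metadata tables exist before any user feature classes load.

```python
# Sketch of the Data Pump import order in steps 4 and 5 above.
# Dump files, directory object, and user schema names are hypothetical.

SDE_SCHEMA = "SDE"
USER_SCHEMAS = ["CPDMAPSO", "OTHER_SCHEMA"]  # hypothetical user schemas

def impdp_command(schema, dumpfile, logfile, directory="DATA_PUMP_DIR"):
    """Build an impdp invocation with logging on (not executed here)."""
    return (f"impdp system schemas={schema} directory={directory} "
            f"dumpfile={dumpfile} logfile={logfile}")

# The SDE schema must be imported first so that the ST_GEOMETRY types and
# SDE metadata tables exist before any user feature classes are loaded.
commands = [impdp_command(SDE_SCHEMA, "sde.dmp", "sde_imp.log")]
commands += [impdp_command(s, f"{s.lower()}.dmp", f"{s.lower()}_imp.log")
             for s in USER_SCHEMAS]

for cmd in commands:
    print(cmd)
```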

The first issue we saw after the last step was that every feature class raised the same ORA errors on import:
ORA-39083: Object type INDEX failed to create with error:
ORA-20001: Object CPDMAPSO.CEN_TRACT_CENTROID.SHAPE exists in ST_GEOMETRY_COLUMNS.
Failing sql is:
BEGIN 
SDE.st_type_export.validate_spref('CPDMAPSO','WGS_1984_Web_Mercator_Auxiliary_Sphere',32,-20037700,-30241100,10000,0,1,0,1,999900000000000000000000000000000000,0,0,10845971.404,999900000000000000000000000000000000,-999900000000000000000000000000000000,999900000000000000000000000000000000,-999900000000000000000000000000000000,3857,'WGS_1984_Web_Me


I used a combination of sdelayer -o describe_long and ArcCatalog to examine one of the feature classes and determined that the SRID, Auth SRID, spatial index, spatial index parameters, and layer envelope differed from those of the same feature class in the current environment.  The spatial index for the layer had been created, but its parameters differed from those of the same layer in the current environment.  How could that be?  Stranger still, ArcCatalog could not see that the spatial index had been created, and if I tried to create it, it would not stick.  So, naturally, I continued with tech support, and it was determined that these Oracle errors are caused by a bug that has been addressed in 10.1 SP1 and 10.2.1, but we are not moving to 10.1 until this winter, so I still have to come up with a workaround.  The one thing I knew was identical between the two environments was the XY coordinate system, which just added to the confusion.

I thought it might be a good idea to test importing an sdx export of the same layer under a new name.  Once it was imported, I used sdelayer -o describe_long and verified that all of its parameters were identical to those of the same layer in our current environment.  This did not make me feel any better; I thought something was wrong, and I did not want to have to import all of the sdx files just to fix the problem.

Tech support suggested ways to recreate the spatial index, the spatial index (SI) parameters, and the feature and table statistics using a combination of Python scripts, or I could create a batch script with sdelayer -o load_only_io, sdelayer -o normal_io, and sdetable -o update_dbms_stats commands.  I was provided two Python scripts: the first put each feature class into load_only_io mode to drop the existing spatial index and then switched it to normal_io to recreate the spatial index and its parameters; the second used Analyze_management to rebuild the table statistics (similar to update_dbms_stats).  Once done, I used ArcCatalog to verify that the SI parameters appeared in the properties dialog; however, they were still different from those of the same layer in the current environment.  So I was not sure it had worked, since we still had to deal with the layer envelope, the SRID, and the Auth SRID.
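The rebuild pass described above can be sketched like this.  The connection arguments and feature class names are hypothetical, and the sdelayer/sdetable command lines are only assembled here (the real scripts would run them, for example via subprocess, and call arcpy's Analyze_management afterwards):

```python
# Sketch of the spatial-index rebuild pass described above.
# Connection arguments and feature class names are hypothetical;
# the commands are assembled, not executed.

CONN = "-i 5151 -s rhelhost -u sde"  # hypothetical connection arguments
FEATURE_CLASSES = [("CPDMAPSO.CEN_TRACT_CENTROID", "SHAPE")]

def rebuild_commands(table, column):
    return [
        # load_only_io drops the layer's existing spatial index...
        f"sdelayer -o load_only_io -l {table},{column} {CONN}",
        # ...and normal_io recreates it, regenerating the SI parameters.
        f"sdelayer -o normal_io -l {table},{column} {CONN}",
        # Then refresh the DBMS statistics for the table.
        f"sdetable -o update_dbms_stats -t {table} {CONN}",
    ]

for table, column in FEATURE_CLASSES:
    for cmd in rebuild_commands(table, column):
        print(cmd)
```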

Tech support suggested that we start testing the new environment with spatial selections and queries.  I ran queries using ArcMap's Select By Location, selecting points that fell inside a designated county.  The selection ran as fast as the same selection against the data in the current environment, the data rendered just as quickly, and the spatial envelope and coordinate system did not seem to be an issue.

Tech support came back and said that SRIDs at version 10.0 do not necessarily reflect the EPSG codes as they do at 10.1 and later releases.  The SRID is populated from a sequence.  If there are differences in the layer envelope or M values, the SRID will change.  Since we saw some slight changes in the layer envelope, that is the likely reason for the difference in SRIDs.  This should not affect the functionality of the feature class, so the integrity of the data should still be good.  Tech support also referenced some documentation that explains this, but the link concerns Oracle Spatial, so I am not sure how relevant it is: http://edndoc.esri.com/arcsde/9.0/capi_concepts/oracle_spatial_geomtypes.htm:

"ArcSDE creates new SRID values when new rows are inserted into SDE.SPATIAL_REFERENCES. New rows are inserted and a new ArcSDE SRID is generated when a new layer requires a spatial reference that does not already exist in the table. ArcSDE predefined projection strings are maintained in ArcSDE's projection engine. They are inserted into SDE.SPATIAL_REFERENCES as needed. Other information also appears in this table, such as offset and scale values needed by ArcSDE. Therefore, there may be multiple entries in SDE.SPATIAL_REFERENCES that contain the same projection string along with different values in the other columns."
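The quoted behavior can be illustrated with a toy model (the column set and values here are simplified and hypothetical, not the real SDE.SPATIAL_REFERENCES layout): a new SRID is drawn from a sequence only when no existing row matches the full spatial reference, so a difference in envelope or offset values yields a new SRID even for an identical projection string.

```python
# Toy model of SRID assignment in SDE.SPATIAL_REFERENCES as described above.
# The key columns are simplified and hypothetical.
import itertools

srid_sequence = itertools.count(1)   # stand-in for the Oracle sequence
spatial_references = {}              # (proj_string, falsex, falsey, xyunits) -> SRID

def get_or_create_srid(proj_string, falsex, falsey, xyunits):
    key = (proj_string, falsex, falsey, xyunits)
    if key not in spatial_references:   # no exact match -> new row, new SRID
        spatial_references[key] = next(srid_sequence)
    return spatial_references[key]

# Same projection string, but a slightly different offset in the third call:
a = get_or_create_srid("WGS_1984_Web_Mercator_Auxiliary_Sphere", -20037700, -30241100, 10000)
b = get_or_create_srid("WGS_1984_Web_Mercator_Auxiliary_Sphere", -20037700, -30241100, 10000)
c = get_or_create_srid("WGS_1984_Web_Mercator_Auxiliary_Sphere", -20037508, -30241100, 10000)
print(a, b, c)  # a == b, but c gets a new SRID
```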

We have several applications that use map services, so the next step is to test some of these map services with the current data and see how they perform.  In the meantime, it seems to me that the migration is just about finished and I can outline the steps the IT support staff need to complete for the remaining two environments.  This turned out to be a very long post, but I hope someone will take the time to read it and let me know whether we are on the right path or whether there are serious problems that still need to be addressed.

Todd McNeil
Geographer

U.S. Department of Housing and Urban Development
Office of Policy Development and Research
451 7th Street S.W., Room 8126
Washington, DC 20410

Email: Todd.McNeil@hud.gov
3 Replies
VinceAngelo
Esri Esteemed Contributor
Using 'exp/expdp' and 'imp/impdp' always seemed a sketchy way to transfer an
entire database, especially with RMAN available.

- V
ToddMcNeil
Emerging Contributor
Using 'exp/expdp' and 'imp/impdp' always seemed a sketchy way to transfer an
entire database, especially with RMAN available.

- V


Vince,

If RMAN is the better way to migrate to a new database, what information would you suggest I pass along to my IT support staff?  Please keep in mind that Esri tech support recommended that 'exp/expdp' and 'imp/impdp' be used, so I would think that other agencies or organizations have had similar issues and had them resolved.

Todd
VinceAngelo
Esri Esteemed Contributor
Back in the day, when SDEBINARY was the only storage option, it didn't really
matter how you transferred data (as long as the SDE user was added first).
Now, with custom datatypes, the order in which tables are loaded can cause
all manner of issues, yet you can't trust exp/expdp to sequence the exports
for you.  It is for this reason that most of my clients have used RMAN and/or
Data Guard to transfer an entire database instance, rather than piecemeal
with export utilities.

There is another school of thought in the migration world.  It holds that the
migration window is a prime opportunity to fix accumulated errors in the
database structure.  The main problem here is that you need to identify all
the problems up front (e.g., oodles of slightly incompatible coordinate
references, spatial fragmentation, et al.) and engineer solutions for them.
There are limits to what can be transferred (e.g., versioned tables with
the version tree intact), and it can take much longer to load the database
in optimized form, but the goal is to improve query performance and
database stability, and that doesn't come without a price.

You seem to have encountered a worst case scenario, where slight abnormalities
in structure have exacerbated the difficulties in data transfer between different
releases.  Unfortunately, there isn't much that can be done about it at this
stage of the proceedings (except to test rigorously).

Good luck!

- V