
Convert varchar2 columns to nvarchar2

12-22-2011 01:45 AM
ShawnHolyoak
Deactivated User
Is there a simple way to convert existing varchar2 columns in Oracle to nvarchar2?  The application we're using requires nvarchar2 columns, but of course all of our data was created when the DEFAULTS keyword set the string column type to varchar2.  Doing it manually takes forever, and I don't want to have to write an add-in to do the conversion if I don't have to.  Am I missing something simple?
8 Replies
ShawnHolyoak
Deactivated User
I should mention that I can't just export and import the feature classes, as our application programmers used ObjectId as the unique ID.
VinceAngelo
Esri Esteemed Contributor
What version of ArcSDE are you using?  Which service pack has been applied?  What geometry
storage type are you using?

What version of ArcGIS are you using?  Which Service Pack?

What application requires NVARCHAR2 columns? (Last I heard, Oracle recommended *against*
using NVARCHAR2 if most of the characters are 7-bit ASCII.)

Doing this sort of conversion in-situ is going to fracture your table, causing lots of fragmentation.
The best way to implement the change is to copy the table contents with a "CREATE TABLE backup
AS SELECT * FROM original", TRUNCATE the original, drop the indexes, do ALTERs until the schema
is correct, then INSERT via a SELECT and rebuild indexes.  You'll probably want to add a proper
foreign key based on the old OBJECTID while you're at it.  In a perfect world, you'd reorder the
rows to defragment the features spatially at the same time (via an ORDER BY during INSERT), but
that depends on geometry storage.
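
For illustration only, the sequence for a single table might look roughly like this
(the table, column, and index names are placeholders, and the NVARCHAR2 lengths
would need to match your data):

    -- 1. Preserve the data
    CREATE TABLE parcels_bak AS SELECT * FROM parcels;

    -- 2. Empty the original and drop its attribute indexes
    TRUNCATE TABLE parcels;
    DROP INDEX parcels_owner_idx;

    -- 3. Change the column types while the table is empty
    ALTER TABLE parcels MODIFY (owner_name  NVARCHAR2(64));
    ALTER TABLE parcels MODIFY (street_name NVARCHAR2(100));

    -- 4. Reload, ordering the rows if you want to reduce spatial fragmentation
    INSERT INTO parcels SELECT * FROM parcels_bak ORDER BY objectid;
    COMMIT;

    -- 5. Rebuild the indexes
    CREATE INDEX parcels_owner_idx ON parcels (owner_name);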

- V
ShawnHolyoak
Deactivated User
SDE is version 10, SP2.  We're using SDEBINARY for our geometry storage type.  ArcGIS is version 10, SP3.  Our building permitting application has an SDEListener application which reads GIS data at runtime. That SDEListener is built by our vendor with a requirement for NVARCHAR2 text columns.

Your solution sounds complicated, but doable.  I'm assuming to do it properly I will have to compress back to state 0, to ensure I have no records in the add/delete tables.  It also won't be very quick.
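
I assume I can verify the delta tables are empty with something like the following
(the owner and table name are placeholders, and the A/D table names depend on the
registration id that comes back):

    -- Look up the layer's registration id
    SELECT registration_id FROM sde.table_registry
     WHERE owner = 'GIS' AND table_name = 'PARCELS';

    -- If the id is, say, 57, the delta tables are A57 and D57
    SELECT COUNT(*) FROM gis.a57;
    SELECT COUNT(*) FROM gis.d57;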

Is there a way to fix the fragmentation caused by an in-situ conversion?  I have very little time to do this, and writing a script to simply convert every column on every feature class would be faster for me, based on my lack of experience with your suggestion.
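
The per-column script I have in mind would do roughly the following for each text
column (names here are placeholders); I realize updating every row this way is
exactly the in-situ rewrite you're warning about:

    -- Add a parallel NVARCHAR2 column, copy the data, then swap the names
    ALTER TABLE parcels ADD (owner_name_n NVARCHAR2(64));
    UPDATE parcels SET owner_name_n = owner_name;
    COMMIT;
    ALTER TABLE parcels DROP COLUMN owner_name;
    ALTER TABLE parcels RENAME COLUMN owner_name_n TO owner_name;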

Regardless, thanks.

VinceAngelo
Esri Esteemed Contributor
SDEBINARY storage isn't really supported on Oracle 10g due to issues with LONG RAW deprecation,
and isn't *possible* with 11g (Oracle support for 10g ended in the first half of 2011).  ST_GEOMETRY
is generally faster than SDELOB storage (due to elimination of the F table).

There is no way to defragment tables fractured by in-situ conversion (without truncating and repopulating
them, of course).

Doing the job right is going to take longer than doing it quick, but it should save time in the long run
as the time savings on each query result adds up.  At a minimum, you should start planning for an
11g/ST_GEOMETRY/ArcGIS 10.1 upgrade.

- V
ShawnHolyoak
Deactivated User
Thanks for your answers.  We are in the planning phase of moving to ST_GEOMETRY, for various reasons.  We'll now add another.  I guess I'll also start planning the upgrade for each feature class.  My DBA has confirmed he can do the truncate/alter/insert without problem.  We now just need to prepare our permitting system to use a real unique id, and identify a method of maintaining that unique id.  Thanks again.

VinceAngelo
Esri Esteemed Contributor
You can use a variant of the method ArcSDE uses to maintain OBJECTID rowid columns to
maintain your independent IDs. ArcSDE shifted from a simple sequence because of the
performance cost during bulk insert, but if you're not doing much bulk assignment, then
a simple sequence may be the way to go.
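
Purely as a sketch (object names are placeholders, and the CACHE size should match
your insert volume), that could be as simple as:

    CREATE SEQUENCE permit_uid_seq START WITH 1 INCREMENT BY 1 CACHE 20;

    -- Assign the id at insert time...
    INSERT INTO permits (permit_uid, permit_no)
    VALUES (permit_uid_seq.NEXTVAL, 'BP-2011-0001');

    -- ...or from a trigger, so client applications don't have to supply it
    CREATE OR REPLACE TRIGGER permits_uid_trg
    BEFORE INSERT ON permits FOR EACH ROW
    WHEN (NEW.permit_uid IS NULL)
    BEGIN
      SELECT permit_uid_seq.NEXTVAL INTO :NEW.permit_uid FROM dual;
    END;
    /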

- V

Note: If you use the TRUNCATE/ALTER/INSERT procedure, you'll need to run an 'sdetable
-o describe' as the owner on each table to correct the internal SDE.COLUMN_INFO type
codes.  You'll also want to prototype this with a trivial layer, to make sure ArcGIS doesn't
object to the type change.  This is especially true for versioned tables.
ShawnHolyoak
Deactivated User
We are planning on using sequences, as they're so simple.  Thanks for the warning on needing to run the 'describe'; I would have missed that.  I've run into another concern, though, and that's related to archiving.  What happens when I truncate and insert?  Will those be interpreted as changes to the data by the archiving triggers/procedures?  I know there's no way through the ArcGIS software to turn off and turn back on archiving without creating new archive classes, which won't work for us.  Is there a way to disable the triggers/procedures that make archiving function temporarily, or is that proprietary/wrapped code we don't have access to?  If I can't disable archiving and continue to use the same archive classes, what options do I have for altering the column types for my archived feature classes?  Thanks.

ShawnHolyoak
Deactivated User
Basically, according to ESRI, I have no options.  The archived feature classes will register the truncate and insert as modifications to every row in the table, and will add those to the archived feature classes.  ESRI really needs to fix archiving.