
Preserve GlobalID on Append

11059
13
02-23-2017 08:01 AM
Status: Open
AllanBenvin
Regular Contributor

In our environment we have several databases, including a maintenance environment and a published environment that we replicate data to. Because our maintenance database has feature datasets (edit focused) and our published database has different feature datasets (more consumer focused), we are unable to use geodatabase replication: if data resides in a feature dataset in the source, the same feature dataset must exist in the target. Because of this we have been using our own custom replication. In our replication we still want to maintain GlobalIDs in both databases; we do not want them to be recalculated as they are when you append. To maintain the GlobalIDs we have had to add an additional GUID field to our published data that we can map the GlobalID to when we append new records. This works, but it involves extra logic, an extra column and more processing time.
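The GUID-mapping workaround described above can be sketched roughly as follows. This is only an illustration: the dataset paths and the `GlobalID_Copy` field name are hypothetical, and the field-map construction is one plausible way to wire it up, not the exact script we use.

```python
# Sketch of the workaround: copy each source row's GlobalID into a plain
# GUID column on the target during Append, since Append normally
# regenerates the real GlobalID column. Paths and field names below are
# hypothetical.

try:
    import arcpy  # only available inside an ArcGIS Python environment
except ImportError:
    arcpy = None

SOURCE = r"C:\data\maint.sde\Roads"        # hypothetical maintenance FC
TARGET = r"C:\data\published.sde\Roads"    # hypothetical published FC


def globalid_field_pairs(source_fields):
    """Map the source GlobalID onto the extra GUID column on the target.

    Every other field maps to itself; the target's own GlobalID column
    is left alone because the geodatabase regenerates it on Append."""
    pairs = {}
    for name in source_fields:
        if name.upper() == "GLOBALID":
            pairs[name] = "GlobalID_Copy"   # extra GUID column on target
        else:
            pairs[name] = name
    return pairs


if arcpy:
    pairs = globalid_field_pairs(
        [f.name for f in arcpy.ListFields(SOURCE)])
    mappings = arcpy.FieldMappings()
    for src, tgt in pairs.items():
        fm = arcpy.FieldMap()
        fm.addInputField(SOURCE, src)
        out = fm.outputField
        out.name = tgt              # rename the output to the target field
        fm.outputField = out
        mappings.addFieldMap(fm)
    arcpy.management.Append(SOURCE, TARGET, "NO_TEST", mappings)
```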

I recently stumbled across an environment setting in ArcGIS Pro called PreserveGlobalIDs that can be used during an append operation. This seemed quite useful as it could eliminate all the jiggery pokery we are doing now. I tried it out and there seem to be some limitations that make it pretty much useless.

The documented usage notes are below:

      (from: http://pro.arcgis.com/en/pro-app/tool-reference/environment-settings/preserve-globalids.htm)

  • This is useful for workflows where the Global ID value for the row is required to stay the same when exporting or appending data.
  • For the Append tool, this environment only applies to enterprise geodatabase data and will only work on data that has a Global ID field with a unique index. If the Global ID field does not have a unique index, the tool may fail. To add a unique index to your Global ID field, you can use the Add Attribute Index geoprocessing tool.

The problems I have found with this are:

1) It works only in ArcGIS Pro; it is not available in ArcGIS Desktop 10.5.

2) Requires a unique index on the GlobalID column.

  • When you create a layer in ArcGIS Pro or ArcCatalog and add a GlobalID column, the software automatically adds a non-unique index.
  • ArcGIS Desktop does not allow the user to remove the GlobalID index.
  • The GlobalID index is created as non-unique so that versioning can be supported.
  • A manual change to a system generated index is required to use this option. This manual change is in direct violation of rules enforced by other ESRI software.
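Following the usage note quoted above, a unique index can be added with the Add Attribute Index geoprocessing tool. A rough sketch that first checks whether any existing GlobalID index is non-unique; the dataset path, index name, and helper function are my own, and whether AddIndex succeeds on an existing GlobalID column depends on the geodatabase and versioning state:

```python
# Sketch: check whether the target's GlobalID index is unique before
# relying on preserveGlobalIds, then add a unique index via AddIndex as
# the documentation suggests. Path and index name are hypothetical.
# Note the catch described above: ArcGIS itself creates this index
# non-unique and Desktop does not let you remove it.

try:
    import arcpy  # only available inside an ArcGIS Python environment
except ImportError:
    arcpy = None

TARGET = r"C:\data\published.sde\Roads"  # hypothetical


def non_unique_globalid_indexes(indexes):
    """Return names of indexes that cover GlobalID but are not unique.

    `indexes` is a list of (name, field_names, is_unique) tuples,
    mirroring what arcpy.ListIndexes exposes."""
    return [name for name, fields, is_unique in indexes
            if not is_unique
            and any(f.upper() == "GLOBALID" for f in fields)]


if arcpy:
    info = [(i.name, [f.name for f in i.fields], i.isUnique)
            for i in arcpy.ListIndexes(TARGET)]
    for name in non_unique_globalid_indexes(info):
        print(f"non-unique GlobalID index found: {name}")
    # Add a unique index, per the documented usage note; "GUID_UQ_IDX"
    # is a made-up name.
    arcpy.management.AddIndex(TARGET, ["GlobalID"], "GUID_UQ_IDX",
                              "UNIQUE", "ASCENDING")
```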

I would like to see the preserveGlobalIDs setting supported in ArcGIS Desktop, and also supported on GlobalID fields with the non-unique index that the ESRI software itself creates.

13 Comments
StefanThorn2

Hello Alex,

A late reply, but I managed it a bit differently, because I had to keep the GlobalIDs and apply a coordinate transformation at the same time.

- I imported the 10.5 model into ArcGIS Pro.

- I did your step 6 for all feature classes in the model.

Then I saved it as a Python script, and I had to manually verify that the local variables were written correctly, because this was not done properly by Pro (with some help from a colleague with Python experience).

- I added the Python script to a new model; the nice thing about models is the geoprocessing environment settings, where you can set the transformation of the coordinates.

So I was able to import and transform from a file geodatabase into an ArcGIS Server database.

# Set Geoprocessing environments
arcpy.env.outputCoordinateSystem = "PROJCS['RD_New',GEOGCS['GCS_Amersfoort',DATUM['D_Amersfoort',SPHEROID['Bessel_1841',6377397.155,299.1528128]],PRIMEM['Greenwich',0.0],UNIT['Degree',0.0174532925199433]],PROJECTION['Double_Stereographic'],PARAMETER['False_Easting',155000.0],PARAMETER['False_Northing',463000.0],PARAMETER['Central_Meridian',5.38763888888889],PARAMETER['Scale_Factor',0.9999079],PARAMETER['Latitude_Of_Origin',52.15616055555555],UNIT['Meter',1.0]]"
arcpy.env.transferGDBAttributeProperties = "true"
arcpy.env.transferDomains = "true"
arcpy.env.maintainAttachments = "true"
arcpy.env.qualifiedFieldNames = "false"
arcpy.env.parallelProcessingFactor = "3"
arcpy.env.geographicTransformations = "Amersfoort_To_WGS_1984_4X"
arcpy.env.preserveGlobalIds = "true"

regards,

Stefan

NicoleSulzberger

For a FeatureLayer on ArcGIS Online, I managed to do it with the following steps:

1. Take the FeatureLayer offline -> a local file geodatabase will be created with a feature class in it.

2. Append features from a local feature class into this feature class with PreserveGlobalIds enabled.

3. Sync the FeatureLayer with ArcGIS Online.

AllanBenvin_yyc

As a follow-up to this post, I got an update from ESRI Support on the case I submitted in 2018. This is apparently "As Designed". The problem for us is that any object created using 10.5 or earlier has a non-unique index, and none of these layers are supported by the preserveGlobalIDs setting. Geodatabase upgrades do not update these indexes. The argument here is that ESRI does not update user data; of course, the indexes are not user created, they were created by the software.

If you have preserveGlobalIDs set to True and the source data does not have a GlobalID, it will fail in one of two ways:

Scenario 1: target created by 10.5 or earlier, has a non-unique GlobalID index:
  Result: the data gets appended, but the GlobalIDs are all set to '{00000000-0000-0000-0000-000000000000}', so the data is bad.

Scenario 2: target created by Pro, has a unique GlobalID index:
  Result: the data fails to append; since there is no GlobalID in the source, it tries to put '{00000000-0000-0000-0000-000000000000}' into the target GlobalID and fails due to the unique constraint.
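The two scenarios above, plus the working case, can be condensed into a small decision helper. This is pure illustration; the function name and return strings are mine, not an Esri API:

```python
# Sketch: predict the Append outcome under preserveGlobalIds=True, based
# on the two failure scenarios described above. Pure logic only; the
# names and return values are illustrative.

NULL_GUID = "{00000000-0000-0000-0000-000000000000}"


def predict_append(source_has_globalid, target_index_is_unique):
    """Expected outcome of Append with preserveGlobalIds enabled."""
    if source_has_globalid:
        if target_index_is_unique:
            return "preserved"  # GlobalIDs carry across to the target
        # Layers with the old non-unique index are not supported.
        return "not supported (non-unique index)"
    if target_index_is_unique:
        # Scenario 2: the all-zero GUID violates the unique constraint.
        return "fails: unique constraint on " + NULL_GUID
    # Scenario 1: rows land with all-zero GlobalIDs, i.e. bad data.
    return "appends with " + NULL_GUID
```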
 
We have over 4000 layers created over the last 20 years with different versions, and we count on upgrades to modernize our databases. Apparently, the recommendation is just to recreate everything with the newest version of Pro, or to update the indexes using SQL, which is unsupported. The indexes on GlobalID get created differently depending on whether the data is branch or traditionally versioned and archived; the base tables, delta tables and archive tables all have different indexes and rules.
 
Below is a script I used in Oracle to update GlobalID indexes to unique for the required tables. The script takes a schema name and spools out an SQL script that will drop/create the indexes. It also includes the original index statement as a backup.
 

--Generates sql statements to make globalid UUID indexes unique.
-- code is for Oracle
ACCEPT OwnerName char prompt 'Enter owner of the source table : ';

set echo on;
set serveroutput on size 1000000;

SET ECHO off
SET NEWPAGE 0
SET SPACE 0
SET LINESIZE 13000
SET PAGESIZE 0
SET FEEDBACK OFF
SET HEADING OFF
SET VERIFY OFF
SET TRIMSPOOL ON
set termout ON
EXECUTE DBMS_OUTPUT.ENABLE(1000000);

-- Retrieve the instance name and store it in a variable
COLUMN DB_NAME NEW_VALUE InstanceName
SELECT sys_context('USERENV', 'DB_NAME') AS DB_NAME FROM dual;

-- Dynamically name the spool file after the instance with "GLOBALID_IDX" and OwnerName in the name, and output to C:\TEMP folder
SPOOL C:\TEMP\&&InstanceName.GLOBALID_IDX.&&OwnerName..log


DECLARE
vOwner VARCHAR2(50) := UPPER('&&OwnerName');
vTablespaceName VARCHAR2(80) := '';
cnt NUMBER := 0;
colName VARCHAR2(60) := '';
sqlCols VARCHAR2(4000) := ' ';
intStart NUMBER:=1;
intLast NUMBER :=0;
intTabIdxCount NUMBER := 1;
PrevTabName VARCHAR2(100) := '';
TYPE v_arr IS VARRAY(950) of VARCHAR2(50);
vCurrentCreateString VARCHAR2(4000) := '';
vDropString VARCHAR2(4000) := '';
vNewCreateUniqueString VARCHAR2(4000) := '';
colArr v_arr;
vIndName VARCHAR2(100) := '';

CURSOR Indcur IS
-- omit all delta and active archive tables.
SELECT DISTINCT AI.OWNER, AI.INDEX_NAME,AI.TABLE_NAME,
AI.TABLE_TYPE,AI.UNIQUENESS, AI.STATUS,
AI.PARAMETERS--,AI.TABLESPACE_NAME
FROM all_indexes AI
WHERE AI.OWNER = vOwner
--IGNORE ANY INDEXES THAT ARE ALREADY UNIQUE
AND AI.UNIQUENESS != 'UNIQUE'
AND AI.INDEX_NAME LIKE '%UUID%'
--IGNORE ANY OBJECTS THAT ARE ARCHIVE TABLES
AND AI.OWNER||'.'||AI.TABLE_NAME NOT IN (SELECT TR2.OWNER||'.'||TR2.TABLE_NAME
FROM SDE.TABLE_REGISTRY TR1,
SDE.SDE_ARCHIVES A,
SDE.TABLE_REGISTRY TR2
WHERE TR1.REGISTRATION_ID = A.ARCHIVING_REGID
AND TR2.REGISTRATION_ID = A.HISTORY_REGID
AND A.ARCHIVING_REGID <> A.HISTORY_REGID
AND UPPER(TR2.TABLE_NAME) = AI.TABLE_NAME
AND UPPER(TR2.OWNER) = AI.OWNER )
--IGNORE ANY OBJECTS THAT ARE delta tables
AND AI.OWNER||'.'||AI.TABLE_NAME NOT IN (SELECT TR.OWNER||'.A'||TR.REGISTRATION_ID
FROM SDE.TABLE_REGISTRY TR )
--Only include objects where that have a globalid that is defined as globalid
AND AI.OWNER||'.'||
AI.TABLE_NAME IN (SELECT DISTINCT CR.OWNER||'.'||CR.TABLE_NAME
FROM SDE.COLUMN_REGISTRY CR
WHERE CR.SDE_TYPE = 12
--AND COLUMN IS GLOBALIDFIELDNAME
AND UPPER(COLUMN_NAME) = (SELECT UPPER(
XMLCast (
XMLQUERY ('*/GlobalIDFieldName'
PASSING xmltype.createxml (GI.DEFINITION) RETURNING CONTENT) AS VARCHAR2 (50)
)) AS "GLOBALIDFIELDNAME"
FROM SDE.GDB_ITEMS_VW GI
WHERE GI.NAME = AI.OWNER||'.'||AI.TABLE_NAME
)
)
ORDER BY AI.TABLE_NAME,AI.INDEX_NAME
;

CURSOR Colcur (IndexOwner IN VARCHAR2, IndexName IN VARCHAR2) IS
SELECT AIC.INDEX_OWNER, AIC.INDEX_NAME, AIC.COLUMN_NAME,AIC.COLUMN_POSITION,
AIC.DESCEND
FROM ALL_IND_COLUMNS AIC
WHERE AIC.INDEX_OWNER = IndexOwner
AND AIC.INDEX_NAME = IndexName
ORDER BY AIC.INDEX_NAME,AIC.COLUMN_POSITION
;

ind_rec Indcur%ROWTYPE;
col_rec Colcur%ROWTYPE;

BEGIN
DBMS_OUTPUT.PUT_LINE('-- Export indexes on Non-SDE registered table where Owner = '||vOwner);

OPEN Indcur;
LOOP
FETCH Indcur into ind_rec;

EXIT WHEN Indcur%NOTFOUND;

vCurrentCreateString := '';
--Initialize vars
sqlCols := '';
colName := '';

--GET vIndName simply
vIndName := ind_rec.INDEX_NAME;

--get cols from sub cursor
cnt := 1;
colArr := v_arr();
open Colcur(ind_rec.OWNER, ind_rec.INDEX_NAME);
LOOP
FETCH Colcur into col_rec;
EXIT WHEN Colcur%NOTFOUND;
colName := col_rec.COLUMN_NAME;
sqlCols := sqlCols||','||colName;
colArr.Extend(1);
colArr(cnt) := colName;
cnt := cnt + 1;

END LOOP;
CLOSE Colcur;
--START OF WRITE OUT COLUMNS FOR TARGET TABLE;
vCurrentCreateString := 'CREATE ' ||
CASE
WHEN ind_rec.uniqueness = 'UNIQUE' THEN 'UNIQUE '
ELSE ''
END ||
'INDEX ' || ind_rec.OWNER || '.' || ind_rec.index_name ||
' ON ' || ind_rec.OWNER || '.' || ind_rec.table_name || ' (';

vCurrentCreateString := vCurrentCreateString || ' ';

FOR i in 1..colArr.Count
LOOP
IF i = colArr.Count THEN
vCurrentCreateString := vCurrentCreateString||''||colArr(i) ;
--dbms_output.put_line(' '||colArr(i) );
ELSE
--dbms_output.put_line(' '||colArr(i)||',');
vCurrentCreateString := vCurrentCreateString||''||colArr(i)||',';
END IF;
END LOOP;
-- END OF WRITE OUT COLUMNS

--terminate the string
vCurrentCreateString := vCurrentCreateString||');';

DBMS_OUTPUT.PUT_LINE('');

--set the vCurrentCreateString by editing the current string
IF INSTR(vCurrentCreateString, 'UNIQUE') = 0 THEN
vNewCreateUniqueString := REPLACE(vCurrentCreateString,'CREATE INDEX','CREATE UNIQUE INDEX');
vDropString := 'DROP INDEX '|| ind_rec.OWNER||'.'||vIndName||';';
DBMS_OUTPUT.PUT_LINE('--REQUIRES UNIQUE INDEX');
ELSE
DBMS_OUTPUT.PUT_LINE('--NO CHANGES REQUIRED');
vNewCreateUniqueString:= '';
vDropString := '';
END IF;


DBMS_OUTPUT.PUT_LINE('--backup of current index: '||vCurrentCreateString);
DBMS_OUTPUT.PUT_LINE('/*drop index index */ '||vDropString);
DBMS_OUTPUT.PUT_LINE('/*create unique index */ '||vNewCreateUniqueString);


PrevTabName := ind_rec.TABLE_NAME;
cnt := cnt + 1;

END LOOP;
CLOSE Indcur;

END;
/

SPOOL OFF;