Append tool crashing in ArcPy if arcpy.env.preserveGlobalIds = True?

11-09-2019 02:36 PM
DavidLindsey2
New Contributor II

So I've written a Python script called driver.py (using the Python environment from Pro 2.4.2) that appends feature classes from several SDE databases into a single coalesced database. All databases have matching schemas. My goal is to preserve Global IDs in all output. The script performs several tasks:
1) It creates an empty file geodatabase.

2) The necessary feature classes from the first SDE database are exported to the output file GDB (with arcpy.FeatureClassToFeatureClass_conversion).

3) The remaining SDE database feature classes are then appended to the exports from the first SDE's feature classes.

Here's the script:

import arcpy
import os

sde_folder = "My\\SDE\\Folder\\Path"

output_folder = "My\\Output\\Folder\\Path"

gdb_name = "testing_gdb.gdb"
gdb_path = os.path.join(output_folder, gdb_name)

schemaType = "TEST"  # or "NO_TEST"
fieldMappings = ""
subType = ""

if arcpy.Exists(gdb_path):

    print("GDB already exists")

else:

    arcpy.CreateFileGDB_management(output_folder, gdb_name)

arcpy.env.workspace = sde_folder

folder_Connections_SDE = arcpy.ListWorkspaces("*", "SDE")

# For each SDE connection in the workspace connections folder...
for sdeDatabase in folder_Connections_SDE:

    arcpy.env.workspace = sdeDatabase
    arcpy.env.preserveGlobalIds = True

    fc_list = arcpy.ListFeatureClasses()

    if sdeDatabase == folder_Connections_SDE[0]:

        print("First SDE database")

        for fc in fc_list:

            if "QC_REPORTS" in fc or "REPLICATION_BND" in fc:

                print("Skipped: " + fc)
                continue

            # Strip the database/owner qualifiers (db.owner.name -> name)
            output_name = fc.rsplit(".", 1)[-1]
            print(output_name)
            arcpy.env.overwriteOutput = True
            arcpy.FeatureClassToFeatureClass_conversion(fc, gdb_path, output_name)

    else:

        print(sdeDatabase)

        for fc in fc_list:

            if "QC_REPORTS" in fc or "REPLICATION_BND" in fc:

                print("Skipped: " + fc)
                continue

            output_name = fc.rsplit(".", 1)[-1]

            input_feature_class_path = os.path.join(sdeDatabase, fc)
            output_feature_class_path = os.path.join(gdb_path, output_name)

            arcpy.Append_management(input_feature_class_path, output_feature_class_path, schemaType, fieldMappings, subType)
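(Aside: one fragile spot in scripts like this is deriving the unqualified output name from a fully qualified SDE feature class name such as database.owner.name. Indexing a fixed position like rsplit(".")[2] breaks when the qualifier count varies. A small pure-Python sketch, with made-up names:)

```python
def unqualified_name(fq_name):
    """Return the last dot-separated token of a fully qualified name.

    Works for 'db.owner.Roads', 'owner.Roads', or plain 'Roads',
    unlike indexing a fixed position such as rsplit(".")[2].
    """
    return fq_name.rsplit(".", 1)[-1]

print(unqualified_name("gisdb.dbo.Roads"))  # -> Roads
print(unqualified_name("dbo.Roads"))        # -> Roads
print(unqualified_name("Roads"))            # -> Roads
```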

If I run the above script with...

arcpy.env.preserveGlobalIds = False

...the script executes without error and all output is created, but the Global IDs are not preserved.

However, if I change that boolean to True, the script successfully creates the output GDB and successfully completes the Feature Class to Feature Class conversion (while also preserving the Global IDs), but it fails during the Append operation with the following error:

Traceback (most recent call last):
  File "My/Script/Folder/Path/driver.py", line 81, in <module>
    arcpy.Append_management(input_feature_class_path, output_feature_class_path, schemaType, fieldMappings, subType)
  File "C:\Program Files\ArcGIS\Pro\Resources\ArcPy\arcpy\management.py", line 4929, in Append
    raise e
  File "C:\Program Files\ArcGIS\Pro\Resources\ArcPy\arcpy\management.py", line 4926, in Append
    retval = convertArcObjectToPythonObject(gp.Append_management(*gp_fixargs((inputs, target, schema_type, field_mapping, subtype, expression), True)))
  File "C:\Program Files\ArcGIS\Pro\Resources\ArcPy\arcpy\geoprocessing\_base.py", line 506, in <lambda>
    return lambda *args: val(*gp_fixargs(args, True))
arcgisscripting.ExecuteError: ERROR 999999: Something unexpected caused the tool to fail. Contact Esri Technical Support (http://esriurl.com/support) to Report a Bug, and refer to the error help for potential solutions or workarounds.
Failed to execute (Append).

I read this from Esri on the preserveGlobalIds help page:

"For the Append tool, this environment only applies to enterprise geodatabase data and will only work on data that has a Global ID field with a unique index. If the Global ID field does not have a unique index, the tool may fail."

Since I'm utilizing enterprise geodatabases for the append, I'm unsure about the unique index issue. Does anybody know if I'm missing something here? Thanks for any clarification!
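For anyone wanting to rule out the unique-index condition the docs mention, the check itself is easy to express. In ArcGIS you would feed it metadata from arcpy.ListIndexes(table) (each Index object exposes .fields and .isUnique); the helper below keeps that logic pure Python so the arcpy call is only shown in a comment, and the sample field/index names are hypothetical:

```python
def has_unique_globalid_index(indexes, globalid_field="GlobalID"):
    """Return True if any unique index covers the Global ID field.

    indexes: iterable of (field_names, is_unique) pairs, where
    field_names is the list of fields the index covers.

    In ArcGIS this metadata would come from something like:
        [([f.name for f in idx.fields], idx.isUnique)
         for idx in arcpy.ListIndexes(table)]
    """
    for field_names, is_unique in indexes:
        if is_unique and globalid_field in field_names:
            return True
    return False

# Hypothetical index metadata for two tables:
print(has_unique_globalid_index([(["OBJECTID"], True),
                                 (["GlobalID"], True)]))   # -> True
print(has_unique_globalid_index([(["GlobalID"], False)]))  # -> False
```

If the check comes back False on data you own, arcpy.AddIndex_management can add a unique index on the Global ID field.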

1 Solution

Accepted Solutions
DavidLindsey2
New Contributor II

After further research and testing, preserving Global IDs from SDE databases directly to a local File GDB does not appear to be possible with the Append tool. To solve this, I needed to Append all individual SDE databases to a coalesced SDE database, then perform a Feature Class to Feature Class conversion from the coalesced SDE database to a local File GDB (with arcpy.env.preserveGlobalIds = True).  That extra step allowed me to preserve the Global IDs as desired. Weird, and definitely not ideal. However, it is functional for now.
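A rough sketch of that two-step workflow (the paths and feature class variables here are placeholders, not the exact ones from my environment):

```python
import arcpy

staging_fc = "path\\to\\coalesced_staging.sde\\owner.Roads"  # placeholder
out_gdb = "path\\to\\final_output.gdb"                       # placeholder

arcpy.env.preserveGlobalIds = True

# Step 1: Append each source SDE feature class into the matching
# feature class in the coalesced (staging) enterprise geodatabase.
# SDE-to-SDE appends kept the Global IDs intact in my testing.
arcpy.Append_management(source_fc, staging_fc, "TEST")

# Step 2: Export from the staging SDE to the local file GDB;
# Feature Class To Feature Class honors preserveGlobalIds here.
arcpy.FeatureClassToFeatureClass_conversion(staging_fc, out_gdb, "Roads")
```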


4 Replies

DavidTillberg_community
New Contributor

Since you are using Pro, another option is arcpy.FeatureSet / arcpy.RecordSet; both preserve Global IDs when loaded from one dataset and saved to another.

 

import arcpy

# is_table isn't defined above -- determine it first, e.g.:
# is_table = arcpy.Describe(fc_fullpath).dataType == "Table"
if is_table:
    fs = arcpy.RecordSet()
else:
    fs = arcpy.FeatureSet()

fs.load(fc_fullpath)

fs.save(dest_fc_fullpath)

MichaelVolz
Esteemed Contributor

This appears to be a good solution to your problem, but I have a couple of questions:

How many records will you be processing in total?

Will this process be run continuously at some interval (e.g., daily, weekly, monthly)?

Is the intermediate feature class in SDE going to be deleted if the process is repeated? I'm not a database expert, but I thought table space could be a concern in a relational database, so a process like this might not be ideal given its impact on the overall performance of the SDE database.

DavidLindsey2
New Contributor II

It'll consist of several hundred thousand records being coalesced/exported every two weeks.

I haven't quite determined the best workflow for dealing with the intermediate SDE database once the export has finished. I was thinking I'd just leave it "as-is" and have the script simply overwrite the output each time. The intermediate data holds no value once the export/submission is complete, so retaining it won't be necessary. Having said that, if performance degrades, I'll consider emptying out the SDE each time the script finishes if it's a noticeable improvement.

I'm no database expert either, but we have a fairly robust enterprise setup with dozens of SDE connections going here, there, and yonder. We've not had any issues with large data transfers like this so far (*knocks on wood*), and my tests so far appear more stable than the clunky script we inherited for the process. I can follow up here with more feedback once the application is completed.
