Copying GDB Features to existing SDE Features

08-10-2022 03:51 PM
New Contributor III

I am trying to clear existing records in a list of SDE features and then copy duplicate (but updated) GDB records into those same SDE features. Looking for advice on how to set up the loop through both lists within the Append tool so each GDB feature is copied into its respective SDE feature. The code below throws the error "ValueError: too many values to unpack".

               CBTSDE + "REF.SDE.RMA", CBTSDE + "REF.SDE.Road_Easement", CBTSDE + "REF.SDE.Road_Gates", CBTSDE + "REF.SDE.Road_Type",
               CBTSDE + "REF.SDE.RoadRouteNamed", CBTSDE + "REF.SDE.SMA", CBTSDE + "REF.SDE.Stream_Fish_Barrier", CBTSDE + "REF.SDE.Stream_Fish_Use",
               CBTSDE + "REF.SDE.Stream_Flow_Regime", CBTSDE + "REF.SDE.Stream_Size", CBTSDE + "REF.SDE.Stream_StateType", CBTSDE + "REF.SDE.StreamRouteNamed", CBTSDE + "REF.SDE.StrRt"]
GDBFeatures = [CBTGDBp + "NamedRMA", CBTGDBp + "NamedSMA", CBTGDBp + "OR_State_Type", CBTGDBp + "RdRt", CBTGDBp + "RMA", CBTGDBp + "Road_Easement",
               CBTGDBp + "Road_Gates", CBTGDBp + "Road_Type", CBTGDBp + "RoadRouteNamed", CBTGDBp + "SMA", CBTGDBp + "Stream_Fish_Barrier",
               CBTGDBp + "Stream_Fish_Use", CBTGDBp + "Stream_Flow_Regime", CBTGDBp + "Stream_Size", CBTGDBp + "Stream_StateType", CBTGDBp + "StreamRouteNamed", CBTGDBp + "StrRt"]

# Empty and Reload SDE feature
if current_status == 'YES':
    print 'Processing empty and reload SDE feature'
    for items in SDEFeatures:
    for (GDBitems, SDEitems) in (GDBFeatures, SDEFeatures):    
        arcpy.Append_management(GDBitems, SDEitems, "NO_TEST", "", "")
    print 'NetAcresCurrent SDE feature has been updated.'
    print 'NetAcresCurrent SDE feature has NOT been updated.'


4 Replies
MVP Regular Contributor

1. Please insert your code as a code sample rather than a screenshot.


2. Is your code throwing an error?
3. I can make some suggestions right off the bat. I think I would use a dictionary or a list of tuples for the target and source information, not two lists. So:

# tuple
my_tuple_list = [(target_1, source_1), (target_2, source_2), ..., (target_n, source_n)]
# dict
my_dict = {target_1: source_1, target_2: source_2, ..., target_n: source_n}
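A runnable Python 3 sketch of iterating either structure (the target/source names below are placeholders, and the real work would be the Append call):

```python
# Placeholder names stand in for real feature class paths.
pairs = [("target_1", "source_1"), ("target_2", "source_2")]

# Each 2-tuple unpacks cleanly into exactly two names -- no ValueError.
for target, source in pairs:
    # real script: arcpy.Append_management(source, target, "NO_TEST")
    print(source, "->", target)

# The dict form works the same way via .items():
lookup = dict(pairs)
for target, source in lookup.items():
    print(source, "->", target)
```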

I would also suggest using os.path.join for creating paths, not string concatenation with "+".
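For example (the workspace path here is a made-up placeholder for an .sde connection file):

```python
import os

# Hypothetical workspace -- substitute your own .sde connection file path.
workspace = r"C:\connections\CBT.sde"

# os.path.join inserts the separator for you, so you never end up with
# missing or doubled slashes the way "+" concatenation can.
joined = os.path.join(workspace, "REF.SDE.RMA")
print(joined)
```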

MVP Regular Contributor

We do something similar for migrating data to test servers and I'll echo @forestknutsen1's suggestion of using a dictionary.

We use a list of featureclass names from a text file to create the src_paths and dst_paths, iterating over the list and building the path strings. We also track last edit times, check whether the dataset is older/newer/the same or has a different feature count, and handle each case accordingly, so we have a lot more key:value entries.

The fc name would replace the 'fc_name' key in the dictionary below and is used for status/error messages (omitted here):

fc_dict = {'fc_name': {'src_path': 'path to src fc', 'dst_path': 'path to dest fc'}, ... }

# then iterate over it:

for fc, fcPaths in fc_dict.items():
    if arcpy.Exists(fcPaths['dst_path']):
        arcpy.Append_management(fcPaths['src_path'], fcPaths['dst_path'], "NO_TEST")


You could build the paths when you create the dictionary, iterating over the list of featureclass names.

import os

fc_dict = {}

source_path = r'path to the source db\dataset.SDE.'
dest_path = r'path to the dest db'

for fc_name in ['fc1', 'fc2', 'fc3', ...]:
    fc_dict[fc_name] = {'src_path': os.path.join(source_path, fc_name),
                        'dst_path': os.path.join(dest_path, 'REF.SDE.' + fc_name)}


MVP Regular Contributor

You can zip your lists to get around that error:

source = ['x', 'y', 'z']
target = ['a', 'b', 'c']

for a, b in zip(source, target):
    print a
    print b
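Applied to the original question, the loop would look something like this (Python 3 sketch; the stand-in lists below assume GDBFeatures and SDEFeatures are the same length and align by index):

```python
# Stand-in lists model the GDBFeatures/SDEFeatures from the question.
GDBFeatures = ["gdb/RMA", "gdb/SMA"]
SDEFeatures = ["sde/REF.SDE.RMA", "sde/REF.SDE.SMA"]

# zip pairs the lists element by element, so each iteration yields
# exactly two values to unpack -- no "too many values" ValueError.
for GDBitems, SDEitems in zip(GDBFeatures, SDEFeatures):
    # real script: arcpy.Append_management(GDBitems, SDEitems, "NO_TEST")
    print(GDBitems, "->", SDEitems)
```

Note that zip silently stops at the shorter list, so it's worth checking the two lists are the same length first.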


MVP Regular Contributor

My favorite pattern for this sort of stuff is to use a csv file with named tuples. That way you can manage the paths in Excel, and you can access the csv fields with the dot operator within your Python code.



from collections import namedtuple
import csv

with open('features.csv') as features:
    reader = csv.DictReader(features)
    Data = namedtuple('Data', reader.fieldnames)
    feature_paths = [Data(**x) for x in reader]
for feature_path in feature_paths:
    print feature_path.source
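A self-contained Python 3 variant of the same pattern, with the csv content inlined via io.StringIO (the original post attached a screenshot of the file, so the column names 'source' and 'target' and the paths here are assumptions):

```python
from collections import namedtuple
import csv
import io

# Hypothetical csv content -- in practice this would be features.csv on disk.
csv_text = "source,target\ngdb/RMA,sde/REF.SDE.RMA\ngdb/SMA,sde/REF.SDE.SMA\n"

reader = csv.DictReader(io.StringIO(csv_text))
# Build a namedtuple type whose fields match the csv header row.
Data = namedtuple('Data', reader.fieldnames)
feature_paths = [Data(**row) for row in reader]

for feature_path in feature_paths:
    # real script: arcpy.Append_management(feature_path.source,
    #                                      feature_path.target, "NO_TEST")
    print(feature_path.source, "->", feature_path.target)
```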


csv file (screenshot omitted)