
Appending - how to avoid duplicates?

03-27-2017 06:50 AM
TheoFaull
Deactivated User

I have two point shapefiles, both with exactly the same fields. Comparing the two datasets, there are some new records AND some duplicate records.

I want to use the Append tool because I don't want to create a new dataset; I just want to add data to the existing original shapefile.

However, when I append the two shapefiles, matching records are appended as well, leaving lots of duplicate records. How can I tell my script to append only new records and ignore duplicates?

29 Replies
TheoFaull
Deactivated User

FGDB. Thanks for the help Neil. I've gone for the method posted earlier, where I create a new dataset with a date tag added to the end of the feature class name. I just have to remember to re-source the MXDs to this new layer after every update.

EinarDørheim1
Occasional Contributor

I've been using this script, posted here, to do basically what you asked for in the first place. It needs one field in which to look for duplicate values and a date field to sort your data. Before this code block runs I append my new/updated data to my master feature class, then run this code to check for duplicates; if any are found, only the newest entry is kept.

# This script loops through a feature class and deletes duplicate records, keeping only the newest entry
import arcpy

# Set parameters
# path to the feature class:
fc = r"C:\Users\Documents\ArcGIS\Projects\kun.gdb\kun_A"
# field to check for duplicate values (named id_field so it doesn't shadow Python's built-in id)
id_field = "USER_Id"
# field to sort by: D = descending, A = ascending; choose D to keep the newest entry
sort_field = "date D"

# print("Looking for duplicates")
cursor = arcpy.UpdateCursor(fc, "", "", id_field, sort_field)
keepList = list()
for row in cursor:
    row_val = row.getValue(id_field)
    if row_val not in keepList:
        keepList.append(row_val)
    else:
        cursor.deleteRow(row)

# print("Finito!")

The downside of this code, as I see it, is that it's slow. For my use it has to loop through 4.5M records each time, taking 15-20 minutes to complete. (So if anyone has tips for faster methods, I'm all ears.)

NeilAyres
MVP Alum

Use the da module.

As in arcpy.da.UpdateCursor(...).
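
Applied to the script above, a minimal sketch might look like this (same path and field names as the earlier example; note the set instead of a list, which keeps membership checks fast on large tables):

import arcpy

fc = r"C:\Users\Documents\ArcGIS\Projects\kun.gdb\kun_A"
fields = ["USER_Id", "date"]  # duplicate-check field first, then the sort field

keep = set()  # a set, not a list: membership tests stay fast as it grows
# Sort newest first so the first row seen for each id is the one that is kept
with arcpy.da.UpdateCursor(fc, fields, sql_clause=(None, "ORDER BY date DESC")) as cursor:
    for row in cursor:
        if row[0] in keep:
            cursor.deleteRow()  # deletes the row the cursor is currently on
        else:
            keep.add(row[0])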

MichaelVolz
Esteemed Contributor

Use a truncate and append process.  I've been using this type of process for years on file gdbs that are tied to AGS map services and it works well.

You just need to test things when there is a schema change, as that would require stopping the service but hopefully this would be a very infrequent operation.

You might also need to compact the file gdb on some interval (quarterly, semi-annually) as performance can degrade over time as the OBJECTIDs keep getting larger in the file gdb.

If performance gets really bad, you could also create a new file gdb and just import the feature classes in from the existing renamed file gdb (This maintenance would require stopping the service).  This would be even less frequent, maybe once a year or every other year.
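
A minimal sketch of that kind of update step (the paths here are hypothetical, and Append is run with schema_type="TEST" since the schemas are expected to match):

import arcpy

# Hypothetical paths for illustration
target = r"C:\data\master.gdb\points"   # feature class behind the map service
source = r"C:\data\updates.gdb\points"  # freshly prepared data with the same schema

# Empty the target, then reload everything from the source
arcpy.management.TruncateTable(target)
arcpy.management.Append(inputs=source, target=target, schema_type="TEST")

# Periodic maintenance: compact the file gdb to keep performance up
arcpy.management.Compact(r"C:\data\master.gdb")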

EinarDørheim1
Occasional Contributor

I tried modifying the script to use the da module, but it only slightly improved performance: the previous script with arcpy.UpdateCursor took 1070 s, while the new script with arcpy.da.UpdateCursor took 1032 s.

Here's the code I used this time: 

import arcpy
fc = r"C:\Users\Documents\ArcGIS\Projects\kun.gdb\kun_A"
fieldss = ["USER_Id", "date"]
keepList = []
with arcpy.da.UpdateCursor(fc, fieldss, sql_clause=(None, "ORDER BY date DESC"))
    for row in cursor:
        row_val = [0]
        if row_val not in keepList:
            keepList.append(row_val)
        else:
            cursor.deleteRow()

Am I using the da module correctly? After reading about the performance difference between arcpy.da cursors and classic arcpy cursors, I would have expected a bigger improvement.

NeilAyres
MVP Alum

In the past I have seen speed improvements of about 10x. So yes, I would expect this to be quicker.

How many records are being deleted?

EinarDørheim1
Occasional Contributor

In this test I had 424,857 records, of which 19 were duplicates, so after the script runs I'm left with 424,838 records.

NeilAyres
MVP Alum

Your code doesn't look quite right.

It should look a bit like this:

with arcpy.da.UpdateCursor(fc, fieldss, sql_clause=(None, "ORDER BY date DESC")) as cursor:
    for row in cursor:
        row_val = row[0]
        if row_val not in keepList:
            keepList.append(row_val)
        else:
            cursor.deleteRow()

So there are very few (a tiny %) that are actually deleted.

EinarDørheim1
Occasional Contributor

Sorry, that was a typo on my part; the code does state row_val = row[0], as you pointed out.

Indeed, only a small % gets deleted each time; it will seldom be more than 500 rows that are updated/added each run.

JasonRogers1
Occasional Contributor

This worked PERFECTLY for me!  Thank you so much for this post!!!
