
Is there a way to replace only part of the records in a table based on a unique ID?

05-11-2023 11:47 AM
AyeletGreenberg
Frequent Contributor

We have a vendor that collects sign data in the field. They couldn't work with our hosted layer, so they downloaded the data, worked on it (updated point locations and other details), and then sent me the updated data.

I now need to import the data back into the original GIS layer and replace existing records with those in the vendor's data, based on Sign_ID (a unique field). But there is a problem: they didn't update all of the signs. If there are 500 signs in the area they worked in, they updated about 450 of them, and the other 50 are not in their data.

So my question is: is there a way to replace only part of the records in a table based on a unique ID? (Without knowing which Unique IDs were updated and need to be replaced)

See the attached screenshot - the yellow points are our data (IDs= 6100, 6101, 6102, 6103), and the blue points are the vendor's data (IDs= 6100, 6101, 6102).

So is there a way to replace only 6100, 6101, 6102 and keep 6103 as is?

I can't use spatial proximity because their points often have different locations than the original points, and there are other signs nearby (e.g., ID 6101).

AyeletGreenberg_0-1683830720137.png

Thanks

1 Solution

Accepted Solutions
KevinSaavedra
Esri Contributor

Hi @AyeletGreenberg,

In ArcGIS Pro 3.1, the Append tool now has the "Upsert" (Update and Insert) capability, which seems like an ideal fit for your use case. Note that we plan on adding performance enhancements for feature services in future releases, but it should still work.

For more information, please refer to the following blog post: Upsert your datasets using the Append tool in ArcGIS Pro 3.1. 
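As a rough illustration only (the paths and the Sign_ID field below are placeholders, and the exact parameter names should be verified against the Pro 3.1 Append tool reference), an upsert in Python could look something like:

import arcpy

# Placeholder paths -- substitute your target layer and the vendor's data
target = r'C:\data\signs.gdb\OrigSigns'
vendor = r'C:\data\signs.gdb\VendorSigns'

# Upsert: vendor rows whose Sign_ID matches an existing target row update that
# row (attributes and, optionally, geometry); unmatched vendor rows are
# inserted; target rows with no match in the vendor data are left untouched.
arcpy.management.Append(
    inputs=vendor,
    target=target,
    schema_type="NO_TEST",
    match_fields=[["Sign_ID", "Sign_ID"]],   # field pair used to match rows
    update_geometry="UPDATE_GEOMETRY"
)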

Kevin


11 Replies
RyanUthoff
Frequent Contributor

I'm trying to understand the problem. If you don't know which Unique IDs were updated and need to be replaced, then how do you know they even updated them in the first place?

In your screenshot, somehow, you know that blue 6101 = yellow 6101. How are you able to determine that currently?

AyeletGreenberg
Frequent Contributor

I know they updated the signs because they worked in a specific area, and this is what they were supposed to do. The yellow points in the attached screenshot are ours, and the purple dots are the vendor's.
The screenshot in the original message was an example to help explain the problem.

This is the first part of a project that will include thousands of signs, so I can't sort through them manually.

AyeletGreenberg_0-1683832587020.png

 

RyanUthoff
Frequent Contributor

Ok, I understand. So, if you cannot join on Unique IDs and you cannot do a spatial join, then I don't see how you will be able to accomplish what you want. You need something unique to join on (or a spatial join) in order to update your original data.

Can you concatenate other fields into a single field that would give you a unique value? Outside of that, I think you're out of luck. I don't see how it's possible to reconcile two "different" datasets without some sort of unique ID or the ability to do a spatial join.
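If you went that route, a quick sketch (hypothetical field names, adjust to your schema) for building a combined key field to join on could be:

import arcpy

# Hypothetical feature class and field names
signs = r'C:\data\signs.gdb\Signs'

# Add a text field and fill it with a concatenation of a few attributes
arcpy.management.AddField(signs, "MatchKey", "TEXT", field_length=100)
arcpy.management.CalculateField(
    signs, "MatchKey",
    "'{}_{}_{}'.format(!STREET!, !SIGN_TYPE!, !INSTALL_YEAR!)",
    "PYTHON3"
)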

AyeletGreenberg
Frequent Contributor

I am going to do the join based on the Unique_ID field; it's just that I don't know which of the signs were updated and which weren't.
So let's say the original layer has 100 records, with unique IDs ranging from 1 to 100. The vendor's data uses the same Unique_IDs but contains only 70 records. I want to replace the corresponding records in the original layer with these 70 records and keep the other 30, but I don't have a way of knowing which signs were updated and which weren't.

The table below represents both datasets. In the original data I need to replace only the records in red, and leave the ones in black untouched.

AyeletGreenberg_0-1683836429251.png
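Put another way, with made-up ID ranges matching the example above, the split I'm after is just set logic on the Unique_IDs:

# Illustrative only -- hypothetical ID ranges from the 100/70/30 example
original_ids = set(range(1, 101))    # 100 records in the original layer
vendor_ids = set(range(1, 71))       # 70 records in the vendor's data

ids_to_replace = original_ids & vendor_ids   # in both: take the vendor's version (70)
ids_to_keep = original_ids - vendor_ids      # only in the original: leave as is (30)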

 

RyanUthoff
Frequent Contributor

Oh, I misunderstood. If you have access to the Detect Feature Changes tool in ArcGIS Pro, then it might work. It will detect both spatial and attribute changes based on a UniqueID.

https://pro.arcgis.com/en/pro-app/latest/tool-reference/data-management/detect-feature-changes.htm

RhettZufelt
MVP Notable Contributor

From first post: "So if there are 500 signs in the area they worked in, they updated about 450 of them, and the other 50 are not in their data."

Last post: "I want to replace the corresponding records from the original layer with these 70 records, and keep the other 30 records"

Based on that, it sounds like if the feature is in the Vendor data, then it HAS been updated and you want to replace the Main dataset features with these...

In your pic above, it appears that is what you are showing: if a record is in the vendor's data, you have it displayed in red to delete.

So, when you establish the join, uncheck the "Keep All Target Features" box and you will get only the matching features (red above); then you can select all of them and delete.

In your example above, if you right-click the Vendor's Data column and sort descending, the matching rows move to the top; you can then select any that have a value in the Vendor's Data column and delete them.

Or, apply a definition query where the Vendor's Data field is not null to show only the matching features, and then select/delete.

Several ways to do the same thing.

Whichever method you use, after the delete, just copy/paste or append the vendor data back in.

R_

AyeletGreenberg
Frequent Contributor

Thanks for the ideas!

RhettZufelt
MVP Notable Contributor

You can use a search cursor to build a list of the SignIDs in the Vendor dataset.

Then you can select by attribute in the Main dataset using that list and delete any that match. This removes any features from the Main dataset that are also in the Vendor dataset.

Since you just deleted them from the Main dataset, append the 'updated' Vendor dataset back to the main.

I use something very similar to:

 

import arcpy

# Keep GlobalIDs when appending (matters if the target has a GlobalID field)
arcpy.env.preserveGlobalIds = True

MySigns = r'\\pathtodata\signs.sde\signs.DBO.OrigSigns'
VendorSigns = r'\\pathtodata\vendorsigns.sde\vendorsigns.DBO.VendorSigns'

# Build a list of the SignIDs present in the vendor dataset
VendorSignList = []
with arcpy.da.SearchCursor(VendorSigns, "SignID") as cursor:
    for row in cursor:
        VendorSignList.append(row[0])

# Select the matching features in the main dataset and delete them
# (assumes SignID is a text field; drop the quotes if it is numeric)
where = "SignID IN ('" + "','".join(VendorSignList) + "')"
DBO_Sign_Layer, Count = arcpy.management.SelectLayerByAttribute(
    in_layer_or_view=MySigns, selection_type="NEW_SELECTION", where_clause=where)
Updated_Input_With_Rows_Removed = arcpy.management.DeleteRows(in_rows=DBO_Sign_Layer)[0]

# Append the vendor's updated features back into the main dataset
MySigns = arcpy.management.Append(inputs=[VendorSigns], target=MySigns, schema_type="TEST")[0]

# Same append restricted to the matching SignIDs -- equivalent here, since the
# whole vendor dataset is being appended anyway:
#MySigns = arcpy.management.Append(inputs=[VendorSigns], target=MySigns, schema_type="TEST", expression=where)[0]

 

Since you are appending all of the vendor data, you can omit the expression in the Append. I already had it in there for appending a subset, so I left it in as an example (the commented-out line). In this case, since it uses the same list (VendorSignList), it is the same as omitting it.

In Pro, without Python, this would be the same as joining the Vendor table to the Main feature class, keeping only matching records.

Delete all the matching values, append the Vendor data to the Main feature class.

Of course, I advise thorough testing on copies of the datasets before applying this to prod.

R_

AyeletGreenberg
Frequent Contributor

Thanks a lot! It looks like what I need. I will give it a try.
