I have two feature services, an old one and a new one. They serve different purposes and are regularly updated by different teams, and they share some fields but not all.
I would like to regularly find the common rows based on a key column and use them to update one of the services.
Layer to be updated:
item=gis.content.get('xxxxx')
l=item.layers[0]
df=l.query().sdf
df.head()
Layer to be used for the update:
item=gis.content.get('yyyyy')
l=item.layers[0]
df2=l.query().sdf
df2.head()
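The loop further down uses overlap_rows for the common rows; one way that can be built (a sketch, assuming both dataframes expose the shared column as key, as in the snippets here):
import pandas as pd
# rows whose key appears in both layers
overlap_rows = pd.merge(df[['key']], df2[['key', 'Long', 'Lat']], on='key', how='inner')
overlap_rows.head()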
Using the ArcGIS API for Python, I tried the following:
g=l.query().features
from copy import deepcopy

features_for_update = []
for o in overlap_rows['key']:
    # get the feature to be updated
    original_feature = [f for f in g if f.attributes['key'] == o][0]
    print(original_feature)
    feature_to_be_updated = deepcopy(original_feature)
    print(o)
    # find the matching row in the update source
    matching_row = df2.where(df2.key == o).dropna()
    feature_to_be_updated.attributes['Long'] = float(matching_row['Long'])
    feature_to_be_updated.attributes['Lat'] = float(matching_row['Lat'])
    features_for_update.append(feature_to_be_updated)
features_for_update
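(Once the list is built, the plan is to push it back with FeatureLayer.edit_features, roughly as sketched below, assuming l points back at the layer to be updated (the 'xxxxx' item); the run never gets that far because of the error further down.)
# sketch: commit the prepared updates to the target layer
update_result = l.edit_features(updates=features_for_update)
print(update_result)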
Running the loop results in the following error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
In [532]:
Line 9: feature_to_be_updated.attributes['Long'] = float(matching_row['Long'])
File C:\Program Files\ArcGIS\Pro\bin\Python\envs\arcgispro-py3-clone1\lib\site-packages\pandas\core\series.py, in wrapper:
Line 112: raise TypeError(f"cannot convert the series to {converter}")
TypeError: cannot convert the series to <class 'float'>
---------------------------------------------------------------------------
On investigation, I found that the line below at times returns two rows instead of one, even though the key is supposed to be unique for each row:
matching_row = df2.where(df2.Ops_Code == o).dropna()
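For reference, a quick way to see which keys are duplicated in the update source (a sketch; the column appears as key in one snippet and Ops_Code in this one, so use whichever name the layer actually has):
# all rows whose key occurs more than once in df2
dup_keys = df2[df2['key'].duplicated(keep=False)]
print(len(dup_keys), dup_keys['key'].unique())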
Any idea why this is happening? Has anyone done this kind of update before?
I had this resolved: I dropped the duplicates. Being a large file, I hadn't noticed the duplicate keys.
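For anyone else hitting this: with the duplicates dropped from the update source, the match is a single row again and float() works. A minimal sketch of the change (column names as above):
# keep one row per key in the update source
df2 = df2.drop_duplicates(subset='key', keep='first')
# inside the loop, .iloc[0] also guards against any stray extra row
matching_row = df2.where(df2.key == o).dropna()
feature_to_be_updated.attributes['Long'] = float(matching_row['Long'].iloc[0])
feature_to_be_updated.attributes['Lat'] = float(matching_row['Lat'].iloc[0])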