append to agol feature service not working

03-30-2017 02:01 PM
by Anonymous User

Hi all.

For some reason, I can only append one feature at a time into an existing AGOL feature service.

I bring the feature service layer into Pro, plus a local data source (typically a file geodatabase). I then run the Append gp tool, mapping the fields appropriately. It appends the first feature, then fails with the following message:

Start Time: Friday, 31 March 2017 9:41:20 AM
ERROR 999999: Error executing function.
General function failure [Cannot insert duplicate key row in object 'user_2456.BOAT_SURVEY_DEMO_FISHSPECIES' with unique index 'GlobalID_Index'. The duplicate key value is (9bc2a5e1-d2e2-42fd-9802-7caa78b1c739).
The statement has been terminated.]
Failed to execute (Append).

At first my local data had no Global IDs, so I tried adding them, but that made no difference, even when I ticked 'Preserve Global IDs' in the Append tool (although that failed with a different message).
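In case it helps, this is roughly the geoprocessing call I'm running. The service URL and geodatabase path below are placeholders rather than my actual data, and setting the Preserve Global IDs environment was just a test:

import arcpy

# Rough sketch only -- the URL and gdb path are placeholders, not the real data.
arcpy.env.preserveGlobalIds = True  # set as a test; didn't change the behaviour for me

target = "https://services.arcgis.com/<org id>/arcgis/rest/services/BOAT_SURVEY_DEMO/FeatureServer/1"
source = r"C:\temp\local.gdb\FISHSPECIES"

arcpy.management.Append(inputs=source, target=target, schema_type="NO_TEST")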

Has anyone else experienced this, know what is going on, and/or have a fix?

Cheers,

-Paul

12 Replies
by Anonymous User

Hi Drew

Below is the basic script I put together. It's pretty straightforward.

Just be warned that, if run as is, it truncates the dataset and deletes everything! Don't get caught out if that's not what you want to do! 

You would need to adjust the part that reads the source data and builds the JSON objects, since you need to construct them with the correct field names, etc. I was just updating plain tables, so there's no geometry involved here, but it could be worked in, I guess.

from arcgis.gis import GIS
from getpass import getpass #to accept passwords in an interactive fashion
import pandas as pd
import time
import datetime

username = input()
password = getpass()
gis = GIS("https://www.arcgis.com", username, password)

item_to_update = gis.content.get('item id from agol')

for index, table in enumerate(item_to_update.tables):
    if table.properties.name == 'name of the table':
        table_to_update = table

#confirmation we have the correct table...
print(str(table_to_update.properties.name))

# WARNING!! This deletes EVERYTHING in the table!
table_to_update.delete_features(where="objectid > 0")

# A couple of little helper functions:

def getDate(datestring):
    if datestring is None:
        return None
    datestring = str(datestring).strip()
    if datestring == 'nan' or datestring == 'NaT':
        return None
    try:
        return datetime.datetime.strptime(datestring, "%Y-%m-%d %H:%M:%S")
    except:
        pass
    return None

def getString(s):
    if s is None:
        return ''
    if isinstance(s, str) is False:
        s = str(s).strip()
    if s == 'nan':
        return ''
    return s


sourceDataFrame = pd.read_excel(r'c:/temp/spreadsheet_data.xlsx')
sourceDataFrame.head()

features_to_be_added = []
notifier = 0

#this bit needs to be customised for the data source and data it contains...
for index, row in sourceDataFrame.iterrows():
    notifier += 1
    if notifier == 1000:
        print('Number processed: ' + str(index + 1))
        notifier = 0
    try:
        nextFeature = {"attributes": {}}
        nextFeature["attributes"]["field1"] = getString(row['Field1'])
        nextFeature["attributes"]["field2"] = getString(row['Field2'])
        nextFeature["attributes"]["field3"] = getDate(row['Field3'])
        features_to_be_added.append(nextFeature)
    except Exception:
        print(">>>>>>>>>>>>>>>>>> Error at index: " + str(index))

len(features_to_be_added) #confirmation of number of features...

start = time.perf_counter()
print('Number of features to upload: ' + str(len(features_to_be_added)))
chunksize = 2000
# Slice the list into chunks so each edit_features call stays a manageable size
chunks = [features_to_be_added[x:x + chunksize] for x in range(0, len(features_to_be_added), chunksize)]
for index, chunk in enumerate(chunks):
    print('chunk #' + str(index) + ' with ' + str(len(chunk)) + ' features started at {0:.2f} seconds'.format(time.perf_counter() - start))
    results = table_to_update.edit_features(adds=chunk)
print('Completed upload of {0} features in {1:.2f} seconds'.format(len(features_to_be_added), time.perf_counter() - start))
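One thing the script above doesn't do is check the responses. edit_features returns a dictionary in the usual applyEdits shape, so (as a rough sketch, assuming an 'addResults' list in the response) you could flag any adds in a chunk's response that didn't succeed:

# Rough sketch: check a chunk's response for adds that did not succeed.
failed = [r for r in results.get('addResults', []) if not r.get('success')]
if failed:
    print('Some adds failed: ' + str(failed))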

RussellBrennan
Esri Contributor

Hi Paul,

This is currently a limitation of feature services in Pro. In the current software, appending is done one row at a time. We are investigating possible performance improvements for the next few releases, such as grouping the edits into fewer calls or using the Append API on the back end.
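To illustrate the "fewer calls" idea in general terms (this is not how Pro does it internally): a feature service's standard applyEdits REST operation accepts an array of features, so many adds can be sent in a single request. A rough sketch with a placeholder layer URL and token:

import json
import requests

# Illustration only: placeholder layer URL and token.
layer_url = "https://services.arcgis.com/<org id>/arcgis/rest/services/<service>/FeatureServer/0"
adds = [
    {"attributes": {"field1": "a"}},
    {"attributes": {"field1": "b"}},
]

response = requests.post(
    layer_url + "/applyEdits",
    data={"adds": json.dumps(adds), "f": "json", "token": "<token>"},
)
print(response.json())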

by Anonymous User

Thanks Russell. Good to know.

cheers,

-Paul
