Hi all,
I've been running into an issue with uploading a .csv file from my local machine using the Add Data widget in a Web Experience. My organization is currently running Enterprise 11.4, and I have full administrative privileges/permissions. Upgrading to Enterprise 11.5 or 12.0 is on the very distant horizon.
I want end users (non-GIS) to be able to upload a .csv of Tax Roll numbers for a custom geoprocessing tool to read and run an analysis. But I cannot upload the .csv.
I get a general "The file cannot be uploaded" error in the Web Experience when I try to upload a .csv, whether from my local machine or one that's hosted in the Portal. I have no issues adding hosted or published feature services and non-spatial tables; only .csv files cause problems. Digging deeper, Chrome's dev tools report this:
"Generate Features error: Server tool execution failed : ERROR 999999: Something unexpected caused the tool to fail. Contact Esri Technical Support (http://esriurl.com/support) to Report a Bug, and refer to the error help for potential solutions or workarounds. Missing publish parameter: longitudeFieldName. Failed to execute (Generate Features for Portal). Failed."
I tried uploading the same .csv with added columns: Longitude and Latitude, with nonsense numerical values in them. It uploaded right away. But I do not want to and should not expect the end-users of this Web Experience to include these two columns when they upload.
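For context, the Add Data widget makes two ArcGIS REST calls under the hood: it POSTs the file to /analyze, takes the returned publishParameters, and passes them to /generate. The error above suggests /generate received a coordinate-based locationType without the coordinate field names. A minimal sketch of a client-side patch under that assumption (the helper function is my own guess at a workaround, not anything Esri documents):

```python
# /analyze and /generate are real ArcGIS REST endpoints; this patching
# logic is only a hypothesis about the "Missing publish parameter:
# longitudeFieldName" failure, not an Esri-documented fix.

def patch_publish_parameters(publish_params):
    """If /analyze reported locationType 'coordinates' but did not name
    both coordinate fields, fall back to locationType 'none' so the file
    is treated as a plain table instead of failing in /generate."""
    patched = dict(publish_params)
    if patched.get("locationType") == "coordinates":
        if not (patched.get("longitudeFieldName")
                and patched.get("latitudeFieldName")):
            patched["locationType"] = "none"
    return patched
```

The patched dictionary would then be JSON-encoded into the publishParameters argument of the /generate request. That only helps if you control the request, of course, which the stock widget doesn't let you do.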
I've also tried uploading the same .csv without those two columns to a Web Experience created in the current version of ArcGIS Online (with the same Add Data widget settings). I encounter no errors with that upload.
I've attached the .json from Chrome's dev tools as well. As far as I can tell, everything looks as it should, but I'm not terribly familiar with .json.
Has anyone run into a similar issue and/or have any suggestions? I'm using the browser version of Experience Builder. Thanks
Any luck? I built an app maybe 6 months ago that has the Add Data widget. The user adds a .CSV with an address field, and those addresses can then be added to a map. That widget has always been pretty sensitive about the 'address' field in the table. Whereas the "cannot be successfully uploaded" error was once caused by characters in the field, it now seems to error on the schema. If I change the field name from Address to anything else it won't error, but it also doesn't let me add to the map. Seems really buggy to me, so I created a case with Esri Support.
What I'm going through turned out to be a BUG:
Steps to Reproduce:
If the csv file has an "address" or "Address" field, the upload through the Add Data widget fails during the /generate ArcGIS REST API call. When the first field in the csv file is "address", the /analyze ArcGIS REST call sets the "locationType" property in the publish parameters for /generate to "address". When /generate is then called by the Add Data widget with locationType "address", the request fails with this response:
"error": {
"code": 400,
"message": "Service request failed.\r\nStatus: 404 (The specified blob does not exist.)\r\nErrorCode: BlobNotFound\r\n\r\nHeaders:\r\nTransfer-Encoding: chunked\r\nx-ms-request-id: 6df340df-a01e-0055-13c5-c29188000000\r\nx-ms-client-request-id: 717be237-804f-4fd7-b079-cfe2823266d4\r\nx-ms-version: 2025-05-05\r\nx-ms-error-code: BlobNotFound\r\nAccess-Control-Expose-Headers: REDACTED\r\nAccess-Control-Allow-Origin: *\r\nDate: Thu, 02 Apr 2026 17:22:02 GMT\r\nServer: Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0\r\n",
"requestId": "",
"traceId": "b18128b84db0f82964525ab3c200f968"
}
If the field is given a different name, such as "addy", /analyze will set the locationType to "unknown" and the upload will succeed. However, the field can then no longer be used to create features from the csv file's address data.
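Until the bug is fixed, a client-side mitigation consistent with the behavior above is to rename the offending header before upload so /analyze never sets locationType to "address". A sketch using only the stdlib csv module (the function and the replacement name "Site_Address" are my own, and as just noted, the renamed field will no longer be auto-recognized for geocoding):

```python
import csv
import io

def rename_address_header(csv_text, new_name="Site_Address"):
    """Rename any header column named 'address' (case-insensitive) so
    /analyze does not set locationType to 'address'. Trade-off: the
    renamed field is no longer auto-recognized for geocoding."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    if rows:
        rows[0] = [new_name if h.strip().lower() == "address" else h
                   for h in rows[0]]
    out = io.StringIO()
    csv.writer(out).writerows(rows)
    return out.getvalue()
```

You would run the file through this before handing it to end users, or bake the rename into whatever produces the csv in the first place.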
Other Information:
The documentation for the Add Data widget specifies no limitation on using an Address field; for example, it never says "geocoding an address to coordinates is not possible to create features" or "use only lon/lat fields for feature location data". Users tend to reference the general ArcGIS Online support for csv files, where an Address field is supported and is geocoded when publishing a hosted feature layer:
https://doc.arcgis.com/en/arcgis-online/reference/csv-gpx.htm
Hi Jared,
I haven't yet found a solution, but I did find a workaround. The intent behind uploading the .csv was to use it as an input for a custom geoprocessing tool. The workaround:
-Create the tool in a Python Toolbox (.pyt), not as a Python script (.py). When I tried it as a .py tool, I was still getting errors; I don't remember exactly why, but I think it had something to do with reading parameter inputs
-Set up an input parameter with a datatype of GPRecordSet
-Fetch the parameter as text, then use the csv module to read the contents. In my case, I skipped the header row; I'm not sure if that makes a difference
-Publish the tool to Enterprise with the Upload capability set to True, Execution set to Synchronous, and the default value of that parameter cleared. When I tried publishing without clearing the default, the .csv to be uploaded had to have a header row that exactly matched the .csv I had used while testing the tool
-Add the Analysis widget to ExB and set the custom tool as the utility, allowing an uploaded file to serve as the input in the widget settings
This has been working for my purposes. The end user uploads a .csv with only a single column of data, so I don't know whether anything would need to change if numerous columns were required.
Relevant snippet of my .pyt code:
import arcpy
import csv

class custom_tool(object):
    def __init__(self):
        self.label = 'Custom tool'
        self.description = 'Tool uploaded to Enterprise'
        self.canRunInBackground = False

    # === Did not include isLicensed, updateParameters,
    # and updateMessages in this snippet ===

    def getParameterInfo(self):
        params = []
        param0 = arcpy.Parameter(
            displayName='Uploaded CSV',
            name='uploaded_csv',
            datatype='GPRecordSet',  # GPFeatureRecordSetLayer did not work
            parameterType='Required',
            direction='Input')
        params.append(param0)
        return params

    def execute(self, parameters, messages):
        input_csv = parameters[0].valueAsText  # parameters[0].value did not work
        input_data = []
        with open(input_csv, 'r') as f:
            reader = csv.reader(f)
            next(reader)  # skip the header row
            for row in reader:
                input_data.append(row[0])
        # === Begin Analysis of input data ===