There might be a few ways to do this, but this describes a method I found that works. The goal here is to return features as a json payload of attributes + geometry; it's up to the client to do something with them.

 

Here's how to author the tool in a Python Toolbox:
Make an output parameter with datatype = "DEFeatureClass" and parameterType = "Derived". I played around with GPFeatureRecordSetLayer and GPFeatureLayer, but DEFeatureClass was the only way I could get it to work. In the execute function, create a feature class in the in_memory or scratchGDB workspace. It doesn't really matter which one, but in_memory was much faster in my testing. You can make the feature class from scratch, or, if you are getting your features from an existing feature class, make a feature layer with arcpy.MakeFeatureLayer_management and then use arcpy.CopyFeatures_management to copy the feature layer to a feature class. Return the output parameter as follows: parameters[0].value = fc_path, where fc_path is the entire path to the feature class, like in_memory\<feature class>. One caveat: right before you create this feature class, check for its existence and delete it if it exists; otherwise, if you run the tool consecutively, it will still be there from the previous run.
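
Here's a bare-bones sketch of what that tool could look like in the .pyt. The surrounding Toolbox class is omitted, and the tool name, source feature class path, and output name are placeholders:

import arcpy


class ReturnFeatures(object):
    """Sketch of a Python Toolbox tool that returns features."""

    def __init__(self):
        self.label = "Return Features"
        self.description = "Returns features as a derived output feature class."

    def getParameterInfo(self):
        # Derived output parameter, as described above.
        out_param = arcpy.Parameter(
            displayName="Output Features",
            name="out_features",
            datatype="DEFeatureClass",
            parameterType="Derived",
            direction="Output")
        return [out_param]

    def execute(self, parameters, messages):
        source_fc = r"C:\data\mydata.gdb\parcels"  # placeholder source data
        fc_path = r"in_memory\out_features"

        # Delete any leftover output from a previous run.
        if arcpy.Exists(fc_path):
            arcpy.Delete_management(fc_path)

        # Copy the source features to the in_memory feature class.
        arcpy.MakeFeatureLayer_management(source_fc, "features_lyr")
        arcpy.CopyFeatures_management("features_lyr", fc_path)

        # Hand the feature class back through the derived output parameter.
        parameters[0].value = fc_path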

 

Here's how to publish the service:
Important! Even if you don't reference any existing data, you have to enable the option "Allow data to be copied to the site when publishing services". This can be found in ArcGIS Server Manager under Site, GIS Server, Data Store, Settings. You can turn this on temporarily just for publishing the service and then turn it off and the service will still work, but you can't publish this gp service without it - you will get an ERROR 001488.

Run the tool, right click on the gp result and Share As Geoprocessing Service. There's really nothing special you have to set here, other than the normal metadata and parameter descriptions, but I recommend configuring the service to run asynchronously because you never know how long it will take to produce the result.

Be aware of the service's feature limit in the Parameters section. The default is 1000. If your result exceeds this, the json response will include "exceedTransferLimit": true and inexplicably the features list will be empty. You do not get the first <feature limit> features; you get no features at all. So be aware of this.

When you execute the task from the REST endpoint, you will get a json payload with data type GPFeatureRecordSetLayer that includes the spatial reference, the fields schema, and the features, each with attributes and geometry. The Web AppBuilder very nicely adds this feature set as an operational layer to your layer list.
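
For reference, here's a rough sketch of calling the task from the REST endpoint with a Python client, including a check for the transfer limit problem described above. It assumes the service was published asynchronously, that the output parameter is named out_features like in the sketch above, and that the requests library is available; the server url is a placeholder:

import time

import requests  # third-party; any http client will do

task_url = ("https://myserver.mydomain.com/arcgis/rest/services/"
            "MyFolder/MyService/GPServer/ReturnFeatures")

# 1. Submit the job (asynchronous service, no input parameters here).
job = requests.post(task_url + "/submitJob", data={"f": "json"}).json()
job_url = "{0}/jobs/{1}".format(task_url, job["jobId"])

# 2. Poll until the job finishes.
status = job["jobStatus"]
while status not in ("esriJobSucceeded", "esriJobFailed",
                     "esriJobCancelled", "esriJobTimedOut"):
    time.sleep(2)
    status = requests.get(job_url, params={"f": "json"}).json()["jobStatus"]

# 3. Fetch the derived output parameter (GPFeatureRecordSetLayer).
result = requests.get(job_url + "/results/out_features",
                      params={"f": "json"}).json()
feature_set = result["value"]

# If the feature limit was exceeded, the features list comes back empty.
if feature_set.get("exceedTransferLimit"):
    print("exceedTransferLimit is true - no features returned")
else:
    print("Got {0} features".format(len(feature_set["features"])))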

Obviously a web service cannot write a file directly to a user's file system, so ArcGIS Server allows the service to create files in a staging folder on the server and lets the user retrieve them from there.

 

Here's how to author the tool in a Python Toolbox:
Make an output parameter with datatype = "DEFile" and parameterType = "Derived". In the execute function, create a file in the arcpy.env.scratchFolder. When running the tool in Desktop, this folder will be buried in the user's AppData\Local\Temp folder (it might be helpful to see this location using "print arcpy.env.scratchFolder"). After publishing, each execution of the task will get a unique job folder with its own scratch folder, located here: arcgisserver\directories\arcgisjobs\<folder>\<service>_gpserver\<job>\scratch. Return the output parameter as follows: parameters[0].value = file_path, where file_path is the entire path to the file in the scratch folder. I recommend something like this: file_path = os.path.join(arcpy.env.scratchFolder, "filename.ext")
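
Here's a bare-bones sketch of a tool like that. Again the surrounding Toolbox class is omitted, and the file name and contents are just placeholders:

import os

import arcpy


class ExportReport(object):
    """Sketch of a Python Toolbox tool that returns a file."""

    def __init__(self):
        self.label = "Export Report"
        self.description = "Writes a file and returns it as a derived output."

    def getParameterInfo(self):
        # Derived output parameter, as described above.
        out_param = arcpy.Parameter(
            displayName="Output File",
            name="out_file",
            datatype="DEFile",
            parameterType="Derived",
            direction="Output")
        return [out_param]

    def execute(self, parameters, messages):
        # Write the file into the scratch folder (which becomes the job's
        # scratch folder once the tool is published).
        file_path = os.path.join(arcpy.env.scratchFolder, "report.csv")
        with open(file_path, "w") as f:
            f.write("id,value\n1,hello\n")  # placeholder content

        # Hand the file back through the derived output parameter.
        parameters[0].value = file_path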

 

Here's how to publish the service:
Run the tool, right click on the gp result and Share As Geoprocessing Service. There's really nothing special you have to set here, other than the normal metadata and parameter descriptions, but I recommend configuring the service to run asynchronously because you never know how long it will take to create the file. When the task is executed, the REST endpoint will have a link to the output parameter in the Results section. That output parameter will include a url to the file that lives in the job's scratch folder. It's up to the client to go and retrieve that file.
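
Here's a rough sketch of that retrieval from a Python client, picking up after the job has succeeded (see the polling sketch earlier). The job url, the output parameter name (out_file, matching the sketch above), and the local file name are placeholders, and it again assumes the requests library:

import requests  # third-party; any http client will do

# Placeholder - the url of a job that has already succeeded.
job_url = ("https://myserver.mydomain.com/arcgis/rest/services/"
           "MyFolder/MyService/GPServer/ExportReport/jobs/<job id>")

# The derived DEFile output comes back as a GPDataFile with a url property.
result = requests.get(job_url + "/results/out_file",
                      params={"f": "json"}).json()
file_url = result["value"]["url"]

# Download the file from the job's scratch folder.
response = requests.get(file_url)
with open("report.csv", "wb") as f:
    f.write(response.content)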

Uploading files to a gp service is a two-step process. The first step is uploading the file to a staging area on the server where the service's server-side code can get to it, since a web service cannot directly access the user's file system. The second step is executing the task using the Submit Job (asynchronous) or Execute Task (synchronous) operation.

 

Here's how to author the tool in a Python Toolbox:
Make an input parameter with datatype = "DEFile". You can optionally limit the file extensions accepted by setting param0.filter.list = ["txt", "csv"], for example. In the execute function, you reference the input file with input_file = parameters[0].valueAsText. When running the tool in Desktop, this will be the location of the file in the file system. After publishing, when the service executes, this will be the location of the file in the arcgisserver\directories\arcgissystem\arcgisuploads\services folder (the file upload step will have already happened by this time). Then you can read the file or do whatever else you need to do with it.
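
Here's a bare-bones sketch of a tool with a file input parameter. The surrounding Toolbox class is omitted, and the "processing" is just a placeholder that echoes each line back as a message:

import arcpy


class LoadFile(object):
    """Sketch of a Python Toolbox tool that takes a file as input."""

    def __init__(self):
        self.label = "Load File"
        self.description = "Reads an uploaded txt or csv file."

    def getParameterInfo(self):
        in_param = arcpy.Parameter(
            displayName="Input File",
            name="in_file",
            datatype="DEFile",
            parameterType="Required",
            direction="Input")
        # Optionally restrict the accepted file extensions.
        in_param.filter.list = ["txt", "csv"]
        return [in_param]

    def execute(self, parameters, messages):
        # Locally this is a path on disk; on the server it points into
        # the arcgisuploads folder mentioned above.
        input_file = parameters[0].valueAsText
        with open(input_file) as f:
            for line in f:
                arcpy.AddMessage(line.strip())  # placeholder processing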

 

Here's how to publish the service:
Run the tool, right click on the gp result and Share As Geoprocessing Service. Be sure to check the Uploads box under Operations Allowed in the Geoprocessing Capabilities section. Under the Parameters section, you can configure the service to be Synchronous or Asynchronous, although with a service that takes a file from the user I would only ever use asynchronous because who knows how big the file will be. That's it. Fill out the rest of the service metadata and publish.

In the REST endpoint the service will have an Uploads child resource. This is where you can upload the file before running the service. You will get an item ID. If you are doing this manually in REST, click on the Item ID link, get the entire json representation of the item, and give it to the task's Submit Job operation as the input parameter. Otherwise your client will have to perform these steps. For example, the Web AppBuilder's geoprocessing widget knows what to do with file input parameters and will present a nice UI for 1) uploading the file and 2) executing the task.
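
If your client isn't something like the Web AppBuilder, here's a rough sketch of the two-step upload-then-submit workflow from Python. The server url, the input parameter name (in_file, matching the sketch above), and the local file path are placeholders, it assumes the requests library, and it passes just the itemID rather than the item's entire json representation, which should be enough for a file input parameter:

import json

import requests  # third-party; any http client will do

task_url = ("https://myserver.mydomain.com/arcgis/rest/services/"
            "MyFolder/MyService/GPServer/LoadFile")
uploads_url = task_url.rsplit("/", 1)[0] + "/uploads/upload"

# 1. Upload the file to the service's Uploads resource.
with open(r"C:\temp\mydata.csv", "rb") as f:
    upload = requests.post(uploads_url,
                           files={"file": f},
                           data={"f": "json"}).json()
item_id = upload["item"]["itemID"]

# 2. Submit the job, passing the uploaded item as the file input parameter.
job = requests.post(task_url + "/submitJob",
                    data={"f": "json",
                          "in_file": json.dumps({"itemID": item_id})}).json()
print(job["jobId"])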

 

A note about the file extensions allowed to be uploaded to ArcGIS Server:
Regardless of the filter you put on the input parameter, a gp service with the Uploads capability will only accept files with an extension in a specific list of allowed file types. By default this list contains a large number of known file extensions. The list of extensions can be accessed and updated in the service's admin page at arcgis/admin/services/<folder>/<service>.GPServer. To change the list, you need to get the complete json representation of the service, modify the allowedUploadFileTypes property, and give the modified json to the service's edit request in the admin/services endpoint. That's if you want to modify the allowed file types on a service-specific basis. If you want to change the default file types allowed across all services, there is an uploadItemInfoFileExtensionWhitelist property in arcgis/admin/system/properties.
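
Here's a rough sketch of that service-level edit from Python. The server name, folder and service names, credentials, and the new extension list are all placeholders, it assumes the requests library (verify=False is only there for a self-signed certificate), and it's worth looking at the existing allowedUploadFileTypes value first to confirm the expected format before overwriting it:

import json

import requests  # third-party; any http client will do

admin_url = "https://myserver.mydomain.com:6443/arcgis/admin"
service_url = admin_url + "/services/MyFolder/MyService.GPServer"

# 1. Get an admin token (placeholder credentials).
token = requests.post(admin_url + "/generateToken",
                      data={"username": "siteadmin",
                            "password": "********",
                            "client": "requestip",
                            "f": "json"},
                      verify=False).json()["token"]

# 2. Get the complete json representation of the service.
service = requests.post(service_url,
                        data={"f": "json", "token": token},
                        verify=False).json()

# 3. Modify the allowed file types and give the modified json to the
#    service's edit request (check the existing value for the exact format).
service["allowedUploadFileTypes"] = "txt,csv,zip"
edit = requests.post(service_url + "/edit",
                     data={"service": json.dumps(service),
                           "f": "json",
                           "token": token},
                     verify=False).json()
print(edit)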