Uploading and using data in a geoprocessing service

12-19-2012 07:32 PM
TomSchuller
Occasional Contributor III
Hi,
I'm trying to build a simple geoprocessing service which should buffer an uploaded shapefile.


This is the scenario:
  1) uploading shapefile to geoprocessing service
  2) buffering the shapefile with a given parameter
  3) returning the buffered shapefile
 
The buffering is not the problem; the problem is handling the upload and reusing it in the model or Python part.
As far as I've read, I should use a data file parameter for the upload, but I always get this error when running it:
ExecuteError: Failed to execute. Parameters are not valid. ERROR 000735: input: Value is required Failed to execute
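
For reference, here is a stripped-down sketch of the Python side I'm aiming for (parameter order, names and the scratch output are illustrative, not my exact tool):

import arcpy
import os

# Sketch only: parameter 0 is the uploaded file, parameter 1 the
# buffer distance, parameter 2 the derived output feature class.
inFile = arcpy.GetParameterAsText(0)     # the uploaded shapefile
distance = arcpy.GetParameterAsText(1)   # e.g. "100 Meters"

outFC = os.path.join(arcpy.env.scratchGDB, "buffered")
arcpy.Buffer_analysis(inFile, outFC, distance)
arcpy.SetParameter(2, outFC)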


Can anybody help me with using uploads on geoprocessing services?


Thanks,
Tom
27 Replies
meriyalootka
New Contributor III
My apologies, the da.Walk command was added at 10.1 Service Pack 1. Based on the message I'm guessing you have 10.1 final (no service packs). If you're able to install the service pack it'll fix that.


Hi
Any solution?
I heard Khimba is a geoprocessing manager. I saw your project about airlines. Solving my problem should not be hard for you. :)
Regards
0 Kudos
meriyalootka
New Contributor III
The following code (10.1+ only) walks through a directory and finds feature classes (this example assumes the upload is a zipped folder of shapefiles).
It then copies each one into a file geodatabase. You can change what it does, but the logic should get you started.
You'll notice that a directory (the variable zipFolder) is created inside the scratch folder - this is where the uploaded zip file is extracted to. The code then looks in there.

import arcpy
import zipfile
import os

inFile = arcpy.GetParameterAsText(0) 

# Create a folder in the scratch directory to extract zip to
zipFolder = os.path.join(arcpy.env.scratchFolder, "zipContents")
os.mkdir(zipFolder)


# Extract the zip contents
zip2Extract = zipfile.ZipFile(inFile, 'r')
zip2Extract.extractall(zipFolder)
zip2Extract.close()

# Create a folder in the scratch directory to hold the fgdb which will be downloaded
fgdbFolder = os.path.join(arcpy.env.scratchFolder, "fgdbOutput")
os.mkdir(fgdbFolder)

# Create the output file geodatabase inside it (CopyFeatures will fail
# if output.gdb does not already exist)
arcpy.CreateFileGDB_management(fgdbFolder, "output.gdb")

# Work through all the FeatureClasses inside the extracted zip folder
for dirpath, dirnames, filenames in arcpy.da.Walk(zipFolder, datatype="FeatureClass"):
    
    for filename in filenames:
        
        # You could replace the code below here with your own code to do what you want....
        arcpy.AddMessage("Copying: {}".format(filename))
        
        # Strip the .shp extension from the filename; feature class names
        # inside a file geodatabase cannot carry an extension
        if filename.endswith(".shp"):
            outFilename = filename[:-4]
        else:
            outFilename = filename

        # Copy each featureclass into the output.gdb
        arcpy.CopyFeatures_management(os.path.join(dirpath, filename),
                                      os.path.join(fgdbFolder, "output.gdb", outFilename))


Hi
I have this error:
Traceback (most recent call last):
  File "C:\Users\user1\Documents\getfeature.py", line 13, in <module>
    zip2Extract = zipfile.ZipFile(inFile, 'r')
  File "C:\Python27\ArcGIS10.1\Lib\zipfile.py", line 699, in __init__
    self.fp = open(file, modeDict[mode])
IOError: [Errno 13] Permission denied: u'C:\\Users\\user1\\Documents\\ArcGIS\\scratch\\ZipFolder'
ShingLin
Esri Contributor
Hi,

Have you resolved your problem? If not, try changing your scratch folder to a different place, such as somewhere on the Desktop, when running your script tool. Sometimes the default folder in the user directory has permission issues.
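
For example, something like this at the top of the script (the path below is only an illustration - use any folder the account running the tool can write to):

import arcpy

# Illustrative only: redirect the scratch workspace away from the
# user profile to a folder with known write permissions.
arcpy.env.scratchWorkspace = r"C:\gp_scratch"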

You can also use a model tool, but I am not 100% sure it works if you are using a web app as the client. Give it a try anyway. You can use Calculate Value with the absolute path (String data type) of the shapefile as the input. Then use Copy Features to copy the input file so the server saves the uploaded shapefile in the scratch GDB, and finally run Buffer or any other analysis on the uploaded file.
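
In script form, that chain would look roughly like this (paths and parameter order are illustrative):

import arcpy
import os

# Rough script equivalent of the model: take the absolute path of the
# uploaded shapefile, copy it into the scratch GDB so the server keeps
# its own copy, then buffer that copy.
inShp = arcpy.GetParameterAsText(0)       # absolute path to the .shp
bufferDist = arcpy.GetParameterAsText(1)  # e.g. "500 Meters"

copied = os.path.join(arcpy.env.scratchGDB, "uploaded")
arcpy.CopyFeatures_management(inShp, copied)

buffered = os.path.join(arcpy.env.scratchGDB, "buffered")
arcpy.Buffer_analysis(copied, buffered, bufferDist)
arcpy.SetParameter(2, buffered)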

I attached two gpk files for your reference - calcbuffer100.gpk (for 10.0 Server) and calcbuffer101.gpk (for 10.1 Server). Both models are almost the same, except that in the 10.1 model you have to set the output of Calculate Value as an output parameter, otherwise you won't get the input shape on the server. I have yet to figure out whether this is a bug or by design. Let me know if this helps.



Best,



Shing
meriyalootka
New Contributor III
Many thanks for your sincere help, but I didn't see any attachment. Where are your attachments?
ShingLin
Esri Contributor
Sorry the attachment did not get through. I realize that if you adopt the model approach, the input shapefile has to be located in a folder the server has access to. If you want users to provide the shapefile from their own machine, that is not going to work. So for both 10.0 and 10.1, unzipping the shapefile is still the way to go.

Also, if you are going to create a web application, ArcGIS Viewer for Flex 3.2 (to be released early next week) now provides the upload feature, so you do not need to write custom code like you used to. Hope this helps.


Regards,


Shing
Nelly_CristalArellano_García
New Contributor
Hi,
I haven't used it in the FlexViewer yet, only manually over REST.

But it should point you in the right direction:
  enable "upload" on your geoprocessing service
  upload your file to the GP service and remember the upload ID
  call your GP service with the data file parameter as shown below:

For a GPDataFile parameter, you have to pass the full JSON representation of the upload item:
parameter = {"itemID":"i80dfa12f-52ed-4841-94ff-37f9c3f5dd6f"}
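
As a rough illustration, submitting a job over REST then looks something like this (the service URL is a placeholder, and "input" is whatever your data file parameter is named):

import json
import urllib
import urllib2

# Placeholder URL - substitute your own server, service and task name.
url = "http://myserver/arcgis/rest/services/BufferTool/GPServer/Buffer/submitJob"

# The data file parameter takes the JSON representation of the upload item.
params = urllib.urlencode({
    "input": json.dumps({"itemID": "i80dfa12f-52ed-4841-94ff-37f9c3f5dd6f"}),
    "f": "json",
})
print urllib2.urlopen(url, params).read()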

Tom


Hi, I have to build a geoprocessing tool to validate an Excel file with ArcGIS 10.1.
It's the first time I've had to work with geoprocessing.
Do you know how I can read the uploaded Excel file by its ID using Python and toolbox parameters?

I defined my parameter like this:

excel_file = arcpy.Parameter(
    displayName="ID_Excel",
    name="excel_file",
    datatype="GPDatafile",
    parameterType="Optional",
    direction="Input")
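
In execute() I then read it back like this (simplified):

def execute(self, parameters, messages):
    # Simplified: the path of the uploaded file should come through here
    excel_path = parameters[0].valueAsText
    messages.addMessage("Got file: {}".format(excel_path))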

But when I run my script, the ID that I typed into the parameter disappears... I am really confused. How can I get my file into a feature after it is uploaded?

What will the GPDataFile parameter return? A zip or the Excel file?
HNoakes
Esri Contributor
This is an extremely useful Python script, but I have one question: does the arcpy.da.Walk function validate the data type based on the parameter that is passed in (i.e. FeatureClass)? I just want clarification on this so I do not add extra code for handling invalid data types if it is not required. The documentation on this function does not appear to be complete.

Just for your information, I am using it to upload a shapefile inside a zipped folder to the server, and I want to know whether this will error if the shapefile is missing one of its required associated files (i.e. .prj, .dbf).

Thanks,
-h
