Publishing Geoprocessing Service with In Memory workspace

06-05-2015 10:58 AM
Occasional Contributor

I'm using ArcMap and ArcServer 10.2.1


I'm trying to publish a tool to ArcGIS Server (federated to a Portal).  The script uses the in_memory workspace to store temporary data, and when you Analyze the service during publishing, it returns a warning that it has to upload the in_memory workspace to the server because it's not registered with the server (we have a couple of geodatabases and folders registered with the server already).  I ignore the warning, publish the service, and get an error when it tries to stage the service.  The error is "ERROR 001270: Consolidating the data failed."

From the documentation, the error normally has something to do with the paths to the data.  I think this error is bubbling up because the data doesn't exist anymore.

Shouldn't Arc "know" not to upload the in_memory workspace?  At the end of the script I clean them up and delete all the datasets in the in_memory workspace.  I'd appreciate any advice.
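For reference, the pattern in question looks roughly like this (dataset names are hypothetical; the arcpy calls are shown as comments because they require an ArcGIS session):

```python
# Hard-coded in_memory path of the kind the publisher's analyzer flags
# as an unregistered data location:
temp_fc = "in_memory/tempBuffers"

# arcpy.Buffer_analysis(input_features, temp_fc, "100 Meters")
# ... intermediate processing against temp_fc ...

# End-of-script cleanup, deleting every dataset in the workspace:
# arcpy.Delete_management("in_memory")
```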

-Brendan

1 Solution

Accepted Solutions
Regular Contributor III

As Bill Daigle suggests, this is a sure workaround for the problem; it's quick and easy and should move you forward.

os.path.join("in_memory", "myInMemFeatures")  # yields "in_memory\myInMemFeatures" on Windows; arcpy accepts either separator


11 Replies
Regular Contributor III

Are you sure everything is running from a registered location?  Are you getting a clean result from the results window in ArcMap from which to publish the tool?

Occasional Contributor

When I run the tool on desktop, there are no issues or warnings.  What do you mean by running it from a registered location?  I'm not uploading anything to a registered folder or geodatabase (or at least I shouldn't be).

Regular Contributor III

What I mean is that when I publish a gp tool, I make sure the mxd itself and the data connections (or database in the case of a fgdb) contained in the mxd reside in the registered data store location...

Occasional Contributor III

Hi Brendan,

Unfortunately I don't have a solution, I just remember I had the same problem.

If I remember correctly I eventually used something like arcpy.env.scratchWorkspace instead of "in_memory" workspace. Or was it arcpy.env.scratchGDB? Or was it %scratchworkspace% ? Sorry I can't remember. Luckily in my case the performance of the service was still pretty good.
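The scratch-workspace alternative can be sketched like this (the feature class name is hypothetical; arcpy.env.scratchGDB resolves to a per-job file geodatabase when the tool runs as a service, so a stand-in path is used here):

```python
import os

# Stand-in for arcpy.env.scratchGDB, which requires an ArcGIS session:
scratch_gdb = "/tmp/scratch.gdb"  # e.g. scratch_gdb = arcpy.env.scratchGDB

# Write intermediates to the scratch geodatabase instead of in_memory:
temp_fc = os.path.join(scratch_gdb, "tempFC")
# arcpy.CopyFeatures_management(input_features, temp_fc)
```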

I'll need to publish some services soon so I'm eager to know how you get on.

Best regards,

Filip

Occasional Contributor III

Is the tool built using ModelBuilder or a Python script?  Sounds like you're using a script.

I have published Python toolboxes with in_memory intermediates to a 10.1 server without a problem.  If you're using a Python script and still having problems, you may want to try building the "in_memory" path using os.path.join (i.e., os.path.join('in_memory', 'tempFC')).  I had a similar issue with data inadvertently getting copied, and that seemed to fix it.
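In runnable form, the workaround looks like this (the feature class name is hypothetical):

```python
import os

# Joining the parts at run time keeps the full "in_memory/..." literal
# out of the script source, so the publisher's analyzer has no
# hard-coded path to flag as an unregistered location to consolidate.
temp_fc = os.path.join("in_memory", "tempFC")

# On Windows this yields "in_memory\tempFC"; arcpy accepts either
# path separator, so the same line works on desktop and server.
```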

Occasional Contributor

Yes, it's a script.  I'm pretty solid with python but I'm new to geoprocessing services.  What are the advantages of publishing from an mxd?  How would the code be interpreted differently?

Occasional Contributor III

I think the reason ESRI recommends publishing from an MXD is that they want you to run the tool before publishing.  You need a valid result object before you can publish to the server.  It is possible to do all of this through a script, but it's not easy.

As part of the publishing process, your script will get modified.  The publishing tools will crawl the script, replace any hard-coded paths with variables, and reset the paths so they reference locations that will get created on the server -- assuming the hard-coded path is not in a registered data store.  Any data that gets copied with the tool is included in the '.sd' (service definition) file that gets copied to the server.  The updated script also gets included in the '.sd' file.  Depending on where this fails in the publishing process, you may be able to view the '.sd' file and figure out what is going wrong.  If you can find the '.sd' file on your machine, you can view it by changing the extension to '.zip'.
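Since a '.sd' file is an ordinary zip archive, it can also be inspected without renaming it; a small helper (the filename in the usage comment is hypothetical):

```python
import zipfile

def list_sd_contents(sd_path):
    """List the members of a service definition (.sd) file.

    A .sd file is a plain zip archive, so the zipfile module can read
    it directly -- no rename to .zip is needed for a programmatic look.
    """
    with zipfile.ZipFile(sd_path) as sd:
        return sd.namelist()

# Example: list_sd_contents("MyGPService.sd")
```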


Occasional Contributor

Thanks to everyone for your help.  I'm able to publish the service w/o issues now.
