
Publishing a geoprocessing toolbox

07-03-2012 06:37 AM
KevinGooss
Regular Contributor
I see how I can publish a GP tool by creating the tool, running it in ArcMap, and then selecting the result and clicking Share As...
That creates an .sd file that I can then use in ArcGIS Server Manager to publish my tool.
The problem is that I have a toolbox with 30 tools in it. I don't want to run all 30 and go through this process. Can't I just publish the entire toolbox somehow?

Also, why the change in publishing GP services? I liked being able to go into Catalog, connect to my ArcGIS Server, then right-click and add a new service based on a Python script tool.
That seemed like a great workflow because I could then edit the Python script in the background without having to republish my service (unless I changed parameters).
It appears that under the new workflow I cannot modify my Python script without having to open ArcMap and rerun everything.
This will add much time to my dev process.
17 Replies
KevinHibma
Esri Regular Contributor
The idea behind publishing a good result is that because it ran successfully in ArcMap, it has much higher odds of being successful as a service.
It makes more sense to do the dev/testing inside ArcMap with a local tool than it does to publish a service and modify that.
Additionally, by publishing a result we're able to help enforce best practices and fix tools/scripts/models that otherwise wouldn't have published successfully. Once something is a service and you modify it, we have no way to help with best practices or warn you if something isn't supported. When you do all your dev work up front and publish the result (hopefully you only need to publish once), we're able to tell you what might be wrong with it and suggest action.

That said (and I don't really recommend this), if you're only making minor edits to your script, you can modify the copy of the script which gets moved to the server (understanding that these edits don't persist back to your original script).
You'd find the script powering your GP service at:
C:\arcgisserver\directories\arcgissystem\arcgisinput\<service name>.GPServer\extracted\v101\<original folder that held toolbox name>
KevinGooss
Regular Contributor
Thank you for the prompt answer, Kevin. I have to say that is a little disappointing. As a developer I expect 'best practices' to be a document that I can digest and choose to follow - not a directive from the software telling me what I have to do.

The architecture you describe creates a deployment nightmare for me. Now I am required to completely replicate the target environment in order to publish what you consider to be a 'good' tool. There are perfectly valid scenarios where it makes more sense to publish a service and modify it on the target environment. It appears that option has been taken away from us.

For one project alone I have 30 custom Python script tools. You are telling me that for a single deploy I need to run each of those in ArcMap and then publish the results, or add them as tasks to an already existing service and then overwrite that. And if I have a fix for just one of those scripts, I am now expected to edit that locally, retest all 30 in ArcMap, and then overwrite the published service? And if that is a remote environment without ArcMap, I have several more steps: create the .sd file, move it to the remote environment, and then republish there?

So a single bug fix that used to require only an edit to a .py file now requires much, much more. Additionally, this republish is going to kick out every user who happens to be using any one of those 30 tasks at the time?

I think perhaps that I am just confused. I'm sure there is a simple explanation and I'm just overlooking it. Could you correct me and point out the simple path to custom tool deployment that doesn't involve ArcMap and Results?
KevinHibma
Esri Regular Contributor
Please let me clarify some statements:
When I say best practices, I mean best practices as suggestions, plus trapping for the things which would prevent your service from working.
Best practices appear as warnings and messages in the analyzers; the errors are what won't allow you to publish the service.
Warnings and messages are something you (the service publisher) can choose to ignore or implement. We don't force these on you.

I understand your concern with re-running 30 tool executions and adding them to a service as tasks. Honestly, throughout the beta nobody came to us with more than 3 or 4 tasks per service, so running this many tools was never raised as a concern. To me, 30 seems like a lot of tasks on a single machine, but if you're deploying that many then you must have the system resources to deal with that many running instances.

At this point all I can do is offer you some suggestions to reduce the time it takes to republish 30 tasks. Hopefully you can make use of one of these in your publishing:

1) In the Geoprocessing Options in ArcMap (Geoprocessing > Options), you can set Results Management to "Never Delete".
With this setting, run every tool you want to publish; you'll get a result for each.
Save the MXD.
Anytime you want to republish the service, you simply open that MXD, run the updated tool, and then put together the service with a combination of old and new results.
This method is also handy because if any data required for the service has gone missing, the analyzers should catch it.

2) Go to Windows > ArcMap Options > Sharing tab and check "Show file location when saving draft service definitions".
Run all of the tools you want to publish (you'll get a result for each).
Get into the Service Editor by sharing a result as a service.
Add all your results into the service.
Click the "X" on the Service Editor and say YES to saving a draft.
Note where the draft file gets saved.
You can re-open these drafts later and add or remove results from them (when you need to update).
Note: The .SDDRAFT files act more like pointers, so if you start moving tools/data from where they were when you originally saved, you could break references.
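Since a saved draft only points at your tools and data, a quick pre-flight check can save a failed republish. A .sddraft is just an XML file, so a minimal sketch like the following could scan it for Windows-style paths that no longer exist. (The element names in the sample fragment are made up for illustration, not the real sddraft schema.)

```python
import os
import xml.etree.ElementTree as ET

def broken_paths(sddraft_xml):
    """Return Windows-style paths referenced in the XML that are missing on disk."""
    missing = []
    for elem in ET.fromstring(sddraft_xml).iter():
        text = (elem.text or "").strip()
        if ":\\" in text and not os.path.exists(text):
            missing.append(text)
    return missing

# Made-up fragment standing in for a real .sddraft:
sample = ("<SVCManifest><Resource>"
          "<OnPathName>C:\\moved\\MyTools.tbx</OnPathName>"
          "</Resource></SVCManifest>")
print(broken_paths(sample))  # the toolbox was moved, so it gets reported
```

Running this against the actual draft (after reading the file's contents) would flag references you broke by relocating a toolbox or dataset before you burn time on a failed stage/publish.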

3) For 10.1 SP1 we're working on a Pythonic (arcpy) way to publish GP services.
It is early in testing and looks good so far, but until it's 100% I can't guarantee it'll make SP1.
In short, you could write a runner or execution script to power you through the publishing process.
The script calls each toolbox/task, runs it, and assigns each run to its own result object.
You take each result object and pass it into arcpy.GPCreateSDDraft([list of result objects], , , ) - a new function - then Analyze it, Stage it into a .SD, and finally Upload it.
This entire workflow is well documented and already exists for maps. Like I said, we're working on the GP equivalent for 10.1 SP1. Hopefully it'll come together like we want.
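For anyone finding this thread later, a rough sketch of what such a runner script could look like. This assumes the function that eventually shipped for GP services, arcpy.CreateGPSDDraft, which takes a list of Result objects and returns the analyzer output as a dictionary; the service name, paths, and connection file below are placeholders, and arcpy itself is only available on a machine with ArcGIS installed.

```python
def publish_gp_service(results, service_name, sddraft, sd, connection):
    """Draft, analyze, stage, and upload a GP service from a list of Result objects."""
    import arcpy  # requires an ArcGIS install; imported here so the sketch is inspectable without one

    # One draft can bundle several Result objects into a single multi-task service.
    # CreateGPSDDraft also runs the analyzers and returns their findings as a dict.
    analysis = arcpy.CreateGPSDDraft(results, sddraft, service_name)

    if analysis["errors"]:
        # Errors block publishing; warnings and messages are advisory.
        raise RuntimeError("Fix analyzer errors first: %s" % analysis["errors"])

    arcpy.StageService_server(sddraft, sd)                # .sddraft -> .sd
    arcpy.UploadServiceDefinition_server(sd, connection)  # .sd -> server
```

You would build `results` by running each task once up front, e.g. importing your toolbox with `arcpy.ImportToolbox` and collecting each tool's return value into a list, then calling this function once per service.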
KevinGooss
Regular Contributor
Thank you for the quick reply, Kevin.
It looks like this may not be so bad after all. I just noticed your ArcGIS Server Administration Toolkit; that should help as well.
We do maintain a significant number of geoprocessing tasks, and I think running them all in ArcMap is going to be the toughest part.
But it seems like Esri is moving in the right direction in terms of allowing Python programmers greater control of the processes involved in making GP services.
And I also like the fact that the more 'wizard'-like tools (like publishing from ArcMap) seem to use scripting and services at their heart.
Thanks again for the suggestions.
AaronBarkhurst
Emerging Contributor
I have to agree with KG22. This new publishing method has made deployment of our internal system a NIGHTMARE!!! This process has introduced SO many more steps to what used to be a relatively seamless and simple process, because of the number of scripts contained in my toolbox. I also have a number of scripts in a toolbox that are dependent on outputs from other scripts or JSON inputs from our front-end, which makes running each one before publishing extremely painful and, frankly, a waste of time! By doing this, ESRI has single-handedly increased my workload and the time it takes to do my job. I have also noticed that when publishing my scripts, ESRI takes it upon themselves to REWRITE my code in the service directory, which I have found actually causes some of my scripts to FAIL as GP services... WHAT!!! That is a HUGE no-no to me. DO NOT CHANGE MY CODE!!! Please fix this in SP1, as I am extremely disappointed with the change!
KevinGooss
Regular Contributor
Hopefully this will be addressed in SP1 with a way to deploy scripts directly from Python.
The idea of Esri "running" my script and determining whether it will work and is optimized is anathema to the development process.
I have over 10 years' experience in jumping through hoops to get software to work - and that is being challenged now.
I guess one way around this (that we are kicking around now) is to publish a (basically) blank Python script the way Esri suggests (via map results), then hunt down where ArcGIS Server has copied and hidden the script, and edit it there to be what you want.
That strategy is probably not going to work well if you have to modify input/output parameters.
AaronBarkhurst
Emerging Contributor
KG22,

It is funny that you mention the workaround of publishing a blank script and then editing the file in the service directory. That is exactly what we have done as well. We just create a script that does nothing but import the arcpy module, publish that, and then edit the Python file with the proper code.
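For what it's worth, the swap itself can be scripted so a bug fix stays a one-step deploy. This is only a sketch of the workaround described above: both paths are placeholders, and the real extracted copy lives under the arcgisinput directory Kevin gave earlier in the thread.

```python
import shutil

def swap_in_real_script(dev_script, extracted_script):
    """Overwrite the published stub with the locally maintained script.

    Caveats from the thread: this edit does not survive a republish,
    and changing tool parameters still requires going back through ArcMap.
    """
    shutil.copyfile(dev_script, extracted_script)
```

A deploy then becomes a single call, e.g. `swap_in_real_script(r"C:\dev\mytool.py", r"C:\arcgisserver\...\extracted\v101\mytool.py")` with the second path pointed at the actual extracted location.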
KevinGooss
Regular Contributor
I think we are going to go the route of converting to the new native Python toolboxes rather than the older 10.0 way of .py scripts tied to tools. It seems Esri is going in that direction, so we might as well get on board. If we can publish them directly from Python, that will take Desktop out of the picture and be very nice.
I can understand wanting the script to be runnable, because it makes debugging a little easier when you are in ArcMap. But there are scripts that are just too complex to set up in ArcMap to run every time you want to make a small change.
KevinHibma
Esri Regular Contributor
I have also noticed that when publishing my scripts, ESRI takes it upon themselves to REWRITE my code in the service directory, which I have found actually causes some of my scripts to FAIL as GP services... WHAT!!! That is a HUGE no-no to me. DO NOT CHANGE MY CODE!!! Please fix this in SP1, as I am extremely disappointed with the change!


The only code change we make to scripts is the insertion of variables for data paths (and sometimes SQL expressions). Since part of the new publishing process is a mechanism to ensure that all data required by the service can be found and is available, we need to update the paths.
If you're encountering an instance where we didn't update the paths to data correctly, or didn't copy over data that was required, then I'd be very interested in fixing it. Is there any chance you can share the tool/script/some data with me?
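To illustrate the kind of edit being described: the g_ESRI_variable_* naming below follows the pattern the 10.1 publisher inserts, but the exact paths are made up for this example. A hard-coded path in your source such as `fc = r"C:\data\city.gdb\parcels"` comes out in the extracted server copy looking roughly like:

```python
# Inserted by the publisher so the service points at the copied/registered data
# (illustrative path; the real one sits under the server's extracted directory):
g_ESRI_variable_1 = r"C:\arcgisserver\directories\arcgissystem\arcgisinput\MyService.GPServer\extracted\v101\data\city.gdb\parcels"

# Your original assignment now reads from the inserted variable:
fc = g_ESRI_variable_1
```

The rest of the script is left alone; only the literal paths (and sometimes SQL expressions) are hoisted into these variables.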

Unfortunately, based on timing, we're getting SP1 pretty locked down. I'd need to be able to reproduce the issue here in house and get a fix for it before Friday to have any chance at SP1 - otherwise it would probably be the next service pack.
If you can share, feel free to email me directly at khibma@esri.com