... but DON'T change the displayed value in the parameter control.
For example, if a dropdown list of choices for a string parameter shows some long text, how can I send the script shortened text or even a code for a particular choice without altering self.parameter[i].value? When you re-set .value, the displayed value in the form changes and the user sees the code instead of the more descriptive text.
Another way to ask this question is, say I use a dictionary in my script to assign a short value or code to a variable based on some long text that is coming from a parameter in the form; can I move that step to the validation?
One option would be to write the code value for a longer string to a hidden parameter, but then the tool still sends a redundant argument, and if I want to make the script usable independently of a parameter form, I have to do some extra evaluation of the arguments.
Can I intercept the list of arguments just after validation, but before the script is executed?
Give your script a "main" function (call it whatever you like) that takes in parameters in their final form, then feed it processed parameters behind a main guard. For example:
import arcpy

# Map the descriptive dropdown text to the short code the script actually uses
CODE_MAP = {
    "North": "N",
    "South": "S",
    "Other Code": "Other"
}

def main(code: str):
    arcpy.AddMessage(f"The code is: {code}")

if __name__ == "__main__":
    full_code = arcpy.GetParameterAsText(0)
    short_code = CODE_MAP.get(full_code, "N/A")
    main(short_code)
This pattern has other advantages, like making it easy to call the function without a script tool wrapper, write tests, and so on.
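For example (assuming the script above is saved as tool_script.py; the name is just for illustration), another script or a test can import it and call main directly, with no script tool wrapper involved:

# "tool_script" is a placeholder name for the file shown above
import tool_script

tool_script.main("N")                        # call the entry point directly
assert tool_script.CODE_MAP["South"] == "S"  # the mapping is testable on its own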
Thanks, yes, I use main functions. But again, if the script is called from the command line or by another script and the arguments are supplied positionally, the number of arguments and their indices will be different than when they come from the parameter form, and I won't know how to parse them.
If the issue is handling command-line calls properly, your script can look at sys.orig_argv to see whether it was called from Pro or not, and then use a proper argument parser to feed parameters into "main" instead of arcpy. You can also get the parameters in text form from sys.argv[1:], which should be consistent regardless of what called your script file. If you really want to get into the weeds, geoprocessing modules have all the features of regular Python modules, so you can write proper entry points and run python -m my_module arg1 argn on top of all the other benefits of modules.
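A rough sketch of that idea, reusing the CODE_MAP from the earlier reply (the sys.orig_argv check is only a heuristic I'm assuming here, and it needs Python 3.10+, so verify the detection in your own environment):

import sys
import argparse
import arcpy

# Same mapping as the earlier example
CODE_MAP = {"North": "N", "South": "S", "Other Code": "Other"}

def main(code: str):
    arcpy.AddMessage(f"The code is: {code}")

def called_from_pro() -> bool:
    # Heuristic only: sys.orig_argv (Python 3.10+) holds the raw interpreter
    # command line, so inspect it for signs that ArcGIS Pro launched the script.
    # Falls back to sys.argv on older interpreters.
    argv = getattr(sys, "orig_argv", sys.argv)
    return any("ArcGISPro" in arg for arg in argv)

if __name__ == "__main__":
    if called_from_pro():
        # Script tool execution: the parameter carries the long display text
        code = CODE_MAP.get(arcpy.GetParameterAsText(0), "N/A")
    else:
        # Command-line execution: accept the short code directly
        parser = argparse.ArgumentParser()
        parser.add_argument("code", choices=list(CODE_MAP.values()))
        code = parser.parse_args().code
    main(code)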
Thanks! Good info there.
I use a lot of PYT toolboxes, and if you're mainly writing Python tools I'd say it's worth the switch:
from arcpy import Parameter

class MyToolbox:
    def __init__(self):
        self.label = "Toolbox Label"
        self.alias = "Toolbox Alias"
        self.tools = [MyTool]

class MyTool:
    def __init__(self) -> None:
        self.category = "Tool Category"
        self.description = "Tool Description"
        self.label = "Tool Name"

    def getParameterInfo(self) -> list[Parameter]:
        p1 = Parameter(
            displayName="Input Features",
            name="input_features",
            datatype="GPFeatureLayer",
            parameterType="Required",
            direction="Input")
        return [p1]

    def isLicensed(self) -> bool:
        return True

    def updateParameters(self, parameters: list[Parameter]) -> None:
        """Modify the values and properties of parameters before internal validation is performed"""
        ...

    def updateMessages(self, parameters: list[Parameter]) -> None:
        """Modify or update messages created by internal validation"""
        ...

    def execute(self, parameters: list[Parameter], messages: list) -> None:
        """The main tool script"""
        ...

    def postExecute(self, parameters: list[Parameter]) -> None:
        """Runs after the script has finished executing"""
        ...
As you can see, each method is called by the ArcGIS Pro interpreter at a set time, with the update methods being called every time a user interacts with the parameters. This lets you run validation or even lock parameter visibility behind conditions.
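For instance, a rough updateParameters sketch (it assumes a second, purely illustrative parameter that the template above doesn't define):

    def updateParameters(self, parameters: list[Parameter]) -> None:
        """Example: only enable the second parameter once the first has a value."""
        # Hypothetical second parameter, not part of the template above
        parameters[1].enabled = parameters[0].value is not None
        if parameters[0].valueAsText == "North":
            # Pre-fill a sensible default for this particular choice
            parameters[1].value = "N"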
If you need last-second validation, you can short-circuit the execute method with a custom validation method:
    ...
    def finalValidation(self, parameters: list[Parameter], messages: list) -> bool:
        """Example of an injected validation method"""
        ...

    def execute(self, parameters: list[Parameter], messages: list) -> None:
        """The main tool script"""
        if not self.finalValidation(parameters, messages):
            raise ValueError("Parameter Validation failed!")
        ...
    ...
Because the parameters list is passed by reference to each method, you can modify the parameters whenever you want and the changes will be visible in any method that runs afterward.
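Tying that back to the original question, here's a sketch of an execute that does the long-text-to-code mapping without ever touching .value (the CODE_MAP dictionary is borrowed from the earlier reply, not part of the template above):

    # Hypothetical mapping, borrowed from the earlier reply
    CODE_MAP = {"North": "N", "South": "S", "Other Code": "Other"}

    def execute(self, parameters: list[Parameter], messages: list) -> None:
        """The main tool script"""
        # Read the descriptive text the user picked without re-setting .value,
        # so the form keeps showing the long text while the script uses the code.
        short_code = self.CODE_MAP.get(parameters[0].valueAsText, "N/A")
        messages.addMessage(f"The code is: {short_code}")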
Heck, I guess I should look at python toolboxes again. I tried switching a big .tbx to a python toolbox a few years ago and it was such a headache I bailed. But when ESRI released the .atbx promising "better cross-release compatibility and persistence, improved performance and scalability, and less possibility of file corruption" I figured that was the format of the future and so migrated everything to that.
I don't think I can use custom methods in a ToolValidator class for anything other than changing the behavior of the form. At least, adding a finalValidation() method at the bottom and trying to reset a parameter value did nothing.
For now, I will just use a dictionary in my script to catch the parameter form values and switch them to the same shorter values that would be used for command-line execution.
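Something along these lines (names just illustrative), so a short code coming straight from the command line passes through unchanged:

import arcpy

CODE_MAP = {"North": "N", "South": "S", "Other Code": "Other"}

raw = arcpy.GetParameterAsText(0)
if raw in CODE_MAP:
    code = CODE_MAP[raw]          # long display text from the parameter form
elif raw in CODE_MAP.values():
    code = raw                    # short code supplied on the command line
else:
    code = "N/A"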