This is an older post, but I am having the same issue with scripts that I run as toolboxes. I am NOT setting the arcpy.env.addOutputsToMap environment at all.
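For what it's worth, the only related change I could even make would be pinning that environment explicitly to rule it out as a variable; a minimal sketch:

```python
import arcpy

# Pin the environment explicitly instead of relying on the default,
# purely to rule it out as a difference between runs.
arcpy.env.addOutputsToMap = False
```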
I have a module that logs timing results for several tools and functions. The same tool, using only core Python and the arcpy module, can take 2-5 times longer to run inside Arc than it does outside Arc on the same datasets.
The scripts themselves are around 1,000 lines of code. Each contains multiple functions within one main class, plus the "stopwatch" class and one "static tools" class that I use as a sort of basic conversion toolkit (get specs for a feature dataset, get the needed standard spatial references, convert text to bool, create a layer in-memory, etc.).
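For context, the "stopwatch" class is just a thin wrapper around the standard-library timer; a simplified sketch with illustrative names:

```python
import time

class Stopwatch:
    """Minimal timer used to log how long each tool or function takes."""

    def __init__(self):
        self._start = None

    def start(self):
        # perf_counter is a monotonic, high-resolution clock
        self._start = time.perf_counter()

    def stop(self, label=""):
        elapsed = time.perf_counter() - self._start
        print(f"{label}: {elapsed:.3f} s")
        return elapsed
```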
What I am looking for, more than anything, is any advice or feedback that would tell me things like:
- Does the length of a sub-function's code matter when maximizing toolbox performance?
- e.g. is it better to split larger functions into smaller ones, or vice versa?
- Are there penalties for storing things in memory (assuming we have plenty)? See the sketch after this list for the pattern I mean.
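To make the in-memory question concrete, this is the pattern I am asking about. Paths and names here are placeholders, and "memory" is the current in-memory workspace (older releases used "in_memory"):

```python
import arcpy

src_fc = r"C:\data\test.gdb\parcels"   # placeholder input feature class
mem_fc = r"memory\parcels_tmp"         # in-memory copy

# Copy the features into the in-memory workspace, then build a layer on top.
arcpy.management.CopyFeatures(src_fc, mem_fc)
lyr = arcpy.management.MakeFeatureLayer(mem_fc, "parcels_lyr")
```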
Please do not offer things like "store your stuff locally." I know about this. All of these test scripts run on a Threadripper + 3080 machine that writes to a dedicated drive and reads from an NVMe RAID 10 array.