Arcade FeatureSet Functions should have the option to output to RAM

05-15-2023 10:50 AM
Status: Open
jcarlson
MVP Esteemed Contributor

You can read the related blog post here: https://community.esri.com/t5/arcgis-online-blog/improving-expression-performance-a-custom-function/...

In short, complex expressions that involve multiple layers and loops can quickly send an extremely large number of queries to the respective services, creating unnecessary load on the servers and additional processing time for the end user.

I wrote a custom function (see linked post above) to build a 100% in-memory FeatureSet, and I have found it an improvement in nearly all cases. But it took years of putting up with poorly performing expressions before I was driven to investigate what is really a simple solution.

It would be great if this kind of option were built in. I don't think every expression would benefit from this, so perhaps make it an optional parameter, as @JohannesLindner suggests in his response to my post. If it defaults to the current method, everybody's expressions keep working, no problem. But for those of us with more complex expressions, we could specify that we want our FeatureSet "memorized" and reap the performance benefits.

var fs = FeatureSetByPortalItem(portalObject, itemId, layerId?, fields?, includeGeometry?, memorize?)
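For illustration, a Memorize-style helper can already be written in Arcade today. This is only a sketch of the general approach, not the exact implementation from the linked post; the helper name and the portal item reference are placeholders:

```
// Sketch: copy a service-backed FeatureSet into memory so that
// subsequent loops and filters do not re-query the service.
// Hypothetical helper; see the linked blog post for the author's version.
function Memorize(fs) {
    var temp = {
        fields: Schema(fs).fields,  // reuse the source schema
        geometryType: '',           // attributes only; geometry is dropped
        features: []
    };
    for (var f in fs) {            // for-in over a FeatureSet yields features
        var attrs = {};
        for (var i in temp.fields) {  // for-in over an array yields indices
            var name = temp.fields[i].name;
            // Note: date fields may need extra handling when round-tripped
            attrs[name] = f[name];
        }
        Push(temp.features, { attributes: attrs });
    }
    // FeatureSet() from JSON text returns a purely in-memory FeatureSet
    return FeatureSet(Text(temp));
}

// Usage: hit the service once, then work entirely in memory.
var fs = Memorize(FeatureSetByPortalItem(Portal('https://www.arcgis.com'), 'itemId', 0, ['*'], false));
```

A built-in `memorize` parameter would make this kind of copy step unnecessary and let the engine decide the most efficient way to cache the rows.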

 

3 Comments
erica_poisson

I've recently used @jcarlson's custom "Memorize" function within one of my Data Expressions at the suggestion of @JohannesLindner, and it significantly improved the performance of my dashboard - user load times went from ~2 minutes to ~5-10 seconds.

It would be nice if something like this were built into Arcade to help users improve performance.

Vinzafy

Full support for this idea! The memorize function that you created made one of my data expression scripts load 47 times faster than the previous method.

Having this as a built-in function would be incredibly beneficial not only to users, but also to the servers, which wouldn't get bombarded with hundreds to thousands of requests every time a relevant script is run.

But for the time being, thank you for the workaround @jcarlson and additional support and suggestions @JohannesLindner.