
Arcade Performance: FeatureSetByRelationshipName vs. FeatureSetByName

04-02-2026 06:50 AM
FabianHannich
New Contributor

 

I am currently working with Arcade expressions in attribute rules and have come across a performance-related question for which I haven’t yet found a clear comparison in the community.

 

Initial situation:

I am working with the following feature classes and tables:

  • Address
    Fields: aId (PK), from_number, sId (FK)
  • Street
    Fields: sId (PK), name
  • Status
    Fields: aIdORsId (FK), status, type

 

Relationships:

  • Address – Status (1:1)
  • Street – Address (1:n)
  • Street – Status (1:1)

I would like to create an attribute rule on the Status table:

If:

  • type = "Address" and
  • status = "Error"

then the status of the related Street should also be set to "Error".

 

Implementation approaches:

Option 1: Using FeatureSetByRelationshipName

// Status -> related Address via the "Address-Status" relationship class
var addr = First(FeatureSetByRelationshipName($feature, "Address-Status"));
// Address -> parent Street
var str = First(FeatureSetByRelationshipName(addr, "Street-Address"));
// Street -> its Status record
var stat = First(FeatureSetByRelationshipName(str, "Street-Status"));

Option 2: Using FeatureSetByName

var addr = First(FeatureSetByRelationshipName($feature, "Address-Status"));
var statFs = FeatureSetByName($datastore, "Status", ["aIdORsId", "status", "type"], false);
// @-substitution in Filter only works with simple variables, so pull the key out first
var sId = addr.sId;
var stat = First(Filter(statFs, "aIdORsId = @sId"));
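Whichever lookup wins, the rule still has to write "Error" to the related Street's Status row. A minimal sketch of the full rule body, assuming the relationship names from Option 1, that the rule returns an edit dictionary, and that "Status" is the class name in the datastore (adjust names to your geodatabase):

```arcade
// Only act on Status rows flagging an Address error
if ($feature.type != "Address" || $feature.status != "Error") {
    return;
}

// Status -> Address -> Street -> Street's Status, guarding each hop
var addr = First(FeatureSetByRelationshipName($feature, "Address-Status"));
if (IsEmpty(addr)) { return; }

var str = First(FeatureSetByRelationshipName(addr, "Street-Address"));
if (IsEmpty(str)) { return; }

var stat = First(FeatureSetByRelationshipName(str, "Street-Status"));
if (IsEmpty(stat)) { return; }

// Push the error to the Street's Status record via an edit dictionary.
// The type guard above keeps this edit (a "Street"-type row) from
// re-triggering the rule recursively.
return {
    "edit": [{
        "className": "Status",
        "updates": [{
            "objectID": stat.OBJECTID,
            "attributes": { "status": "Error" }
        }]
    }]
};
```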

 

Question:

Which approach is more performant?

I am particularly interested in:

  • Should FeatureSetByRelationshipName generally be preferred when relationships are available?
  • How much does the number of FeatureSet calls impact performance?
  • Is a combination of FeatureSetByRelationshipName and FeatureSetByName problematic in terms of additional requests?
2 Replies
VenkataKondepati
Frequent Contributor

Hi @FabianHannich,

As per my experience, option 1 is cleaner. It’s "Geodatabase-aware," and it’s significantly easier for the next person to read when your code breaks six months from now.

If you’re doing this in a loop or on a batch of 10,000 records, any FeatureSet call will hurt. But for a standard Attribute Rule? Option 1 is the professional's choice.

 

Regards,
Venkat
DavidSolari
MVP Regular Contributor

If you aren't tallying up CPU instructions, the answer to "which option is more performant" is always "the one that ran the fastest in testing." Build a representative dataset, move it to a testing clone of your usual RDBMS setup, load that into a copy of Pro on an idle machine, and then run some tests.

For each attribute rule, run a bulk update and measure the time it took, reset the state of the test data outside of the measured window, then repeat the entire test several times so you can average out external performance noise. Then you just use whatever wins! Like @VenkataKondepati mentioned, option 2 is tougher to maintain in the future, so if it wins I recommend leaving a big fat comment in the attribute rule that explains what's going on and also includes option 1 for reference.

Some resources for building your test runs:
arcpy.da.UpdateCursor (to change an attribute and trigger the rule)
arcpy.da.Editor (to ensure the updates are processed in the same edit session)
time.perf_counter (call it before and after the edit session, then compute after - before to get the elapsed time. Always use a performance counter like this instead of standard datetime functions to measure performance, as it's more precise and unaffected by wall-clock synchronization)
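The measurement scaffolding can be sketched in plain Python; the arcpy parts are shown as comments because they depend on your geodatabase, and the helper name `time_runs` and its parameters are my own, not from the resources above:

```python
import time

def time_runs(run_test, reset_state, runs=5):
    """Average wall time of run_test over several runs, resetting the
    test data between runs outside of the measured window."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        run_test()        # e.g. arcpy.da.Editor session + UpdateCursor bulk edit
        samples.append(time.perf_counter() - start)
        reset_state()     # restore the test rows; deliberately not timed
    return sum(samples) / len(samples)

# Usage sketch (pseudo-arcpy, adapt to your workspace and table):
# def run_test():
#     with arcpy.da.Editor(workspace):
#         with arcpy.da.UpdateCursor(status_table, ["status"]) as cur:
#             for row in cur:
#                 row[0] = "Error"
#                 cur.updateRow(row)
```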
