Geometry functions as calculations: What scale is used?

03-27-2025 10:43 AM
AlfredBaldenweck
MVP Regular Contributor

This may be a dumb question, but I remember Arcade's geometry functions varying results by scale-- the farther out you zoom, the more it fudges the answer of things like Intersects().

My question here is: What scale is used for field calculations and the like? Or does it matter in those profiles?

 

Thanks!

6 Replies
RPGIS
MVP Regular Contributor

I don't quite know myself, but I assume it's limited by the extent of the features. I don't know whether Intersects() considers the extent of all input features, or whether it defines the extent only by the union of the features in the intersect result.

AlfredBaldenweck
MVP Regular Contributor

I mean, this is the kind of thing that makes me worried: Frequently asked questions | ArcGIS Arcade | Esri Developer

You can use geometry functions in any profile that includes the geometry function bundle. Keep in mind that some profiles, such as visualization, generalize geometries to improve performance. Therefore, geometry calculations will lose precision as geometries become more and more generalized (as you zoom out).

For example, calculating the area of a parcel when zoomed out to the extent of a county will yield a less precise result than calculating the area of the same polygon when zoomed to the polygon's extent.

 

HaydenWelch
MVP Regular Contributor

In my experience, I haven't run into any major issues with the scale factor, at least when working with file geodatabases, as I think feature-level attribute rules operate on the feature extent.

This is likely more of an issue for label rules, which aren't linked to specific features and instead live at the layer level.

It's worth doing some testing on label rules and attribute rules (I'll probably check it sometime this week and make a report), but if you need accurate information, it's probably best to precalculate with arcpy, save the results in a field, and use Arcade just to run regular queries and sum those stored values.

 

For example: each parcel has an area that is calculated by arcpy and a field containing the IDs of all intersecting parcels; a label/attribute rule then uses those IDs to query the other parcels and accumulate the areas.
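The accumulation step of that approach can be sketched in plain Python (all field and parcel names here are made up; the arcpy step that precalculates the areas and intersecting IDs is assumed to have already run):

```python
# Sketch of the accumulation step, assuming arcpy has already written
# a precalculated area and a comma-separated list of intersecting
# parcel IDs into each parcel's attributes (field names are made up).
parcels = {
    "P1": {"area_sqft": 12000.0, "intersecting_ids": "P2,P3"},
    "P2": {"area_sqft": 8500.0,  "intersecting_ids": "P1"},
    "P3": {"area_sqft": 4300.0,  "intersecting_ids": "P1"},
}

def accumulated_area(parcel_id, parcels):
    """Sum the stored areas of a parcel's precalculated neighbors."""
    ids = parcels[parcel_id]["intersecting_ids"].split(",")
    return sum(parcels[i]["area_sqft"] for i in ids if i)

print(accumulated_area("P1", parcels))  # 8500 + 4300 = 12800.0
```

Because the spatial work happened once in arcpy, the rule itself is just dictionary lookups and a sum, so it can't be affected by how generalized the geometries are on screen.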

AlfredBaldenweck
MVP Regular Contributor

Yeah, the issue here is web mapping and stuff.

DavidColey
MVP Regular Contributor

Yes, I too am constantly doing stuff like this in the web map to return info to the popup, because I work for county government and everyone wants to know what their property intersects:

var intParcel = Intersects(FeatureSetByName($map, "Active Petition", ['petitiontype', 'petitionid'], false), Buffer($feature, -10, 'feet'));
var plist = '';
var pFirst = First(intParcel);

// Build a dictionary mapping petitiontype domain codes to descriptions
function GetInitDomainDct() {
    var dom = Domain(intParcel, "petitiontype");
    var cvs = dom["codedValues"];
    var dct = {};
    for (var i in cvs) {
        var cv = cvs[i];
        dct[cv["code"]] = cv["name"];
    }
    return dct;
}
var rt = GetInitDomainDct();

if (!IsEmpty(pFirst)) {
    for (var k in intParcel) {
        plist += rt[k.petitiontype] + ' ' + k.petitionid + TextFormatting.NewLine;
    }
} else {
    plist = 'This parcel is not part of any current petition(s).';
}

return plist;

 

I kind of get around the +/- inaccuracies in our parcel lines by adding the negative buffer, but what I really need is to set up a case (or a when, or an if) that is scale dependent and won't run the intersect at small scales, beyond say 1:36K or 1:18K.

But I'm not sure where to get at the scale parameter when using the map profile.  If I knew where to get at that, I'd be more than willing to run some tests on my expressions and see what comes back...
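For what it's worth, Arcade does expose a `$view.scale` global in some profiles (it's documented for the labeling and visualization profiles; whether the popup profile in your map supports it is worth checking before relying on it). A sketch of the guard might look like:

```arcade
// Sketch only: $view.scale is documented for the labeling and
// visualization profiles; verify the profile you're in exposes it.
if ($view.scale > 36000) {
    return 'Zoom in beyond 1:36K to see petition intersections.';
}
// ...otherwise run the Intersects()/Buffer() logic as before
```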

HaydenWelch
MVP Regular Contributor

If you want a more accurate/quicker rule, you can use arcpy to grab all the intersecting parcels and write their IDs into a hidden field. That way it's only run once and can be batch updated when needed.

Then your label rule doesn't need to do any spatial querying, and instead just feeds that list of IDs into a regular query that isn't affected by scale.
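One way that ID-only query might look in Arcade (the `intersect_ids` field, the 'Parcels' layer, and the `parcel_id`/`area_sqft` fields are all made-up names; the comma-separated ID list is assumed to have been written by arcpy):

```arcade
// Hypothetical sketch: 'intersect_ids' is a field arcpy has filled
// with a comma-separated list of intersecting parcel IDs.
var ids = Split($feature.intersect_ids, ',');
var idList = "'" + Concatenate(ids, "','") + "'";
var parcels = FeatureSetByName($map, 'Parcels', ['parcel_id', 'area_sqft'], false);

// Attribute-only query: no spatial operator, so geometry
// generalization at small scales can't change the result.
var total = 0;
for (var p in Filter(parcels, "parcel_id IN (" + idList + ")")) {
    total += p.area_sqft;
}
return total;
```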
