
Screening widget returning incorrect results

Question asked on Mar 1, 2018
Latest reply on May 18, 2018 by cbiery_EMNRD

Has anybody else had issues with the screening widget report returning bizarrely high area sums? 


I'm currently in the configuration stage of a web app and have been testing with various zipped shapefile areas of interest (AOIs). One ~300-hectare AOI polygon is returning an overlap of 17 million hectares for one of my feature layer categories (the layer has 4 classes; the other 3 return expected results). The problem isn't isolated to a single feature layer, either: layers built from different source data also return oddball results (a 58,000 ha overlap on a 120 ha polygon, a 6,000 ha overlap on a 250 ha polygon, all from different input feature layers). The same weird results occur when I draw an AOI instead of uploading a shapefile.

In most cases (but not all), the reported area sum is far higher than the total area of all features in the feature layer (e.g. one feature layer totals ~3.5 million ha, yet I'm getting 17 million ha returned for a single class within that data). I've been over the source data with a fine-tooth comb and it's as clean as it can be, so I suspect I'm either configuring something incorrectly or there is something up with the screening widget itself.
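For what it's worth, projection distortion alone can't account for discrepancies of this size. Web Mercator stretches both axes by 1/cos(latitude), so planar areas measured in it are inflated by the square of that factor — roughly 2x at 45°N and 4x at 60°N, nowhere near the ~56,000-fold difference above (17 million ha reported on a ~300 ha AOI). A quick plain-Python check, with the latitudes picked purely for illustration:

```python
import math

def web_mercator_area_inflation(lat_deg):
    """Area scale factor for Web Mercator at a given latitude.

    The projection stretches both x and y by 1/cos(lat), so areas
    measured in projected units are inflated by 1/cos(lat) squared.
    """
    return 1.0 / math.cos(math.radians(lat_deg)) ** 2

# Illustrative latitudes only (the AOI's real latitude isn't stated):
for lat in (0, 35, 45, 60):
    print(f"{lat:>2} deg N: x{web_mercator_area_inflation(lat):.2f}")
# 45 deg N gives exactly x2.00; 60 deg N gives exactly x4.00
```

So even if the widget were summing raw Web Mercator areas without an equal-area correction, that would only explain errors of a few multiples, not four orders of magnitude.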


I've tried the following with no luck:

  • Reprojected the source data to Web Mercator prior to publishing the feature layer
  • Reprojected the AOI shapefiles to Web Mercator prior to using them in the screening widget
  • Ran topology checks on all my source data to ensure no overlaps
  • Re-indexed the spatial index of the source data & republished the feature layer
  • Dissolved the data on the class field so the feature layer has fewer features
  • Deleted published feature layer and published a fresh feature layer
  • Split out the data with a definition query on the source and published a single-class feature layer – it still returns an incorrect area sum
  • Exported the features for a single class into a new feature class and published this as a feature layer
  • Simplified the data (collapsed vertices within 10m) 
  • Removed all other feature layers from the screening widget
  • Diced the data with a max 1000 vertices
  • Diced the data with a max 500 vertices
  • Uploaded the file geodatabase to ArcGIS Online (AGO) and made a feature layer
  • Uploaded a subset of the data to AGO and made a feature layer
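For anyone wanting to reproduce or cross-check this, an independent overlap calculation on a trivially simple test case is a useful yardstick: if the widget's number for the same test AOI differs wildly from the hand-computed one, the problem is in the widget or its configuration, not the data. A minimal sketch using hypothetical axis-aligned rectangles in a projected CRS (all coordinates invented for illustration):

```python
def rect_overlap_area(a, b):
    """Intersection area of two axis-aligned rectangles.

    Each rectangle is (xmin, ymin, xmax, ymax) in projected
    coordinates (metres). Returns 0.0 if they do not overlap.
    """
    dx = min(a[2], b[2]) - max(a[0], b[0])
    dy = min(a[3], b[3]) - max(a[1], b[1])
    return dx * dy if dx > 0 and dy > 0 else 0.0

# Hypothetical ~300 ha square AOI (1732 m x 1732 m) against a
# feature that covers its western half:
aoi = (0.0, 0.0, 1732.0, 1732.0)
feature = (-1000.0, 0.0, 866.0, 1732.0)

overlap_ha = rect_overlap_area(aoi, feature) / 10_000  # m^2 -> ha
print(round(overlap_ha, 1))  # 150.0 — anything wildly larger flags a bug
```

The expected overlap here is ~150 ha, so a widget report in the thousands or millions of hectares for an equivalent test would confirm the calculation, not the data, is at fault.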


The screening widget looks like it has fantastic potential, but if I can't rely on the results I'll have to look at other ways to get a similar output. If anyone has any thoughts on other things I could try to make the screening widget results more reliable for my data, I'm all ears. I've attached a sample PDF output showing the anomalies.