Hi guys,
I have a web map with a few layers in ArcGIS Enterprise 11.3 (also tested in 11.5), and I am using the ArcGIS JavaScript API 4.33 (latest) to display it. The map draws really slowly when zooming and panning. I see similar performance when using the Portal for ArcGIS Map Viewer, and when using the load web map sample available on the ArcGIS Maps SDK for JavaScript website.
Issue:
From what I can see, the app does not seem to be caching anything client-side. At least, that is the feeling I get. I was expecting the Feature Layer to cache the data it obtained when loading and not issue additional requests unless the extent changed. When I zoom in further, I continue to see HTTP requests going through to ArcGIS Server. This results in slow performance: ArcGIS Server is queried for data and then the renderer redraws the symbols, causing them to flicker at times.
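The behaviour I was expecting can be sketched in plain JavaScript (this is illustrative only, not SDK code; the extent objects and function names are made up for the example):

```javascript
// Sketch of the client-side caching I expected: once an extent has been
// fetched, any extent fully contained in it should be served from memory.
function contains(outer, inner) {
  return inner.xmin >= outer.xmin && inner.ymin >= outer.ymin &&
         inner.xmax <= outer.xmax && inner.ymax <= outer.ymax;
}

const fetchedExtents = [];

function needsRequest(extent) {
  if (fetchedExtents.some((cached) => contains(cached, extent))) {
    return false; // zooming in within a fetched extent: no server round-trip
  }
  fetchedExtents.push(extent); // pretend we fetched and cached this extent
  return true; // panning/zooming outside the cache: ask ArcGIS Server
}

const initial = { xmin: 0, ymin: 0, xmax: 100, ymax: 100 };
const zoomedIn = { xmin: 40, ymin: 40, xmax: 60, ymax: 60 };
console.log(needsRequest(initial));  // true: first load hits the server
console.log(needsRequest(zoomedIn)); // false: contained, served from cache
```

Instead, what I observe is that every zoom level behaves like the first branch and goes back to the server.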
Comparison:
I have opened the exact same web map in Sweet for ArcGIS (which uses the ArcGIS JS API) and it draws at least 10x faster. This is a big difference. Not only is the load time faster, but zooming and panning are miles better. I have inspected the HTTP requests/responses for details and these are the only differences I can find:
Sweet
f: pbf
geometry: {"xmin":18733798.3883425,"ymin":-5853241.87796104,"xmax":18736244.373247623,"ymax":-5850795.893055916}
maxAllowableOffset: 0
orderByFields: OBJECTID ASC
outFields: *
outSR: 102100
quantizationParameters: {"mode":"edit","originPosition":"upperLeft"}
resultType: standard
spatialRel: esriSpatialRelIntersects
where: 1=1
geometryType: esriGeometryEnvelope
inSR: 102100
ArcGIS Maps SDK for JavaScript
f: pbf
geometry: {"xmin":18735632.877028253,"ymin":-5850184.396829739,"xmax":18735938.625141393,"ymax":-5849878.648716599}
orderByFields: OBJECTID
outFields: OBJECTID,Subtype
outSR: 102100
quantizationParameters: {"extent":{"spatialReference":{"latestWkid":3857,"wkid":102100},"xmin":18735632.877028253,"ymin":-5850184.396829739,"xmax":18735938.625141393,"ymax":-5849878.648716599},"mode":"view","originPosition":"upperLeft","tolerance":0.5971642834777837}
resultType: tile
returnCentroid: true
returnExceededLimitFeatures: false
spatialRel: esriSpatialRelIntersects
where: 1=1
geometryType: esriGeometryEnvelope
inSR: 102100
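One thing I noticed about these numbers: the tolerance in the SDK's quantizationParameters appears to simply be the map resolution of the requested tile, i.e. the extent width divided by the tile size in pixels (512 here is my assumption, but the numbers line up almost exactly):

```javascript
// The tolerance in the SDK request above looks like the tile's map
// resolution: extent width divided by the tile size in pixels.
// 512 px is an assumption, but it matches to ~12 decimal places.
const xmin = 18735632.877028253;
const xmax = 18735938.625141393;
const tolerance = 0.5971642834777837; // from the request above
const derived = (xmax - xmin) / 512;
console.log(Math.abs(derived - tolerance) < 1e-9); // true
```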
Additionally, I have customized the properties of the layer and set the same resultType and maxAllowableOffset, but that didn't work either. I have not found a way to change the quantizationParameters property, though.
I don't think these parameters are the issue. What Sweet for ArcGIS seems to be doing differently is that it only issues new HTTP requests WHEN the extent changes; until then, it has everything it needs client-side. None of my layers are configured to refresh the data periodically, so I don't understand why things can't be cached client-side the way Sweet does it. I understand that some layer types may require something more dynamic, like Stream Layers, but in most cases I can think of we don't require this behavior. If we want the latest data we simply refresh the page.
The Feature Layer does not seem to honor what I would expect when the mode is set to Snapshot or On-Demand, which I tried to set at the web map level, but that didn't change things either.
Any idea how to improve performance here?
Many thanks,
Jose De Sousa
As far as I'm aware, Sweet will be pulling down the complete geometries, as it is designed for topological editing, whereas the default ArcGIS JS behaviour is to provide different levels of optimisation based on zoom level (i.e. optimised for viewing).
For example see the difference in the mode quantization parameter: https://developers.arcgis.com/rest/services-reference/enterprise/query-feature-service-layer/#quanti...
The performance issue might be a bottleneck in the server's capability to generate optimised geometries. Is it possible for you to tune your Enterprise setup?
Sweet for ArcGIS uses the ArcGIS JS API, which is why it is surprising that the API works well for them but not when used directly. Esri UK seems to have tailored the JS API to work the way they want. I was hoping someone would be able to provide more detail on how we can achieve similar performance.
Both solutions use a web browser to consume the map app: same browser, same laptop, same backend, same everything. Like I said, I have changed the resultType and maxAllowableOffset to match what Sweet requests, so the processing on the backend should be the same.
The issue is why the ArcGIS Maps SDK for JavaScript issues requests when the extent I have zoomed into is still contained within the initial extent. This is not what I expected the API to do. In version 3.x we could control the request mode and set it to On-Demand or Snapshot, meaning geometries were cached client-side. On-Demand was the default behaviour I expected in 4.x. Sweet has tailored the API so it only issues requests when the extent goes beyond the initial extent, as the data is already cached client-side.
Sweet uses a custom implementation of the drawing/request pipeline for its feature layers. It's quite separate from the standard JavaScript API and handles its own client-side caching of features to support full topological validation. It’s more like a full reimplementation of the feature layer class rather than just changing some settings on a standard feature layer.
This kind of setup isn't possible with the standard feature layer's request mode, since it usually doesn't have access to full-resolution geometry. For example, if you zoom in on a line that actually has 1000 points but was originally sent with only 100 (to optimise performance at lower zoom levels), it will need to re-request the full data. You could watch the hasFullGeometries property on the layer view to see whether it lines up with when additional requests are being made.
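That condition can be phrased as a simple resolution check (my own sketch, not API behaviour; the function and values are made up): a cached, generalized copy stays usable only while the view's resolution is no finer than the tolerance it was generalized at.

```javascript
// Sketch: a cached, generalized geometry is only good enough while the
// current view resolution (map units per pixel) is coarser than or equal
// to the tolerance the geometry was generalized at. Zoom in past that and
// the client has to go back to the server for more detail.
function cachedCopyStillUsable(generalizedTolerance, viewResolution) {
  return viewResolution >= generalizedTolerance;
}

const fetchedAt = 0.597; // tolerance of the cached copy, in map units
console.log(cachedCopyStillUsable(fetchedAt, 1.19)); // true: zoomed out, fine
console.log(cachedCopyStillUsable(fetchedAt, 0.15)); // false: zoomed in, re-request
```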
The quantization parameters are the key part of this, and control what type of geometries are sent back.
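As a rough illustration of why the mode matters (my own simplification, not the actual PBF encoding, which is also delta-encoded): in "view" mode coordinates are snapped to a grid of cell size `tolerance`, so vertices that land in the same cell collapse into one.

```javascript
// Simplified sketch of "view" quantization: snap each vertex to a grid of
// cell size `tolerance` (upperLeft origin, so y is flipped) and drop
// consecutive duplicates. This is only meant to show why detail disappears
// at coarse tolerances.
function quantize(path, originX, originY, tolerance) {
  const out = [];
  for (const [x, y] of path) {
    const qx = Math.round((x - originX) / tolerance);
    const qy = Math.round((originY - y) / tolerance);
    const last = out[out.length - 1];
    if (!last || last[0] !== qx || last[1] !== qy) out.push([qx, qy]);
  }
  return out;
}

// A wiggly line whose detail is finer than one grid cell...
const detailed = [[0, 0], [0.2, 0.1], [0.3, 0.3], [0.4, 0.2], [10, 10]];
// ...collapses to two vertices at tolerance 1 (zoomed out), but keeps
// every vertex at tolerance 0.01 (zoomed well in).
console.log(quantize(detailed, 0, 0, 1).length);    // 2
console.log(quantize(detailed, 0, 0, 0.01).length); // 5
```

In "edit" mode the server skips this generalization and returns the full-resolution vertices, which is what Sweet's requests ask for.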
Performance-wise, I could easily imagine a situation where it is quicker for your server to send full geometries once and have your client handle the caching and rendering of that data, than to have your server reprocess the data and send it back at a different level of optimisation each time. This is why I suggested it might be worth trying to identify where the performance bottleneck is: is it in the API, or is your server struggling and slowing down the client?
Thanks Jonathan. Based on your explanation, I would expect the Sweet drawing/request engine to be slower than the standard one in the ArcGIS Maps SDK for JavaScript, because it requests full-resolution geometries for editing purposes. Instead it is much faster. Those geometries end up cached client-side, which is why the browser no longer performs additional requests when you zoom in. This is the behavior I expected from the ArcGIS Maps SDK for JavaScript: since for the most part it is only concerned with viewing, it should be optimized to display faster, not slower. Viewing should always be faster than editing. However, that is not what is happening, hence this post.
Clearly, Sweet is rendering the exact same web map, in the exact same browser, 10x faster than the default engine, even while requesting full-resolution geometries as you say. If anything, that should make it slower, not faster, so perhaps something is going on when generalizing the geometries server-side. Sweet is clearly both requesting and drawing features miles faster, and the backend is the same. Buying more hardware is not the solution when the API can address this, as Sweet demonstrates. We just don't have a way to change the quantization mode to edit, and perhaps, as you mentioned, that is the key here. The WebMap class or esriConfig should allow us to change this so we could get full-resolution geometries. Unless, of course, there is a way to tune ArcGIS Server so it generates optimized geometries much faster; I know hosted feature layers have something for this, but apart from that exception the only option I am aware of is to simplify the data. In most of our use cases we don't deal with a large volume of data, so returning full-resolution geometries and caching them client-side would work much better for us, as it prevents the API from issuing new requests at different zoom levels.
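The closest thing I can think of, and I have not verified that the 4.x display pipeline even accepts the response, would be rewriting the outgoing query with esriConfig.request.interceptors. The rewrite itself would be something like this plain function (the name and shape are my own, untested against the SDK):

```javascript
// Hypothetical query rewrite to force full-resolution geometries, matching
// Sweet's parameters from the comparison above. This would be wired into
// esriConfig.request.interceptors as a `before` hook; whether the 4.x
// tile/rendering pipeline tolerates the un-quantized result is an open
// question.
function forceFullResolution(query) {
  const rewritten = { ...query };
  if (rewritten.quantizationParameters) {
    rewritten.quantizationParameters = JSON.stringify({
      mode: "edit",              // full-resolution, like Sweet's requests
      originPosition: "upperLeft"
    });
  }
  rewritten.maxAllowableOffset = 0;  // no server-side generalization
  rewritten.resultType = "standard"; // match Sweet's parameters above
  return rewritten;
}

const original = {
  f: "pbf",
  resultType: "tile",
  quantizationParameters: "{\"mode\":\"view\"}"
};
const out = forceFullResolution(original);
console.log(JSON.parse(out.quantizationParameters).mode); // "edit"
```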
Regarding your last point, I'd say both. From what we can see, the problem is that the map keeps making new requests as you zoom in because nothing gets cached client-side, which then forces the drawing engine to re-render every time. The only way to avoid this is to return full-resolution geometries by changing the quantization mode from view to edit, and from what I can see that can't be done, which is a shame.
With the mode set to view and resultType set to tile, the server also seems to take longer to process requests (I think more requests are made this way), which of course impacts performance. Maybe the additional server logic used to simplify the geometries for viewing slows it down? Again, I have no way to customize these parameters. I can change maxAllowableOffset to 0 and resultType to standard, but that does not bring the geometry resolution to a point where it can be cached ... I'll have a look at the hasFullGeometries property, but again I can't change this behavior. There should be a way to force the API to return full-resolution geometries so they can be cached, at least as an option. Any ideas on how to tune ArcGIS Server to generate optimized geometries a lot faster (besides, of course, simplifying the data)?
Thanks for coming back to me.
Cheers,
Jose
We have observed that rendering is poor when running on virtual machines with no real graphics card. Maybe Sweet for ArcGIS runs on machines with real graphics cards and can therefore render the layers much better.
Another idea: when using LayerViews you can reduce the complexity of feature geometries to gain performance, in contrast to the original layers with their full geometries.
The ArcGIS JS API definitely works much better with graphics hardware. It relies on WebGL, unlike the previous iterations, which did not use WebGL and instead manipulated DOM elements directly. Most modern libraries are moving this way, but if you need something more lightweight you can use something like Leaflet.
It's a pretty difficult comparison to make; there are huge performance benefits to utilising modern WebGL. It would literally be impossible to render a million points in the DOM... but it does set the minimum spec for smooth operation much higher!