Hi guys,
I have a web map with a few layers in ArcGIS Enterprise 11.3 (also tested in 11.5), and I am using the ArcGIS JavaScript API 4.33 (latest) to display the map. It draws really slowly when zooming and panning. I see similar performance when using the Portal for ArcGIS Map Viewer or the load web map sample available on the ArcGIS Maps SDK for JavaScript web site.
Issue:
From what I can see, the app does not seem to be caching anything client-side. At least, that is the impression I get. I was expecting the Feature Layer to cache the data it obtained when loading and not issue additional requests unless the extent changed. When I zoom in further, I continue to see HTTP requests going to ArcGIS Server. This results in slow performance, as the app requests data from ArcGIS Server and the renderer then redraws the symbols, causing it to flicker at times.
Comparison:
I have opened the exact same web map in Sweet for ArcGIS (which uses the ArcGIS JS API) and it draws at least 10x faster. This is a big difference. Not only is the load time faster, but zooming and panning are miles better. I have inspected the HTTP requests/responses for details and these are the only differences I can find:
Sweet
f: pbf
geometry: {"xmin":18733798.3883425,"ymin":-5853241.87796104,"xmax":18736244.373247623,"ymax":-5850795.893055916}
maxAllowableOffset: 0
orderByFields: OBJECTID ASC
outFields: *
outSR: 102100
quantizationParameters: {"mode":"edit","originPosition":"upperLeft"}
resultType: standard
spatialRel: esriSpatialRelIntersects
where: 1=1
geometryType: esriGeometryEnvelope
inSR: 102100
ArcGIS Maps SDK for JavaScript
f: pbf
geometry: {"xmin":18735632.877028253,"ymin":-5850184.396829739,"xmax":18735938.625141393,"ymax":-5849878.648716599}
orderByFields: OBJECTID
outFields: OBJECTID,Subtype
outSR: 102100
quantizationParameters: {"extent":{"spatialReference":{"latestWkid":3857,"wkid":102100},"xmin":18735632.877028253,"ymin":-5850184.396829739,"xmax":18735938.625141393,"ymax":-5849878.648716599},"mode":"view","originPosition":"upperLeft","tolerance":0.5971642834777837}
resultType: tile
returnCentroid: true
returnExceededLimitFeatures: false
spatialRel: esriSpatialRelIntersects
where: 1=1
geometryType: esriGeometryEnvelope
inSR: 102100
Additionally, I have customized the layer's properties and set the same resultType and maxAllowableOffset, but that didn't help either. I have not found a way to change the quantizationParameters property, though.
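For anyone wanting to see exactly what the API sends, a request interceptor at least lets you inspect (and, as a sketch, override) these query parameters. The URL pattern and the overrides below are only for illustration:

import esriConfig from "@arcgis/core/config.js";

// Sketch only: intercept feature-service query requests before they are sent,
// to inspect or override parameters the FeatureLayer class does not expose.
esriConfig.request.interceptors.push({
  urls: /FeatureServer\/\d+\/query/i,   // generic placeholder pattern
  before(params) {
    const q = params.requestOptions.query;
    console.log("outgoing query:", q.resultType, q.quantizationParameters);
    // These overrides mirror what Sweet sends; whether the layer view then
    // caches the responses any differently is not guaranteed.
    q.resultType = "standard";
    q.maxAllowableOffset = 0;
  }
});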
I don't think these parameters are the issue. What Sweet for ArcGIS seems to be doing differently is that it only issues new HTTP requests WHEN the extent changes. Until then, it has everything it needs client-side. I don't have any layer configured to refresh its data periodically, so I don't understand why things can't be cached client-side the way Sweet does it. I understand that some types of layers may require something more dynamic, like Stream Layers, but in most cases I can think of we don't need that behavior. If we want the latest data, we simply refresh the page.
The Feature Layer also does not seem to honor what I would expect when the mode is set to Snapshot or On-Demand, which I tried to set at the web map level, but that didn't change things either.
Any idea how to improve performance here?
Many thanks,
Jose De Sousa
As far as I'm aware, Sweet will be pulling down the complete geometries as it is designed for topological editing, whereas the default ArcGIS JS behaviour is to provide different levels of optimisation based on zoom level (i.e. optimised for viewing).
For example see the difference in the mode quantization parameter: https://developers.arcgis.com/rest/services-reference/enterprise/query-feature-service-layer/#quanti...
The performance bottleneck might be the server's capability to generate the optimised geometries? Is it possible for you to tune your Enterprise setup?
Sweet for ArcGIS uses the ArcGIS JS API, hence why it is surprising that the API works well for them but not when used directly. Esri UK seems to have tailored the JS API to work the way they want. I was hoping someone would be able to provide more detail on how we can achieve similar performance.
Both solutions use a web browser to consume the map app: same browser, same laptop, same backend, same everything. Like I said, I have changed the resultType and maxAllowableOffset to match what Sweet requests, so the processing on the backend should be the same.
The issue is why the ArcGIS Maps SDK for JavaScript issues requests when the extent I have zoomed into is still contained in the initial extent. This is not what I expected the API to do. In version 3.x we could control the request mode and set it to On-Demand or Snapshot, meaning geometries were cached client-side. On-Demand was the default behaviour I expected in 4.x. Sweet has tailored the API so it only issues requests when the extent goes beyond the initial extent, as the data is already cached client-side.
Sweet uses a custom implementation of the drawing/request pipeline for its feature layers. It's quite separate from the standard JavaScript API and handles its own client-side caching of features to support full topological validation. It’s more like a full reimplementation of the feature layer class rather than just changing some settings on a standard feature layer.
This kind of setup isn't possible with the standard feature layer's request mode, since it usually doesn't have access to full-resolution geometry. For example, if you zoom in on a line that actually has 1000 points but was originally sent with only 100 (to optimise performance at lower zoom levels), it will need to re-request the full data. You could watch the hasFullGeometries property to see whether its changes align with when additional requests are being made.
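Something like this rough sketch (assuming a "view" and "featureLayer" from your app; I believe hasFullGeometries actually lives on the FeatureLayerView rather than the layer itself):

import * as reactiveUtils from "@arcgis/core/core/reactiveUtils.js";

// Rough sketch: watch whether the layer view currently holds full-resolution
// geometries. If this flips to false as you zoom, the view only has quantized
// geometries client-side and will have to go back to the server for more detail.
const layerView = await view.whenLayerView(featureLayer);
reactiveUtils.watch(
  () => layerView.hasFullGeometries,
  (hasFull) => console.log("full geometries cached client-side:", hasFull)
);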
The quantization parameters are the key part of this, and are controlling what type of geometries are sent back.
Performance-wise, I could easily imagine a situation where it is quicker for your server to send you full geometries and have your client handle the caching and rendering of that data, than to have your server reprocess the data and send it back at a different level of optimisation. This is why I suggested it might be worth trying to identify where the performance bottleneck is. Is it in the API, or is your server struggling and slowing down the client?
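As a quick sketch for narrowing that down (again assuming the "view" and "featureLayer" from your app), you could time a raw query against the service separately from how long the layer view takes to finish updating:

import * as reactiveUtils from "@arcgis/core/core/reactiveUtils.js";

// Sketch: time the raw server round trip separately from the rendering/update.
const t0 = performance.now();
const { features } = await featureLayer.queryFeatures({
  geometry: view.extent,
  returnGeometry: true,
  outFields: ["*"]
});
console.log(`query round trip: ${Math.round(performance.now() - t0)} ms for ${features.length} features`);

// Compare with how long the layer view spends updating after a zoom/pan.
const layerView = await view.whenLayerView(featureLayer);
const t1 = performance.now();
await reactiveUtils.whenOnce(() => !layerView.updating);
console.log(`layer view finished updating after ${Math.round(performance.now() - t1)} ms`);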
Thanks Jonathan. I was expecting the Sweet drawing/request engine to be slower than the standard one available in the ArcGIS Maps SDK for JavaScript because, like you said, it requests the full-resolution geometries. Instead it is much faster. These full-resolution geometries are being cached client-side, and that is why the browser no longer needs to request anything else when you zoom in. This is the behavior I would expect.
You are right that this might be because of the mode being set to edit in the quantizationParameters. However, I would have expected the ArcGIS Maps SDK for JavaScript to be faster, since for the most part it is only concerned with viewing and should therefore be optimized to display quicker, not slower. Viewing should always be faster than editing. However, this is not what is happening, and it is the reason for this post.
Clearly, Sweet is rendering the exact same web map, in the exact same browser, 10x faster than the default engine, even when requesting the full-resolution geometries. That should make it slower, not faster, so perhaps something is going wrong when generalizing the geometry server-side. If returning the full-resolution geometries is faster because no extra processing has to happen server-side, and the geometries can then be cached client-side, which speeds up zooming and panning, then this should be the default for viewing, or at least there should be an option to configure it this way. Sweet is clearly requesting and drawing faster, both on the initial load and when zooming and panning afterwards. The backend is the same. Buying more hardware is not the solution when the API can address this, as Sweet demonstrates. We just don't have a way to change the quantization parameters, and perhaps, as you mentioned, that is the key here.
Regarding your last point, I'd say both. The problem, from what we can see, is that the map keeps requesting data as you zoom in because nothing gets cached client-side. This then forces the drawing engine to re-render every time. The server also takes longer to return results requested by the ArcGIS JavaScript API with mode set to view, which of course impacts performance. If mode set to view is slower, then what is the point? We might as well return the full, unprocessed geometries and cache everything client-side. I don't know why it is slower with mode set to view. Maybe the tolerance sent with the quantization parameters and the additional processing to simplify the geometry for viewing slow it down? Again, we have no way to customize these parameters. I can change the maxAllowableOffset to 0 and the resultType to standard, but this does not bring the geometries to a resolution where they can be cached ... I'll have a look at the hasFullGeometries property, but again I can't change this behavior. Is there any way to force the API to get full-resolution geometries so they can be cached? Or any ideas on how to boost the server performance with the mode set to view?
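The closest workaround I can think of is building the snapshot myself: query everything once with full geometries and put the results into a client-side FeatureLayer, so nothing else is requested while zooming and panning. A rough sketch, assuming the feature count fits within the service's maxRecordCount (the URL is just a placeholder):

import FeatureLayer from "@arcgis/core/layers/FeatureLayer.js";

// Rough sketch of a do-it-yourself "snapshot" mode: pull all features once with
// full geometries, then render them from a client-side FeatureLayer.
const serviceLayer = new FeatureLayer({
  url: "https://myserver/arcgis/rest/services/MyMap/FeatureServer/0"   // placeholder URL
});
await serviceLayer.load();

const { features } = await serviceLayer.queryFeatures({
  where: "1=1",
  outFields: ["*"],
  returnGeometry: true,
  maxAllowableOffset: 0
});

const snapshotLayer = new FeatureLayer({
  source: features,                           // client-side graphics, cached in the browser
  fields: serviceLayer.fields,
  objectIdField: serviceLayer.objectIdField,
  geometryType: serviceLayer.geometryType,
  spatialReference: serviceLayer.spatialReference,
  renderer: serviceLayer.renderer,
  popupTemplate: serviceLayer.popupTemplate
});
map.add(snapshotLayer);                       // "map" assumed from the existing app

But that feels like reimplementing what Snapshot mode used to give us for free.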
Thanks for coming back to me.
Cheers,
Jose
We made the observation that when running on virtual machines with no real graphics card, rendering is bad. Maybe Sweet for ArcGIS makes use of a real graphics card and can therefore render the layers much better.
Another idea: when using LayerViews you can reduce the complexity of feature geometries to gain performance, in contrast to the original layers with their full geometries.
The ArcGIS JS API definitely works much better with graphics hardware. It relies on WebGL, unlike the previous iterations, which did not use WebGL and instead manipulated DOM elements directly. Most modern libraries are moving this way, but if you need something more lightweight you can use something like Leaflet.
It's a pretty difficult comparison to make; there are huge performance benefits to utilising modern WebGL. It would literally be impossible to render a million points in the DOM... but it does set the minimum spec needed for smooth operation much higher!