Hi all,
I've been building a web app using the ArcGIS Maps SDK for JavaScript (v4.x) that displays imagery footprints as polygons on a FeatureLayer using a polar stereographic projection (EPSG:3031). The core interaction is simple: hover over a polygon and highlight it.
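For reference, the hover logic is wired up roughly like this. This is a simplified sketch, not my exact code: it assumes `view` is a `MapView` and `layerView` came from `view.whenLayerView(featureLayer)`, and it coalesces pointer moves so only one `hitTest` is in flight at a time:

```javascript
// Returns the first graphic from a hitTest result list, or null.
function pickGraphic(results) {
  const hit = results.find((r) => r.graphic);
  return hit ? hit.graphic : null;
}

// Wires hover-to-highlight; assumes `view` is a MapView and `layerView`
// a FeatureLayerView. Latest pointer position wins while hitTest runs.
function wireHover(view, layerView) {
  let highlight = null; // handle returned by layerView.highlight()
  let pending = null;   // most recent pointer position not yet tested
  let inFlight = false; // true while a hitTest is awaiting

  view.on("pointer-move", async (event) => {
    pending = { x: event.x, y: event.y };
    if (inFlight) return; // coalesce: the running loop will pick it up
    inFlight = true;
    while (pending) {
      const point = pending;
      pending = null;
      const { results } = await view.hitTest(point); // the slow call
      const graphic = pickGraphic(results);
      if (highlight) highlight.remove();
      highlight = graphic ? layerView.highlight(graphic) : null;
    }
    inFlight = false;
  });
}
```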
The problem
On a MacBook with a Retina display, the hover feels noticeably sluggish: there's a visible delay between moving the mouse and the highlight updating. When I switch to a standard (non-Retina) external monitor, the same app feels much snappier and more responsive.
I've measured this by timing `mapView.hitTest()` calls with `performance.now()` across 100 samples:

| Display | devicePixelRatio | Mean | Median | Max |
| --- | --- | --- | --- | --- |
| Retina (built-in) | 2 | 58.96 ms | 69 ms | 108 ms |
| External (non-Retina) | 1 | 30.03 ms | 16.9 ms | 80 ms |
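The measurement harness was essentially the following (a sketch: `view` is the MapView and `screenPoint` the hovered position; the `summarize` helper is plain JS):

```javascript
// Computes mean / median / max over an array of millisecond samples.
function summarize(samples) {
  const sorted = [...samples].sort((a, b) => a - b);
  return {
    mean: samples.reduce((sum, v) => sum + v, 0) / samples.length,
    median: sorted[Math.floor(sorted.length / 2)],
    max: sorted[sorted.length - 1],
  };
}

// Times n sequential hitTest calls at a fixed screen point.
async function timeHitTest(view, screenPoint, n = 100) {
  const samples = [];
  for (let i = 0; i < n; i++) {
    const t0 = performance.now();
    await view.hitTest(screenPoint); // the call being measured
    samples.push(performance.now() - t0);
  }
  return summarize(samples);
}
```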
The hitTest on Retina is consistently around 2× slower, and that difference is very noticeable in practice. I've put together a video showing the issue in a standalone HTML example (no framework, just ArcGIS and vanilla JS) with Chrome's device emulation toggling between DPI=1 and DPI=2.
Profiling in Chrome DevTools
To dig deeper, I recorded Chrome Performance traces during identical hover patterns at emulated DPI settings.
The pattern is that after each highlight update renders, the SDK stops requesting new frames from the browser entirely, and only starts again once the next hitTest promise resolves. At DPI=1, that happens quickly enough that rendering stays continuous. At DPI=2, with hitTest taking over 60 ms, there are visible pauses of 100–140 ms between each highlight update, and that's what the user feels as lag.
The bottom-up CPU profile shows no expensive JS during these gaps. The main thread is genuinely doing nothing. The work appears to happen inside the ArcGIS SDK's own rendering pipeline, not in application code.
I'm not sure exactly what's happening inside the SDK at this point, but I suspect it's related to how the WebGL canvas scales with devicePixelRatio: at DPI=2 the backing canvas has 4× the pixel area of DPI=1, which could be a factor in why hitTest takes longer.
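To put numbers on that: the canvas backing store scales with devicePixelRatio in both dimensions, so the pixel count grows with its square. A back-of-envelope helper, using a hypothetical 1280×800 CSS viewport:

```javascript
// Physical pixels in a canvas backing store for a given CSS size and dpr:
// canvas.width = cssWidth * dpr, canvas.height = cssHeight * dpr.
function backingPixels(cssWidth, cssHeight, dpr) {
  return Math.round(cssWidth * dpr) * Math.round(cssHeight * dpr);
}

console.log(backingPixels(1280, 800, 1)); // 1,024,000 physical pixels
console.log(backingPixels(1280, 800, 2)); // 4,096,000: 4x the area
```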
What I'd actually prefer
For this use case of imagery footprints, I'd happily trade Retina sharpness for responsive hover interaction. Is there a supported way to cap the devicePixelRatio used for hitTest (or for rendering generally) to speed up the checks?
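The only lever I've found so far is unsupported: overriding `window.devicePixelRatio` before the MapView is created so the SDK sees a smaller value. A sketch of that hack (the 1.4 cap is just the value from my experiment, and I'm assuming the SDK reads `window.devicePixelRatio` when sizing its canvas; this is not an official API):

```javascript
// Clamp a device pixel ratio to a maximum. The 1.4 default is the
// value I experimented with, not a recommended setting.
function cappedRatio(actual, cap = 1.4) {
  return Math.min(actual, cap);
}

// Unsupported hack: must run BEFORE the MapView is constructed, and
// assumes the SDK reads window.devicePixelRatio when sizing its canvas.
if (typeof window !== "undefined") {
  const realDpr = window.devicePixelRatio;
  Object.defineProperty(window, "devicePixelRatio", {
    configurable: true,
    get: () => cappedRatio(realDpr),
  });
}
```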
Has anyone else hit this? Any suggestions welcome.
Here is a video of the running application. If I manually reduce the device pixel ratio (2 => 1.4), the interaction becomes drastically snappier.
Is this expected behaviour?