We have a customer running a large set of services (map and feature services) on the same server/host.
Comparing the Runtime SDK's performance on iOS to a web app's performance, we see that the web app is able to refresh the map faster than the native client.
To me, it looks like the native client queues up the requests (one layer loads before the next, etc.), and I am struggling to figure out why.
My hypothesis is that this might be caused by some OS-level limitation on the maximum number of connections per host, but setting `HTTPMaximumConnectionsPerHost` to a high number does not seem to improve anything.
So I guess the questions are:
* How does the Runtime perform its requests, and can I somehow tune the HTTP client to open more connections per host?
* Is there anything else I could look into to improve the native client's performance?
Runtime makes requests via a URLSession that it creates internally. Runtime uses the default `HTTPMaximumConnectionsPerHost` setting and does not expose a way to configure that setting.
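For context, this is what that Foundation setting looks like when you control the `URLSession` yourself. This is a standalone sketch, not Runtime API; the Runtime keeps its session internal, so you cannot apply this to its requests.

```swift
import Foundation

// Standalone illustration of the Foundation knob the Runtime does not
// expose. The platform default is a small, single-digit limit.
let config = URLSessionConfiguration.default
config.httpMaximumConnectionsPerHost = 20  // raise the per-host cap
let session = URLSession(configuration: config)
```

Note that this caps connections per *host*, which is why splitting services across ports (below) can sidestep it when you own the session.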
I was able to get higher overall throughput in my unit test, using the default HTTP client with its 5-connections-per-host limit, simply by spreading the requests over multiple ports.
I am not able to improve performance through the Runtime, though, using the same strategy. When I add multiple tiled raster layers, it looks as if the Runtime waits to start loading layer 2 until layer 1 has completed. Am I right?
Unfortunately, that hits us pretty hard, since we have some slow servers that spend way too much time rendering the rasters. We have about 10 raster layers that perform decently in parallel, but since the Runtime seems to load them in sequence, it takes one minute (instead of 6 seconds) to load the map.
Hopefully I am just way off here :)
As a workaround to a URLSession bug that starts the "timeout" countdown as soon as a request is queued, we've had to add an additional queue around URLSession. Our queue size matches URLSession's `HTTPMaximumConnectionsPerHost`. However, we do not take the port into consideration when queueing up the requests; we only look at the host. If Apple is also keying on the port, we could potentially do the same. Can you log an issue through support?
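For what it's worth, the host-only keying described above can be sketched like this (the names and structure are illustrative, not the actual Runtime code):

```swift
import Foundation
import Dispatch

// Sketch of a per-host request gate: requests are admitted per host
// only, with the port ignored. That is why spreading services across
// multiple ports does not raise concurrency through this queue, even
// though URLSession itself pools connections per host:port.
final class PerHostRequestGate {
    private let limit: Int
    private var semaphores: [String: DispatchSemaphore] = [:]
    private let lock = NSLock()

    init(limit: Int) { self.limit = limit }

    // Key by host only. Keying by "host:port" instead would match the
    // multi-port workaround and allow more requests in flight.
    func semaphore(for url: URL) -> DispatchSemaphore {
        lock.lock(); defer { lock.unlock() }
        let key = url.host ?? ""
        if let existing = semaphores[key] { return existing }
        let created = DispatchSemaphore(value: limit)
        semaphores[key] = created
        return created
    }

    // Hold the request back until a slot frees up, so URLSession's
    // timeout clock only starts once the request is actually runnable.
    func perform(_ url: URL, work: @escaping (@escaping () -> Void) -> Void) {
        let gate = semaphore(for: url)
        DispatchQueue.global().async {
            gate.wait()
            work { gate.signal() }  // caller signals on completion
        }
    }
}
```

With host-only keying, two services on `:6080` and `:6443` of the same host share one gate, so they contend for the same slots.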
In my test, I set up 10 tiled raster layers running on the same server. I am fully aware this setup is quite far from optimal, but unfortunately we have to live with this solution until we get access to the raw data to produce vector tiles.
My unit test adds 100 requests to the default url session, simulating a large iPad loading 10 map layers:
* Service 1: 10 random tiles
* Service 2: 10 random tiles
* ...
* Service 10: 10 random tiles
Test 1: Using the default URL session, one port. Server log: max 5 concurrent requests processing on the server side.
Test 2: Using the default URL session, one port per layer: 5 concurrent requests processing per port (50 concurrent requests active on the server, as expected).
Test 3: Going through the Runtime, multiple ports: max 5 concurrent requests in total, causing the map to load one layer at a time (with only 5 active concurrent requests).
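For reference, the shape of that test setup, with a placeholder host, port range, and simplified tile paths (a real run would use random level/row/column per service), is roughly:

```swift
import Foundation

// Build the 100 tile requests: 10 services, one port per service,
// 10 tiles each. Host and paths are placeholders for this sketch.
func tileRequestURLs(host: String, basePort: Int,
                     services: Int = 10, tilesPerService: Int = 10) -> [URL] {
    var urls: [URL] = []
    for service in 0..<services {
        let port = basePort + service  // one port per layer
        for tile in 0..<tilesPerService {
            urls.append(URL(string: "http://\(host):\(port)/service\(service)/tile/\(tile)")!)
        }
    }
    return urls
}

// Firing them all at once through a single default URLSession is what
// showed ~5 in-flight requests per port on the server side (Test 2).
func fireAll(_ urls: [URL], session: URLSession = .shared) {
    for url in urls {
        session.dataTask(with: url) { _, _, _ in }.resume()
    }
}
```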
In our case, being able to set the maximum number of concurrent requests per host would drastically improve the user experience.