I have an application that was upgraded to the 100.4 version of the API. In testing this application I see a huge difference in the overall performance of identify. What is odd is that not all layers perform worse; some actually seem to perform better. However, the regressions on the layers that do perform worse are significant (4 seconds in one case, 1.5 seconds in another). While I have not done an exhaustive evaluation of the layers themselves, it would seem that the layers with more data show the largest decreases in performance.
What is going on here? Because of the magnitude of the increase on specific layers, an identify operation goes from 0.6 seconds to 7.5 seconds. It should go without saying that having a user wait 7 seconds for identify results is not acceptable.
I also observe a noticeable difference in the performance of a polygon search on a large offline database, although I have not logged actual timings there.
These tests use the same offline replicas and identify the same layers. The only difference is that I switched the API from 100.4 back to 100.2 (and fixed the breaking changes).
| Layer Name | Time (s, API 100.4) | Time (s, API 100.2) | Change (ms, 100.4 − 100.2) | Identify Time Change |
|---|---|---|---|---|
| Gas Leaks | 0.0709587 | 0.0839489 | -12.9902 | Decrease |
| Repaired Gas Leaks | 0.256843 | 0.0079967 | 248.8463 | Increase |
| Change Request | 0.0069962 | 0.0179866 | -10.9904 | Decrease |
| Meters | 1.5430591 | 0.0169923 | 1526.0668 | Increase |
| Regulator Station | 0.0189887 | 0.0169907 | 1.998 | Increase |
| Compressor Station | 0.0119933 | 0.0171815 | -5.1882 | Decrease |
| Critical Valves | 0.063961 | 0.0179909 | 45.9701 | Increase |
| Non Critical Valves | 0.1778903 | 0.0189875 | 158.9028 | Increase |
| Test Stations | 0.0089948 | 0.0149901 | -5.9953 | Decrease |
| Coupon | 0.0149913 | 0.0169892 | -1.9979 | Decrease |
| Relief Valve | 0.0159903 | 0.0169946 | -1.0043 | Decrease |
| Pressure Monitoring Device | 0.0199876 | 0.0179843 | 2.0033 | Increase |
| Rectifier | 0.025984 | 0.0159899 | 9.9941 | Increase |
| Odorizer | 0.0149906 | 0.0159895 | -0.9989 | Decrease |
| Pig Structure | 0.0199865 | 0.015991 | 3.9955 | Increase |
| Sniff Test Location | 0.0259856 | 0.0189887 | 6.9969 | Increase |
| Locator Device | 0.0669575 | 0.1082403 | -41.2828 | Decrease |
| Exposed Pipe | 0.0109939 | 0.0109939 | 0 | No change |
| Encroachment | 0.0169904 | 0.0169896 | 0.0008 | Increase |
| WOF | 0.0159894 | 0.0169907 | -1.0013 | Decrease |
| TFIR | 0.0149906 | 0.01699 | -1.9994 | Decrease |
| Drip | 0.0349796 | 0.0169884 | 17.9912 | Increase |
| Crossing | 0.026985 | 0.0169895 | 9.9955 | Increase |
| Fittings | 0.8374883 | 0.0279835 | 809.5048 | Increase |
| Rectified Pipe | 4.190445 | 0.0079933 | 4182.4517 | Increase |
| Gas Pipe Casing | 0.0169889 | 0.0169911 | -0.0022 | Decrease |
| Customer Service Zones | 0.0169903 | 0.0189886 | -1.9983 | Decrease |
| Shop Location | 0.0159899 | 0.0189872 | -2.9973 | Decrease |
| Total | 7.5633898 | 0.6271185 | 6936.2713 | Increase |
Thanks
-Joe
Joe,
A couple of questions:
- What layer type(s) is the identify operation running against?
- What specific identify operation and overloads are you using?
Thanks
Mike
```csharp
private async Task IdentifyByBoundMap(MapView mapView, GeoViewInputEventArgs args, double tolerance, List<IdentifyLayerResult> identifyLayerResults)
{
    var start = DateTime.Now;
    Log.Info($"Start Time: {start}");

    // Iterate in reverse so the result set is in map TOC order
    for (int i = Map.OperationalLayers.Count - 1; i >= 0; i--)
    {
        FeatureLayer mapLayer = Map.OperationalLayers[i] as FeatureLayer;
        if (mapLayer?.PopupDefinition == null || !mapLayer.IsVisible) continue;
        if (mapLayer.FeatureTable.TableName == null) continue;

        var startLayer = DateTime.Now;
        var result = await mapView.IdentifyLayerAsync(mapLayer, args.Position, tolerance, false, 5);
        var endLayer = DateTime.Now - startLayer;
        Log.Info($"{mapLayer.Name}: {endLayer.TotalSeconds}");
        identifyLayerResults.Add(result);
    }

    var timeSpanAll = DateTime.Now - start;
    Log.Info($"Identify All Time: {timeSpanAll.TotalSeconds}");
}
```
The tolerance is 10.
As mentioned below, it would seem that line features (database about 650 MB) are the ones that show the extreme increase. Meters, which is a point asset but is large (over 1 GB), shows a large increase, but nothing like the lines. I also ran identify against other .geodatabase files for linear assets and these also show a roughly 4 second time. We have multiple line layers that contain the same actual data, just symbolized differently. Also, because of how the server synchronizes data, we are required to have all of the layer data on the client or data won't sync correctly from the server.
The entire class is attached if that would be helpful. It is implemented as a TriggerAction attached to the MapView.
So these are all local feature layers? If so, we should have some good improvements for Update 5 regarding query performance on local data, which would likely help identify too. Also see https://community.esri.com/message/821017-query-performance#comment-824863
That is nice to hear, but my question still stands: do you understand what changed to cause that significant a hit to performance? I am looking at a 500x increase in the time to identify these specific layers (lines in large offline replicas seem to take the biggest hit) after upgrading the API. If folks are unclear on what the issue is, I am concerned about whether it will be resolved. We are currently in a position where we cannot deliver the application using the 100.4 API because of this issue.
Michael Branscomb I sent this info in email to Morten also. What I have found during further testing is that the huge hit is associated with having a definition expression on the layer. I also noticed in other testing how much impact a definition expression has on redraw in general.
We are required to attach a definition expression to every layer because of shortcomings in the ArcGIS Server sync model. [The issue is that if a value in the database is changed so that it no longer meets the map's definition expression, or the query used when the replica was generated, it will not sync. This is considered 'works as designed' and as far as I know there is no plan to change it.] Because of this we are required to bring all features onto the client and apply a definition expression so that the features will still sync.
Doing a little more evaluation, it seems that the largest regression is with certain line features; the databases in these cases are also rather large, at about 650 MB.
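For reference, a minimal sketch of the pattern described above (the layer filter and field/value names here are hypothetical, not our actual schema): the replica brings down all features, and a definition expression is applied per layer so the display is filtered while edits outside the filter can still sync.

```csharp
// Sketch only: hypothetical field and value. Each offline layer holds
// all features from the replica; DefinitionExpression filters what is
// drawn (and identified), so out-of-filter edits still synchronize.
foreach (var layer in Map.OperationalLayers.OfType<FeatureLayer>())
{
    // e.g. show only the asset class this map is responsible for
    layer.DefinitionExpression = "ASSET_CLASS = 'Distribution'";
}
```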
Is 100.4 sending different http requests to the server?
(You can compare traffic using ArcGISHttpClientHandler.HttpRequestBegin)
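For example, a minimal handler along these lines (a sketch; subscribe once at startup) would let you diff the 100.2 and 100.4 traffic:

```csharp
// Sketch: log every outgoing request the Runtime issues.
// HttpRequestBegin is a static event on ArcGISHttpClientHandler
// that fires before each HTTP request is sent.
Esri.ArcGISRuntime.Http.ArcGISHttpClientHandler.HttpRequestBegin += (sender, request) =>
{
    System.Diagnostics.Debug.WriteLine($"{request.Method} {request.RequestUri}");
};
```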
This is a local replica, not a ServiceTable. I guess I just have to wait for 100.5 and hope for the best, but I find it concerning that no one has indicated that they understand why the performance is so extremely degraded. As of now we cannot release the upgraded application.
OK, so I've been running various identify performance benchmarks against a geodatabase for v100.2, 100.2.1, 100.3, 100.4 and the latest 100.5 build, and I'm not able to reproduce the big slowdown.
I'm actually seeing IdentifyLayers become about 15% faster in 100.2.1, and a small performance drop in IdentifyLayer at 100.3, but nothing that matches the size of the performance change you're experiencing.
That leads me to believe it might be data specific - is there any chance this data could be shared with us, so we can try and reproduce?
I'm using BenchmarkDotNet to run against 38 various layers, all in a local geodatabase, using the following code:
```csharp
[Benchmark, System.STAThread]
public Task<int> IdentifyAllLayers()
{
    return RunOnUIThread(async () =>
    {
        var result = await mapView.IdentifyLayersAsync(new System.Windows.Point(WindowWidth / 2, WindowHeight / 2), 10, false).ConfigureAwait(false);
        if (result.Count == 0)
            throw new System.Exception("Expected one layer returning");
        return result.Count;
    });
}

[Benchmark, System.STAThread]
public Task<int> IdentifyEachLayer()
{
    return RunOnUIThread(async () =>
    {
        int count = 0;
        foreach (var layer in map.OperationalLayers.OfType<FeatureLayer>())
        {
            var result = await mapView.IdentifyLayerAsync(layer, new System.Windows.Point(WindowWidth / 2, WindowHeight / 2), 10, false);
            count += result.GeoElements.Count;
        }
        if (count == 0)
            throw new System.Exception("No identify results");
        return count;
    });
}
```
Btw, I suggest you use IdentifyLayersAsync to identify all layers in the map, as this seems to be a lot faster than doing an identify on each layer one by one (in my case by about a factor of 10!).
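As an illustration, the per-layer loop from earlier in the thread could be collapsed into a single call along these lines (a sketch, assuming the same MapView, tap position, and tolerance; `Log` is the poster's logger):

```csharp
// Sketch: one IdentifyLayersAsync call instead of looping over each
// layer; maximumResultsPerLayer caps each layer's results at 5.
var results = await mapView.IdentifyLayersAsync(args.Position, tolerance: 10, returnPopupsOnly: false, maximumResultsPerLayer: 5);
foreach (var layerResult in results)
{
    Log.Info($"{layerResult.LayerContent.Name}: {layerResult.GeoElements.Count} hits");
}
```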