Performance Issues with a Large Number of Feature Layers

11-16-2023 09:12 PM
MasakiHirano
Emerging Contributor

Hello everyone,

I am developing an iOS application using the ArcGIS Runtime SDK for iOS (version 100.15.0) and am encountering performance issues under specific conditions, for which I am seeking advice.

Problem Details:

  • My app handles approximately 1800 FeatureLayers from 9 different mmpks, with each mmpk containing between 20 and 1000 layers. These layers include various types, such as those showing routes to specific locations or guide points, and they can overlap in display.
  • Depending on the scale, up to about 140 layers can be displayed at a time. It's important to note that not all layers are loaded at once; only those within the current viewing range are added. As the viewing range changes due to scrolling or moving to a different location, new layers within the range are added and those outside it are removed (roughly as sketched below).
  • However, when interacting with these layers, the app's performance degrades significantly, most noticeably during map rotation. CPU usage peaks at around 480%, and memory usage reaches about 4 GB, leading to crashes from insufficient memory.
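
Roughly, the add/remove logic looks like the following simplified sketch (layerCatalog is just a stand-in for how the app indexes each layer's extent, not part of the SDK; all extents are assumed to be in the map's spatial reference):

import ArcGIS

class LayerManager {
    private let mapView: AGSMapView
    // Stand-in index of candidate layers keyed by their extents.
    private var layerCatalog: [(extent: AGSEnvelope, layer: AGSFeatureLayer)] = []

    init(mapView: AGSMapView) {
        self.mapView = mapView
    }

    // Called once navigation (pan/zoom/rotate) settles.
    func updateVisibleLayers() {
        guard let map = mapView.map,
              let visibleExtent = mapView.visibleArea?.extent else { return }

        for entry in layerCatalog {
            let isNeeded = AGSGeometryEngine.geometry(entry.extent,
                                                      intersects: visibleExtent)
            let isInMap = map.operationalLayers.contains(entry.layer)

            if isNeeded && !isInMap {
                map.operationalLayers.add(entry.layer)       // entered view
            } else if !isNeeded && isInMap {
                map.operationalLayers.remove(entry.layer)    // left view
            }
        }
    }
}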

Development Environment:

  • Xcode 14.3.1
  • iOS 16
  • Device Used: iPad Pro 12.9 inch

Questions:

  • Could the sheer number of layers within each mmpk be the cause of these performance issues?
  • Is merging layers a potential solution?
  • What is the recommended number of layers within a single mmpk?
  • Additionally, I feel there may be an issue with the granularity of the layers. What is the recommended size for each layer in terms of square kilometers?

If anyone has experienced similar issues or knows potential solutions, your insights and suggestions would be greatly appreciated.

Thank you very much.

1 Solution

Accepted Solutions
Nicholas-Furness
Esri Regular Contributor

Hi,

Thanks for the question and the detailed info.

In general, a map that gets into the hundreds of layers at a time raises some questions about the purpose or composition of the map. Given that a layer is meant to represent a type or category of data rather than a specific piece of data, you can see that 100 layers is a lot for a user to take in. Layers are database tables, not table rows, and you should really aim to consolidate data of one type into one table (i.e. layer).

Often when you have that many types of data, they are for visual reference and best consolidated into a single layer (or basemap layer). That can be efficiently packaged up as a vector tile layer and often removes a lot of complexity.

You mentioned a few things that I'd like to ask about:

  • "approximately 1800 FeatureLayers from 9 different mmpks, with each mmpk containing between 20 to 1000 layers" - are you opening all the maps within these 9 MMPKs and selectively moving layers to your app's visible map?
  • How large are the MMPKs?
  • Are the data layers in each MMPK the same but include data for different geographies? Or are they 9 completely different sets of layers?
  • "only those [layers] within the current viewing range are added" - do you mean visible geographic extent, or are you also considering scale range? Layers can be configured with a minScale and maxScale. I'm not sure that would help with performance in this case, but it's worth considering alongside other recommendations as it could simplify your code logic. Also, remember that Runtime will only load the data it needs for the visible extent. As you pan around the map, it will load more as needed. Unless you have other operational reasons to segregate your data by geography, you don't need to break it up that way for Runtime (in fact it can often complicate map content), though it should be an acceptable approach as long as you're removing layers you no longer display.
  • When a layer or map is not being displayed, make sure you're not holding on to references to it.
  • Consider the geometry types of these feature layers. Point layers render on the GPU in dynamic mode by default, and polyline/polygon layers render on the CPU in static mode. When adjacent layers in the layer stack all render in static mode, that static rendering is consolidated efficiently. However, if you have static and dynamic layers intermingled, each static layer is rendered independently and that's much more computationally expensive.

    You can explore a couple of things here:
    - A good general rule of thumb is to order your layers by type: polygons on the bottom, then polylines, then points. That aligns nicely with removing interleaving of static and dynamic layers.
    - You can control the rendering mode of feature layers (though there are some cases, like heat map renderers, where you can't), so if you do need to retain some interleaving of polyline or polygon layers between point layers, you can explicitly set the rendering mode on those polyline/polygon layers to dynamic to see if that helps. In a lot of situations, dynamic mode works well on polyline and polygon layers.
  • If you have multiple layers showing the same kinds of data with the same symbology/renderers, that can be wasteful. It's much more efficient if that data is merged into one layer; at that point, Runtime can render all of it with one renderer instance and do a lot more resource sharing within the rendering pipeline. If you are able to join multiple layers showing the same kind of data into one layer, you can use displayFilters and/or definitionExpressions to limit the features that are displayed at any given time (see the sketch after this list).
  • If you have picture marker symbols based on bitmaps/images, how large are those bitmaps? Each bitmap must be kept in GPU memory, and possibly also in CPU memory as part of the renderer definition. If multiple layers use the same bitmap, that is wasteful (see the previous bullet - if you can consolidate like layers into one, that will probably benefit you a lot). Likewise, if a bitmap is higher resolution than is needed to display the symbol, that also leads to waste.
  • Consider the renderers defined on each layer. When a layer is loaded and displayed, its renderer is created and lives in memory. For example, do you have multiple unique value renderers that use large bitmaps? It's not that unique value renderers themselves are expensive (in fact, they are a very efficient way to render, given that we can do a lot of resource sharing in the GPU), but if you have high-resolution bitmaps in use, then a UVR is a quick way to multiply that problem.
  • Also think about the spatial reference of the feature layer data in the MMPKs. If you're using a Web Mercator basemap (as all our standard basemaps are) but the data is in another spatial reference, then it must be projected on the fly. With this much data, that could be impacting performance.
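
To make a few of those bullets concrete, here's a minimal Swift sketch against the 100.x Runtime API. The scale values and the REGION field are hypothetical examples, not anything taken from your data:

import ArcGIS

// Sketch of three suggestions above: scale ranges, rendering mode, and a
// definition expression on a consolidated layer. Scale values and the
// REGION attribute are hypothetical.
func configure(mergedLayer: AGSFeatureLayer) {
    // Scale range: hide the layer at inappropriate zoom levels.
    mergedLayer.minScale = 100_000   // hidden when zoomed out beyond 1:100,000
    mergedLayer.maxScale = 1_000     // hidden when zoomed in beyond 1:1,000

    // Rendering mode: polyline/polygon layers default to static rendering.
    // If one must sit between point layers, forcing dynamic mode avoids
    // splitting up the consolidated static rendering pass.
    mergedLayer.renderingMode = .dynamic

    // After merging many like layers into one, limit what draws at any
    // given time with a definition expression.
    mergedLayer.definitionExpression = "REGION = 'north'"
}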

You can see more discussion on this and other performance considerations here: https://developers.arcgis.com/ios/programming-patterns/performance-considerations/

To answer your questions:

  1. Could the sheer number of layers within each mmpk be the cause of these performance issues?
    Yes.
  2. Is merging layers a potential solution?
    Definitely. Also consider whether any layers can be consolidated into a single vector tile layer. Remember you can't interrogate vector tile layers to read attributes though - they're purely for visual reference.
  3. What is the recommended number of layers within a single mmpk?
    We can't really give a hard number. It depends a lot on how many layers are visible at any one time given scale ranges, what type each layer is, and the complexity/density of the data. But from experience, I would say that if you have over 150 layers, you should really be looking at how you're authoring your map. However, if layers have complex renderers or complex data, you could start seeing the impact sooner. Personally, I start to ask myself questions if my map has over 50 layers. Not from a performance perspective, but from a "just what is my map supposed to be doing" perspective. Sometimes the answer is "Yeah, it does need all those", but often it's a chance to simplify. It's also a bit different since (if I understand correctly) you're not displaying all the layers at once and are consolidating things from multiple MMPKs into one map. It's what ends up in that map that probably matters most here.
  4. What is the recommended size for each layer in terms of square kilometers?
    There isn't a limit. They can be global, and I don't think the geographic extent of your data is a concern here. The spatial indexing on the data ensures that data is read from the MMPK's internal data store very efficiently by geography. However, make sure your layers have suitable minScale and maxScale values defined so that data isn't displayed at inappropriate zoom levels. Also note that when you access an ArcGIS service, data is generalized for display by the server when it's requested, but the data in an MMPK is raw: no generalization happens until it has been read from the MMPK and passed to the GPU for rendering. Vector tile layers can help with this. Also, if the data needs to remain as feature layers and you need to see complicated polylines and polygons at a large range of scales, consider including a generalized copy of that data in the MMPK as a separate layer. Use scale ranges on the layers so that when zoomed out you view the generalized data and when zoomed in you view the original, denser data (sketched below). You'll take a hit on the size of the MMPK, but you'll buy yourself a better experience viewing the data. It's not typically necessary, but worth bearing in mind.
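
As a sketch of that last idea (the two feature layers are assumed to be packaged in the same MMPK; the 1:50,000 crossover scale is an arbitrary example value):

import ArcGIS

// Pair a generalized copy with the original detailed layer using
// complementary scale ranges, so only one of the two draws at any scale.
func addScaleDependentPair(generalized: AGSFeatureLayer,
                           detailed: AGSFeatureLayer,
                           to map: AGSMap) {
    // Zoomed out: show only the generalized copy.
    generalized.minScale = 0          // 0 = no zoomed-out limit
    generalized.maxScale = 50_000     // hide once zoomed in past 1:50,000

    // Zoomed in: show only the original, denser data.
    detailed.minScale = 50_000        // hide once zoomed out past 1:50,000
    detailed.maxScale = 0             // 0 = no zoomed-in limit

    map.operationalLayers.addObjects(from: [generalized, detailed])
}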

Hopefully the above ideas will help you reduce the number of layers you're working with, which is almost certainly the primary thing you should focus on. But I am guessing a lot about what you might be doing and how your data is authored.

If you need more help then (depending on where you are globally) I would probably recommend a Professional Services engagement to dig deeper into your data workflows and use cases. Or if you have an Esri account manager or local distributor that you're working with, perhaps reach out to them and see if they have a preferred approach to getting some one-on-one help.

3 Replies

MasakiHirano
Emerging Contributor

Thank you for the detailed reply. I apologize for the late response.

To summarize, I was able to dramatically improve performance and successfully resolve the issue by recreating the mmpk file.

Layers are database tables, not table rows, and you should really aim to consolidate data of one type into one table (i.e. layer).

Indeed, that makes sense. Your words have given me confidence.

The mmpk in question was created by my client, so I asked them to recreate it.

First, let me explain why there were so many layers initially. The mmpk file was created from a CSV file using ArcGIS Pro's XY Table To Point (Data Management) tool. This CSV file contained a large number of XY coordinates, and since XY Table To Point created one layer per coordinate, a large number of layers resulted.

After merging these layers into one using Merge Layers, performance improved significantly.

Additionally, there were 9 mmpks that were geographically divided, but as there was no benefit to this division, the mmpks themselves were also merged into a single mmpk. (This was done simply by copying and pasting each mmpk's layers into a single map in ArcGIS Pro.)

As a result, what was previously 1800 layers across 9 mmpks became fewer than 10 layers in a single mmpk. The total size was also reduced from 161 MB to 2 MB.

Nicholas-Furness
Esri Regular Contributor

Excellent. That's great to hear. Thanks for confirming.

Seems strange to me that the Pro tool should create one layer per row. Perhaps the issue is with how the CSV file is being parsed. Either way, glad you've got a working solution!