Data Optimization: A few large or many small polygons

05-18-2015 09:10 PM
New Contributor II

I would like to know the community's thoughts on a question...

I intend to publish a dynamic map service that will likely get 5,000+ hits daily.  The data contains polygons roughly 1 sq. mi. in size, distributed across a state.  These polygons typically contain many vertices, and I want to optimize this layer to draw quickly.  All other things being equal, would it be best to dissolve the polygons into a few large polygons or maintain them in their current form?

The reason I ask is that I am not sure how ArcGIS Server pushes out data to the view extent of a web map.  If the data is dissolved into one large polygon, my initial thought was that it would draw more quickly, but maybe that is not the case.  Does ArcGIS Server need to load the entire large polygon, even if the majority of it is outside the map view?  If so, then displaying many smaller polygons may be the better solution, since only the items that intersect the map view would be gathered and rendered.

Thanks!

1 Solution

Accepted Solutions
Esri Esteemed Contributor
The reason I ask is that I am not sure how ArcGIS Server pushes out data to the view extent of a web map.  If the data is dissolved into one large polygon, my initial thought was that it would draw more quickly, but maybe that is not the case.  Does ArcGIS Server need to load the entire large polygon, even if the majority of it is outside the map view?

I'm not aware of ANY mapping package that can only read a fragment of a single feature.  Features are the basic unit of I/O in all GIS packages.

ArcGIS Server has two fundamental modes of operation:

  1. Get a map request and follow the drawing rules to render the map
  2. Get a map request and stitch together the pre-rendered tiles in the cache to make the map

The fastest response will always be from a cache, but that requires that you construct one, which isn't always an option (if the data being returned is updated frequently), and maps will only be generated at a fixed set of map scales.
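To make the "fixed set of map scales" concrete, here is a quick sketch in plain Python (not an Esri API) of how a standard Web Mercator tiling scheme pins each cache level to one resolution, halving it at every level -- anything between two levels has to be resampled from a neighboring scale:

```python
import math

def webmercator_resolutions(levels=20, tile_size=256):
    """Resolution (meters per pixel) at each zoom level of a standard
    Web Mercator tiling scheme: level 0 covers the world in a single
    256-px tile, and each deeper level halves the resolution."""
    # Circumference of the Web Mercator "world" in meters.
    world = 2 * math.pi * 6378137.0
    return [world / (tile_size * 2 ** z) for z in range(levels)]

res = webmercator_resolutions()
print(round(res[0], 2))   # ~156543.03 m/px at level 0
print(round(res[1], 2))   # ~78271.52 m/px at level 1
```

A cache is only ever rendered at those discrete resolutions, which is why a dynamic service is still needed when clients must draw at arbitrary scales.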

The Sharing Content on the Web 2-day Esri training class contains a number of tips and tricks on making the fastest possible web services.  They include using scale dependency, using the Esri Optimized symbol set, and either splitting large, complicated features into smaller ones or joining many simple features with identical symbology into larger clusters of features to reduce feature retrieval time.  The class also teaches how to evaluate performance of a web service using the publishing tools.

The Dicing Godzillas blog entry contains good information about splitting up large features. The Dice tool isn't the only option for feature splitting, though -- you can also use a regular grid (fishnet or county boundaries in PLSS states).  If you modify your polygon features, and you don't want visual artifacts as in the blog, you can use two feature classes (polygons and lines) to render features (I've used this technique on a global scale, cutting countries into 5-degree tiles and rendering the polygons without outline and the country boundaries as a separate layer, UNIONed by tile).
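For readers who want to see the fishnet-splitting idea in miniature, here is a self-contained sketch in plain Python (not ArcPy -- the Dice and Create Fishnet tools do the real work in ArcGIS) that clips one polygon against a regular grid so every resulting piece fits inside one grid cell:

```python
def clip_to_rect(poly, xmin, ymin, xmax, ymax):
    """Sutherland-Hodgman clipping of a polygon (list of (x, y)
    vertices) against an axis-aligned rectangle.  Returns the
    clipped vertex list, which may be empty."""
    def clip(pts, keep, cross):
        out = []
        for i, cur in enumerate(pts):
            prev = pts[i - 1]
            if keep(cur):
                if not keep(prev):
                    out.append(cross(prev, cur))
                out.append(cur)
            elif keep(prev):
                out.append(cross(prev, cur))
        return out

    def at_x(x):  # intersection of edge p-q with vertical line x=const
        return lambda p, q: (x, p[1] + (x - p[0]) / (q[0] - p[0]) * (q[1] - p[1]))

    def at_y(y):  # intersection of edge p-q with horizontal line y=const
        return lambda p, q: (p[0] + (y - p[1]) / (q[1] - p[1]) * (q[0] - p[0]), y)

    pts = list(poly)
    for keep, cross in [
        (lambda p: p[0] >= xmin, at_x(xmin)),
        (lambda p: p[0] <= xmax, at_x(xmax)),
        (lambda p: p[1] >= ymin, at_y(ymin)),
        (lambda p: p[1] <= ymax, at_y(ymax)),
    ]:
        pts = clip(pts, keep, cross)
        if not pts:
            return []
    return pts

def dice(poly, cell):
    """Split a polygon into pieces no larger than cell x cell map
    units by clipping it against a regular grid (the 'fishnet'
    approach): many small features instead of one giant one."""
    xs = [p[0] for p in poly]
    ys = [p[1] for p in poly]
    pieces = []
    x = min(xs)
    while x < max(xs):
        y = min(ys)
        while y < max(ys):
            piece = clip_to_rect(poly, x, y, x + cell, y + cell)
            if piece:
                pieces.append(piece)
            y += cell
        x += cell
    return pieces

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(len(dice(square, 5)))  # 4 pieces, each a 5 x 5 square
```

Note the blog's point about visual artifacts: the grid edges introduced here are exactly why you may want a separate boundary-line layer when you symbolize the diced polygons.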

As an answer to the "few large or many small" question, I'd reply "a handful of medium" is best.

- V


5 Replies
Regular Contributor

J.,

First of all, please refer to Is your data clean?  and verify that your data is clean.

Secondly, regarding your intention 'These polygons typically contain many vertices and I want to optimize this layer to draw quickly', please use the geoprocessing tools in the Generalization toolset:

  - An overview of the Generalization toolset—Help | ArcGIS for Desktop
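As a supplement to the Generalization toolset link above, here is a minimal sketch of the Ramer-Douglas-Peucker algorithm -- the idea behind vertex-reducing tools like Simplify -- in plain Python (illustration only, not the Esri implementation):

```python
def simplify(points, tol):
    """Ramer-Douglas-Peucker generalization: recursively keep only
    the vertices that deviate from the simplified line by more
    than tol; everything closer to the chord is dropped."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]

    def dist(p):
        # Perpendicular distance from p to the chord (x1,y1)-(x2,y2).
        px, py = p
        dx, dy = x2 - x1, y2 - y1
        if dx == 0 and dy == 0:
            return ((px - x1) ** 2 + (py - y1) ** 2) ** 0.5
        return abs(dy * px - dx * py + x2 * y1 - y2 * x1) / (dx * dx + dy * dy) ** 0.5

    idx, dmax = max(((i, dist(p)) for i, p in enumerate(points[1:-1], 1)),
                    key=lambda t: t[1])
    if dmax <= tol:
        return [points[0], points[-1]]
    # Recurse on both halves, splitting at the farthest vertex.
    left = simplify(points[:idx + 1], tol)
    right = simplify(points[idx:], tol)
    return left[:-1] + right

coast = [(0, 0), (1, 0.05), (2, -0.04), (3, 0.02), (4, 0)]
print(simplify(coast, 0.1))  # -> [(0, 0), (4, 0)]
```

Fewer vertices per feature means less geometry to fetch and draw on every map request, which is exactly the optimization being suggested here.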

Thirdly, once your vector data is moved into a geodatabase (ArcSDE, FGDB), you have many opportunities to optimize there; at a minimum, spatial indexing must be done before serving the data for dynamic use.

MVP Esteemed Contributor

Polygons with many vertices do have to be loaded in full to draw, or even to examine -- the shape is simply a field in a database row, read into memory as a binary object. So yes, they will slow you down; you are better off keeping them small, and if you have very complex salamander polygons you may want to split them up with the Dice tool.

You also may be able to tune your feature class to draw even faster with careful control of spatial index values. (ArcSDE supports three levels of grid tiles, so it is even more tunable than a file geodatabase, which only supports one.) Generalizing your data is also an option that should always be considered when performance is a major concern.
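To illustrate why the spatial index grid size matters, here is a toy single-level grid index in plain Python -- a sketch of the concept only, not how ArcSDE or the file geodatabase actually stores its index:

```python
from collections import defaultdict

class GridIndex:
    """Toy single-level spatial grid index: each feature's bounding
    box is registered in every grid cell it overlaps, so an extent
    query only examines candidates in the touched cells instead of
    scanning every row in the table."""

    def __init__(self, cell):
        self.cell = cell                    # grid cell size in map units
        self.cells = defaultdict(set)       # (cx, cy) -> feature ids

    def _range(self, lo, hi):
        return range(int(lo // self.cell), int(hi // self.cell) + 1)

    def insert(self, fid, bbox):
        xmin, ymin, xmax, ymax = bbox
        for cx in self._range(xmin, xmax):
            for cy in self._range(ymin, ymax):
                self.cells[(cx, cy)].add(fid)

    def query(self, bbox):
        xmin, ymin, xmax, ymax = bbox
        hits = set()
        for cx in self._range(xmin, xmax):
            for cy in self._range(ymin, ymax):
                hits |= self.cells[(cx, cy)]
        return hits

idx = GridIndex(cell=100)
idx.insert("parcel_a", (0, 0, 50, 50))
idx.insert("parcel_b", (500, 500, 560, 560))
print(idx.query((0, 0, 120, 120)))  # only parcel_a is a candidate
```

Cells far larger than the typical feature make every query touch too many candidates; cells far smaller make each big feature register in many cells -- which is why tuning the grid size (or, in ArcSDE, the three grid levels) to the data pays off.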

Honored Contributor

Hi Jeffrey,

Depending on usability, you could use either or both the larger and smaller polygons, maintaining scale dependency. Optimizing the spatial index is also helpful, as suggested by larry zhang. The generalization tools are recommended as long as they do not change the shape of the features. Also, if you are planning to label features, I would suggest using annotation (also scale dependent) instead of dynamic labels.

New Contributor II

Thank you, EVERYONE!  This community is always willing to help out, and I appreciate all the insight that has been provided.  I hope to see you all at the Esri conference!
