I haven't done the math yet, but I'm guessing there's an optimal set of overview levels for Mosaic Datasets that aligns directly with the standard ArcGIS / Bing / Google tile cache levels (Web Mercator). Has anyone looked into this? Are they just the default levels that ArcMap generates anyway?
A little confused by the post, as overviews and caches are different things. I spent a lot of time trying to get overviews to work correctly, especially with nested Mosaic Datasets. You really have very little control; it drove me nuts. In the end I just built Tile Caches instead, and I wish I had done it sooner. They are at least 10 times faster, 90% smaller, and super easy to manage, since I can just copy and paste them around to all my servers. I just unzip the tpk and then access the tiles through the file system like any other file (I created layer files for ease of use). No need for Image Server or any of that.

I also upload the tpk to AGO for when other offices need something not local, and for any web maps. It has worked out very well for us: we have saved thousands in backup costs, now have tons of server space, can give all of our imagery to mobile users (since it's 60 GB instead of 600), and can now keep yearly vintages.
I use the default cache levels as listed here. If you need a custom scheme, the rule of thumb is to halve the scale at each level. For NAIP 1-meter imagery I go down to Level 17 with no real loss of quality.
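If it helps, the per-level ground resolution of the standard Web Mercator scheme is easy to compute: level 0 is ~156,543 m/px (one 256-px tile spanning the earth's circumference at the equator), and each level halves it. A quick sketch — the function names here are mine, not from any ArcGIS API:

```python
import math

# Web Mercator (EPSG:3857) tiling constants for the standard
# ArcGIS/Bing/Google scheme: 256-px tiles, level 0 covers the full extent.
EARTH_CIRCUMFERENCE = 2 * math.pi * 6378137.0  # meters
TILE_SIZE = 256

def level_resolution(level):
    """Ground resolution (m/px at the equator) of a given cache level."""
    return EARTH_CIRCUMFERENCE / (TILE_SIZE * 2 ** level)

def deepest_useful_level(source_pixel_size_m):
    """Deepest level whose resolution is still >= the source pixel size.
    Beyond this, cache tiles only resample the imagery."""
    level = 0
    while level_resolution(level + 1) >= source_pixel_size_m:
        level += 1
    return level

print(level_resolution(17))        # ~1.19 m/px
print(deepest_useful_level(1.0))   # -> 17
```

Level 17 works out to ~1.19 m/px, which is why 1-meter NAIP holds up there; level 18 (~0.60 m/px) would just be upsampling.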
For full details on creating caches, see this great post.
Doug, thanks for the reply.
Yes, I know overviews and caches are different. Since caches use overviews as their source at the smaller-scale levels, I was wondering whether there was an existing overview scheme that closely aligned with the tile scheme, so that you could cut down on image degradation due to reprocessing.
The more detailed levels of the ArcGIS/Bing/Google tile schema (e.g. 13-19) can use the original data via the mosaic dataset, but the coarser levels, e.g. 7 through 12, would be better drawn from pre-computed overviews. So I was thinking it would make sense to have the overviews computed at exactly those levels' resolutions.
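To make that concrete, the overview cell sizes that would align with cache levels 7 through 12 of the standard Web Mercator scheme work out as below (a sketch; the variable names are mine):

```python
import math

# Ground resolution of level 0 in the standard Web Mercator scheme:
# earth circumference spread across one 256-px tile.
LEVEL0_RES = 2 * math.pi * 6378137.0 / 256  # ~156543.03 m/px

# Cell sizes for overviews aligned to cache levels 7-12, halving per level.
overview_cell_sizes = {lvl: LEVEL0_RES / 2 ** lvl for lvl in range(7, 13)}

for lvl, cell in sorted(overview_cell_sizes.items()):
    print(f"level {lvl}: {cell:8.2f} m/px")
```

Feeding these exact cell sizes to the overview generation, rather than letting the tool pick arbitrary downsampling factors, is the alignment being asked about; whether ArcMap's defaults already land on them is the open question.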
It's probably more of an academic question than a practical one.
My MDs already had overviews when I built the caches, since we used to use them. But for new projects I am just using Raster Catalogs instead: there is no reason to build overviews, or even pyramids, if I am just going to build the caches anyway. When caching 2 TB of imagery it would take days to build all those overviews, and I have seen no difference between using an MD with overviews as a source and an RC with nothing built as a source.
In the end, trying to do all the math and manually edit the MD attribute tables would take forever (I have 50-some MDs). The main thing I watch out for is the pixel size of the final cache level: if it is finer than the original imagery's pixel size, those levels just resample the data and are useless.
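That final-level check is easy to script rather than eyeballing per MD. A sketch under the standard Web Mercator scheme — the dataset names and pixel sizes below are made-up examples:

```python
import math

LEVEL0_RES = 2 * math.pi * 6378137.0 / 256  # Web Mercator level-0 m/px

def resolution(level):
    """Ground resolution (m/px) of a given cache level."""
    return LEVEL0_RES / 2 ** level

# Source pixel sizes in meters; these entries are hypothetical.
datasets = {"naip_2012": 1.0, "county_ortho_2014": 0.15, "landsat_mosaic": 30.0}
MAX_CACHE_LEVEL = 19

for name, pixel_size in datasets.items():
    # Deepest cache level whose resolution still adds detail over the source.
    useful = max(l for l in range(MAX_CACHE_LEVEL + 1)
                 if resolution(l) >= pixel_size)
    wasted = MAX_CACHE_LEVEL - useful
    print(f"{name}: cache to level {useful} "
          f"({wasted} deeper level(s) would only resample)")
```

For 1 m imagery this flags everything past level 17; for 30 m imagery, everything past level 12.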