Hello there,
Thank you for your reply.
Those were valid points you shared. I moved everything to a new, more powerful server, and the number of completed tiles is now stable across attempts.
Caching went fine overall, though I encountered some errors that I am still looking into.
However, I must say that the overall caching experience is not satisfying, especially as you dig down into the deeper scale levels of the basemaps.
Our adopted scale scheme is:
1128
2256
4513
9027
18055
36111
72223
144447
288895
577790
1155581
2311162
4622324
9244648
18489297
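Incidentally, these values appear to be the standard ArcGIS Online / Bing Maps tiling scheme with the denominators truncated to whole numbers: each level's scale denominator is double the previous one, starting from roughly 1:1128.497176 at the most detailed level. A minimal sketch reproducing the list under that assumption (simple truncation, 15 levels):

```python
# Assumption: the scheme doubles the scale denominator each level,
# starting from 1:1128.497176, and the listed values are truncated.
BASE = 1128.497176

scales = [int(BASE * 2 ** n) for n in range(15)]
print(scales)  # starts 1128, 2256, 4513, ... ends 18489297
```

If that holds, the scheme matches the default Web Mercator basemap levels, which may matter when comparing expected tile counts against caches built for the standard scheme.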
Issues we are facing:
1- There is no informative explanation of why caching fails. The reported error messages could be a little more specific about the cause, rather than just "failed to cache at extent XXX".
2- The cache status GUI, in both ArcCatalog and the Server Admin page, keeps displaying tile generation as in progress even after the cache has already reported itself as failed in ArcCatalog, and despite multiple attempts to cancel the caching. I have yet to find a way to make ArcGIS Server understand that the cache process was terminated or failed. I would love a tool to force-kill an indefinitely running caching job; the only way I can think of is to manually update the status.gdb geodatabase, which is what I am planning to try.
3- Cache import/export operations take a considerable amount of time to execute, even for small subsets, and even for a whole cache, where the operation should be equivalent to a copy/paste of the physical files.
4- The cache update status tool takes a considerable amount of time to execute and to update the cache status geodatabase. Also, you cannot specify which level to update; you have to update all of them, which is rarely what is needed.
5- We still need a mechanism to double-check that the cache completed. Since we use control shapes to govern what is cached at each scale level, the system-generated "expected number of tiles" is bound to differ from the completed count, as the latter should be lower. But by how much lower, what the number should be, or how to verify that the cache actually completed as intended is still unclear, other than manually going through every inch of the map to see whether an image is displayed. That can be exhausting when caching basemaps of the whole country.
6- Even the system-generated expected number of tiles looks inconsistent: at the lowest level, for example at scale 1000, we would logically expect the largest number of tiles, as caching the previous levels suggested, yet the system expects it to be fewer than at the previous scale level. This adds ambiguity and some uncertainty to the results.
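On that last point, the usual back-of-envelope arithmetic (one pixel covers scale x 0.0254 / dpi metres of ground) says that for a fixed extent the tile count should strictly increase as the scale denominator halves, never decrease. A hedged sketch of that arithmetic; the tile size, DPI, and metre-based extents are illustrative assumptions, not your actual configuration:

```python
import math

def expected_tiles(width_m, height_m, scale, tile_px=256, dpi=96):
    """Rough expected tile count for a rectangular extent at a given
    scale denominator, assuming a metre-based coordinate system."""
    m_per_px = scale * 0.0254 / dpi    # ground metres per screen pixel
    tile_span = tile_px * m_per_px     # ground metres per tile edge
    cols = math.ceil(width_m / tile_span)
    rows = math.ceil(height_m / tile_span)
    return cols * rows

# For any fixed extent, halving the scale denominator should roughly
# quadruple the tile count, so a more detailed level always needs more.
for finer, coarser in [(1128, 2256), (2256, 4513)]:
    assert expected_tiles(500_000, 300_000, finer) > \
           expected_tiles(500_000, 300_000, coarser)
```

So if the reported "expected tiles" drops at the most detailed level, either the control shapes restrict that level more tightly than the ones above it, or the estimate itself is off; it may be worth checking which.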
If you have insight into any of these issues, or a related one, I would be extremely thankful.
Regards,