Map Cache Error

03-06-2013 03:38 AM
AaronDrake
New Contributor
I have a scheduled task (executing a Python script) that runs at night.  It updates our base map's cache.  Recently, I have been getting the following error:

WAITFailed to cache extent: 782923.307292 1389101.085069 790032.248264 1396210.026042 at scale 250 ;Failed.
Field is not editable.
The index was either too large or too small.
Failed to execute (Manage Map Cache Tiles).
Failed.
Failed to execute (ManageMapServerCacheTiles).


Not really sure what would be causing this.  I am using a shapefile for the cache extent.  It has worked in the past, but now it kicks out an error...
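[Editor's note: for readers hitting this thread, a nightly update like the one described is typically driven by the ManageMapServerCacheTiles geoprocessing tool. Below is a minimal sketch of such a script; the service name, scales, and paths are hypothetical, and arcpy is only available inside an ArcGIS Python environment.]

```python
# Nightly cache-update sketch. All names/paths below are placeholders.
try:
    import arcpy  # requires an ArcGIS Python installation
except ImportError:
    arcpy = None

SERVICE = r"GIS Servers\arcgis on myserver (admin)\BaseMap.MapServer"  # hypothetical
SCALES = "250;500;1200"                 # scales seen in the errors above
AOI = r"C:\caching\cache_extent.shp"    # area-of-interest polygon (shapefile)

def update_cache():
    """Recreate tiles for the given scales, restricted to the AOI polygon."""
    if arcpy is None:
        raise RuntimeError("arcpy (ArcGIS) is required to run this sketch")
    arcpy.ManageMapServerCacheTiles_server(
        SERVICE,
        SCALES,
        "RECREATE_ALL_TILES",
        area_of_interest=AOI)
```

When an error like "Failed to cache extent ... The index was either too large or too small" appears, the tool is iterating extents derived from that area-of-interest feature class, which is why a damaged or mismatched index on the AOI data can surface here.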
5 Replies
MattiasEkström
Occasional Contributor III
I'm picking up this old unanswered thread since I have almost the same problem.
I'm getting this error:
Error executing tool.: Failed to cache extent: 153555,381636 6369419,262764 157889,000720 6373752,881847 at scale 500 The index was either too large or too small. Failed to execute (Manage Map Cache Tiles).

I'm also using a shapefile for cache extent that worked before.
DavidColey
Frequent Contributor
I too am receiving very similar errors when using an area-of-interest file geodatabase polygon:

Data access failure, layer = Roads and Streets, error string = Underlying DBMS error [Microsoft SQL Server Native Client 10.0: The query processor could not start the necessary thread resources for parallel query execution.] [PROD.GIS.Streets]
TilesWorker: Data access failure, layer = Roads and Streets, error string = Could not access data for layer or table Roads and Streets during drawing
Failed to cache extent: 504171.875000 1009371.875000 538294.791667 1043494.791667 at scale 1200

Failed to cache extent: 606569.270833 992302.604167 623630.729167 1009364.062500 at scale 600
Data access failure, layer = Roads and Streets, error string = Underlying DBMS error [Microsoft SQL Server Native Client 10.0: The query processor could not start the necessary thread resources for parallel query execution.] [PROD.GIS.Streets]
TilesWorker: Data access failure, layer = Roads and Streets, error string = Could not access data for layer or table Roads and Streets during drawing
Failed to cache extent: 623635.937500 1043502.604167 640012.968249 1060564.062500 at scale 600
The index was either too large or too small.

However, as you can read, the errors also involve the TilesWorker process and data access failures from SDE.  Once I dumped our streets into a file geodatabase in a shared directory, there were no errors.  I've been fighting data access failures since we went to 10.1.1 and SQL geometry.  We have spent hours tuning and testing, and yet the same errors keep cropping up.  Our database server is connected to our SAN via failover NIC cards, and we have 16 cores on two CPUs with 65 GB of RAM.

This should not be happening!

Thanks
David Coley
Sarasota County
ArcServer Admin / Developer
NinaRihn
Occasional Contributor III
I'm picking up this old unanswered thread since I have almost the same problem.
I'm getting this error:
Error executing tool.: Failed to cache extent: 153555,381636 6369419,262764 157889,000720 6373752,881847 at scale 500 The index was either too large or too small. Failed to execute (Manage Map Cache Tiles).

I'm also using a shapefile for cache extent that worked before.


I had this error last week, and I realized that for one of the layers in the service I had added global IDs a few weeks ago, so there was a change in the schema.  I tried deleting the cache, and that failed.  Then I tried re-sharing the service, overwriting the existing one, and that failed.  Then I deleted the service and its cache and re-shared the MXD, and the cache ran successfully.  Has there been any change, even a minor one, in the schema of any of your datasets?
DavidColey
Frequent Contributor
Actually, no.  I don't think the absence or presence of a global ID on the AOI layer should affect data access or tile worker processes.  But different index types could (i.e. SQL geometry spatial indexes vs. an FDO index in a file geodatabase).  So after my original reply it occurred to me that, since the caching errors are occurring when using an area of interest, and since for the first time I am seeing a TilesWorker error along with an index error, I believe the different index types are not being handled by the TilesWorker process.

So a potential solution, then: either place all data to be cached in a file geodatabase, including the AOI polygon, or place the AOI polygon in SDE as well, where it can utilize the same type and size of spatial index.
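[Editor's note: the first option above, keeping the AOI polygon in the same file geodatabase as the cached data, can be scripted. A minimal sketch follows; the paths and output name are hypothetical, and arcpy is only available inside an ArcGIS Python environment.]

```python
# Sketch: copy the AOI shapefile into the file geodatabase that holds the
# data being cached, so both use the same (FGDB) spatial index type.
# Paths and names are placeholders.
try:
    import arcpy  # requires an ArcGIS Python installation
except ImportError:
    arcpy = None

AOI_SHP = r"C:\caching\cache_extent.shp"   # hypothetical AOI shapefile
TARGET_GDB = r"C:\caching\cachedata.gdb"   # hypothetical file geodatabase

def copy_aoi_into_gdb():
    if arcpy is None:
        raise RuntimeError("arcpy (ArcGIS) is required to run this sketch")
    arcpy.FeatureClassToFeatureClass_conversion(
        AOI_SHP, TARGET_GDB, "cache_extent")
```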

This was never a problem for us when using the Arc binary storage type, but since we moved to SQL geometry, apparently it is.
Thanks
David
MattiasEkström
Occasional Contributor III
I had this error last week and I realized that for one of the layers in the service, I had added global IDs a few weeks ago so there was a change in the schema.. I tried deleting the cache and that failed.. then I tried resharing the service, overwriting the existing and that failed... then I deleted the service and its cache, and reshared the MXD.  then the cache ran successfully.   has there been any even minor change in the schema of any of your datasets?


There might have been a minor change in the schema, depending on what counts as a minor change...  We do have a strange problem where some shapefiles occasionally lose their spatial index.  Nothing has changed in the shapefile for the area of interest, but a dataset in the map service I'm trying to cache might have lost its spatial index.  Could that make the caching fail completely?
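[Editor's note: a shapefile's spatial index is stored in the companion .sbn/.sbx files alongside the .shp, so a quick way to spot a lost index is to check whether those files exist. A minimal sketch, using only the standard library:]

```python
import os

def has_spatial_index(shp_path):
    """Return True if the .sbn file holding the shapefile's spatial index
    exists next to the .shp; a missing .sbn means the index has been lost
    (or was never built)."""
    base, _ = os.path.splitext(shp_path)
    return os.path.exists(base + ".sbn")

def find_unindexed(folder):
    """List the shapefiles in a folder that are missing their spatial index."""
    return [f for f in sorted(os.listdir(folder))
            if f.lower().endswith(".shp")
            and not has_spatial_index(os.path.join(folder, f))]
```

In an ArcGIS environment, a missing index can then be rebuilt with the Add Spatial Index geoprocessing tool (arcpy.AddSpatialIndex_management).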