Hi, I have a MapServer service whose ArcSOC process crashes due to insufficient Java heap space. This happens when a /query request is made for all features, including geometry and attributes, with the return format set to pjson. The maximum record count for the service is the default 2000.
I have found that if I set the javaHeapSize for the service to 312 MB, the query is processed by the service and data begins to download.
If orderByFields is also specified in the query, I have to increase the javaHeapSize value to 1024 MB.
I only have a maximum of two instances for this service, so increasing the javaHeapSize to 1024 MB still appears to keep memory utilization on the machine in a healthy zone. It doesn't appear that the entire 1024 MB is used right away; rather, the ArcSOC simply has more "headroom" to execute larger queries.
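For reference, here's roughly the shape of the request that brings the SOC down. This is just a minimal sketch in Python with requests; the server URL, layer index, and order field are placeholders, not the real service:

```python
import requests

# Hypothetical service URL -- substitute your own server/folder/service/layer.
QUERY_URL = (
    "https://gisserver.example.com/arcgis/rest/services/"
    "MyFolder/MyService/MapServer/0/query"
)

params = {
    "where": "1=1",               # all features
    "outFields": "*",             # all attributes
    "returnGeometry": "true",     # full geometries
    "orderByFields": "OBJECTID",  # adding this is what pushes heap usage higher
    "f": "pjson",                 # pretty-printed JSON response
}

resp = requests.get(QUERY_URL, params=params, timeout=300)
resp.raise_for_status()
print(len(resp.text), "bytes returned")
```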
I'm wondering if others have had to increase the javaHeapSize beyond the 64 MB default. Who can report the biggest, baddest javaHeapSize of them all??
Tim
C'mon, who has the biggest javaHeapSize setting?? 😛
I have managed dozens of ArcGIS Server deployments, stand-alone and federated. Generally, the default works for the vast majority of services, but there are a few where the value has to be increased. Typically the reason the service crashes is overly dense geometries being serialized by someone scraping the API to download the data.
Typically we try doubling it once (128 MB) to see if that resolves the matter. If not, we double it again (256 MB), and if that still doesn't work we start alternating between halving the max record count and doubling the javaHeapSize. So if 256 MB doesn't work, we cut the max record count to 1000 (the old default for over a decade); if that doesn't work, we increase the heap again to 512 MB; and finally we cut the max record count to 500.
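This isn't an Esri-blessed script, just a sketch of how that ladder could be automated against the ArcGIS Server Admin API. The server URL, service path, and credentials are placeholders, and I'm assuming both javaHeapSize and maxRecordCount live under the service's properties object (check your own service JSON):

```python
import json
import requests

ADMIN = "https://gisserver.example.com:6443/arcgis/admin"  # placeholder URL
SERVICE = "MyFolder/MyService.MapServer"                   # placeholder service

# (javaHeapSize in MB, maxRecordCount) pairs, in the order we try them.
LADDER = [(128, 2000), (256, 2000), (256, 1000), (512, 1000), (512, 500)]

def get_token(username, password):
    """Fetch a short-lived Admin API token."""
    r = requests.post(
        f"{ADMIN}/generateToken",
        data={"username": username, "password": password,
              "client": "requestip", "f": "json"},
    )
    return r.json()["token"]

def apply_settings(token, heap_mb, max_records):
    """Pull the service JSON, adjust the two settings, and push it back.
    Note: editing a service restarts it."""
    svc = requests.post(
        f"{ADMIN}/services/{SERVICE}",
        data={"token": token, "f": "json"},
    ).json()
    svc["properties"]["javaHeapSize"] = str(heap_mb)        # assumed location
    svc["properties"]["maxRecordCount"] = str(max_records)
    requests.post(
        f"{ADMIN}/services/{SERVICE}/edit",
        data={"service": json.dumps(svc), "token": token, "f": "json"},
    )

# Usage sketch: walk the ladder until the heavy /query stops crashing the SOC.
# token = get_token("admin_user", "admin_password")
# for heap_mb, max_records in LADDER:
#     apply_settings(token, heap_mb, max_records)
#     # ...re-run the test query here and break once it succeeds...
```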
My general belief is that a service needing a javaHeapSize greater than 512 MB probably needs to have its data restructured on the back end. I can't think of any instance where I have allowed a heap larger than that.
Hi Joshua,
You're on top of the leaderboard with javaHeapSize = 512 MB. Congrats!
I'm going to use your method to try to keep the javaHeapSize value within 512 MB.
I also think we may have one or two extremely complex polygons that need simplification.
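If it does come down to those polygons, this is roughly what I have in mind: a rough sketch using arcpy's Simplify Polygon tool, with made-up feature class paths and tolerance, and assuming a license level that includes the Cartography tools:

```python
import arcpy

# Hypothetical paths and tolerance -- substitute your own data.
in_fc = r"C:\data\service.gdb\complex_boundaries"
out_fc = r"C:\data\service.gdb\complex_boundaries_simplified"

# Thin out redundant vertices so the geometries serialize with far less JSON.
arcpy.cartography.SimplifyPolygon(
    in_features=in_fc,
    out_feature_class=out_fc,
    algorithm="POINT_REMOVE",  # Douglas-Peucker-style vertex removal
    tolerance="5 Meters",
)
```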
Thanks for relating your deep experience.