Now my problem is: when I add more than 40 tile layers to the map, the server CPU utilization reaches 100% and I get timeout errors for some services. Some services do get added to the map, and after a while CPU utilization returns to normal while zooming and panning, because those services are served from the tile cache. Also, when querying features from many layers, CPU utilization keeps increasing until the server stops responding.
I am very new to ArcGIS Server and have been publishing and optimizing services by referring to Esri documents. I am also not sure whether publishing 1000 services on an ArcGIS Server with the configuration below is the right approach. Please help me get rid of this issue by providing your valuable solutions. Thanks to all.
My Geodatabase is:
- Version 10.3.1
- Residing in Oracle 11g on 64-bit Linux
- 36 datasets and 1000 feature classes
My Server Configuration is:
- Server running as a virtual machine
- 64 GB of RAM
- Processor: Intel(R) Xeon(R) CPU E5-2690 @ 2.9 GHz
- Cores: 4
- Hard disk: C drive has 35 GB of free space
- The arcgisserver directory resides on the G drive, which has 3 GB free out of 15 GB
My Service properties are:
- High isolation
- Instances per process: 1
- Pooling: min instances 0, max instances 1 per machine
- Layer type: tiled map layer
- Cache build status: completed for all layers
- Application server max heap size: 1 GB
- SOC max heap size: 1 GB
Kindly suggest a solution.
Thanks for your response. Yes, we've upgraded from 4 to 16 processor cores. Rendering on the map now seems fast. But it is still somewhat slow when processing a query task (like identify), though only the first time; if we execute the same query again, we get a quick result.
You could group a number of layers into a single service. I have a few services with 30+ layers in them. I don't know if there is a limit on the number of layers per service; a few years ago I was told 25 was the recommended max, but I'm not sure whether that is still true.
Also, I would set your min instances = max. If you hit a service that isn't active, it will use considerable CPU to spin up the ARCSOC process. If your memory can handle it, keep them alive indefinitely.
If you add more layers to a service, you should bump up the min/max instances per machine so it can handle more requests.
You could also try low isolation and increase instances per process. There are caveats to this, but it has worked well for us.
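If it helps, here is a rough sketch of how pinning min instances to max could be scripted. It only builds the edited service JSON; the property names (`minInstancesPerNode`, `maxInstancesPerNode`) follow the ArcGIS Server Admin REST API service JSON, and the service name and values shown are hypothetical — posting the result back to the `/admin/services/<service>.MapServer/edit` endpoint is left as a comment.

```python
import copy

def pin_min_to_max(service_json):
    """Return a copy of a service JSON with min instances pinned to max,
    so instances stay warm instead of spinning up on demand.

    Assumes the minInstancesPerNode / maxInstancesPerNode keys used by
    the ArcGIS Server Admin REST API service JSON.
    """
    edited = copy.deepcopy(service_json)
    edited["minInstancesPerNode"] = edited["maxInstancesPerNode"]
    return edited

# Hypothetical service fragment:
svc = {"serviceName": "Parcels", "minInstancesPerNode": 0, "maxInstancesPerNode": 2}
edited = pin_min_to_max(svc)
# The edited JSON would then be POSTed back to something like
# https://<server>:6443/arcgis/admin/services/Parcels.MapServer/edit
print(edited["minInstancesPerNode"])  # -> 2
```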
Since we need to turn individual layers on and off separately, grouping is not possible per our specification, and we are also using tiled map services.
We have 35 GB of free memory out of 64 GB of RAM, so we can set min instances = max. But our max idle time is 1800 seconds (30 minutes). In that case, will the instances become inactive after 30 minutes? If so, the server will again use considerable CPU to spin up the ArcSOC processes, won't it?
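For reference, these are the properties in question as they appear together in the service JSON (illustrative values only; property names as in the ArcGIS Server Admin REST API):

```json
{
  "minInstancesPerNode": 1,
  "maxInstancesPerNode": 1,
  "maxIdleTime": 1800,
  "isolationLevel": "HIGH",
  "instancesPerContainer": 1
}
```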
Excuse me for digging up an old post, Matthew, but I'm having a similar issue to Dharma, and looking for clarification of something you mentioned.
Is there any documentation explaining the trade-off between publishing layers as individual services versus combining them into a single feature service? I published all of our layers as individual services so they would have the most display flexibility and could be used on an as-needed basis in whatever maps our staff decide to make. But when I launch our web map with all 27 of these feature services added, the ArcSOCs go through the roof and max out our ArcGIS Server's CPU. (This is with appropriate scale ranges defined; they don't all come on at once.) I'm wondering if your solution of publishing all of the layers in a few services will alleviate this issue.
FYI: ESRI had previously told me that the query load would be the same between these 2 scenarios. But I wonder if they just meant from a database/record retrieval standpoint, and did not consider the CPU ramifications of potentially launching so many more ArcSOC processes for the same task.
I'm not aware of any documentation.
I think CPU would be less if layers were grouped under a single service in your scenario.
I think it would work like this -
27 different services: you load your map, and each of those 27 services is hit simultaneously (that might not be technically true, but it's close enough for the example), all stressing the CPU at the same time.
27 layers under 1 service: you load your map, and the single service can only serve one request at a time (or however many instances you gave that service), so the requests get queued and go through one after the next. The CPU load is therefore the equivalent of one request rather than 27. The caveat is that it takes longer; depending on how much traffic you get, some requests could take a while and some could time out completely. There's probably a sweet spot somewhere in between.
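To make the comparison concrete, here is a toy model of worst-case concurrency when a map load fans out its requests. This is an illustration of the queuing argument above, not measured behavior; the numbers are hypothetical.

```python
def peak_concurrency(n_requests, instances_per_service, n_services):
    """Worst-case number of ArcSOC processes working at once when a
    map load spreads n_requests evenly across n_services."""
    # Each service works on at most `instances_per_service` requests at
    # a time; anything beyond that queues up behind them.
    requests_per_service = n_requests // n_services
    return n_services * min(requests_per_service, instances_per_service)

# 27 layers as 27 one-layer services, 1 instance each:
print(peak_concurrency(27, 1, 27))  # 27 SOCs hammer the CPU at once
# 27 layers under 1 service with 1 instance:
print(peak_concurrency(27, 1, 1))   # 1 request at a time; the rest queue
```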
If you have AGO or Portal, you can import each layer from a service as a 'Feature Layer' and create a catalog for users to grab from.
arcgis/rest/services/someService/FeatureServer/0 is used to create a distinct Feature Layer in AGO
arcgis/rest/services/someService/FeatureServer/1 is its own distinct Feature Layer in AGO, etc.
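A small sketch of generating those per-sublayer URLs for a catalog. The layer IDs would normally come from the service's `?f=json` response; the host and IDs below are hypothetical.

```python
def sublayer_urls(service_url, layer_ids):
    """Build one Feature Layer URL per sublayer of a FeatureServer,
    suitable for adding each as a distinct item in AGO/Portal."""
    base = service_url.rstrip("/")
    return [f"{base}/{layer_id}" for layer_id in layer_ids]

urls = sublayer_urls(
    "https://example.com/arcgis/rest/services/someService/FeatureServer",
    [0, 1],
)
print(urls[0])  # .../someService/FeatureServer/0
```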