ArcSOC.exe processes piling up?

10-24-2017 08:33 AM
TristanKnowlton
Occasional Contributor II

My organization's ArcGIS Server hosts several Flash/Flex viewer maps.  The server seems to create a new ArcSOC.exe process each time someone accesses one of the maps.  The ArcGIS documentation says these processes are supposed to be reused and eventually shut down, but they never seem to be shut down on their own.  Eventually they consume too much memory and our maps stop working until I go into Task Manager and kill them manually.  How do I stop this from happening?

5 Replies
MichaelVolz
Esteemed Contributor

What version of AGS are you currently on?

Does this occur for all AGS services, or just those hitting specific data sources (e.g. SDE database, file geodatabase, shapefile)?

TristanKnowlton
Occasional Contributor II

The AGS version is 10.5.  Most of the data in these maps is stored in SQL databases located on the same server; there are also some file geodatabases.  However, new processes don't seem to be created on the server when I access the same data in ArcMap for Desktop.  A few folders are registered as data stores as well, most notably the one where our imagery is stored.

MichaelVolz
Esteemed Contributor

How long have you noticed this phenomenon occurring?

Is this something that is occurring in 10.5 but did not occur with a previous version of AGS?  I ask because I am running scripts to update a cached map service, and the caching processes are not getting killed automatically in a multi-site load-balanced environment.

Is your AGS site a multi-server site with load-balancing?

JonathanQuinn
Esri Frequent Contributor

If you were to look at the min and max instances under the Pooling tab for a particular service, and then look at the Command Line column in the Task Manager, is the number of running instances in the Task Manager larger than the max instances set for the service?  There are a few timeouts for services, including an idle timeout.  By default it's 30 minutes, so once a client is done with a service, the instance sits around for 30 minutes:

A third timeout dictates the maximum time an idle instance can be kept running. When services go out of use, they are kept running on the server until another client needs the instance. A running instance that is not in use still consumes some memory on the server. You can minimize your number of running services and therefore conserve memory by shortening this idle timeout, the default of which is 1,800 seconds (30 minutes). The disadvantage of a short idle timeout is that when all running services time out, subsequent clients need to wait for new instances to be created.  

You can try decreasing the idle timeout to reduce the time the instance will remain up (while understanding the disadvantage of doing so).
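Besides changing this in the Pooling tab in Manager, you could script it against the Admin REST API.  Here's a minimal sketch, assuming a placeholder admin URL, a token you've already obtained from generateToken, and that the service JSON carries the idle timeout as maxIdleTime in seconds — treat the URL, service path, and credentials handling as illustrative, not a verified recipe for your site:

```python
# Hypothetical sketch: shorten a service's idle timeout through the ArcGIS
# Server Admin REST API. ADMIN_URL and the service path are placeholders.
import json
import urllib.parse
import urllib.request

ADMIN_URL = "https://gisserver.example.com:6443/arcgis/admin"  # placeholder


def with_idle_timeout(service_json, seconds):
    """Return a copy of a service's JSON with maxIdleTime set (in seconds)."""
    updated = dict(service_json)
    updated["maxIdleTime"] = seconds
    return updated


def post(url, params):
    """POST form-encoded params and parse the JSON response."""
    data = urllib.parse.urlencode({**params, "f": "json"}).encode()
    with urllib.request.urlopen(url, data) as resp:
        return json.loads(resp.read())


def lower_idle_timeout(token, service_path, seconds=300):
    """Read a service's definition, shorten maxIdleTime, and save it back."""
    svc_url = f"{ADMIN_URL}/services/{service_path}"
    definition = post(svc_url, {"token": token})        # current definition
    updated = with_idle_timeout(definition, seconds)    # shorten the timeout
    return post(f"{svc_url}/edit",
                {"token": token, "service": json.dumps(updated)})
```

Editing a service this way restarts its instances, so you'd want to run it during a quiet window, and 300 seconds is just an example value — tune it against how long your clients actually stay idle.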

TristanKnowlton
Occasional Contributor II

Jonathan, I think the idle time might be the culprit.  I'm going to try lowering that and see what happens.