I have a problem.
I am running a field survey involving 100 surveyors (so far, at most 40 have been active at the same time) and 40 QC personnel (the highest number reached so far).
The table itself has 36K points and 120 attributes that receive attribute-only updates (no geometry edits). Attributes are updated based on the question being asked, so not all 120 attributes are actively edited at any one time.
The field map has 3 layers (from the same table), each with a different filter. The combined data (if all layers are selected) is much less than 36K features. I also use visibility scale ranges to minimize load.
The surveyors work from morning until afternoon. At night, the QC personnel start working in ArcGIS Pro and access the data through the feature service (they do not load the background layers from the map service; instead they use offline data to reduce server load). They use definition queries on certain attributes, in an order that matches the indexes in the PostgreSQL 13 RDBMS.
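For context, the definition queries and indexes are aligned roughly like the sketch below; the table and column names (survey.points, status, qc_flag, surveyor_id) are placeholders, not my real schema:

```python
# Sketch only: placeholder table/column names, not the real schema.
import psycopg2

# The QC layers use a definition query along the lines of:
#   status = 'SUBMITTED' AND qc_flag = 0, sorted by surveyor_id
conn = psycopg2.connect(host="egdb-host", dbname="survey",
                        user="sde", password="***")
with conn, conn.cursor() as cur:
    # Composite index whose column order matches the filter + sort order
    # used by the definition query in ArcGIS Pro.
    cur.execute("""
        CREATE INDEX IF NOT EXISTS idx_points_status_qc_surveyor
        ON survey.points (status, qc_flag, surveyor_id);
    """)
    # EXPLAIN to confirm the planner actually uses that index for the filter.
    cur.execute(
        "EXPLAIN SELECT objectid FROM survey.points "
        "WHERE status = 'SUBMITTED' AND qc_flag = 0 ORDER BY surveyor_id;"
    )
    for row in cur.fetchall():
        print(row[0])
conn.close()
```

The only point is that the composite index column order matches the WHERE clause and sort order the QC definition queries generate.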
The app also has a background layer (a map service) with 36K polygons. I set this layer's visible scale range from city level (minimum) to building level (maximum).
The configuration of the feature service *now* (it has been running for 1 hour and 83 records have been inserted by 23 surveyors) is as follows:
- Shared instance (before the server went down it was Dedicated, with max instances 9, min instances 7, maximum time a client can use a service 660 s, and maximum time an idle instance is kept running 1800 s)
- All anti-aliasing is off; I assume anti-aliasing adds processing load when turned on.
- Recycle every 24 hours, starting at 00:00
With the server config settings as follows (a sketch of how I read these values back via the Admin REST API follows the list):
- Web server maximum heap size (in MB): -1
- SOC maximum heap size (in MB): 128 (before the server went down it was 512)
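To sanity-check what is actually deployed (versus what I think I configured), I read the service and machine properties back from the ArcGIS Server Admin REST API. The host, service name, and credentials below are placeholders, and the property names are how I understand the Admin API responses, so worth double-checking:

```python
# Sketch only: host, service name, and credentials are placeholders.
import requests

ADMIN = "https://gisserver.example.com:6443/arcgis/admin"
SERVICE = "SurveyPoints.MapServer"   # the service hosting the feature access

# 1. Get an admin token.
tok = requests.post(
    f"{ADMIN}/generateToken",
    data={"username": "siteadmin", "password": "***",
          "client": "requestip", "f": "json"},
    verify=False,
).json()["token"]

# 2. Read the service definition and print the instance/recycle settings.
svc = requests.get(
    f"{ADMIN}/services/{SERVICE}",
    params={"f": "json", "token": tok},
    verify=False,
).json()
for key in ("provider", "minInstancesPerNode", "maxInstancesPerNode",
            "maxUsageTime", "maxIdleTime", "maxWaitTime",
            "recycleInterval", "recycleStartTime"):
    print(key, "=", svc.get(key))

# 3. The heap sizes live on each machine resource (key names as I understand them).
machines = requests.get(f"{ADMIN}/machines",
                        params={"f": "json", "token": tok}, verify=False).json()
for m in machines.get("machines", []):
    info = requests.get(f"{ADMIN}/machines/{m['machineName']}",
                        params={"f": "json", "token": tok}, verify=False).json()
    print(m["machineName"],
          "webServerMaxHeapSize =", info.get("webServerMaxHeapSize"),
          "socMaxHeapSize =", info.get("socMaxHeapSize"))
```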
Why was it so prone to crashing? The server (the whole server, literally) crashed roughly once every 5 days for 2 weeks straight. But when I changed the SOC maximum heap size to 512 MB, it crashed 5 times in 1 day!
I was thinking that dedicated instances and a larger heap would make things more stable, but it *seems* to have had the opposite effect.
My system design, running on AWS: 1 server for the egdb (Linux-amd64-6.5.0-1014-aws, 2 vCPU, 8 GB RAM, 500 GB HDD), 1 server for ArcGIS Server and Portal (4 vCPU on 2 physical processors, 16 GB RAM, 500 GB HDD), and 1 Windows server for licensing (in case you want that info).
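For reference, this is the kind of back-of-envelope memory math I have been doing for the 16 GB machine; the per-process figures below are assumptions, not measurements from my servers:

```python
# Back-of-envelope only: the per-process sizes are assumptions, not measured values.
RAM_GB = 16

portal_gb       = 4.0   # assumed: Portal for ArcGIS baseline on the same box
server_base_gb  = 3.0   # assumed: ArcGIS Server framework + web adaptor
soc_overhead_gb = 0.5   # assumed: per ArcSOC process, excluding its heap

def soc_budget(instances, heap_mb):
    """Rough memory taken by the SOC pool for one service."""
    return instances * (soc_overhead_gb + heap_mb / 1024)

for label, instances, heap_mb in [
    ("dedicated, max 9, 512 MB heap", 9, 512),
    ("dedicated, max 9, 128 MB heap", 9, 128),
    ("shared pool of 4, 128 MB heap", 4, 128),
]:
    used = portal_gb + server_base_gb + soc_budget(instances, heap_mb)
    print(f"{label}: ~{used:.1f} GB of {RAM_GB} GB")
```

This is only meant to show how quickly the SOC pool can eat into 16 GB once Portal and ArcGIS Server share the same machine.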
My ArcGIS Server statistics look like this (screenshot attached):
I need some advice on how to run the whole thing smoothly, or at least without crashes.