In thinking about our next upgrade from 11.2 to 11.5 (probably to be released around the 2025 Esri UC), I am looking for feedback from others. Below is a high-level overview of a proposed production environment. For some background, we are a mid-sized city of about 65-70k residents. We have about 6-8 admins/SDE editors. We use Field Maps with feature service replicas, and we integrate with a CMMS system, a permitting system, and a few others.
Below is a mix of the current and proposed production environment for our next upgrade, which will be to 11.5 since that is the next long-term support release planned by Esri. I'm looking for feedback and suggestions, including whether any resource below looks like too little or too much.
| Server Function | Windows Server Edition | SQL Server Edition | RAM | CPU | C Drive (OS and software) | D Drive (Data Storage) | Notes |
|---|---|---|---|---|---|---|---|
| Database Server | 2022 | 2022 | 16 GB | 4 Cores | 100 GB | 1 TB | |
| Portal for ArcGIS and ArcGIS Data Store | 2022 | N/A | 16 GB | 4 Cores | 100 GB | 200 GB | Portal and Data Store on same machine |
| ArcGIS Server (Hosting) | 2022 | N/A | 16 GB | 4 Cores | 100 GB | 100 GB | |
| ArcGIS Server (Federated/Internal) | 2022 | N/A | 24 GB | 4 Cores | 100 GB | 200 GB | Also used to run nightly Python scripts; thinking about migrating to Notebook Server. |
| ArcGIS Server (Federated/External) | 2022 | N/A | 48 GB | 6 Cores | 149 GB | 1.34 TB | |
| ArcGIS Monitor | 2022 | N/A | 16 GB | 4 Cores | 125 GB | N/A | |
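On the nightly Python scripts noted for the internal federated server: before (or instead of) moving them to Notebook Server, it can help to consolidate them behind a small wrapper so one failing job doesn't abort the rest of the batch. Below is a minimal sketch under assumed task names — the real jobs would call arcpy or the ArcGIS API for Python, which are stubbed out here as placeholders.

```python
# Minimal nightly-job wrapper: runs each task in order, logs its duration,
# and keeps going if one task fails so the rest of the batch still runs.
# Task names below are hypothetical placeholders for real maintenance steps.
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

def run_nightly(tasks):
    """tasks: list of (name, callable) pairs. Returns names of failed tasks."""
    failed = []
    for name, task in tasks:
        start = time.perf_counter()
        try:
            task()
            logging.info("%s finished in %.1fs", name, time.perf_counter() - start)
        except Exception:
            logging.exception("%s failed", name)
            failed.append(name)
    return failed

if __name__ == "__main__":
    # Placeholders for the real steps (compress geodatabase, sync CMMS, ...)
    run_nightly([
        ("compress_sde", lambda: None),
        ("sync_cmms", lambda: None),
    ])
```

A wrapper like this also makes the eventual Notebook Server migration easier, since each task is already an isolated callable.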
Leave C to the OS, install the software to D, and partition out E for data.
Dedicate the data store to its own VM, or co-locate it with the hosting server if you have to.
Thank you @AngusHooper1, I hadn't thought about a 3-way partition.
I like your structure very much; it matches mine quite well. I would suggest thinking about the number of services you plan to run across your various AGS instances, keeping in mind that each SOC for an isolated-pool service takes up a standard chunk of RAM, while shared-pool services come with a performance hit.
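That per-SOC RAM point lends itself to a quick back-of-the-envelope check. The sketch below is illustrative only: the ~250 MB per SOC instance figure and the service inventory are assumptions, not measurements — the actual per-instance footprint varies widely by service and is best measured with ArcGIS Monitor.

```python
# Rough RAM estimate for dedicated (isolated) pool services on one AGS machine.
# Assumption: each running SOC instance holds roughly 250 MB (illustrative;
# measure your own services' footprints rather than relying on this number).
MB_PER_SOC = 250

def estimate_soc_ram_gb(services):
    """services: list of (name, min_instances) pairs for isolated-pool services."""
    total_mb = sum(min_instances * MB_PER_SOC for _, min_instances in services)
    return total_mb / 1024

# Hypothetical service inventory:
inventory = [
    ("CityBasemap", 2),
    ("UtilitiesEditing", 3),
    ("Geocoder", 2),
]
print(f"{estimate_soc_ram_gb(inventory):.1f} GB for SOCs alone")  # → 1.7 GB
# ...plus the OS, the ArcGIS Server framework itself, and file cache on top.
```

Running a count like this against your planned service list is a quick way to sanity-check the 16-24 GB figures in the table.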
I would also consider whether you have any heavy processing that could put a larger load on one set of services, such as large geocoding or analysis jobs. You may want to run that load on a separate AGS so it does not impact your day-to-day transactional users of the internal or external mapping services.
Thank you @DEWright_CA, we have a few hefty map services that contain 100-200 layers for our main internal and external city mapping applications. They can get resource intensive, so bumping up the RAM on both the internal and external federated AGS machines may be something to consider.
Is a high-availability enterprise setup an option for you?
@BillFox, I don't think that is an option for us; we are probably too small an organization to justify HA. We don't have much downtime, and we can usually get ArcGIS security patches, Windows updates, etc. done after hours.
Hi,
Why are there so few CPU cores? Is it because of license limitations? I don't think that few cores can support much concurrent access.
Hello @Lerman, I based my CPU core count on our current infrastructure. It is not because of license limitations, but because we were conserving resources when our last upgrade was implemented. Would you recommend a specific higher core count? Thank you for your feedback.
Hi @Brian_McLeer ,
I think 4 CPUs is very little: roughly speaking, four cores can only actively process about four requests at the same time, so they cannot support high concurrency. The CPU count should be sized around your expected concurrent access. If you are not sure what that is, you could start at 12 or more cores and then adjust based on observed usage. Of course, if you increase the CPU count, the memory should be increased accordingly.
Also, I noticed that you are currently on version 11.2 with 4 CPUs per machine — does that meet demand today? If so, then your concurrent access is quite small, and 11.5 should have no problem on the same environment.
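One way to put numbers on the "cores vs. concurrency" point above is Little's-law-style reasoning: the number of in-flight requests is roughly the arrival rate times the average response time, and you want enough cores to carry that at a comfortable utilization. The sketch below is a rough estimate under assumed numbers, not a sizing formula from Esri — profile your own services for real response times and request rates.

```python
# Rough core-count estimate from expected load (Little's law style):
# concurrent requests ≈ requests/sec × average response time in seconds.
# All inputs below are illustrative assumptions — measure your own workload.
import math

def cores_needed(requests_per_sec, avg_response_sec, target_utilization=0.6):
    """Estimate CPU cores so average utilization stays under the target."""
    concurrent = requests_per_sec * avg_response_sec  # in-flight requests
    return max(1, math.ceil(concurrent / target_utilization))

# e.g. 10 req/s at a 0.5 s average response → 5 requests in flight:
print(cores_needed(10, 0.5))  # → 9 cores at 60% target utilization
```

This also shows why 4 cores can be perfectly adequate for a small city's traffic: at 2 req/s and 0.5 s responses, only one request is typically in flight.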