
How to scale a hosting server?

09-11-2020 05:00 AM
by Anonymous User
Not applicable

We have a single ArcGIS server on a VM which is also our hosting server for ArcGIS Portal.

Performance has degraded, so we shut down the VM and doubled its resources (cores from 4 to 8, memory from 16 GB to 32 GB).  There's *no change* in performance for our hosted feature services (the only thing we use).

We can't add another server to the site, we just want our existing server to have more power.

This document says we can scale a hosting server by adding more resources:

https://www.esri.com/content/dam/esrisites/en-us/media/whitepaper/arcgis-enterprise-architecting-you... 

This discussion suggests there may be licensing issues involved:

https://www.esri.com/content/dam/esrisites/en-us/media/whitepaper/arcgis-enterprise-architecting-you... 

How do I get ArcGIS Server to utilize the additional cores and memory?

Thanks for your thoughts!

6 Replies
George_Thompson
Esri Notable Contributor

So all of the ArcGIS Enterprise components are on the same machine (Web Adaptors, Portal, Server, Data Store)?

How are the resources on the host machine when experiencing the performance issues?

--- George T.
by Anonymous User
Not applicable

Hi George Thompson

Thanks for your thoughts on this!

Our Portal and Data Stores are on separate machines -- and we have a single ArcGIS Server on its own machine.  I mentioned "hosting server for ArcGIS Portal" to indicate the role of this ArcGIS Server in our enterprise stack.

When we double the resources on our VM, they all sit there largely unused!

Memory is not exhausted at the original 16 GB, but the original 4 cores can peak at 100% utilization under heavy usage of our hosted feature services (which are actually queries into the ArcGIS relational and spatiotemporal BDS).

Doing some rough JVM profiling, I can see CPU time is being spent stopping a feature service (presumably "unused" at that moment) to start up another one that it needs.

With 8 cores and 32GB, we simply see more idle time on the CPUs and more free memory -- but no change in performance.

Per Shane Miles‌'s insights, we have tried increasing the number of arcsoc processes in the shared pool (doubling from 8 to 16).  This *slightly* increases memory usage, but makes no difference to performance and no difference to CPU utilization.
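For what it's worth, the back-of-envelope rule we used to pick 16 is just cores times a factor -- my own heuristic, not an Esri guideline, based on the assumption that each arcsoc instance handles one request at a time:

```python
import os

def suggested_shared_pool(cores=None, per_core=2):
    """Rough heuristic (an assumption, not an Esri recommendation):
    size the shared instance pool as a small multiple of core count,
    since each arcsoc instance services one request at a time."""
    cores = cores or os.cpu_count() or 1
    return cores * per_core

print(suggested_shared_pool(8))  # 8 cores, factor 2 -> 16
```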

I just can't seem to make the connection in the documentation between the *hosted feature services* and the *arcsoc* processes that seem to handle feature service requests.

Thank you again for your time on this!

-Donovan

ShaneMiles
Esri Contributor

Hi Donovan Artz‌,

In addition to George Thompson's questions, which will help us understand your system configuration, there are a number of ways you can tweak your server for optimised utilisation.

I would look into the settings surrounding pooling, parameters, and processes on your server: Overview of geoprocessing service settings—ArcGIS Server | Documentation for ArcGIS Enterprise. These can help you tune your services toward an ideal ratio of service requests to available hardware resources.

There is also a great resource on understanding shared instances; see Introducing shared instances in ArcGIS Server. Hope this helps.
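As a rough sketch of where those pooling settings live: a dedicated-instance service's limits are part of its JSON definition, which you can retrieve and POST back through the ArcGIS Server Administrator Directory's services/<name>.<type>/edit endpoint. The service name and values below are placeholders, and note that hosted services on the shared pool are sized site-wide rather than per service:

```python
# Sketch only: prepare an edited service definition for the ArcGIS Server
# Administrator Directory (POST to
# https://<server>:6443/arcgis/admin/services/<name>.<type>/edit with a
# valid admin token). The HTTP call itself is omitted here.

def with_instance_limits(service_json, min_instances, max_instances):
    """Return a copy of a service definition with new per-machine
    instance limits (minInstancesPerNode / maxInstancesPerNode are the
    documented fields for dedicated-instance pooling)."""
    updated = dict(service_json)
    updated["minInstancesPerNode"] = min_instances
    updated["maxInstancesPerNode"] = max_instances
    return updated

# Hypothetical service definition, trimmed to the relevant fields:
svc = {"serviceName": "Roads", "type": "MapServer",
       "minInstancesPerNode": 1, "maxInstancesPerNode": 2}
tuned = with_instance_limits(svc, 2, 8)
```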

Shane

by Anonymous User
Not applicable

Thank you, Shane Miles‌,

I really appreciate your time in pointing me in the right direction!  Your insight about shared instances looks on paper to be a perfect fit for our problem -- we just can't see any way to make it work (yet) with hosted feature services.

We can imagine an architecture (outside our capability right now) with *multiple* ArcGIS servers -- and load balancing requests among these separate machines...

but as long as we have available memory and CPU on our ArcGIS server, it seems almost certain I'm just missing something simple!

I *will* re-read the linked documentation very carefully, and will do so with anything else that you have time to share!

Thank you!

-Donovan

ChristopherPawlyszyn
Esri Contributor

Hello Donovan Artz‌,

To Shane Miles‌'s points, hosted services use a different provider, so they are not tuned in the same way as non-hosted services. Shared instances are meant for very specific use cases: primarily services where you would typically have set the minimum instance count to zero because they were rarely used (less than one request per minute), or cached services that only serve static content (source: Anticipate and accommodate users—ArcGIS Server | Documentation for ArcGIS Enterprise). Non-hosted services could certainly benefit from the tips he suggested.

This presentation may be useful to you as well: ArcGIS Enterprise: Tuning and Scaling - YouTube 

I'd be curious whether you are possibly throwing resources at the wrong component. Have you checked the resource utilization on the ArcGIS Data Store machine (including CPU, memory, and disk utilization) during the high-volume events? That tier tends to be the bottleneck when a large number of queries are submitted to the database at the same time by hosted services. An additional consideration is whether you are running the Spatiotemporal Data Store on a dedicated machine, following best-practice recommendations, or on the same machine as another ArcGIS Enterprise component.
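A minimal sketch of that kind of sampling, using only the Python standard library (Linux-oriented: os.getloadavg() is not available on Windows, and per-process memory figures would need something like psutil):

```python
# Sample CPU load and disk usage during a high-volume window; a
# load_per_core approaching 1.0 suggests CPU saturation on that machine.
import os
import shutil

def sample(path="/"):
    load1, load5, load15 = os.getloadavg()   # 1/5/15-minute run-queue averages
    disk = shutil.disk_usage(path)
    return {
        "load_per_core": load1 / (os.cpu_count() or 1),
        "disk_used_pct": 100 * disk.used / disk.total,
    }

# e.g. log one sample every few seconds during the event:
# while True:
#     print(sample()); time.sleep(5)
```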

Hope that helps,

Chris


-- Chris Pawlyszyn
by Anonymous User
Not applicable

Hello Christopher Pawlyszyn‌,

Thanks very much for your thoughts!

We *do* have our SpatioTemporal Big Data Store on a dedicated machine, and even under heavy load, it never comes close to even half CPU utilization, with more than half RAM available as well.  IO wait stays near 0.

Our ArcGIS Server, on the other hand, can reach full utilization.

Maybe there is an application-layer configuration or metric in the ArcGIS Data Store that can allow *it* to use more resources and address more queries?  Or maybe an OS resource I haven't thought of?

Thanks so much for your insight, and I will watch the video you recommend!

-Donovan