Do Hosted Feature Layers consume ArcGIS Server resources?

1717
3
Jump to solution
12-14-2022 12:14 PM
David_Brooks
MVP Regular Contributor

Is it the Datastore or the ArcGIS Server that's doing the heavy lifting when serving up Hosted Feature Services to the Portal? I'm analysing the performance of our system, and looking at ways of spreading the load across our available machines. I read an article that said that the ArcGIS Server ArcSOC for a Hosted Feature Service is just a lightweight REST endpoint and the datastore does the hard work. But I'm not too sure about that.


David
..Maps with no limits..
1 Solution

Accepted Solutions
Scott_Tansley
MVP Regular Contributor

It's a mix.  If you consider a traditional map service, then (with dedicated instances) publishing a service will create an ArcSOC.exe and start consuming memory.  If you start using it and max instances are set higher than 1, it may create more ArcSOC.exe's, and memory consumption grows linearly with the instance count.  The CPU load depends on how often the service is called.  The memory usage relates to the fact that it needs to 'make an image' from the data.  The internal workflow will be something like:

  • receive web request
  • determine layers, extents, filters, etc...
  • request data from the 'geodatabase'
  • receive data
  • burn layers to a PNG/JPEG image
  • respond to web request by streaming image

Clearly, this is a gross simplification.  In a hosted feature service, the 'burn layers' step is replaced by 'format data' (JSON/PBF).  Manipulating text (data) is a much lighter-weight computational operation, and importantly it doesn't need a SOC for every service instance.  In effect, a server that is only used for hosting services needs much less memory than a traditional ArcGIS Server.
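To make the 'format data' step concrete, here's a minimal sketch (my own illustration, not Esri's actual implementation) of what the Hosting Server is essentially doing: wrapping rows returned by the Data Store into an Esri-style JSON feature set.  It's pure text manipulation, which is why it's so much cheaper than rendering an image:

```python
import json

def format_features(rows, fields):
    """Mimic the hosted feature service 'format data' step:
    rows fetched from the Data Store are wrapped as an
    Esri-style JSON feature set -- no rendering involved."""
    return json.dumps({
        "fields": [{"name": name} for name in fields],
        "features": [
            {"attributes": dict(zip(fields, row))} for row in rows
        ],
    })

# Two rows back from the Data Store become a JSON payload:
payload = format_features([(1, "Main St"), (2, "High St")], ["oid", "name"])
print(payload)
```

The real service also supports PBF (a compact binary encoding), but the principle is the same: serialise, don't draw.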

In terms of which is doing most, the Data Store does the data-heavy lifting, with the Hosting Server doing the conversion to JSON/PBF.  Obviously, if only a little data is requested then neither has a high workload, but as larger amounts of data are requested, both start to ramp up.
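One practical lever over that ramp-up is how much data each request pulls back.  As a hedged sketch (the base URL below is made up, but `where`, `outFields`, `resultRecordCount`, and `f` are standard parameters of the ArcGIS REST API query operation), you can cap the rows per request and so bound the load on both the Data Store and the Hosting Server:

```python
from urllib.parse import urlencode

# Hypothetical hosted feature layer endpoint -- substitute your own.
BASE = "https://example.com/server/rest/services/Hosted/Parcels/FeatureServer/0/query"

def query_url(where="1=1", out_fields="*", count=None, fmt="json"):
    """Build a feature layer query URL, optionally capping row count."""
    params = {"where": where, "outFields": out_fields, "f": fmt}
    if count is not None:
        params["resultRecordCount"] = count  # limit rows per request
    return BASE + "?" + urlencode(params)

url = query_url(where="STATUS='OPEN'", count=1000, fmt="pbf")
print(url)
```

Clients like the JS API page through large result sets this way rather than fetching everything at once, which is what keeps the per-request load on both tiers manageable.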

I think it would be fair to say that both are working in tandem.  The Data Store's workload would be somewhat comparable to an Enterprise Geodatabase, but a dedicated hosting server will need fewer resources than a traditional ArcGIS Server.  Many of my smaller clients run Enterprise Portal, Hosting Server, and Data Store (the Base Deployment) on a single machine, and then run traditional server roles on individual machines as required.

Sorry, that probably hasn't answered your question, but hopefully paints a picture to increase understanding?

Scott Tansley
https://www.linkedin.com/in/scotttansley/

3 Replies
ZacharyHart
Regular Contributor

This might be the most detailed explanation of resource use in AGS I've ever read...this is gold. Copied and saved for reference!

David_Brooks
MVP Regular Contributor

@Scott_Tansley this is really helpful to me. I understood the requirements for traditional Map Services, but your explanation of Hosted services now makes absolute sense. Thank you very much!👍


David
..Maps with no limits..