10.5.1 Memory Usage

05-21-2019 01:15 PM
DaveTenney
Occasional Contributor III

All,

  I was curious whether anyone else has encountered an interesting behavior after upgrading Desktop and Server to 10.5.1.

We have noticed that after upgrading, running the "Manage Map Server Cache Tiles" tool ultimately maxes out the machine's memory, usually about 90 minutes after the tool starts. We have run this process many times on the same data, and we only started seeing this after the upgrade to 10.5.1.
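
For context, this is roughly how we kick the tool off from a stand-alone Python (ArcPy) script; the service path and scale values below are placeholders rather than our production settings:

    # Minimal sketch of launching the caching job; paths/scales are illustrative only.
    import arcpy

    service = r"GIS Servers\arcgis on myserver (admin)\MyCache.MapServer"  # hypothetical service path
    scales = "577790.554289;288895.277144;144447.638572;72223.819286"      # illustrative scale list

    arcpy.ManageMapServerCacheTiles_server(
        service,                # input_service
        scales,                 # scales to rebuild
        "RECREATE_ALL_TILES",   # update_mode
        3)                      # number of caching service instances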

Machine Specs

Intel Xeon CPU E5-4620 v2

Windows Server 2012 R2

16 GB RAM

4 Cores

thanks,

dave

George_Thompson
Esri Frequent Contributor

What version were you on before the upgrade?

How many services are on the machine?

--- George T.
DaveTenney
Occasional Contributor III

10 services total: 2 cached map services, 1 dynamic map service, 7 locator services.

We were running 10.2.2. When we upgraded, we rebuilt all the services manually using 10.5.1, and those manual rebuilds worked just fine. Now, when we create an area of interest built from a delta check, we get to about level 16 out of 19 and the CachingTools processes begin to consume massive amounts of memory.
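
For reference, the delta-driven run looks roughly like the sketch below; the only real difference from the full rebuilds that work fine is the extra area-of-interest argument (the paths and scale values are placeholders, and the scales shown are just what levels 16-19 would be in the default ArcGIS Online tiling scheme):

    # Sketch of the area-of-interest run built from the delta check; names are illustrative.
    import arcpy

    service = r"GIS Servers\arcgis on myserver (admin)\MyCache.MapServer"  # hypothetical
    aoi = r"C:\caching\delta_check.gdb\changed_areas"                      # hypothetical AOI feature class
    scales = "9027.977411;4513.988705;2256.994353;1128.497176"             # illustrative levels 16-19

    arcpy.ManageMapServerCacheTiles_server(
        service,                # input_service
        scales,                 # scales
        "RECREATE_ALL_TILES",   # update_mode
        3,                      # caching service instances (n - 1 cores)
        aoi)                    # area_of_interest: only this constrained run runs away with memory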

We start seeing entries in the CachingTools logs that mention "out of memory", and then the process fails with a message similar to:

Data access failure, layer = Image, error string = Could not access data for layer or table Image during drawing
TilesWorker: Data access failure, layer = Image, error string = Could not access data for layer or table Image during drawing
TilesWorker: Data access failure, layer = Image, error string = Could not access data for layer or table Image during drawing
Failed to cache extent:........

George_Thompson
Esri Frequent Contributor

Thanks for that info. How many instances are set on the caching services?

How many instances are running on the other services?

I have seen other posts (I cannot find them right now) where users reported something similar, with newer versions consuming more memory than previous ones.

Is the machine (VM or physical) actually out of memory when the error occurs?

--- George T.
DaveTenney
Occasional Contributor III

   Caching services were set to a max of 3 instances; we took the n-1 approach (n = number of cores) to keep the machine's resources from being maxed out. All the other services run at a max of 2.

   Yes. I can sit and watch the GP tool process run while tracking the machine stats, and before too long the memory usage starts jumping up like crazy and ultimately maxes out.
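
For what it's worth, this is roughly the kind of watcher we run alongside the tool to log what the ArcSOC processes are doing (a sketch only; it assumes the third-party psutil package is installed):

    # Log ArcSOC.exe working sets and overall machine RAM usage once a minute.
    import time
    import psutil

    while True:
        socs = [p for p in psutil.process_iter(['name', 'memory_info'])
                if p.info['name'] and p.info['name'].lower() == 'arcsoc.exe'
                and p.info['memory_info']]
        total_mb = sum(p.info['memory_info'].rss for p in socs) / (1024.0 * 1024.0)
        print("{} ArcSOC.exe processes, {:.0f} MB working set, {:.0f}% machine RAM used".format(
            len(socs), total_mb, psutil.virtual_memory().percent))
        time.sleep(60)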

George_Thompson
Esri Frequent Contributor

Thanks for that information; that is a great approach. My general thought based on that information is that the machine may need more RAM after the upgrade. Would it be possible to increase the RAM to 32 GB and test the workflow again?

I know you mentioned the OS is Windows Server 2012 R2; is it also updated to at least the April 2017 patch level?

--- George T.
DaveTenney
Occasional Contributor III

Here is the part where I get a little confused by all of this...

   We can run the process manually on levels 18 and 19 and recreate all the tiles just fine. The minute we use an area of interest for those same levels is when we see the memory usage skyrocket.

   We had to prove this process in two other environments before we could implement any of this in production, and we did not see this behavior in either of those environments.

I'm not sure where the OS patches stand.

MichaelSchoelen
Occasional Contributor III

Could it be a setting in the pooling parameters?

DaveTenney
Occasional Contributor III

Michael,

  I doubt it is pooling, as we have taken a conservative approach with the caching services.

We are running the caching GP tools at a max of 3 instances on a 4-core machine; we did this because we did not want to risk maxing out the machine. Unfortunately, we are now seeing something that runs wild with the available memory.
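
Just to rule pooling out completely, the instance settings can also be read straight from the ArcGIS Server Admin API; something like this sketch (the host, credentials, and service names are placeholders, not our real values):

    # Dump minInstancesPerNode / maxInstancesPerNode for a few services.
    import requests

    ADMIN = "https://gisserver.example.com:6443/arcgis/admin"   # hypothetical host

    token = requests.post(ADMIN + "/generateToken",
                          data={"username": "siteadmin", "password": "secret",
                                "client": "requestip", "f": "json"},
                          verify=False).json()["token"]

    for svc in ["MyCache.MapServer", "MyDynamic.MapServer"]:    # illustrative service names
        props = requests.get("{}/services/{}".format(ADMIN, svc),
                             params={"token": token, "f": "json"},
                             verify=False).json()
        print(svc, props["minInstancesPerNode"], props["maxInstancesPerNode"])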

thanks

dave

MichaelVolz
Esteemed Contributor

What kind of data is changing in your problematic cached mapservice (e.g. parcels, road centerlines, other)?

How often are you updating the cache?

On average, how many features are changing per cache update?

What is the largest scale that you are caching? Can you see whether the problem still occurs if you don't include the 3 or 4 largest scales, just as a test (see the sketch at the end of this reply)?

Is this process occurring when there is low or no load on the AGS server?

Dynamic map services perform much better in 10.5.1, so you might consider removing layers from the cache and making them dynamic.
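
If it helps, the scale test could look roughly like this: run the same area-of-interest update but pass only the smaller scales. The service path, AOI path, and scale values are only placeholders for whatever your tiling scheme actually uses:

    # Rebuild the AOI cache without the 3-4 largest (most zoomed-in) scales, as a test.
    import arcpy

    service = r"GIS Servers\arcgis on myserver (admin)\MyCache.MapServer"  # hypothetical
    aoi = r"C:\caching\delta_check.gdb\changed_areas"                      # hypothetical AOI

    # Illustrative subset of the smaller scales only (largest scales omitted).
    smaller_scales = "577790.554289;288895.277144;144447.638572;72223.819286;36111.909643"

    arcpy.ManageMapServerCacheTiles_server(service, smaller_scales,
                                           "RECREATE_ALL_TILES", 3, aoi)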
