DataStore Running out of SWAP in AWS

05-04-2021 06:43 AM
JohnSteed1
New Contributor III

Configuration:
We are running an ArcGIS Enterprise 10.7.1 deployment on Red Hat Linux in AWS: two Servers, one Portal, one DataStore (configured as a tile cache and relational data store), two Web Adapters (Portal and Server), and a GeoEvent Server.  All components are on separate VMs.

Issue:

We are experiencing performance issues with our web applications: layers either fail to load or take a very long time to draw.  The Server logs are throwing a lot of the following two errors:
Update for the object was not attempted. Object may not exist.
FATAL: remaining connection slots are reserved for non-replication superuser connections

After some research, it looks like we are running out of SWAP space.
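
To double-check that, we put together a quick script that reads swap usage out of /proc/meminfo (the field names are standard on Linux; the script itself is just a sketch):

# Report swap usage from /proc/meminfo (values are in kB).
def swap_usage():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.strip().split()[0])
    used = info["SwapTotal"] - info["SwapFree"]
    print(f"Swap: {used} kB used of {info['SwapTotal']} kB total")

swap_usage()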

Question:

How is SWAP best configured so that the DataStore uses it less, or not at all (if possible)?

3 Replies
ChristopherPawlyszyn
Esri Contributor

Have you checked the number of active connections to the relational data store during these times? The Linux AMIs I typically use don't have any swap space defined within the OS, and the error message implies you're hitting the data store's 150-connection maximum rather than an underlying memory/swap limitation. I may be off the mark, but it's certainly worth looking into; unless you are running out of available RAM on the instance during increased demand, I wouldn't expect a swap partition to make much of a difference.

changedbproperties | ArcGIS Data Store command line utility reference 
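
If it helps narrow things down, here is a rough way to compare active sessions against the configured limit. Treat it as a sketch: the hostname, user, and password are placeholders for your environment, and it assumes you can reach the relational data store's PostgreSQL instance directly (it listens on port 9876 by default).

import psycopg2

# Placeholder connection details; adjust for your environment.
conn = psycopg2.connect(host="datastore.example.com", port=9876,
                        dbname="postgres", user="admin_user",
                        password="secret")
with conn, conn.cursor() as cur:
    # Count current sessions and compare to the server-side limit.
    cur.execute("SELECT count(*) FROM pg_stat_activity;")
    active = cur.fetchone()[0]
    cur.execute("SHOW max_connections;")
    limit = int(cur.fetchone()[0])
    print(f"{active} of {limit} connection slots in use")
conn.close()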


-- Chris Pawlyszyn
JohnSteed1
New Contributor III

Thanks @ChristopherPawlyszyn!
We'll look into this and see where we're at.  We aren't running out of RAM, so the connection limit seems like it might be the culprit.
We have two Servers hitting this DataStore.  Is the 150-connection limit a single total for the data store, even with two Servers?

ChristopherPawlyszyn
Esri Contributor

The connection limit applies to the relational data store as a whole, without regard to how many clients are connecting to the database. If you have a large number of hosted services with a high number of concurrent requests, then that maximum can certainly be playing a role in the behavior, and I think the error message you listed is in line with that theory.
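
To see how that single pool is split across your two ArcGIS Server machines, you can group the sessions by originating client. Again, just a sketch with placeholder connection details (same assumptions as my earlier snippet):

import psycopg2

# Placeholder connection details; adjust for your environment.
conn = psycopg2.connect(host="datastore.example.com", port=9876,
                        dbname="postgres", user="admin_user",
                        password="secret")
with conn, conn.cursor() as cur:
    # One row per connecting machine, busiest first; both ArcGIS
    # Servers draw from the same shared pool.
    cur.execute("""
        SELECT client_addr, count(*)
        FROM pg_stat_activity
        GROUP BY client_addr
        ORDER BY count(*) DESC;
    """)
    for addr, n in cur.fetchall():
        print(addr, n)
conn.close()

If you find yourselves consistently near the limit, the changedbproperties utility I linked above is where you would adjust it.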


-- Chris Pawlyszyn