Why do we use DynamoDB for Server Configuration Store in AWS cloud?

03-11-2021 09:49 AM
WhereMatters
Occasional Contributor

When deploying multi-machine AGS sites in AWS, why do we use DynamoDB for the server config store instead of a simple fileshare?

Is it only because DynamoDB is highly available by design, as compared to a fileshare?

Or is it because ArcGIS Server requires the millisecond latency and high throughput that DynamoDB provides?

Is the configuration store accessed that frequently by AGS? The storage space requirements for a config store are quite small anyway.

If we use Amazon FSx for the fileshare, do we still need DynamoDB just for the config store? We could use FSx for storing everything (portal content, config store, server directories, etc.), as FSx is also highly available by design.

5 Replies
DavidHoy
Esri Contributor

Hi Anand,

We do have a large ArcGIS Enterprise site with multiple machines that is currently using FSx for shared Server Directories, including config.

It is working well under high loads. 

However, FSx is not an officially certified solution: AWS has indicated to Esri that it uses DFS technology behind the scenes, and DFS replication has been known to not always provide the sub-millisecond read-write consistency that ArcGIS Server requires for the config store.

But my understanding is that DFS-R only slows down when used for cross-region replication, which is not the case when using FSx within one region across multiple Availability Zones, where latency is generally very low.

So far, we have seen no issues caused by inconsistent read-writes, and I am happy to suggest FSx as a good solution for HA in an AWS deployment.

DavidHoy
Esri Contributor

@WhereMatters 

I just re-read my post and realise I have misled you.

In our large AWS hosted HA Enterprise site, we don't use FSx for ArcGIS Server Config files, only for the Server Directories (and Portal Content).

We do use DynamoDB for the AGS config - and for the very reason you suggested: to ensure millisecond read-write consistency. When publishing, it is important that the updates to the config tables happen rapidly, as a few individual service requests are managed at that time, and any inconsistency may cause corruption. DynamoDB provides these fast updates; we can't be certain that FSx will.

So, to summarise: FSx is good for the Server Directories (arcgiscache, arcgisoutput, arcgissystem) and for file-based registered data stores.

But DynamoDB is the best HA solution for the AGS Configuration.

FSx can also be used for Portal Content, but it is cheaper to use S3 for this.

 

Sorry if I have caused any confusion.

WhereMatters
Occasional Contributor

Thank you David! The original intent of my question was to understand whether using DynamoDB is a 'must have' or a 'nice to have'. It looks like it is 'recommended' for the config store due to its obvious benefit of faster read-writes compared to FSx.

Also, these cloud-native stores are only available in the cloud. When it comes to on-premises deployments, we have no option other than a simple fileshare for storing everything.

DavidHoy
Esri Contributor

You are correct: for in-house HA you need to use a fileshare. The trap here is to ensure your file server, NAS, or SAN (I am assuming Windows) is configured to serve SMB with OpLocks and caching disabled, which is not the default.

If OpLocks is enabled, there can be significant contention for the fileshare between different clients.

It depends on which version of the SMB protocol is used: with SMB1 (which should be deprecated anyway), disable OpLocks; with SMB2 or SMB3, set the share's LeasingMode to "None" - see https://docs.microsoft.com/en-us/powershell/module/smbshare/set-smbshare?view=win10-ps
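As a rough sketch (the share name "arcgisserver" is only a placeholder for whichever share holds your config store), something like this from an elevated PowerShell prompt on the file server should do it:

```
# Disable SMB2/SMB3 leasing (the successor to OpLocks) on the config-store share.
# "arcgisserver" is a placeholder share name - substitute your own.
Set-SmbShare -Name "arcgisserver" -LeasingMode None -Force

# Confirm the new setting.
Get-SmbShare -Name "arcgisserver" | Select-Object Name, LeasingMode
```

Check the linked Set-SmbShare documentation for the exact behaviour on your Windows Server version.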


If you are using an NFS fileshare with a Linux-hosted server, you will need to ensure the directory is mounted with the noac or actimeo=0 option (not the default - see https://linux.die.net/man/5/nfs).
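A minimal sketch of the corresponding fstab entry (the server name, export path, and mount point below are example values, not taken from this thread):

```
# /etc/fstab - mount the config-store share with attribute caching disabled (noac)
# so every machine in the site sees configuration changes immediately.
# "fileserver:/exports/arcgisserver" and "/mnt/arcgisserver" are placeholders.
fileserver:/exports/arcgisserver  /mnt/arcgisserver  nfs  rw,hard,noac  0 0
```

The same options can be supplied to a manual mount command, e.g. mount -t nfs -o rw,hard,noac fileserver:/exports/arcgisserver /mnt/arcgisserver.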



The need for these fine-tuning configurations gets a mention in the ArcGIS Server help - https://enterprise.arcgis.com/en/server/latest/deploy/windows/choosing-a-nas-device.htm

MichaelKarikari1
New Contributor III

Has anyone had success migrating an existing filesystem-based ArcGIS Server site to one that uses DynamoDB as the cloud store? All the documentation I have seen points to deploying a new instance via CloudFormation templates. What further complicates things is if the existing instances are the hosting server for a Portal.
