Hi All,
This question has been asked before, but I thought I would get a refresh of current thinking.
The Esri-supplied CloudFormation template for a highly available (HA) ArcGIS Server (AGS) site with multiple AGS machines uses an EC2 instance in Auto Recovery mode as a file share for the site's server directories (with S3 and DynamoDB for the configuration store).
Like this: [architecture diagram]
But Auto Recovery only works within a single Availability Zone, so if the entire AZ is lost, your site will theoretically die and be recoverable only from whatever snapshot backups you have configured for the EBS volumes attached to the file server.
The Esri Australia Managed Cloud Services team is using ObjectiveFS across two Linux EC2 instances (in different AZs) to provide a Samba share for the server directories, but this seems like a lot of configuration overhead.
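To illustrate the Samba side of that pattern, a share for the server directories might look something like the fragment below (the share name, path, and user are hypothetical placeholders; the ObjectiveFS mount itself is configured separately per its own documentation):

```ini
; /etc/samba/smb.conf (sketch only)
; Exports the ObjectiveFS-backed directory as an SMB share
; that each AGS machine can map for its server directories.
[arcgisserver]
   path = /mnt/objectivefs/arcgisserver
   browseable = no
   read only = no
   valid users = arcgis
```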
There is another Esri-supplied pattern, illustrated when running AGS in Docker (experimental at 10.5.1 and not recommended for production), that uses EFS as the storage for the server directories (and the configuration store):
https://s3.amazonaws.com/arcgisstore1051/7333/docs/ReadmeECS.html
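For reference, consuming EFS for the server directories is a one-line NFS mount on each AGS machine (the file system ID, region, and mount point below are placeholders; the mount options are the ones AWS recommends in the EFS documentation):

```shell
# Mount an EFS file system over NFSv4.1 with the AWS-recommended options.
# fs-12345678 and us-east-1 are placeholders for your file system and region.
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 \
  -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
  fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs
```

Because EFS is replicated across AZs within a region, the same mount works from machines in different Availability Zones, which is exactly the property the Auto Recovery file-server pattern lacks.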
Like this: [architecture diagram]
My question, which clients ask me fairly regularly, is: why don't we recommend EFS as the HA file store for server directories?
I am aware that Esri's recommendation is a file store that provides low-latency, high-volume read/write performance.
Is EFS not fast enough? (That is what I have been telling people up until now.) Are there any benchmarks that give performance comparisons?
Why is it OK to recommend an EC2 instance with Auto Recovery as an alternative HA option when this would fail in the event of an AWS Availability Zone outage?
And, as a bonus question:
What is the equivalent answer for Azure?