This question has been asked before, but I thought I would get a refresh of current thinking.
The Esri-supplied CloudFormation template for an HA AGS site with multiple AGS machines uses an EC2 instance in auto-recovery mode as a file share for the site's server directories (with S3 and DynamoDB for the configuration store).
But auto-recovery only works within a single Availability Zone, so if the entire AZ is lost, your site goes down and is only recoverable from whatever snapshot backups you have configured for the EBS volumes attached to the file server.
The Esri Australia Managed Cloud Services team is using ObjectiveFS across two Linux EC2 instances (in different AZs) to provide a Samba share for the server directories, but this seems like a lot of configuration overhead.
There is another Esri-supplied pattern, illustrated when running AGS in Docker (experimental at 10.5.1 and not recommended for production), that uses EFS as the storage for the server directories (and the config store).
My question, which clients ask me fairly regularly, is: why don't we recommend EFS as the HA file store for server directories?
I am aware that the Esri recommendation is a file store that provides low-latency, high-volume read/write performance.
Is EFS not fast enough? (That is what I have been telling people up till now.) Are there any benchmarks that give some performance comparisons?
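For anyone wanting to compare the candidate stores themselves, a minimal sketch of the kind of micro-benchmark that matters here: the server directories see many small files written and synced, so per-operation latency (not throughput) is usually the differentiator between EBS, EFS, and FSx. This is an illustrative script of my own, not an Esri-published benchmark; the directory paths you point it at are up to you.

```python
import os
import tempfile
import time


def small_write_latency(directory, count=200, size=1024):
    """Time `count` small write+fsync cycles in `directory`.

    Returns the mean latency in milliseconds. On a network file
    system (EFS, FSx, SMB) each fsync is a network round trip, so
    this number is a rough proxy for the per-request latency ArcGIS
    Server sees when churning through small files in its server
    directories.
    """
    payload = os.urandom(size)
    start = time.perf_counter()
    for i in range(count):
        path = os.path.join(directory, "probe_%d.tmp" % i)
        with open(path, "wb") as f:
            f.write(payload)
            f.flush()
            os.fsync(f.fileno())  # force the write to the backing store
        os.remove(path)
    elapsed = time.perf_counter() - start
    return elapsed / count * 1000.0


if __name__ == "__main__":
    # Run this against a directory on each candidate store (local
    # EBS mount, EFS mount, FSx share) and compare the numbers.
    with tempfile.TemporaryDirectory() as d:
        print("mean write+fsync latency: %.3f ms" % small_write_latency(d))
```

Absolute numbers will vary wildly with instance type and mount options; the useful output is the ratio between the stores on identical hardware.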
And why is it OK to recommend an EC2 instance with auto-recovery as an alternative HA option when that would fail in an AWS Availability Zone outage?
And, as a bonus question: what is the equivalent answer for Azure?
We have a solution at the government agency I work for that needed to move to both HA and DR configurations (that San Antonio data center lightning strike caused a 15-hour outage). We were avoiding the same single point of failure you mentioned that stems from the file share: auto-recovery doesn't work when the data center is down. We deploy multi-AZ primary and multi-AZ DR environments, so if a single data center gets its cooling systems shocked to death, or whatever, the other AZ is still humming along. If the whole region goes down, the IP forwards to the DR environment on the other side of the country.
At the time we began that project, EFS could not handle the volume of small, fast locks required by ArcGIS, and thus it was an unsupported configuration. So we moved to test using SoftNAS, which works well. Because we need to stay on supported configurations (what's the point of Premier Support if you don't?), we engaged Esri Professional Services to get SoftNAS 'blessed', and also noted that since the time of that Docker instance you mentioned, EFS has been improved by AWS. The improvements allow for the many short, fast locks that ArcGIS needs, and the Professional Services team blessed it for use in prod, i.e. it is a supported option for the file share now. Professional Services tested both, and SoftNAS and EFS each meet the need now. Since we deploy primarily to GovCloud on AWS, we opted to go with EFS.
I'm just starting to look at this same question and came across your post. The performance of EFS looks theoretically better than EBS on a single VM (which is what I understand the CloudFormation template uses): https://docs.aws.amazon.com/efs/latest/ug/performance.html I plan to spin up both configs as sandboxes and get some rough performance numbers. Hopefully someone else here can give you a more accurate response, but it's been a couple of months, so maybe not.
Also, Azure Premium Files would be the equivalent. I've just started using those on a different project for all shared folders. The initial impression is that they're holding up fine, but I don't have any hard numbers as of yet.
Following up on this, it looks like AWS EFS is not supported on Windows: https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/AmazonEFS.html
There's a Windows file share service called FSx that looks a little slower and more expensive than EFS: https://aws.amazon.com/fsx/windows/?nc=sn&loc=1 It's also not available in my region, which is a deal breaker for me. For the moment, that means I'm sticking with the VM file server.
Thanks for doing the experiment Josh.
One of my customers may be shifting their environment to RHEL, chiefly to be able to take advantage of EFS.
Another alternative we have been using is two file server VMs (Linux) with ObjectiveFS installed to keep them synchronised, then Samba to provide an SMB/NFS file share that can be mounted as a drive on each AGS VM.
This seems to work as fast as a directly attached EBS volume, with no latency problems on write operations.
The Linux file server seems like a solid option. I assume it's at least twice the price of using EFS, though? Definitely more setup and maintenance to deal with.
I am not certain, but as OFS uses S3 as its backend storage location (and I think EFS is using EBS), I believe OFS actually works out cheaper for a small number of file server nodes (which only need to be small EC2 Linux instances).
Of course, you will need to do the maths for your own situation.
We are looking at using FSx as a file share for the ArcGIS Server config. If we were to get it working, would this be a supported deployment?
Also, do you know of anyone using FSx as a file share?
Sorry, I missed the notification that you had posted.
We are trialing the use of FSx for another large site in NSW.
But we are using it only for the ArcGIS Server system directories, still using DynamoDB for the ArcGIS config store.
So far, the simple testing we have done shows no significant performance hits for simple map export and feature query.
So far as I am aware, there is no "unsupported" flag on this: the documentation only specifies that a file store for the system directories must provide high-performance read/write and must not wait until a change is committed across all nodes (which is why you can't use a DFS-based system; FSx can act as a node within a DFS cluster, but doesn't use DFS concepts itself).
As I am sure you are now aware, all AWS instances need to be part of the same Active Directory domain as the FSx share (we are using AWS Directory Service for this).