
Azure Files premium file share - SLOW File Geodatabase Performance

08-29-2023 07:50 AM
danbecker
Frequent Contributor

Has anyone attempted/used an Azure Files premium file share for storing File Geodatabases and Pro projects?

We have an Azure Virtual Desktop deployment for ArcGIS Pro workstations. These workstations are in the same Azure region as the above premium file share. Workstations map the premium file share via an SMB private endpoint connection; it appears like any other on-prem network drive.

We can transfer a single 450 MB+ file (.zip, .tif, etc.) between a local managed drive and the premium file share using File Explorer in under a second, in either direction. I briefly saw the transfer dialog window and it was at 498 MB/sec.

But the same File Explorer transfer with a 2 GB file geodatabase (containing 5,527 files) takes 5-10 minutes! The same is true when expanding a feature dataset in the FGDB: the Pro catalog window's spinning wheel appears for 30+ seconds before anything happens. Opening an attribute table, calculating a field, etc. is all super slow, and Pro appears to hang frequently.

Has anyone experienced this? There seems to be a performance issue when thousands of small files are manipulated in Azure Files. 
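
If anyone wants to reproduce this outside of Pro, below is a minimal Python sketch (nothing Esri-specific; Z:\bench is a placeholder for a folder on your mapped share) that writes the same total bytes once as one large file and once as thousands of small files, which is roughly the I/O pattern of copying a FGDB:

# Rough SMB small-file benchmark: one large sequential write vs. many small
# writes, to expose per-file round-trip overhead on a mapped share.
import os
import time

TARGET = r"Z:\bench"            # placeholder: folder on the mapped share
SMALL_COUNT = 2000              # a FGDB can contain thousands of files
SMALL_SIZE = 64 * 1024          # 64 KB each
TOTAL_MB = SMALL_COUNT * SMALL_SIZE / (1024 * 1024)

os.makedirs(TARGET, exist_ok=True)
payload = os.urandom(SMALL_SIZE)

# One large file: throughput-bound, few SMB round trips.
start = time.perf_counter()
with open(os.path.join(TARGET, "large.bin"), "wb") as f:
    for _ in range(SMALL_COUNT):
        f.write(payload)
large_s = time.perf_counter() - start

# Many small files: every create/write/close adds metadata round trips.
start = time.perf_counter()
for i in range(SMALL_COUNT):
    with open(os.path.join(TARGET, f"small_{i:05d}.bin"), "wb") as f:
        f.write(payload)
small_s = time.perf_counter() - start

print(f"1 x {TOTAL_MB:.0f} MB file:   {large_s:6.1f} s ({TOTAL_MB / large_s:6.1f} MB/s)")
print(f"{SMALL_COUNT} x 64 KB files: {small_s:6.1f} s ({TOTAL_MB / small_s:6.1f} MB/s)")

On a share where per-file latency dominates, the second number comes out far worse even though the byte count is identical.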

4 Replies
zporteous
New Contributor

I've noticed slow performance with Azure File Storage in our organization when it comes to GDBs, after a recent migration. Very unfortunate.

danbecker
Frequent Contributor

I can't believe this isn't documented, or discussed more.

We abandoned Azure Files and deployed an Azure NetApp Files 2 TB SMB volume, premium tier. Throughput at that tier and volume size is 128 MiB/s, roughly enough to saturate an on-prem 1 Gbps network (128 MiB/s ≈ 1,074 Mbps).

And the performance shows: I'd say it's faster than our on-prem 10k RPM SATA disks on a 1000 Mbps network.

Our architecture is identical to the one described here:

https://learn.microsoft.com/en-us/azure/architecture/example-scenario/data/esri-arcgis-azure-virtual...

The only issue we have is random "general function failure" errors in Pro 3.1.2. They happen when the project is using FGDBs; with SDE geodatabases you never get that failure. I'd say the error was happening 1-4 times/hr on Azure Files, and now once per day on NetApp Files. It seems to happen when you open an attribute table. There's no way for Pro to recover; nothing draws, etc., until you close Pro and re-open, then it's back to normal.

So close to a REALLY nice, highly scalable virtual platform. 

danbecker
Frequent Contributor

We deployed a new Azure Virtual Desktop host pool that contains NVIDIA GPUs and Pro 3.2.1.

These general function failures STILL happen, but only when accessing FGDBs stored in Azure NetApp Files.

Has anyone experienced this? Esri, any advice here?

danbecker
Frequent Contributor

We deployed a new Azure Virtual Desktop host pool that only contained Entra ID-joined session host VMs, with the same NVIDIA GPUs, and upgraded to Pro 3.2. Since the VMs were no longer AD-joined, we had to migrate all the AD Group Policy Objects to Intune, which really wasn't that difficult given the GPO import tools in Intune.

FSLogix profile containers were switched away from Azure Files Premium and are now stored in an Azure NetApp Files volume, just like our GIS file-based data.

Entra ID-joined session hosts can only authenticate to SMB shares using Kerberos authentication, not NTLM (on-prem AD). So Kerberos auth and ACLs (permissions) were configured per the Azure docs for both the NetApp \profile and \gis volumes.
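
The one client-side switch worth double-checking is the CloudKerberosTicketRetrievalEnabled registry value from Microsoft's Entra Kerberos documentation (written for Azure Files; I'm assuming it applies the same way to NetApp SMB volumes). It's normally pushed via Intune policy, but a read-only Python sketch to verify it on a session host would look like:

# Windows-only sanity check: is Entra ID Kerberos ticket retrieval enabled?
# Value name comes from Microsoft's Entra Kerberos docs; 1 means enabled.
import winreg

KEY = r"SYSTEM\CurrentControlSet\Control\Lsa\Kerberos\Parameters"

try:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as k:
        value, _ = winreg.QueryValueEx(k, "CloudKerberosTicketRetrievalEnabled")
        print("CloudKerberosTicketRetrievalEnabled =", value)
except FileNotFoundError:
    print("Value not set - cloud Kerberos ticket retrieval is disabled.")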

For the results: ALL PROBLEMS ARE SOLVED! 

An added bonus is that all MS Office apps, including the Remote Desktop client, now SSO with Entra ID (cloud) credentials. The VMs, Pro, and FGDB access are 100% improved.

I wish I had a better explanation, but our problems with FGDB performance in AVD were 100% caused by authentication issues with traditional on-prem AD (which was a VM in Azure, not really on-prem).

With the sputtering/performance issues resolved, I have no doubt that better performance could be achieved by increasing the NetApp capacity pool quota (i.e., the monthly A$ure invoice). Our current capacity pool quota is 2 TB, which comes with 128 MiB/s of throughput, divided however we want between the \profile and \gis volumes. Another 1 TB would add +64 MiB/s of throughput.
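
For anyone sizing a pool, the math is simple enough to script. Here's a back-of-the-envelope Python estimator (the per-TiB rates below are the published Azure NetApp Files service levels as of this writing; verify against current Azure docs before committing to a quota):

# Estimate Azure NetApp Files pool throughput from capacity pool quota.
MIBPS_PER_TIB = {"standard": 16, "premium": 64, "ultra": 128}

def pool_throughput_mibps(quota_tib: float, service_level: str = "premium") -> float:
    """Total MiB/s available to split across the volumes in the pool."""
    return quota_tib * MIBPS_PER_TIB[service_level.lower()]

print(pool_throughput_mibps(2))  # 128.0 -> our current 2 TB pool
print(pool_throughput_mibps(3))  # 192.0 -> adding 1 TB buys another 64 MiB/s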

Azure NetApp is quite amazing, so simple. 
