We recently moved our geodatabase to Microsoft SQL Server in Azure, and we are experiencing strange issues.
I am trying to export a large feature class to an FGDB in ArcGIS Pro, which is installed on my desktop machine. The export takes many hours and only copies a small fraction of the records (2,900 of the feature class's 180,000 records).
If the feature class is small, say only hundreds of records, it exports quickly.
After monitoring the SQL Server in Azure, our DBA told me there was some minor activity, but nothing major enough to stress the server. On my client machine, I can see a lot of network activity, but the progress bar barely moves.
I tried using the ArcGIS API for Python to load data from the feature class; the script hangs.
We never experienced this issue on the on-prem SQL Server.
If anyone can share some experience here, that would be much appreciated.
update:
It turns out deploying the SDE database in the cloud is only one of the causes.
The biggest cause is:
The Text columns are extremely long (over 1 billion characters). This is a result of creating these layers at the back end using SQL directly: ArcGIS translates nvarchar(MAX) to Text (length 1,073,741,822). I created a new feature class based on one of the original layers but with a limited length for the Text columns, then loaded the original layer's data into it. The performance of this new feature class is far better: I can copy the whole layer in under a minute instead of 4 hours.
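For anyone hitting the same issue, one way to spot the problem columns up front is to query SQL Server's INFORMATION_SCHEMA. A sketch (the table/column names in the sample rows are made up; SQL Server reports (n)varchar(MAX) as CHARACTER_MAXIMUM_LENGTH = -1, which is what ArcGIS registers as Text(1073741822)):

```python
# Sketch: find text columns that ArcGIS will register with an enormous length.
# Run CHECK_SQL with your own connection (pyodbc etc.); the helper below just
# filters the rows a cursor.fetchall() would return.

CHECK_SQL = """
SELECT TABLE_NAME, COLUMN_NAME, CHARACTER_MAXIMUM_LENGTH
FROM INFORMATION_SCHEMA.COLUMNS
WHERE DATA_TYPE IN ('varchar', 'nvarchar')
"""

def oversized_text_columns(rows, limit=255):
    """Return (table, column, length) rows where length is MAX (-1) or > limit."""
    return [r for r in rows if r[2] == -1 or r[2] > limit]

# Sample rows, illustrative only:
rows = [
    ("PARCELS", "NOTES", -1),        # nvarchar(MAX) -> Text(1073741822) in ArcGIS
    ("PARCELS", "OWNER_NAME", 100),  # bounded, fine
    ("ROADS", "DESCRIPTION", 4000),  # long but bounded
]
print(oversized_text_columns(rows))
```

Any column this flags is a candidate for recreating with a bounded length before loading the data across.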
A useful link:
Hope this is useful for others.
... following
Is the ArcGIS Pro client also in the same region as the Azure SQL DB?
Hi George,
ArcGIS Pro is installed on my desktop.
Today I also tested from a VM, which I am not sure is in the same region as the DB.
The strange thing is that it was using a lot of network bandwidth (>5 Mbps), yet in 3 hours it copied only 13K of the 180K records of a point feature class.
This only happens when I copy and paste the feature class in ArcGIS.
If I copy the feature class table directly (using SQL or pyodbc + Python), it takes only minutes.
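For reference, the direct copy I mean is just a batched DB-API read/write. A minimal sketch, using an in-memory sqlite3 table as a stand-in for the pyodbc connections to SQL Server (table and column names are made up):

```python
import sqlite3

def copy_table(src_cur, dst_cur, src_table, dst_table, batch_size=10_000):
    """Batched DB-API copy: fetchmany keeps memory flat on large tables."""
    src_cur.execute(f"SELECT id, name FROM {src_table}")
    while True:
        batch = src_cur.fetchmany(batch_size)
        if not batch:
            break
        dst_cur.executemany(f"INSERT INTO {dst_table} VALUES (?, ?)", batch)

# Stand-in demo; with SQL Server you would pass pyodbc cursors instead.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE features (id INTEGER, name TEXT)")
src.executemany("INSERT INTO features VALUES (?, ?)",
                [(i, f"feat{i}") for i in range(5)])
dst = sqlite3.connect(":memory:")
dst.execute("CREATE TABLE features_copy (id INTEGER, name TEXT)")
copy_table(src.cursor(), dst.cursor(), "features", "features_copy")
print(dst.execute("SELECT COUNT(*) FROM features_copy").fetchone()[0])
```

Note this copies raw rows only; the geodatabase registration and geometry handling that Pro performs on top is exactly the overhead being bypassed here.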
I just don't understand what it was doing with 5 Mbps of bandwidth for 3 hours...
Thanks, Simo
Just called the ICT department and was told the VM is actually on-prem! That partially explains it.
Yep, if the client is not in the same cloud / region the performance will be bad.
Connections from ArcGIS software to databases in the cloud must originate from machines in the same cloud. For best performance, connections should originate from machines in the same cloud region. In the majority of cases, connection and query performance is poor when you connect from ArcGIS clients installed on-premises to databases in the cloud or make connections between clients and databases in different cloud regions, and connections may time out.
https://enterprise.arcgis.com/en/system-requirements/latest/windows/databases-in-the-cloud.htm
I would guess all the information that you saw being "transferred" was the chattiness of the Pro client.
Recommendation - Create a Pro client in the same cloud region and retest.
Thanks George.
We may have to move back to an on-prem spatial database, since we need to use Pro on our desktop machines.
Yep, that is an option. If you can publish the data out as services, then you could leave the DB in Azure.
There are LOTS of considerations when moving any part of your GIS environment to the cloud.