Azure SQL SDE fetch_buffer_size

05-09-2020 06:22 AM
spiskula
New Contributor III

Hello,

I have an SDE database stored in Azure SQL. It contains several feature classes with hundreds of millions of features in each FC. I copy those to a local FGDB using arcpy.Append_management. I have observed that the features are streamed from SQL to my VM at different transfer speeds for different feature classes. As part of my investigation I noticed that, after running this diagnostic SQL on the DB while the data is being streamed:

-- List the open server-side cursors and the SQL text behind each one
SELECT *
FROM sys.dm_exec_cursors (0) c
CROSS APPLY sys.dm_exec_sql_text (c.sql_handle) t

the cursors related to faster-running transfers have a higher value in the 'fetch_buffer_size' column.

Although the data transfer is handled by arcpy.Append_management in all cases, requests for different FCs get different values for that buffer (20/40/70). Is there any way to influence or enforce a higher value for the fetch_buffer_size parameter? All FCs sit in the same schema in the same database, yet one FC is copied at 2 MB/s and another at 6 MB/s, and I would like to understand how I could maximise the transfer speed.

Cheers

7 Replies
ChetDobbins
Esri Contributor

It is possible that you are noticing the effects of the ATTRBUFSIZE parameter specified in the SDE_server_config table. Could you provide more information about the data you are working with? I suspect that the data that copies faster has either fewer columns or smaller columns (think NVARCHAR(max) versus INT). Also, please confirm which release of ArcGIS you are using.
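
For reference, a query along these lines should show the current values. This is just a sketch; it assumes the repository tables are owned by the default sde schema, so adjust the qualifier to match your geodatabase.

-- Check the current buffer-related parameters in the repository.
-- The sde. prefix assumes a default sde-owned geodatabase.
SELECT prop_name, num_prop_value, char_prop_value
FROM sde.SDE_server_config
WHERE prop_name IN ('ATTRBUFSIZE', 'MAXARRAYBYTES');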

spiskula
New Contributor III

Hello Chet,

As for the SDE release: '10.6.0 Geodatabase' / Release '106001'

ATTRBUFSIZE : 50000

MAXARRAYBYTES : 550000

Is it safe to just double the values of both of the above, or only one of them? I am not sure how best to come up with higher-but-safe values. The purpose of the database is purely feature staging, so I am happy to spend all of its resources on reading the records from SQL into the FGDB. Since the DB is in Azure SQL I sometimes see many PAGEIOLATCH waits, so I was thinking that with a larger buffer, once the SQL process is holding a data page it could read as many records as possible before it has to release the page.

The feature classes differ from one another by only a couple of columns - is there a query I could run to get a summary of row size to compare?
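
Would something along these lines give a fair comparison? Just a sketch - 'dbo.MyFeatureClass' is a placeholder for each feature class I would run it against:

-- Rough per-row size comparison (run once per feature class).
-- avg_record_size_in_bytes is only populated in SAMPLED or DETAILED mode.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       ips.avg_record_size_in_bytes,
       ips.record_count
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID('dbo.MyFeatureClass'), NULL, NULL, 'DETAILED') AS ips
WHERE ips.index_level = 0;   -- leaf level only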

Thank you for looking into this.

ChetDobbins
Esri Contributor (Accepted Solution)

Before anything else, let's try just setting the ATTRBUFSIZE to 1000000 and see how that improves the behavior. That is going to be the new default setting in future releases, so we know it has been pretty well tested. Once you have verified the difference with that change, please reply and we can see what other steps to take next.

spiskula
New Contributor III

Chet,

Can I just use a SQL UPDATE to modify ATTRBUFSIZE, or do I need to get hold of the sdeconfig tool?

Also I wanted to check that

"let's try just setting the ATTRBUFSIZE to 1000000"

is actually meant to be 1 million, and that no changes are needed to MAXARRAYBYTES?

I saw some legacy documentation that talks about a dependency between the two, so I just want to make sure.

ChetDobbins
Esri Contributor

Yes, updating with SQL is fine for this change, and yes, the update would just be setting ATTRBUFSIZE = 1000000 (1 million). No need to change MAXARRAYBYTES for now. We can look at other changes if this one doesn't provide enough improvement.
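
Something along these lines should do it - this assumes the default sde-owned SDE_server_config table, so adjust the schema qualifier if yours differs, and note the existing value before changing it:

-- Set ATTRBUFSIZE to 1,000,000 (the previous value here was 50000).
UPDATE sde.SDE_server_config
SET num_prop_value = 1000000
WHERE prop_name = 'ATTRBUFSIZE';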

spiskula
New Contributor III

I set the value as you suggested and I can see that the fetch_buffer_size has increased from 20 to 100, so that looks great. In addition, I performed an ALTER INDEX REORGANIZE on the index associated with the ObjectID of the feature class to help reduce fragmentation. In the tests I've run, the speed of copying data from SQL to FGDB roughly doubled, so the result looks quite good. Thank you for your help!
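
For reference, the reorganize was along these lines - the index and table names below are placeholders for the ObjectID unique index on each feature class (the real names can be looked up in sys.indexes):

-- Reduce fragmentation on the ObjectID index; names are placeholders.
ALTER INDEX R123_SDE_ROWID_UK ON dbo.MyFeatureClass REORGANIZE;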

simoxu
MVP Regular Contributor

Hi, I am trying to copy/export about 180K records from a feature class in an Azure SQL geodatabase to a local FGDB. It only exports a very small number of records before ArcGIS Pro hangs, and it is still unresponsive after hours.

In my geodatabase config table the setting is already ATTRBUFSIZE = 1000000 (1 million), and my SDE schema version is 10.8.1.

According to an Esri support thread, you have to install ArcGIS Pro in an Azure VM to use an Azure enterprise geodatabase...

https://community.esri.com/t5/geodatabase-questions/move-egdb-to-azure-cloud/m-p/1072113


I am confused... if you could give some advice, that would be great. Thanks
