I am starting a project to migrate an ESRI Oracle geodatabase environment to SQL Server 2017, and later to Azure SQL geodatabases.
There is a choice to make regarding the spatial storage type. With Oracle we always used ST_Geometry storage; it works fine and performance is OK.
Now with SQL Server, and especially when pushing later to an Azure SQL geodatabase, we have the option between the "Microsoft Geography" and "Microsoft Geometry" storage types.
The Microsoft Geography format is attractive because we deal with multi-country corporate data sets, so we always store the spatial data in lat/lon geographic coordinates (WGS 84) anyway. It also allows T-SQL access to the spatial data.
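For example, T-SQL access to a geography column might look like the sketch below (table, column, and point values are hypothetical; SRID 4326 is WGS 84):

```sql
-- Hypothetical table/column names, for illustration only.
-- Note: geography WKT takes longitude first, then latitude.
DECLARE @p geography = geography::STGeomFromText('POINT(2.3522 48.8566)', 4326);

SELECT TOP (10)
       CountryName,
       Shape.STDistance(@p) AS DistanceMeters   -- geography distances are in meters
FROM   dbo.Countries
ORDER BY Shape.STDistance(@p);
```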
BUT I have noticed that on big layers with huge high-resolution polygons containing many points, display performance via ArcGIS Desktop 10.6 can be 2 times slower with the "Microsoft Geography" storage type!
Question: is this normal, and is it a well-known and documented behaviour?
In SQL Server, both the geometry and geography storage formats are significantly less responsive than SDE Binary. If all you're seeing is 2 times slower, please let us know your hardware setup (!!!), as the typical performance hit is 6-10 times slower.
For your application, you will want to spend a LOT of time tweaking the spatial index, and perhaps use several indexes. With point-in-polygon queries, you will see mind-boggling differences in speed just by changing cells per object up or down.
At the moment we are at an early stage, but we have one existing Oracle 11g server and a SQL Server 2016 instance, both on the same hardware (VM, 4 CPUs, 16 GB), in the same data center, on the same network. Layers imported into SQL Server use the default auto-grid configuration of 16 cells per object.
And we can see that standard layers behave about 2 times slower, while layers with 1+ million small polygons, or layers with a few hundred huge polygons (100,000+ points per polygon), behave a lot slower (5-6 times) than Oracle.
So the difference really looks like how ArcGIS consumes spatial layers from SQL Server, or the ODBC driver 13.1?
Changing the spatial indexes using variations in SQL does not change the performance:
CREATE SPATIAL INDEX SIndx_*** ON dbo.MyFeatureClass (Shape)  -- table/column names are placeholders
USING GEOMETRY_GRID
WITH (
    BOUNDING_BOX = ( xmin = -180, ymin = -90, xmax = 180, ymax = 90 ),
    GRIDS = ( LOW, LOW, MEDIUM, HIGH ),
    CELLS_PER_OBJECT = 256,
    PAD_INDEX = ON );
Interestingly, even if I put a WHERE clause on the layer to grab only a few features, SQL Server is still slower.
It looks like the SQL retrieval is slower, or the fetch is larger or slower.
Is it possible to use SQL Server memory-optimized tables with an ESRI geodatabase?
Sadly, M$ SQL does not support spatial data types in memory-optimized tables. There is no limit to the number of spatial indexes you can throw on a table, and query bounding boxes "may" utilize different indexes based on estimated plan efficiency. One way to test this is to add a few indexes with cells per object at low, medium, and high values, do a point-in-polygon intersect query, then look at the execution plan to see which index gets hit. Drop that index, run the same query, and compare the two execution plans. Through this shot-in-the-dark approach, you'll get close. You can also force the use of a specific index in the query syntax. I have found this helpful in chasing spatial index performance: Using the spatial index diagnostic stored procedures - Bob Beauchemin
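To illustrate the approach above, a sketch (table, index, and coordinates are hypothetical) of forcing a specific spatial index with a hint and then running the built-in diagnostic procedure on the same sample query:

```sql
-- Hypothetical table/index names, for illustration only.
DECLARE @point geography = geography::Point(48.8566, 2.3522, 4326);

-- Force one candidate spatial index, then compare its actual
-- execution plan against the plan chosen without the hint.
SELECT f.OBJECTID
FROM   dbo.Parcels AS f WITH (INDEX (SIndx_Parcels_High))
WHERE  f.Shape.STIntersects(@point) = 1;

-- The diagnostic procedure reports grid selectivity and how many
-- primary-filter rows survive to the expensive secondary filter.
EXEC sp_help_spatial_geography_index
     @tabname       = 'dbo.Parcels',
     @colname       = 'Shape',
     @verboseoutput = 1,
     @query_sample  = @point;
```

In the diagnostic output, a low "primary filter efficiency" suggests the grid settings (GRIDS / CELLS_PER_OBJECT) are a poor fit for that layer's geometry sizes.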
Do you see any performance increase by fetching the entire table as an indexed view?