Finding Sasquatch with ArcGIS Pro

Many of us have one or two things about Pro that are a "fish or cut bait" barrier to migrating. For me, it's ArcGIS Pro Publish Services To ArcGIS Server. My thoughts, and those of many others, on that topic are clearly laid out in that idea, so no rehashing the debate here. But what's really frustrating the many folks impacted by not being able to perform that critical task is the lack of any meaningful response to the clearly articulated concerns, use cases, needs, and impacts repeated across the more than 150 comments provided. After a "few beers" last night, a conspiracy theory developed that involved geography, learning how to symbolize things in Pro, and the top 100 Ideas by number of votes. Read on....

 

In the Kingdom of Sasquatchia, there are 100 states, and the Constitution's only Amendment, "Thee Shall Not Conform to the Agile Process", has survived numerous Supreme Court challenges and two referendums. A bill to codify "Allow ArcGIS Pro To Publish to Server Without Requiring Portal" has broad bipartisan support, but not enough votes to override what the President (Big Foot) has been tweeting: "Is a terrible idea! Terrible! Bad Deal! Veto!!!". Concerned about their chances of reelection and a trade war with the Boundless Union of Republics, Congress turned to the same pollsters that predicted the outcome of the most recent election.

 

 

Their conclusion: There's no way to target campaign ads based on the geographic distribution of states that are successful in getting ideas adopted. Red and orange states are randomly scattered, and green-ish states offer no clue, based on their location, as to why ideas languish or are not considered. The statisticians were consulted:

 

 

When mapping the number of Votes normalized by the number of Views, we clearly see that very few red states achieved success with a massive number of votes (over 100,000), but some of the orange states achieved success with 3,000 or fewer votes. Different statisticians were consulted:

 

 

Which substantiates the theory that "Greater number of votes correlates to an idea being implemented". Cartographers were consulted: 

And they produced a map showing just those states with a status of "New"

 

Compared to all other states.

 

The Pro Chart Wizard was consulted:

 

 

 

 

And the campaign strategy manifests itself! Ideas still considered New also have a very high number of views but not a lot of votes when compared to Implemented Ideas with fewer views. 

 

I often hear people say, "Don't complain about politics if you don't vote." There are a lot of eyes on ArcGIS Pro Publish Services To ArcGIS Server, yet despite having the greatest number of views and comments of any of the top 100 Ideas (by number of votes), it is the least likely to be implemented, even less so than Google Maps Basemap.

 

Get out the Vote! 

 

Let's make ArcGIS Pro Publish Services To ArcGIS Server the number one voted Idea on GeoNet. Vote with your keyboard, not your mouse, and be sure to include your use case with your upvote! As Pro shows, ideas with a lot of votes get implemented!

 

And if you concluded that not a single statistic in the maps shown makes one iota of sense, just remember: ArcGIS Pro, not Russians, can swing the outcome of an election.....

 

 

Data used to create these wonderful maps in Pro is located at https://bigfoot.maps.arcgis.com/home/item.html?id=8c72924d814e434b8a5fcb08f29ba758 

Have you ever gotten this error: "The selected field is a required field and cannot be deleted"? Scratching my head, I wondered when I had checked "Field IsRequired". The official solution: export the feature class to another feature class, do some field-mapping magic, delete the original, then import the changed one......not. See, all of my feature classes are baked into an enterprise geodatabase with all kinds of dependencies and relationships (that's kinda the point of an enterprise geodatabase, right?).

 

There's a quicker and easier way to resolve this issue. I spend a lot of time editing metadata in its native XML (it's faster, and I can automate things more easily), so why not edit a feature class definition, which is also XML? Feature class definitions live in an XML column (Definition) in the GDB_ITEMS table.
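
If you'd rather pull that definition up in a query window than click through the table, a minimal sketch (using the same feature class as the example below):

SELECT [Name], [Definition]
FROM [dbo].[GDB_ITEMS]
WHERE [Name] = 'WETLANDS.DBO.GRSM_HYDRO_WETLAND_PT'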

 

Click on the xml in the definition column, and you'll see the raw schema for the feature class. Look for the name of the attribute that allegedly won't let you delete it: 

<GPFieldInfoEx xsi:type="typens:GPFieldInfoEx">
<Name>GIS_LOCATION_ID</Name>
<ModelName>GIS_LOCATION_ID</ModelName>
<FieldType>esriFieldTypeGUID</FieldType>
<IsNullable>false</IsNullable>
<Required>true</Required>
<Editable>false</Editable>
</GPFieldInfoEx>

Note the Required tag. OK, so we've figured out what's causing the problem; now, how to fix it. Updating XML in SQL is pretty straightforward if you're replacing the entire string. Updating specific nodes and elements in SQL is a little trickier (but possible). In the code below, note the =N' before the XML definition. I've left out a lot of the other XML, but what I did was copy the entire XML definition and paste it after the =N'. There's a ' at the end. Also note the WHERE [Name] = 'WETLANDS.DBO.GRSM_HYDRO_WETLAND_PT', which constrains the update to just the feature class with the supposedly undeletable attribute.

 

UPDATE [dbo].[GDB_ITEMS]
SET [Definition] =N'<DEFeatureClassInfo xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:typens="http://www.esri.com/schemas/ArcGIS/10.5" xsi:type="typens:DEFeatureClassInfo">
<CatalogPath>\WETLANDS.DBO.GRSM_HYDRO_WETLAND_PT</CatalogPath>
<Name>WETLANDS.DBO.GRSM_HYDRO_WETLAND_PT</Name>
<ChildrenExpanded>false</ChildrenExpanded>
<DatasetType>esriDTFeatureClass</DatasetType>
<DSID>435</DSID>
<Versioned>false</Versioned>
<CanVersion>true</CanVersion>
<ConfigurationKeyword />
<RequiredGeodatabaseClientVersion>10.2</RequiredGeodatabaseClientVersion>
<HasOID>true</HasOID>
<OIDFieldName>OBJECTID</OIDFieldName>


blah blah blah

<GPFieldInfoEx xsi:type="typens:GPFieldInfoEx">
<Name>GIS_LOCATION_ID</Name>
<ModelName>GIS_LOCATION_ID</ModelName>
<FieldType>esriFieldTypeGUID</FieldType>
<IsNullable>true</IsNullable>
</GPFieldInfoEx>

blah blah blah

<ChangeTracked>true</ChangeTracked>
<FieldFilteringEnabled>false</FieldFilteringEnabled>
<FilteredFieldNames xsi:type="typens:Names" />
</DEFeatureClassInfo>'
WHERE [Name] = 'WETLANDS.DBO.GRSM_HYDRO_WETLAND_PT'

You'll see, somewhere in the middle, I changed the attribute restrictions to be just like any other attribute: one I can delete. 
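
If you'd rather not paste the entire definition back in, the trickier node-level route mentioned above can be sketched with XML DML. This assumes the Definition column is of the xml data type and that the field entries appear as GPFieldInfoEx elements, as in the snippet above; verify the path against your own definition first:

UPDATE [dbo].[GDB_ITEMS]
--REPLACE JUST THE <Required> VALUE FOR THE ONE FIELD, IN PLACE
SET [Definition].modify('
replace value of (//GPFieldInfoEx[Name="GIS_LOCATION_ID"]/Required/text())[1]
with "false"')
WHERE [Name] = 'WETLANDS.DBO.GRSM_HYDRO_WETLAND_PT'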

 

Close and restart ArcCatalog to see the change (and delete the attribute). It goes without saying that you should pull a full backup of your SQL database before doing this, and you should test this on a non-production database first! Also note this won't work on a GlobalID field (e.g., if you have a GlobalID attribute that's not named GlobalID); there are a few more changes you have to make to alter that one, and you shouldn't view this method as a way to alter the table's physical schema design.

 

Disclaimer: Editing the underlying GDB tables in this manner is not supported by ESRI. With that out of the way, there's a lot of DB administration you can pull off by manipulating the XML definitions. I've used this with great success in exporting a "workspace", editing the XML to implement some find-and-replace changes, then importing it as a new GDB.

This is really just to help me "bookmark" all of the SSL issues, as we're having many of them. Hope it helps other folks out!

 

  1. Secure SSL services in Web App builder failing.
  2. ArcGIS Portal 10.3 - SSL Certificate for internal network QUESTIONS
    1. Includes a little-known REST checker that tests certificate status: cool! https://<server.domain.com>:7443/arcgis/sharing/checkUrl.jsp?url=<insertURLthatisboundtothecertyouneedtotest>
  3. ArcGIS Portal 10.3 - SSL Certificate for internal network QUESTIONS
    1. Poster can't seem to get a cert working, explored using their own CA server
  4. Setting up SSL with a signed certificate in a multi-tier server architecture
    1. Take home message: use proper DNS Aliasing!
  5. SSL and esri/request
    1. Seems that enabling "HTTPS" only causes some app errors downstream
  6. Anyone else using SSL with AWS EC2 and elastic load balancer?
    1. Don't use Amazon ELB if you need PKI!
  7. Sharing Web GIS Services? Always enable TLS | ArcGIS Blog
    1. Another test of SSL functionality SSL Server Test (Powered by Qualys SSL Labs)
  8. how to use portalpy with win-authenticated SSL
    1. PKI- Portal doesn't work with python
  9. ssl certificate to publish the geoprocessing tool
    1. Enabling SSL breaks GP Tools
  10. ArcGIS Server with Reverse Proxy using IIS and SSL
    1. The horrors of reverse proxy
      1. 40694 - Set up a reverse proxy with ArcGIS 10.1 for Server on IIS ARR with SSL
  11. How many SSL certificates does ArcGIS Server need?
    1. Certs must use FQDN
  12. Issues enabling SSL and accessing with HTTPS
    1. Another confirmation that wild card certs don't work
  13. Is it necessary to enable SSL if the ArcGIS Web Adaptor forwards request 2 ServerSite
    1. SSL is required to encrypt passwords
  14. Creating SSL Certificates for ArcGIS Server - do i need to use AGS Admin Interface
  15. CA Signed SSL Installation with Web Adpater on IIS 7.5
    1. remember, ArcGIS Server is a separate Apache Web Server: Needs a trusted cert as well! AGIS Cert needs to be signed by trusted CA
  16. Latency on get token request via SSL (ArgGIS server 10.1 SP1)
    1. SSL kills performance. What a surprise.....
  17. Wildcard SSL installation help!
  18. Adding unsecured services
    1. This is a big problem: If organization requires HTTPS only, can't consume HTTP content in portal
  19. PKI auth. In portal with AD
  20. How to connect to a secure ArcGIS Server using HTTPS and a PKI (X509) Certificate
  21. Federating an ArcGIS Server site with your portal

There are about as many opinions and resources on SQL Spatial Index Tuning as there are on why the use of a Sequential GUID is good or bad. There's The Black Art Of Spatial Index Tuning In SQL Server | boomphisto, ArcGIS Help 10.1 Spatial Indexing Overview, Spatial Indexing: From 4 Days to 4 Hours - CSS SQL Server Engineers - Site Home - MSDN Blogs, sp_help_spatial_geography_histogram and Indexing geography data in SQL Server Denali | Alastair Aitchison, Basic Multi-Level Grids - Isaac @ MSDN - Site Home - MSDN Blogs, http://social.technet.microsoft.com/wiki/contents/articles/9694.tuning-spatial-point-data-queries-in-sql-server-2012.asp…, and of course, if you've ever struggled with a poorly-performing spatial index and posted about it on the internet, chances are you got a reply from this guy.

 

Unfortunately, most of us are not database programmers, nor do we dream in SQL syntax. SQL is like learning French by reading a Chinese dictionary. I don't know about you, but I can't make heads or tails of a single line of SQL code. I'm happy to copy other people's SQL snippets and hack them up 'til I get them to work. I particularly love this statement from ArcGIS Help: "If you create your data through ArcGIS for Desktop, the spatial grid index is calculated for you." That's like saying "If you put your key in the ignition, your car will drive to the store, get milk, and NOT kill a raccoon on the way". Unless you're a Massachusetts driver, there are obviously a few things you have to tweak, such as turning the steering wheel and pressing the gas pedal in endlessly unique combinations, in order to get to the milk store and swerve to avoid the raccoon. The same goes for SQL spatial indexes in an SDE database. Just because the software enables them out of the box by no means implies that the index is optimized for your particular data environment.

 

Out of the box, ArcGIS creates a spatial index with 16 Cells Per Object and all four Grid Levels set to Medium.

 

Notice how the storage type is Geometry, which, as some of you are discovering, is the "new" ESRI default storage format.

 

Let's see how the "Out of the Box" spatial index performs. One way to test a spatial index "Quick and Easy" is with the Spatial Index Stored Procedures.

 

CREATE SPATIAL INDEX 
--THIS IS THE DEFAULT NAME OF THE INDEX
--CREATED BY ARCGIS
[S1169_idx] 
ON 
--AND THIS IS THE NAME OF TABLE
--SUPPLIED BY THE USER IN ARCGIS
--AT CREATION TIME
[dbo].[TEST_GEOM]
(
  [SHAPE]
)USING  GEOMETRY_GRID 
WITH (BOUNDING_BOX =(227166.13, 3925740.74, 314851.6915, 3968047.64), 
GRIDS =(
LEVEL_1 = MEDIUM,
LEVEL_2 = MEDIUM,
LEVEL_3 = MEDIUM,
LEVEL_4 = MEDIUM), 
CELLS_PER_OBJECT = 16, PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF,
--DROP_EXISTING = ON ALLOWS US TO DO THIS ALL
--DAY WITHOUT MESSING WITH DROPPING THE IDX 
DROP_EXISTING = ON, 
ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 100) ON [PRIMARY]
GO
DECLARE @geom geometry
--THE POLYGON IS AN AREA SMALLER THAN THE BOUNDING BOX
SET @geom = GEOMETRY::STGeomFromText('POLYGON ((247804.201 3943957.896, 29932.568 3943963.210, 247671.344 3942876.441, 247684.630 3943652.325, 247804.201 3943957.896))', 26917)
exec sp_help_spatial_geometry_index 'TEST_GEOM', 'S1169_idx', 1, @geom

There's a WHOLE lot of output. The first thing my eye spies is

Primary_Filter_Efficiency: 19.634703196347

and

Internal_Filter_Efficiency: 0

I'm no expert on SQL or spatial indexes, but I don't think those numbers look good. No wonder my draw times are slow, spatial queries are sluggish, and in general, the database is performing horribly. Every single ArcGIS-created table has the same index parameters!
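
You can check what every spatial index in the database is set to straight from the catalog views; a minimal sketch (SQL Server 2008 and later):

--LIST EVERY SPATIAL INDEX WITH ITS GRID LEVELS AND CELLS PER OBJECT
SELECT
OBJECT_NAME(i.object_id) AS table_name,
i.name AS index_name,
t.level_1_grid_desc,
t.level_2_grid_desc,
t.level_3_grid_desc,
t.level_4_grid_desc,
t.cells_per_object
FROM sys.spatial_indexes i
INNER JOIN sys.spatial_index_tessellations t
ON i.object_id = t.object_id AND i.index_id = t.index_id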

 

Just for giggles, let's change the cells per object by altering just this in the above code snippet:

CELLS_PER_OBJECT = 4096

 

And look at the increase in efficiencies!

 

Internal_Filter_Efficiency: 76.7441860465116
Primary_Filter_Efficiency: 91.4893617021277

 

Again, I'm not claiming to be an expert at SQL Spatial Index Tuning, but I think I'm on to something here. As it turns out, my test data consists of "Very Complex Line Strings", which, if you're not living under the GIS rock, is what all of your data is. Coincidentally, a value of 8192 for Cells Per Object is a good starting point in this scenario.

 

But there are a lot (lots and lots) of values between 16 and 8192, and then there are all the permutations of Low, Medium, and High that could be tested to determine the spatial index that is likely to give you the best performance most of the time. What if there was a way to automatically test spatial index settings and magically determine which parameters best fit your data scenario?

 

Enter the SDE Hacker....

 

I stumbled across this Stack Overflow post (geospatial - Selecting a good SQL Server 2008 spatial index with large polygons) where someone had posted a SQL stored procedure that will loop through cell sizes and grid levels and report spatial query results for every (user-supplied) permutation. Being the tinkerer that I am, I of course broke the code trying to add a bunch of output. Finally got it working. Here it is:

 

--ORIGINAL SQL CODE FROM
--http://stackoverflow.com/users/2250424/greengeo
--MODIFIED TO INCLUDE SPATIAL INDEX HELP OUTPUT
--IN TABLE
  --!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
  --LOOK FOR THE EXCLAMATION POINTS IN THIS CODE
  --THERE ARE SEVERAL USER-SUPPLIED VARIABLES THAT MUST
  --BE INPUT FOR THIS SP TO WORK
  --!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!


USE DATABASE --REPLACE WITH THE NAME OF YOUR DATABASE
GO



CREATE PROCEDURE dbo.sp_tune_spatial_index
(
  @tabnm VARCHAR(MAX), -- This parameter stores the name of the spatial table for which you are tuning the index
  @idxnm VARCHAR(MAX), -- This parameter stores the name of the spatial index of the named table
  @min_cells_per_obj INT, -- Minimum Cells Per Object to test on. Suggested to start at 2.
  @max_cells_per_obj INT, -- Maximum Cells Per Object to test on.

  /* The test requires two geometry instances to use in test query 1 and 2.
  The first one should cover the area of default extent. The second should
  cover an area roughly the size of the area shown when zoomed in, panning
  around. It is required that the variable store a string that will create
  the geometry instance since this will be done within the procedure and 
  cannot be a variable of type: GEOMETRY. The SRID of these instances must
  match that of the table you are testing. */
  @testgeom1 VARCHAR(MAX), -- This parameter stores the first geometry instance creation string that will be used in the test
  @testgeom2 VARCHAR(MAX) -- This parameter stores the second geometry instance creation string that will be used in the test



)

AS

SET NOCOUNT ON;

/* Prior to running this procedure, two tables are required. These tables are 
  created here to prepare for running the procedure. */

PRINT 'Checking for required tables...'
IF EXISTS(SELECT 1 FROM sysobjects WHERE name IN ('cell_opt_perm', 'spat_idx_test_result'))
  BEGIN
  PRINT '... The "cell_opt_perm" and "spat_idx_test_result" tables exist.'
  END
ELSE
BEGIN
  PRINT '... Creating "cell_opt_perm" and "spat_idx_test_result" tables.'
  CREATE TABLE cell_opt_perm(
  [perm_id] [smallint] NOT NULL,
  [permutation] [nvarchar](4) NOT NULL,
  [level1] [nvarchar](6) NOT NULL,
  [level2] [nvarchar](6) NOT NULL,
  [level3] [nvarchar](6) NOT NULL,
  [level4] [nvarchar](6) NOT NULL
  )

  INSERT INTO cell_opt_perm ([perm_id], [permutation], [level1], [level2], [level3], [level4])
  VALUES (1,'LLLL','LOW','LOW','LOW','LOW'),
  (2,'LLLM','LOW','LOW','LOW','MEDIUM'),
  (3,'LLLH','LOW','LOW','LOW','HIGH'),
  (4,'LLML','LOW','LOW','MEDIUM','LOW'),
  (5,'LLMM','LOW','LOW','MEDIUM','MEDIUM'),
  (6,'LLMH','LOW','LOW','MEDIUM','HIGH'),
  (7,'LLHL','LOW','LOW','HIGH','LOW'),
  (8,'LLHM','LOW','LOW','HIGH','MEDIUM'),
  (9,'LLHH','LOW','LOW','HIGH','HIGH'),
  (10,'LMLL','LOW','MEDIUM','LOW','LOW'),
  (11,'LMLM','LOW','MEDIUM','LOW','MEDIUM'),
  (12,'LMLH','LOW','MEDIUM','LOW','HIGH'),
  (13,'LMML','LOW','MEDIUM','MEDIUM','LOW'),
  (14,'LMMM','LOW','MEDIUM','MEDIUM','MEDIUM'),
  (15,'LMMH','LOW','MEDIUM','MEDIUM','HIGH'),
  (16,'LMHL','LOW','MEDIUM','HIGH','LOW'),
  (17,'LMHM','LOW','MEDIUM','HIGH','MEDIUM'),
  (18,'LMHH','LOW','MEDIUM','HIGH','HIGH'),
  (19,'LHLL','LOW','HIGH','LOW','LOW'),
  (20,'LHLM','LOW','HIGH','LOW','MEDIUM'),
  (21,'LHLH','LOW','HIGH','LOW','HIGH'),
  (22,'LHML','LOW','HIGH','MEDIUM','LOW'),
  (23,'LHMM','LOW','HIGH','MEDIUM','MEDIUM'),
  (24,'LHMH','LOW','HIGH','MEDIUM','HIGH'),
  (25,'LHHL','LOW','HIGH','HIGH','LOW'),
  (26,'LHHM','LOW','HIGH','HIGH','MEDIUM'),
  (27,'LHHH','LOW','HIGH','HIGH','HIGH'),
  (28,'MLLL','MEDIUM','LOW','LOW','LOW'),
  (29,'MLLM','MEDIUM','LOW','LOW','MEDIUM'),
  (30,'MLLH','MEDIUM','LOW','LOW','HIGH'),
  (31,'MLML','MEDIUM','LOW','MEDIUM','LOW'),
  (32,'MLMM','MEDIUM','LOW','MEDIUM','MEDIUM'),
  (33,'MLMH','MEDIUM','LOW','MEDIUM','HIGH'),
  (34,'MLHL','MEDIUM','LOW','HIGH','LOW'),
  (35,'MLHM','MEDIUM','LOW','HIGH','MEDIUM'),
  (36,'MLHH','MEDIUM','LOW','HIGH','HIGH'),
  (37,'MMLL','MEDIUM','MEDIUM','LOW','LOW'),
  (38,'MMLM','MEDIUM','MEDIUM','LOW','MEDIUM'),
  (39,'MMLH','MEDIUM','MEDIUM','LOW','HIGH'),
  (40,'MMML','MEDIUM','MEDIUM','MEDIUM','LOW'),
  (41,'MMMM','MEDIUM','MEDIUM','MEDIUM','MEDIUM'),
  (42,'MMMH','MEDIUM','MEDIUM','MEDIUM','HIGH'),
  (43,'MMHL','MEDIUM','MEDIUM','HIGH','LOW'),
  (44,'MMHM','MEDIUM','MEDIUM','HIGH','MEDIUM'),
  (45,'MMHH','MEDIUM','MEDIUM','HIGH','HIGH'),
  (46,'MHLL','MEDIUM','HIGH','LOW','LOW'),
  (47,'MHLM','MEDIUM','HIGH','LOW','MEDIUM'),
  (48,'MHLH','MEDIUM','HIGH','LOW','HIGH'),
  (49,'MHML','MEDIUM','HIGH','MEDIUM','LOW'),
  (50,'MHMM','MEDIUM','HIGH','MEDIUM','MEDIUM'),
  (51,'MHMH','MEDIUM','HIGH','MEDIUM','HIGH'),
  (52,'MHHL','MEDIUM','HIGH','HIGH','LOW'),
  (53,'MHHM','MEDIUM','HIGH','HIGH','MEDIUM'),
  (54,'MHHH','MEDIUM','HIGH','HIGH','HIGH'),
  (55,'HLLL','HIGH','LOW','LOW','LOW'),
  (56,'HLLM','HIGH','LOW','LOW','MEDIUM'),
  (57,'HLLH','HIGH','LOW','LOW','HIGH'),
  (58,'HLML','HIGH','LOW','MEDIUM','LOW'),
  (59,'HLMM','HIGH','LOW','MEDIUM','MEDIUM'),
  (60,'HLMH','HIGH','LOW','MEDIUM','HIGH'),
  (61,'HLHL','HIGH','LOW','HIGH','LOW'),
  (62,'HLHM','HIGH','LOW','HIGH','MEDIUM'),
  (63,'HLHH','HIGH','LOW','HIGH','HIGH'),
  (64,'HMLL','HIGH','MEDIUM','LOW','LOW'),
  (65,'HMLM','HIGH','MEDIUM','LOW','MEDIUM'),
  (66,'HMLH','HIGH','MEDIUM','LOW','HIGH'),
  (67,'HMML','HIGH','MEDIUM','MEDIUM','LOW'),
  (68,'HMMM','HIGH','MEDIUM','MEDIUM','MEDIUM'),
  (69,'HMMH','HIGH','MEDIUM','MEDIUM','HIGH'),
  (70,'HMHL','HIGH','MEDIUM','HIGH','LOW'),
  (71,'HMHM','HIGH','MEDIUM','HIGH','MEDIUM'),
  (72,'HMHH','HIGH','MEDIUM','HIGH','HIGH'),
  (73,'HHLL','HIGH','HIGH','LOW','LOW'),
  (74,'HHLM','HIGH','HIGH','LOW','MEDIUM'),
  (75,'HHLH','HIGH','HIGH','LOW','HIGH'),
  (76,'HHML','HIGH','HIGH','MEDIUM','LOW'),
  (77,'HHMM','HIGH','HIGH','MEDIUM','MEDIUM'),
  (78,'HHMH','HIGH','HIGH','MEDIUM','HIGH'),
  (79,'HHHL','HIGH','HIGH','HIGH','LOW'),
  (80,'HHHM','HIGH','HIGH','HIGH','MEDIUM'),
  (81,'HHHH','HIGH','HIGH','HIGH','HIGH')

  CREATE TABLE spat_idx_test_result(
  [perm_id] [int] NOT NULL,
  [num_cells] [int] NOT NULL,
  [permut] [nvarchar](4) NOT NULL,
  [g1t1] [bigint] NULL,
  [g1t2] [bigint] NULL,
  [g1t3] [bigint] NULL,
  [g1t4] [bigint] NULL,
  [g2t1] [bigint] NULL,
  [g2t2] [bigint] NULL,
  [g2t3] [bigint] NULL,
  [g2t4] [bigint] NULL,
  [PF_EFF][float] NULL,
  [IF_EFF][float] NULL,
  [GRIDL1] [int] NULL,
  [GRIDL2] [int] NULL,
  [GRIDL3] [int] NULL,
  [GRIDL4] [int] NULL,
  [TPIR] [bigint] NULL,
  [TPIP] [bigint] NULL,
  [ANOIRPBR] [bigint] NULL,
  [TNOOCILFQ] [bigint] NULL,
  [TNOOCIL3FQ] [bigint] NULL,
  [TNOOCIL4FQ] [bigint] NULL,
  [TNOOCIL0II] [bigint] NULL,
  [TNOOCIL4II] [bigint] NULL,
  [TNOIOIL3FQ] [bigint] NULL,
  [TNOIOIL4FQ] [bigint] NULL,
     [ITTCNTLGP] [float] NULL,
  [INTTTCNTLGP] [float] NULL,
  [BTTCNTLGP] [float] NULL,
  [ACPONTLGP] [float] NULL,
  [AOPG] [float] NULL,
  [NORSBPF] [bigint] NULL,
  [NORSBIF] [bigint] NULL,
  [NOTSFIC] [bigint] NULL,
  [NORO] [bigint] NULL,
  [PORNBPF] [float] NULL,
  [POPFRSBIF] [float] NULL


  )

  INSERT INTO dbo.spat_idx_test_result
  VALUES (0,16,0,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL)
END


/* delete all rows from the "spat_idx_test_result" table. This makes it ready to stuff in new results.
  !!!WARNING!!! if your test was interrupted, the table will be cleared out and the test will
  begin from the beginning. You could try to modify this to start where you left off but
  I didn't have time and this worked well enough for me. */
DELETE FROM spat_idx_test_result
  WHERE perm_id != 0

/* set up counters */
DECLARE @a1 INT
DECLARE @a2 INT
DECLARE @a3 INT
DECLARE @a4 INT

/* set up variables to hold high/medium/low values and permutation to use in rebuilding
   the spatial index and recording stats */
DECLARE @lev1 VARCHAR(6)
DECLARE @lev2 VARCHAR(6)
DECLARE @lev3 VARCHAR(6)
DECLARE @lev4 VARCHAR(6)
DECLARE @permut VARCHAR(6)
DECLARE @num_cell VARCHAR(4)
DECLARE @time_str VARCHAR(20)
DECLARE @perm_id VARCHAR(20)
DECLARE @x xml
DECLARE @pf_eff FLOAT



/* create variables to hold timestamps for beginning and ending of test queries */
DECLARE @start_t DATETIME
DECLARE @end_t DATETIME
DECLARE @elapse_t INT



/* begin looping through cell option permutations */
SET @a1 = @min_cells_per_obj
WHILE @a1 <= @max_cells_per_obj
  BEGIN
  SET @a2 = 1
  PRINT 'Started Testing for ' +CAST(@a1 AS VARCHAR(10)) +' cells per object'
  WHILE @a2 < 82
  BEGIN
  SELECT @lev1 = level1, @lev2 = level2, @lev3 = level3, @lev4 = level4 FROM cell_opt_perm WHERE perm_id = @a2
  SET @permut = '''' +(SELECT permutation FROM cell_opt_perm WHERE perm_id = @a2) +''''


  EXEC
  ('
  CREATE SPATIAL INDEX ' +@idxnm +' ON ' +@tabnm +' 
  (
  [SHAPE]
  )
  USING  GEOMETRY_GRID 
  WITH
  (

  --!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
  --MAKE SURE YOU EDIT THE BOUNDING BOX TO BE EXACTLY EQUAL TO THE BOUNDING
  --BOX OF YOUR SPATIAL TABLE 
  --!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

  BOUNDING_BOX =(227166.13, 3925740.74, 314851.6915, 3968047.64),
  GRIDS =(LEVEL_1 = ' +@lev1 +' ,LEVEL_2 = ' +@lev2 +' ,LEVEL_3 = ' +@lev3 +' ,LEVEL_4 = ' +@lev4 +' ), 
  CELLS_PER_OBJECT = ' +@a1 +' ,
  PAD_INDEX  = OFF,
  SORT_IN_TEMPDB = OFF,
  DROP_EXISTING = ON,
  ALLOW_ROW_LOCKS  = ON,
  ALLOW_PAGE_LOCKS  = ON,
  FILLFACTOR = 100
  )
  ON [PRIMARY]
  '
  )


  PRINT 'Re-built index to ' +@permut
  SET @a3 = 1
  SET @a4 = 1
  WHILE @a3 < 5
  BEGIN
  SET @start_t = GETDATE()
  EXEC
  (
  'CREATE TABLE #tmp_tab (shp GEOMETRY)
  DECLARE @g1 GEOMETRY
  SET @g1 = ' +@testgeom1 +'
  INSERT #tmp_tab (shp)
  SELECT
  r.Shape AS shp
  FROM
  ' +@tabnm +' r
  WHERE
  r.SHAPE.STIntersects(@g1) = 1
  DROP TABLE #tmp_tab'
  )
  SET @end_t = GETDATE()
  SET @elapse_t = (SELECT DATEDIFF(MS, @start_t, @end_t))
  SET @num_cell = CAST(@a1 AS VARCHAR(6))
  SET @time_str = CAST(@elapse_t AS VARCHAR(20))
  IF @a3 = 1
  BEGIN
  IF (SELECT TOP 1 perm_id FROM spat_idx_test_result) IS NULL
  BEGIN
  SET @perm_id = 1


  END
  ELSE
  BEGIN
  SET @perm_id = CAST((SELECT MAX(perm_id+1) FROM spat_idx_test_result) AS VARCHAR(20))
  END


  EXEC


  (
  '


  INSERT INTO spat_idx_test_result (perm_id, num_cells, permut, g1t' +@a3 +')
  VALUES (' +@perm_id +', ' +@num_cell +', ' +@permut +', ' +@time_str +')'
  )
  END
  ELSE


  EXEC
  (
  '

  UPDATE spat_idx_test_result
  SET
  num_cells = ' +@num_cell +',
  permut = ' +@permut +',
  g1t' +@a3 +' = ' +@time_str +'
  WHERE perm_id = ' +@perm_id 


  )
  SET @a3 = @a3 + 1
  END
  WHILE @a4 < 5
  BEGIN

  SET @start_t = GETDATE()
  EXEC
  (
  'CREATE TABLE #tmp_tab (shp GEOMETRY) 
  DECLARE @g2 GEOMETRY
  SET @g2 = ' +@testgeom2 +'
  INSERT #tmp_tab (shp)
  SELECT
  r.Shape AS shp
  FROM
  ' +@tabnm +' r
  WHERE
  r.SHAPE.STIntersects(@g2) = 1
  DROP TABLE #tmp_tab'
  )
  SET @end_t = GETDATE()
  SET @elapse_t = (SELECT DATEDIFF(MS, @start_t, @end_t))
  SET @num_cell = CAST(@a1 AS VARCHAR(6))
  SET @time_str = CAST(@elapse_t AS VARCHAR(20))
  EXEC
  (
  '
  DECLARE @geom geometry
  DECLARE @x xml
  DECLARE @PFVALUE float
  DECLARE @IFVALUE float
  DECLARE @GRIDL1VALUE int
  DECLARE @GRIDL2VALUE int
  DECLARE @GRIDL3VALUE int
  DECLARE @GRIDL4VALUE int
  DECLARE @TPIRVALUE bigint
  DECLARE @TPIPVALUE bigint
  DECLARE @ANOIRPBRVALUE bigint
  DECLARE @TNOOCILFQVALUE bigint
  DECLARE @TNOOCIL0IIVALUE bigint
  DECLARE @TNOOCIL4IIVALUE bigint
  DECLARE @TNOOCIL3FQVALUE bigint
  DECLARE @TNOOCIL4FQVALUE bigint
  DECLARE @TNOIOIL3FQVALUE bigint
  DECLARE @TNOIOIL4FQVALUE bigint
  DECLARE @ITTCNTLGPVALUE float
  DECLARE @INTTTCNTLGPVALUE float
  DECLARE @BTTCNTLGPVALUE float
  DECLARE @ACPONTLGPVALUE float
  DECLARE @AOPGVALUE float
  DECLARE @NORSBPFVALUE bigint
  DECLARE @NORSBIFVALUE bigint
  DECLARE @NOTSFICVALUE bigint
  DECLARE @NOROVALUE bigint
  DECLARE @PORNBPFVALUE float
  DECLARE @POPFRSBIFVALUE float
  --!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
  --MAKE SURE YOU EDIT GEOMETRY VARIABLE BELOW TO REPRESENT A POLYGON
  --THAT IS WITHIN YOUR BOUNDING BOX
  --!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
  SET @geom = GEOMETRY::STGeomFromText(''POLYGON ((247804.201 3943957.896, 29932.568 3943963.210, 247671.344 3942876.441, 247684.630 3943652.325,247804.201 3943957.896))'', 26917)
  --!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
  --MAKE SURE YOU SPECIFY THE NAME OF YOUR SPATIAL TABLE
  --AND THE NAME OF THE SPATIAL INDEX
  -- IN THE sp_help_spatial_geometry_index_xml VARIABLES
  --!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
  exec sp_help_spatial_geometry_index_xml TEST_GEOM , S1169_idx , 1, @geom, @x output
  SET @PFVALUE =  @x.value(''(/Primary_Filter_Efficiency/text())[1]'', ''float'')
  SET @IFVALUE =  @x.value(''(/Internal_Filter_Efficiency/text())[1]'', ''float'')
  SET @GRIDL1VALUE =  @x.value(''(/Grid_Size_Level_1/text())[1]'', ''int'')
  SET @GRIDL2VALUE =  @x.value(''(/Grid_Size_Level_2/text())[1]'', ''int'')
  SET @GRIDL3VALUE =  @x.value(''(/Grid_Size_Level_3/text())[1]'', ''int'')
  SET @GRIDL4VALUE =  @x.value(''(/Grid_Size_Level_4/text())[1]'', ''int'')
  SET @TPIRVALUE =  @x.value(''(/Total_Primary_Index_Rows/text())[1]'', ''bigint'')
  SET @TPIPVALUE =  @x.value(''(/Total_Primary_Index_Pages/text())[1]'', ''bigint'')
  SET @ANOIRPBRVALUE =  @x.value(''(/Average_Number_Of_Index_Rows_Per_Base_Row/text())[1]'', ''bigint'')
  SET @TNOOCILFQVALUE =  @x.value(''(/Total_Number_Of_ObjectCells_In_Level0_For_QuerySample/text())[1]'', ''bigint'')
  SET @TNOOCIL0IIVALUE =  @x.value(''(/Total_Number_Of_ObjectCells_In_Level0_In_Index/text())[1]'', ''bigint'')
  SET @TNOOCIL4IIVALUE =  @x.value(''(/Total_Number_Of_ObjectCells_In_Level4_In_Index/text())[1]'', ''bigint'')
  SET @TNOOCIL3FQVALUE =  @x.value(''(/Total_Number_Of_ObjectCells_In_Level3_For_QuerySample/text())[1]'', ''bigint'')
  SET @TNOOCIL4FQVALUE =  @x.value(''(/Total_Number_Of_ObjectCells_In_Level4_For_QuerySample/text())[1]'', ''bigint'')
  SET @TNOIOIL3FQVALUE =  @x.value(''(/Total_Number_Of_Interior_ObjectCells_In_Level3_For_QuerySample/text())[1]'', ''bigint'')
  SET @TNOIOIL4FQVALUE =  @x.value(''(/Total_Number_Of_Interior_ObjectCells_In_Level4_For_QuerySample/text())[1]'', ''bigint'')
  SET @ITTCNTLGPVALUE =  @x.value(''(/Interior_To_Total_Cells_Normalized_To_Leaf_Grid_Percentage/text())[1]'', ''float'')
  SET @INTTTCNTLGPVALUE =  @x.value(''(/Intersecting_To_Total_Cells_Normalized_To_Leaf_Grid_Percentage/text())[1]'', ''float'')
  SET @BTTCNTLGPVALUE =  @x.value(''(/Border_To_Total_Cells_Normalized_To_Leaf_Grid_Percentage/text())[1]'', ''float'')
  SET @ACPONTLGPVALUE =  @x.value(''(/Average_Cells_Per_Object_Normalized_To_Leaf_Grid/text())[1]'', ''float'')
  SET @AOPGVALUE =  @x.value(''(/Average_Objects_PerLeaf_GridCell/text())[1]'', ''float'')
  SET @NORSBPFVALUE =  @x.value(''(/Number_Of_Rows_Selected_By_Primary_Filter/text())[1]'', ''bigint'')
  SET @NORSBIFVALUE =  @x.value(''(/Number_Of_Rows_Selected_By_Internal_Filter/text())[1]'', ''bigint'')
  SET @NOTSFICVALUE =  @x.value(''(/Number_Of_Times_Secondary_Filter_Is_Called/text())[1]'', ''bigint'')
  SET @NOROVALUE =  @x.value(''(/Number_Of_Rows_Output/text())[1]'', ''bigint'')
  SET @PORNBPFVALUE =  @x.value(''(/Percentage_Of_Rows_NotSelected_By_Primary_Filter/text())[1]'', ''float'')
  SET @POPFRSBIFVALUE =  @x.value(''(/Percentage_Of_Primary_Filter_Rows_Selected_By_Internal_Filter/text())[1]'', ''float'')


  UPDATE spat_idx_test_result
  SET
  num_cells = ' +@num_cell +',
  permut = ' +@permut +',
  g2t' +@a4 +' = ' +@time_str +',
  PF_EFF = @PFVALUE,
  IF_EFF = @IFVALUE,
  GRIDL1 = @GRIDL1VALUE,
  GRIDL2 = @GRIDL2VALUE,
  GRIDL3 = @GRIDL3VALUE,
  GRIDL4 = @GRIDL4VALUE,
  TPIR = @TPIRVALUE,
  TPIP = @TPIPVALUE,
  ANOIRPBR = @ANOIRPBRVALUE,
  TNOOCILFQ = @TNOOCILFQVALUE,
  TNOOCIL0II = @TNOOCIL0IIVALUE,
  TNOOCIL4II = @TNOOCIL4IIVALUE,
  TNOOCIL3FQ =  @TNOOCIL3FQVALUE,
  TNOOCIL4FQ =  @TNOOCIL4FQVALUE,
  TNOIOIL3FQ = @TNOIOIL3FQVALUE,
  TNOIOIL4FQ = @TNOIOIL4FQVALUE,
  ITTCNTLGP = @ITTCNTLGPVALUE,
  INTTTCNTLGP = @INTTTCNTLGPVALUE,
  BTTCNTLGP = @BTTCNTLGPVALUE,
  ACPONTLGP = @ACPONTLGPVALUE,
  AOPG = @AOPGVALUE,
  NORSBPF = @NORSBPFVALUE,
  NORSBIF = @NORSBIFVALUE,
  NOTSFIC = @NOTSFICVALUE,
  NORO = @NOROVALUE,
  PORNBPF = @PORNBPFVALUE,
  POPFRSBIF = @POPFRSBIFVALUE
  WHERE perm_id = ' +@perm_id
  )
  SET @a4 = @a4 + 1
  END
  SET @a2 = @a2 + 1
  END
  SET @a1 = @a1 + 1

  END
PRINT 'Testing of ' +@tabnm +' spatial index: ' +@idxnm +' is complete!'
GO

The hacking I did involved using the XML output of the index help stored procedure to write more descriptive results to the output table, in addition to the query times.

 

The stored procedure is executed like so:

 

DECLARE @BOUNDING VARCHAR(MAX) 
SET @BOUNDING = 'GEOMETRY::STGeomFromText(''POLYGON ((226805.072 3975572.527, 318101.215 3975985.165, 317894.896 3922239.073, 225360.839  3926571.772 , 226805.072 3975572.527))'', 26917)'

DECLARE @QUERY VARCHAR(MAX) 
SET @QUERY = 'GEOMETRY::STGeomFromText(''POLYGON ((247804.201 3943957.896, 29932.568 3943963.210, 247671.344 3942876.441, 247684.630 3943652.325, 247804.201 3943957.896))'', 26917)'

EXEC sp_tune_spatial_index 'TEST_GEOM', 'S1169_idx', 4096, 4096, @BOUNDING, @QUERY 
GO

In this example I'm only testing one cell size, 4096 (supplied as both the minimum and maximum), but you could supply a range such as 8 and 16; the procedure steps through every integer value in between.
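
For example, reusing the @BOUNDING and @QUERY variables from the batch above, this call would rebuild and test the index at every cells-per-object value from 8 through 16 (that's 81 grid permutations per value, so give it time):

EXEC sp_tune_spatial_index 'TEST_GEOM', 'S1169_idx', 8, 16, @BOUNDING, @QUERY
GO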

 

Results can be nicely reviewed with the following:

 

SELECT  
    perm_id as 'Permutation #'
      ,num_cells 'Cells Per Object'
      ,permut as 'Grids'
      ,g1t1 as 'ms to query entire geometry (Level 1)'
      ,g1t2 as 'ms to query entire geometry (Level 2)'
      ,g1t3 as 'ms to query entire geometry (Level 3)'
      ,g1t4 as 'ms to query entire geometry (Level 4)'
      ,g2t1 as 'ms to execute spatial query (Level 1)'
      ,g2t2 as 'ms to execute spatial query (Level 2)'
      ,g2t3 as 'ms to execute spatial query (Level 3)'
      ,g2t4 as 'ms to execute spatial query (Level 4)'
      ,PF_EFF as 'Primary Filter Efficiency'
      ,IF_EFF as 'Internal Filter Efficiency'
      ,GRIDL1 as 'Grid Size Level 1'
      ,GRIDL2 as 'Grid Size Level 2'
      ,GRIDL3 as 'Grid Size Level 3'
      ,GRIDL4 as 'Grid Size Level 4'
      ,TPIR as 'Total Primary Index Rows'
      ,TPIP as 'Total Primary Index Pages'
      ,ANOIRPBR as 'Average Number of Index Rows Per Base Row'
      ,TNOOCILFQ as 'Total Number of Object Cells in Level 0 For Query Sample'
   ,TNOOCIL3FQ as 'Total Number of Object Cells in Level 3 For Query Sample'
   ,TNOOCIL4FQ as 'Total Number of Object Cells in Level 4 For Query Sample'
      ,TNOOCIL0II as 'Total Number of Object Cells In Level 0 In Index'
      ,TNOOCIL4II as 'Total Number of Object Cells In Level 4 In Index'
   ,TNOIOIL3FQ as 'Total Number Of Interior ObjectCells In Level 3 For QuerySample'
   ,TNOIOIL4FQ as 'Total Number Of Interior ObjectCells In Level 4 For QuerySample'
   ,ITTCNTLGP as 'Interior To Total Cells Normalized To Leaf Grid Percentage'
   ,INTTTCNTLGP as 'Intersecting To Total Cells Normalized To Leaf Grid Percentage'
   ,BTTCNTLGP as 'Border To Total Cells Normalized To Leaf Grid Percentage'
   ,ACPONTLGP as 'Average Cells Per Object Normalized To Leaf Grid'
   ,AOPG as 'Average Objects PerLeaf GridCell'
   ,NORSBPF as 'Number Of Rows Selected By Primary Filter'
   ,NORSBIF as 'Number Of Rows Selected By Internal Filter'
   ,NOTSFIC as 'Number Of Times Secondary Filter Is Called'
   ,NORO as 'Number Of Rows Output'
   ,PORNBPF as 'Percentage Of Rows NotSelected By Primary Filter'
   ,POPFRSBIF as 'Percentage Of Primary Filter Rows Selected By Internal Filter'
FROM spat_idx_test_result
ORDER BY PF_EFF

 

 

 

 

 

 

[Results table: one row per grid-level permutation tested at 4,096 cells per object. The columns match the aliases in the SELECT statement above: permutation number, cells per object, grid combination, the eight query timings, primary and internal filter efficiency, grid sizes, and the remaining index statistics.]

This post doesn't even scratch the surface of Spatial Index Tuning. For example, if you're using SQL 2012, you could set the index to Auto Grid, which gives you 8 Cell Levels and supposedly is a good marriage for ArcGIS. You also want to look at other performance factors such as Internal Filter Efficiency. There's a lot of Witch Magic and Snake Oil Medicine to Spatial Index Tuning. Hope this blog post gets you started in the right direction!
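
For reference, a minimal sketch of that SQL 2012 auto-grid option against the same test table and index used above (GEOMETRY_AUTO_GRID picks the grid levels itself, so there's no GRIDS clause):

CREATE SPATIAL INDEX [S1169_idx]
ON [dbo].[TEST_GEOM]([SHAPE])
USING GEOMETRY_AUTO_GRID
WITH (BOUNDING_BOX = (227166.13, 3925740.74, 314851.6915, 3968047.64),
CELLS_PER_OBJECT = 16,
DROP_EXISTING = ON) ON [PRIMARY]
GO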

In Real-time Geodatabase Replication? Part 1 you were introduced to SQL real-time replication. One requirement of SQL Merge Replication is that the table being replicated must have a ROWGUIDCOL. This is easy with a cursor:

 

DECLARE 
@sql VARCHAR(500), 
@tableName VARCHAR(128)
DECLARE gandgCursor CURSOR 
FOR 
SELECT 
table_name
FROM 
SDE_column_registry
WHERE 
table_name like 'CR%' 
ORDER BY 
table_name ASC 
OPEN gandgCursor 
FETCH NEXT FROM gandgCursor  
INTO @tableName 
WHILE ( @@FETCH_STATUS = 0 ) 
BEGIN 
SET @sql = ' 
ALTER TABLE [dbo].['+ @tableName + '] 
ALTER COLUMN GlobalID ADD ROWGUIDCOL ;
' 
 PRINT 'Executing Statement - '+ @sql 
 EXECUTE ( @sql ) 
 FETCH NEXT FROM gandgCursor  
 INTO  @tableName 
 END 
 CLOSE gandgCursor 
 DEALLOCATE gandgCursor
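
A quick way to confirm the cursor did its job, assuming the same 'CR%' naming pattern:

--LIST EVERY COLUMN FLAGGED AS ROWGUIDCOL ON THE 'CR%' TABLES
SELECT OBJECT_NAME(object_id) AS table_name, name AS column_name
FROM sys.columns
WHERE is_rowguidcol = 1
AND OBJECT_NAME(object_id) LIKE 'CR%'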

Messin' up the scenery, breakin' my mind. Do this, don't do that, can't you read the LINE? In Hey Neighbor? What's Your Value?, Rasters, Pixels and Code Oh My!, and Using SQL to Auto-Populate XY Values, I introduced some SQL trickery to automate some data entry. Probably should have mentioned those methods mostly just work on points....

 

For our GIS data standard, we also want to collect the same administrative values, with the same automation, for line types such as roads, trails, and other linear features. Problem is, there are a lot of vertices (nodes, coordinate pairs) on a line; which one do you use for the SQL trickery?

 

There are some existing SQL Geography methods for manipulating well-known-text from a Geography line instance, but none of them give me a midpoint value, and they certainly don't lend themselves well to being part of a trigger.
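
For example, the built-in methods will happily hand you the start point or the Nth vertex of a line, just not the middle (same hypothetical SOMELINE table as below):

SELECT SHAPE.STStartPoint().Lat FROM [dbo].[SOMELINE]
SELECT SHAPE.STPointN(2).Long FROM [dbo].[SOMELINE]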

 

In order to easily get a line midpoint value from Geography, you'll need SQL Server Spatial Tools installed, which is as easy as registering a DLL on your database instance through SQL; it works on 2008 and 2012.
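
Registering it amounts to something along these lines; the path is an assumption (wherever you unzipped the download), and the project ships a Register.sql script that creates the assembly and its wrapper functions, including LocateAlongGeog, for you:

--PATH IS AN ASSUMPTION; POINT IT AT YOUR COPY OF THE DLL
CREATE ASSEMBLY SQLSpatialTools
FROM 'C:\Temp\SQLSpatialTools.dll'
WITH PERMISSION_SET = SAFE
GO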

 

A line midpoint can be queried as

 

SELECT dbo.LocateAlongGeog(SHAPE, SHAPE.STLength()/2).Lat FROM [dbo].[SOMELINE]
SELECT dbo.LocateAlongGeog(SHAPE, SHAPE.STLength()/2).Long FROM [dbo].[SOMELINE]

 

Wrapping that up in a trigger to populate the LAT, LON, and ELEVATION fields, we get:

 

CREATE TRIGGER [dbo].[SOMELINE_GEO]
ON [dbo].[SOMELINE]
AFTER INSERT, UPDATE
NOT FOR REPLICATION
AS
BEGIN
SET NOCOUNT ON;
UPDATE p SET
--MIDPOINT LONGITUDE AND LATITUDE VIA THE SPATIAL TOOLS FUNCTION
LON = CASE WHEN p.SHAPE IS NULL THEN NULL ELSE dbo.LocateAlongGeog(p.SHAPE, p.SHAPE.STLength()/2).Long END,
LAT = CASE WHEN p.SHAPE IS NULL THEN NULL ELSE dbo.LocateAlongGeog(p.SHAPE, p.SHAPE.STLength()/2).Lat END,
--ELEVATION SAMPLED FROM THE DEM RASTER AT THE MIDPOINT
ELEVATION = (SELECT
pdata.getValueByLoc(1, dbo.LocateAlongGeog(p.SHAPE, p.SHAPE.STLength()/2).Long, dbo.LocateAlongGeog(p.SHAPE, p.SHAPE.STLength()/2).Lat) FROM [dbo].[DEM10MP])
  FROM
    dbo.SOMELINE AS p
  INNER JOIN
    inserted AS i
    ON i.GlobalID = p.GlobalID
END
GO

 

Polygons don't require the "special function" and can simply be queried for midpoint values with:

 

SELECT SHAPE.EnvelopeCenter().Lat
FROM [SOMEPOLY]
SELECT SHAPE.EnvelopeCenter().Long
FROM [SOMEPOLY]

It's 4:20. Got a minute?

Posted by tpcolson, Jan 19, 2015

It's very easy as a GIS administrator to add lots and lots of fields to a feature class, and even easier to naively think that folks are going to populate or edit them! One common theme for me is date fields. We have edit date, create date, FGDC edit and create date, year...blah blah blah.

 

I wish there was just one onus, or even none, on editors to have to think about dates, and the database just...handled it.

 

This can be so. With a database trigger. Let's start with FGDC dates. If you're implementing some form of feature-level metadata, or some data mining that tags another metadata element with an FGDC source, create, or edit date, and you have editor tracking enabled, there's no reason why you should have to also populate the FGDC[whatever]DATE column as well.

 

CREATE TRIGGER [dbo].[SOMETABLE_DATE]
ON [dbo].[SOMETABLE]
AFTER INSERT, UPDATE
NOT FOR REPLICATION
AS BEGIN
SET NOCOUNT ON; 
UPDATE t
SET 
SRCDATEFGDC = (convert(varchar(8), t.SOURCEDATE, 112)), 
CREATEDATEFGDC = (convert(varchar(8), t.CREATEDATE, 112)), 
EDITDATEFGDC = (convert(varchar(8), t.EDITDATE, 112))
--JOIN TO inserted SO ONLY THE EDITED ROWS ARE TOUCHED
--(ASSUMES A GlobalID COLUMN, AS IN THE LINE TRIGGER EARLIER)
FROM [dbo].[SOMETABLE] AS t
INNER JOIN inserted AS i ON i.GlobalID = t.GlobalID
END
GO

 

Here we're taking the SOURCEDATE, CREATEDATE, and EDITDATE values, which are SQL datetime2, and converting them to a string formatted as yyyymmdd. The 112 style code is what controls the output format. See CAST and CONVERT (Transact-SQL) for a full list of date conversion styles.
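
For example, converting today's date with style 112:

SELECT convert(varchar(8), getdate(), 112)  -- returns something like '20150119'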

 

In addition, the user is also selecting a source date in this case, which may be different than the create date, and that is converted as well.

 

If you have a year column you can also

 

YEAR = CASE WHEN SOURCEDATE IS NULL THEN NULL ELSE YEAR(SOURCEDATE) END

 

with null value handling thrown in.

 

If you have editor tracking disabled for some reason (it often causes issues with Collector for ArcGIS), you could throw a default constraint of getdate() on your date column(s).
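
A minimal sketch, using the same hypothetical table and column:

ALTER TABLE [dbo].[SOMETABLE] ADD CONSTRAINT [DF_SOMETABLE_CREATEDATE]
DEFAULT (getdate()) FOR [CREATEDATE]
GO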

 

This is a personal blog and does not recommend, endorse, or support the methods described above. Alteration of data using SQL outside of the ESRI software stack, of course, is not supported and should not be applied to a production database without a thorough understanding and disaster recovery plan.

Do you have someone in your organization that YELLS WITH THEIR KEYBOARD? Everything is upper case? What about the rogue all-lower-case folks? Or even worse, the First-word-is-proper-case-all-other-words-are-lower-case ninjas.

 

I have a personal pet-peeve (OCD). Feature names in GIS should be Proper Case. Happy Valley Road. Not Happy valley road, not Happy valley Road, and definitely not HAPPY VALLEY ROAD.

 

I'd like to enforce Proper Case naming of those features regardless of how the text is cased by the editor. This can be accomplished with a simple Function and Trigger in SQL.

 

First create the following Function:

 

create function [dbo].[ProperCase](@Text as varchar(8000))
returns varchar(8000)
as
begin
   declare @Reset bit;
   declare @Ret varchar(8000);
   declare @i int;
   declare @c char(1);

   -- @Reset = 1 means the next letter starts a new word and gets upper-cased
   select @Reset = 1, @i = 1, @Ret = '';

   -- Walk the string one character at a time
   while (@i <= len(@Text))
    select @c = substring(@Text,@i,1),
               @Ret = @Ret + case when @Reset=1 then UPPER(@c) else LOWER(@c) end,
               -- Any non-letter (space, hyphen, digit) re-arms the upper-case flag
               @Reset = case when @c like '[a-zA-Z]' then 0 else 1 end,
               @i = @i + 1
   return @Ret
end
GO
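
A quick sanity check of the function:

SELECT dbo.ProperCase('HAPPY VALLEY ROAD')  -- returns 'Happy Valley Road'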

 

Then this Trigger:

 

CREATE TRIGGER [dbo].[NAME_UPDATE]
ON [dbo].[SOME_TABLE]
AFTER INSERT NOT FOR REPLICATION
AS BEGIN
SET NOCOUNT ON; 
-- Join on GlobalID so only the freshly inserted rows are proper-cased;
-- rows someone already fixed by hand are left alone
UPDATE t
SET
NAME = dbo.ProperCase(t.NAME)
FROM dbo.SOME_TABLE AS t
INNER JOIN inserted AS i ON i.GlobalID = t.GlobalID
END
GO

 

Note here that this only fires after an insert, not an update. There could be a legitimate reason why the YELLERS want something other than proper case, e.g. "ND Happy Valley Road" (ND for "North District"). This allows them (or me, after they YELL at me) to update that one feature without the trigger proper-casing the edit.

 

This is a personal blog and does not recommend, endorse, or support the methods described above. Alteration of data using SQL outside of the ESRI software stack, of course, is not supported and should not be applied to a production database without a thorough understanding and disaster recovery plan.

I have tried, and failed, many times to get geodatabase replication using ESRI tools to work, work reliably, or work at all. Perhaps it's all the un-authorized SQL tinkering I do. Even when I do get it working, I'm not a big fan of having to fire up Arc Toolbox and push buttons to make replication happen. Not a big fan of writing Python code, either.

 

With SQL Server, there IS a way to replicate an entire database, or parts of a database. The parts-of-a-database option is handled with Merge Replication, in which you define which tables to replicate and the criteria under which data is replicated. I like that option! Consider this environment:

 

I have a water quality geodatabase (in SQL), which is big, complex, and has lots of legacy stuff in it that I just don't feel like dealing with right now. Replicating or mirroring the entire database is not an option....because.....in a brilliant management decision, the "datacenter" was placed in a building very far from any modern telco hubs. On a good day I can actually download email attachments. The problem is, the users of this database are everywhere else but in this building.

 

More drama: We're drinking the Portal for ArcGIS Kool-Aid and are high on Collector for ArcGIS. Can't really connect my database to either of those technologies using carrier pigeons.

 

But I do have some very limited in capacity database and application servers in the "cloud". Certainly not the kind that can house a 7GB water quality database, but here's an idea. What if I could replicate the feature classes that are most needed operationally to these cloud platforms? Field users could get to their data, and I could consume it in my remote datacenter in other applications. All they need access to is the "dots on the map" for their mobile devices, desktop applications, and management access to web-based maps (what stream did that oil spill impact?).

 

Here's what we'll need:

 

  1. An obnoxiously massive and complex SDE/SQL database called "FISH";
  2. A feature class that's been around forever and has a ton of data called "TEST";
  3. A remote SDE/SQL database called "FISHREPL" which is the same SQL/SDE version as "FISH";
  4. A horrible internet connection.


This is a multi-part blog post, here are the steps:

Real-time Geodatabase Replication? Part 1

Real-time Geodatabase Replication? Part 2

Real-time Geodatabase Replication? Part 3

Real-time Geodatabase Replication? Part 4

Real-time Geodatabase Replication? Part 5

Real-time Geodatabase Replication? Part 6

 

 

And of course a few caveats: this only works with unversioned data, which places quite a limit on things. In some cases, though, the cons of having unversioned feature classes are not as great as the pros of having a real-time replication service.

 

Please bear in mind that this method of replication is extremely unsupported. I just did it because I like tinkering under the hood of SDE and wanted to see if it would work. It "sort of" does! In contrast to the out-of-the-box ESRI replication tools, though, this process resolves a lot of headaches for me. I can attach user-editors to a local SDE instance where "bulk" editing occurs, and SQL Merge Replication will fire those changes to the "cloud" SDE instance, where the rest of my organization can access (and edit) the data through Portal for ArcGIS and my mobile clients can perform edits and updates using Collector for ArcGIS. And it works continuously. Full-time. Even with file attachments enabled and 2 MB photos being attached to a point, those changes sync within 60 seconds. SQL replication in 2012 is very efficient at compressing data and ONLY sending deltas. When I generate a "snapshot" of a feature class with file attachments, SQL sees 50 MB of data that needs to be replicated; performing a versioned checkout of the same data using the Distributed Geodatabase tools results in a 220 MB file geodatabase.

 

But emphasis on "experimental". There is a lot of overhead to managing SQL Merge Replication and you can really screw up some operational data. If you get it working after following this blog series I suggest you spend some time deliberately breaking things and see if you can recover your data integrity! Otherwise stick to the ESRI replication tools. They work. When you push the button.....

 

 

 

This is a personal blog and does not recommend, endorse, or support the methods described above. Alteration of data using SQL outside of the ESRI software stack, of course, is not supported and should not be applied to a production database without a thorough understanding and disaster recovery plan.

Setting up a Publication Server

 

First a little side-step to prepare our GIS table for replication. More on the caveats later!

 

In order to participate in replication a SQL table will need  a column flagged as a ROWGUID column. In Dear GlobalID: I Hate You! Rants from an OCD data manager  we talked about the Global ID Unique Identifier. We're going to use it to tell the Replication engine that this column is also used to manage changes between the Publisher database and any subscribers. There are two requirements for this column:

 

  1. It has to default to newsequentialid();
  2. It has to be flagged as a ROWGUID column.
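
In T-SQL, those two requirements boil down to something like this sketch (note the GlobalID column will already have an ESRI-created default constraint that has to be dropped first, as covered in the GlobalID post):

ALTER TABLE [dbo].[TEST] ADD CONSTRAINT [DF_TEST_GlobalID]
DEFAULT (newsequentialid()) FOR [GlobalID]
GO
ALTER TABLE [dbo].[TEST] ALTER COLUMN GlobalID ADD ROWGUIDCOL
GO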

 


If you don't set the GlobalID to be the ROWGUID, when you create the publication a new ROWGUID column will be created for you, outside of the "ESRI stack", which will cause problems when you attempt to edit this feature class (as this new column has been created without the participation and consent of SDE).

 

 

In Microsoft SQL Merge Replication you first need a publication server, from which other SQL servers can "subscribe" to the replication articles it publishes.

 

In SSMS launch the New Publication Wizard and select the Fish Database as the publication database. Select Merge Publication as the publication type.

 


 

Since we're using the Geography storage type we don't need to publish any table other than the TEST base table. Select TEST as the Object to Publish.

 


Instinctively you'd think we need more tables than that, considering how Arc SDE manages data. The secret will be revealed in a few more posts. Don't filter any tables.

 


 

Configure the Snapshot Agent to run every hour (you can pick any schedule you like!).

 

 


 

Set the Snapshot Agent to use a domain account (ideal).

 


Then check Create the publication and Generate a script file. Not pictured, but one of the dialogs will ask where the Snapshot files will go; be sure to pick a network share that both servers will be able to access. It's very important that you check the box asking if you want to create the script that will generate the Publication Agent!

 


 

Make sure the Publication and Snapshot Agents are working by launching Replication Monitor and starting the Agent. The directory you picked to host the replication articles should start filling up with some files.

 


 

You should see a status of [100%] in the last action column.

 

 


 

Double-click on the agent and select Action -> Start Agent. Wait a few minutes and select Action -> Refresh. You should see another successfully completed Agent job.

 


Now review the Publisher Properties; it's good to see how all of the pieces are put together! In Replication Monitor, in the left window pane, select the FISH publication and right-click -> Properties.

 


 

Note in the Location of Snapshot Files that they are going to a UNC network share that the domain account used to run the agent has full access to.

 


 

We need to make a few changes though.

 

Convert filestream to MAX data types is set to False by default. If you attempt to Subscribe to this publication, any data in a Geography or Geometry column will be converted to a string during the replication process. Not good for GIS data! The only way to correct this is to delete the publication and recreate it programmatically. However, since you saved the script that created the publication, all you have to do is edit two lines of that script and re-run it!

 

 

use [FISH]
exec sp_replicationdboption @dbname = N'FISH', @optname = N'merge publish', @value = N'true'
GO
-- Adding the merge publication
use [FISH]
exec sp_addmergepublication @publication = N'FISH_REPL', 
@description = N'Merge publication of database ''FISH'' from Publisher ''INPGRSMS04TC''.', 
@sync_mode = N'native', @retention = 14, @allow_push = N'true', @allow_pull = N'true', 
@allow_anonymous = N'true', @enabled_for_internet = N'false', @snapshot_in_defaultfolder = N'true', 
@compress_snapshot = N'false', @ftp_port = 21, @ftp_subdirectory = N'ftp', @ftp_login = N'anonymous', 
@allow_subscription_copy = N'false', @add_to_active_directory = N'false', @dynamic_filters = N'false', 
@conflict_retention = 14, @keep_partition_changes = N'false', @allow_synctoalternate = N'false', 
@max_concurrent_merge = 0, @max_concurrent_dynamic_snapshots = 0, @use_partition_groups = null, 
@publication_compatibility_level = N'100RTM', @replicate_ddl = 1, @allow_subscriber_initiated_snapshot = N'false', 
@allow_web_synchronization = N'false', @allow_partition_realignment = N'true', @retention_period_unit = N'days',
 @conflict_logging = N'both', @automatic_reinitialization_policy = 0
GO

exec sp_addpublication_snapshot @publication = N'FISH_REPL', @frequency_type = 4, 
@frequency_interval = 14, @frequency_relative_interval = 1, @frequency_recurrence_factor = 0, 
@frequency_subday = 8, @frequency_subday_interval = 1, @active_start_time_of_day = 500, 
@active_end_time_of_day = 235959, @active_start_date = 0, @active_end_date = 0, 
@job_login = N'domain\account', @job_password = null, @publisher_security_mode = 1

use [FISH]
exec sp_addmergearticle @publication = N'FISH_REPL', @article = N'TEST', 
@source_owner = N'dbo', @source_object = N'TEST', @type = N'table', 
@description = null, @creation_script = null, @pre_creation_cmd = N'drop', 
-- Changed from the wizard's default of 0x000000010C034FD1
@schema_option = 0x000000000C034FD1, 
@identityrangemanagementoption = N'manual', 
@destination_owner = N'dbo', @force_reinit_subscription = 1, @column_tracking = N'false', 
@subset_filterclause = null, @vertical_partition = N'false', @verify_resolver_signature = 1, 
@allow_interactive_resolver = N'false', @fast_multicol_updateproc = N'true', @check_permissions = 0, 
@subscriber_upload_options = 0, @delete_tracking = N'true', @compensate_for_errors = N'false', 
-- Changed from the wizard's default of N'true'
@stream_blob_columns = N'false', 
@partition_options = 0
GO

 

In the code above, which was saved when the Publication Agent was first created, @schema_option has been changed to 0x000000000C034FD1 and @stream_blob_columns has been set to false.

 

Run the script, but note that you will have to re-create the agent job that runs the article generation and synchronization process. Right-click on the Publication in Replication Monitor, select Agent Security, and configure the agent to run using a domain service account.

 


 

Instead of re-creating the Snapshot Agent you could just:

 

-- The trailing 1,1 set @force_invalidate_snapshot and @force_reinit_subscription
sp_changemergearticle 'FISH_REPL', 'TEST', 'schema_option', '0x000000000C034FD1', 1, 1
GO
sp_changemergearticle 'FISH_REPL', 'TEST', 'stream_blob_columns', 'false', 1, 1
GO

 

And then re-initialize the snapshot.
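
Re-initialization can also be scripted; a sketch run at the Publisher, with a hypothetical subscriber server name:

EXEC sp_reinitmergesubscription
  @publication = N'FISH_REPL',
  @subscriber = N'REMOTESERVER',   -- hypothetical subscriber server
  @subscriber_db = N'FISHREPL',    -- the remote database from the checklist in part 1
  @upload_first = N'false'         -- discard pending subscriber changes
GO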

 

But it's a good idea to learn how to create the Publication using SQL.

 

 

Real-time Geodatabase Replication? Part 1

Real-time Geodatabase Replication? Part 2

Real-time Geodatabase Replication? Part 3

Real-time Geodatabase Replication? Part 4

Real-time Geodatabase Replication? Part 5

Real-time Geodatabase Replication? Part 6

 

 

This is a personal blog and does not recommend, endorse, or support the methods described above. Alteration of data using SQL outside of the ESRI software stack, of course, is not supported and should not be applied to a production database without a thorough understanding and disaster recovery plan.

Setting up a Subscription Server

 

On your remote database server, create a NEW SDE database called "FEMPFISH" (or something other than fish).

 

Once the database has been created, manually replicate the schema: right-click in ArcCatalog, select Import, and import the TEST feature class definition from the FISH (publication) database. Make sure you select the Geography storage type. Everything about the two feature classes must be the same! If your feature class has attribute domains, the best way to do this is with an XML export/import; otherwise your feature class on the remote (subscription) server won't have attribute domain pick-lists for editors to use!

 

Note that I'm not using Feature Datasets here! More complexity....

 

On the remote SQL server activate the New Subscription Wizard.

 

Select the Local SQL Server that hosts the FISH database as the Publisher.

 


 

I prefer to run all agents at the Distributor as this a) keeps the overhead on my server that has the capacity to do so and b) makes managing many subscriptions a lot easier (one interface).

 


 

Select the Subscription Database as the one you created on the remote server to host the replicated feature class.

 


In the Agent Security dialog you SHOULD use the SAME domain account you used to create the Publisher Agent (good luck with any other security model....), and set the agent schedule to run continuously.

 


 

 

 


 

Real-time Geodatabase Replication? Part 1

Real-time Geodatabase Replication? Part 2

Real-time Geodatabase Replication? Part 3

Real-time Geodatabase Replication? Part 4

Real-time Geodatabase Replication? Part 5

Real-time Geodatabase Replication? Part 6

 

 

This is a personal blog and does not recommend, endorse, or support the methods described above. Alteration of data using SQL outside of the ESRI software stack, of course, is not supported and should not be applied to a production database without a thorough understanding and disaster recovery plan.

Making Sure the Agent is Working 

 

Before we dive into testing replicated CRUD operations on either SDE instance we need to make sure the Agent(s) are working.

 

In Replication Monitor select the Agent tab and right-click on Snapshot Agent and select Start Agent. It should report "Running".

 

When it reports "Completed", double-click on it to review the details of the run.

 


A successful run indicates that the Agent is reading the local (source) table TEST for changes and publishing change articles to the server share.

 

Select the All Subscriptions tab and double-click on the subscription to check its status.

 


Now create lots of new points in the TEST feature class on the local (publication) SDE instance. Wait a few minutes and refresh the Subscription status in Monitor.

 


The subscription status should indicate that inserts on the FISH database are being replicated to the FEMPFISH database. Success! Try some updates and deletes.
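
A crude sanity check is to compare row counts on both instances; after a sync cycle they should match:

-- On the publisher
SELECT COUNT(*) AS FeatureCount FROM [FISH].[dbo].[TEST]
-- On the subscriber
SELECT COUNT(*) AS FeatureCount FROM [FEMPFISH].[dbo].[TEST]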

 

However, it is very likely you'll see some synchronization errors related to our earlier issue of the Geography column being converted to text. While we did update the properties of the Publication Agent to correctly merge the spatial column, we need to reinitialize the subscription agent to recognize this change.

 


 

To resolve this:

 

Restart the Publication Agent.

 


 

After it has completed its job with 100% success, stop and restart the sync agent.

 


 

Real-time Geodatabase Replication? Part 1

Real-time Geodatabase Replication? Part 2

Real-time Geodatabase Replication? Part 3

Real-time Geodatabase Replication? Part 4

Real-time Geodatabase Replication? Part 5

Real-time Geodatabase Replication? Part 6

 

 

This is a personal blog and does not recommend, endorse, or support the methods described above. Alteration of data using SQL outside of the ESRI software stack, of course, is not supported and should not be applied to a production database without a thorough understanding and disaster recovery plan.

Testing CRUD Operations on the Publisher

 

Let's make some edits on the Publisher (local) SDE and see if they show up on our remote SDE instance.

 

In order to edit non-versioned data you will need to set your editor options to allow un-versioned editing.

 


In Arc Map, add one point to the TEST feature class in the local FISH database. In the Replication Monitor you should see that insert fire to the remote database within 60 seconds.

 


 

If you add the remote TEST feature class to Arc Map you should see the exact same point.

 

Perform an attribute edit of the new point. In the Replication Monitor you should see that update fire to the remote database within 60 seconds.

 


 

Delete the point. In the Replication Monitor you should see that delete fire to the remote database within 60 seconds.

 


 

Real-time Geodatabase Replication? Part 1

Real-time Geodatabase Replication? Part 2

Real-time Geodatabase Replication? Part 3

Real-time Geodatabase Replication? Part 4

Real-time Geodatabase Replication? Part 5

Real-time Geodatabase Replication? Part 6

 

 

This is a personal blog and does not recommend, endorse, or support the methods described above. Alteration of data using SQL outside of the ESRI software stack, of course, is not supported and should not be applied to a production database without a thorough understanding and disaster recovery plan.

Schema Changes

 

In Arc Catalog add a new field to the local instance of TEST.

 

In the Replication Monitor you should see that schema change fire to the remote database within 60 seconds.

 

In the remote instance of TEST you should see "ANEWFIELD" in its properties in Arc Catalog. Make some attribute edits to this field on the local instance and watch them fire.

 

Here's why we created an empty version of TEST on the remote SDE instance and ONLY replicate the TEST table. In the SDE "stack" there are lots and lots and lots of tables, functions, and stored procedures that handle things. Had we attempted to replicate TEST from the Publisher to the Subscriber without first having created the TEST feature class on the subscriber, none of the SDE dependencies would have been created and we wouldn't be able to use the remote TEST feature class in any GIS applications.

 

In this manner we can have a fully synchronized copy of TEST on both SDE instances, yet the SDE "stack" on each instance independently manages the feature class without any interference caused by replication.

 

Real-time Geodatabase Replication? Part 1

Real-time Geodatabase Replication? Part 2

Real-time Geodatabase Replication? Part 3

Real-time Geodatabase Replication? Part 4

Real-time Geodatabase Replication? Part 5

Real-time Geodatabase Replication? Part 6

 

 

This is a personal blog and does not recommend, endorse, or support the methods described above. Alteration of data using SQL outside of the ESRI software stack, of course, is not supported and should not be applied to a production database without a thorough understanding and disaster recovery plan.

 

If you've used any component of ESRI enterprise technology, you've encountered the ubiquitous Global ID unique identifier.

 

Probably by right-clicking on a feature class, selecting "Add Global IDs", and forgetting about it.

 

This post is more of a rant and rave against random GUIDs than anything else. You certainly need a GUID column if you're replicating data or performing disconnected edits.

 

I just absolutely hate them! My problem with the Global ID column is how ESRI implements it. Not only is it unique (as it should be), it's random. I use SDE/SQL data for far more than showing dots on a map. We link to that data with other applications, such as those that manage the non-GIS data that is attached to that dot on the map. In some applications I have dozens of relationships and tables hanging off of that "GIS table". All accomplished with crafty use of the Global ID column as a way to relate non-GIS data to the GIS data. Sure, I could use something else, but then I wouldn't have such an attention-getting blog post title.

 

My problem with the Global ID column is its randomness. Imagine a feature class with 10,000 points (I have a few with 500,000). That's 10,000 random GUIDs. Now link to that table using that column, or sort it, or filter it. Throw a clustered (or non-clustered) index on the Global ID column. Those random GUIDs sure slow things down, don't they? Every new row lands on a random index page, so the index fragments and page-splits constantly.
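
You don't have to take my word for it; watch the fragmentation pile up with the index physical-stats DMV:

-- Fragmentation for all indexes on TEST; random GUID keys drive this number up fast
SELECT index_id, avg_fragmentation_in_percent, page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.TEST'), NULL, NULL, 'LIMITED')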

 

Let's see how this works. When you do that right-clicking and forgetting, SDE adds a constraint to the Global ID field:

 

ALTER TABLE [dbo].[TEST] ADD  DEFAULT ('{00000000-0000-0000-0000-000000000000}') FOR [GlobalID]

 

Every new feature added to that feature class calls a stored procedure, which takes a SQL NEWID() value (a GUID) and converts it to a string:

 

CREATE PROCEDURE [dbo].[next_globalid]
@guid NVARCHAR(38) OUTPUT
AS SET NOCOUNT ON
BEGIN
SELECT @guid =  '{' + CAST (NEWID() AS NVARCHAR(38)) + '}'
END
GO

 

But what if we could make the GUID sequential, not random, but still unique? Such a thing exists, and a search of the internet will reveal a very heated debate, with many opinions firmly rooted for or against sequential GUIDs. Not for here....

 

Trying to alter the SP to default to the SQL NEWSEQUENTIALID() doesn't work...

 

CREATE PROCEDURE [dbo].[next_globalid]
@guid NVARCHAR(38) OUTPUT
AS SET NOCOUNT ON
BEGIN
SELECT @guid =  '{' + CAST (NEWSEQUENTIALID() AS NVARCHAR(38)) + '}'
END
GO
The newsequentialid() built-in function can only be used in a DEFAULT expression for a column of type 'uniqueidentifier' in a CREATE TABLE or ALTER TABLE statement. It cannot be combined with other operators to form a complex scalar expression.

So here's my workaround: put a trigger in between the delta and the base table and ignore the ESRI SP that writes the random GUID. We can do this with an INSTEAD OF INSERT trigger and a change to the default of the Global ID column.

 

First:

 

ALTER TABLE [dbo].[TEST] DROP CONSTRAINT [DF__TEST__GlobalID__65FFA925]
GO
ALTER TABLE [dbo].[TEST] ADD  CONSTRAINT [DF__TEST__GlobalID__65FFA925]  DEFAULT (newsequentialid()) FOR [GlobalID]
GO
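
One gotcha: the auto-generated constraint name (DF__TEST__GlobalID__65FFA925 above) will be different in your database, so look yours up first:

SELECT dc.name
FROM sys.default_constraints dc
JOIN sys.columns c
  ON c.object_id = dc.parent_object_id
  AND c.column_id = dc.parent_column_id
WHERE dc.parent_object_id = OBJECT_ID('dbo.TEST')
  AND c.name = 'GlobalID'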

 

Then:

 

CREATE TRIGGER [dbo].[TEST_GLOBAL_ID]
ON [dbo].[TEST]
INSTEAD OF INSERT NOT FOR REPLICATION
AS BEGIN
SET NOCOUNT ON; 

-- Insert every column EXCEPT GlobalID so the column's newsequentialid()
-- default fires instead of the ESRI next_globalid stored procedure
INSERT [TEST](
OBJECTID, SOMEFIELD, SHAPE, LON, LAT, QuadName, Watershed, County, State, PARKDISTRICT, ELEVATION, StreamName, RiverOrder
)
SELECT 
OBJECTID, SOMEFIELD, SHAPE, LON, LAT, QuadName, Watershed, County, State, PARKDISTRICT, ELEVATION, StreamName, RiverOrder
FROM inserted;
END
GO

Add a couple of new points and

 

SELECT 
[GlobalID]
FROM [FISH].[dbo].[TEST]

 

 

GlobalID
C10EB116-8B14-45AB-924D-0C734E5AB5B6
61AE23FA-F02D-45C1-991D-571B77592014
0695789D-35A7-4BE4-B5F6-5EAF68D2A50B
5A20B628-6048-4D48-8380-AC005A0E70EC
CF52E6DE-5F60-456E-9DEF-C006D9BBD348
58F80A07-F8A8-4D62-BBB3-D012EA781F0C
5E7B9C91-2891-E411-B57C-E41F134196DA
BE30E498-2891-E411-B57C-E41F134196DA
BF30E498-2891-E411-B57C-E41F134196DA
C030E498-2891-E411-B57C-E41F134196DA
C130E498-2891-E411-B57C-E41F134196DA
38C5A6F2-60FF-4C39-BF37-F7AFCBDFDE90

 

Note how my new points (the five GUIDs sharing the -2891-E411-B57C-E41F134196DA tail) contain sequential IDs. I'm using an INSTEAD OF trigger, versus an AFTER INSERT trigger, because I want this to occur BEFORE the row hits the base table.

 

Unfortunately, this "alteration" is not very robust and can't be used if your data is versioned WITHOUT moving edits to base, or not versioned at all (as you need when using Collector for ArcGIS). Also not discussed here is my very compelling reason for using the Global ID field in some of the non-GIS applications that use these data. Let's chalk this up to the "tinkering" category. ESRI I'm sure has a very good reason for implementing the Global ID default like they do, and this post surely will foster some heated debate and comments.

 

This, combined with the after update/insert trigger presented in Hey Neighbor? What's Your Value?, will result in a nice, neat feature class where almost all of the attributes are automatically populated and your Global IDs are ordered.

 

 

 

This is a personal blog and does not recommend, endorse, or support the methods described above. Alteration of data using SQL outside of the ESRI software stack, of course, is not supported and should not be applied to a production database without a thorough understanding and disaster recovery plan.