GIS Life Blog - Page 2


Latest Activity (450 Posts)
ThomasColson

Setting up a Publication Server

First a little side-step to prepare our GIS table for replication. More on the caveats later!

In order to participate in replication, a SQL table needs a column flagged as a ROWGUID column. In Dear GlobalID: I Hate You! Rants from an OCD data manager we talked about the GlobalID unique identifier. We're going to use it to tell the replication engine that this column is also used to manage changes between the Publisher database and any Subscribers. There are two requirements for this column:

  1. It has to default to newsequentialid();
  2. It has to be flagged as a ROWGUID column.

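If you'd rather script those two requirements than set them through the table designer, here's a minimal sketch. It assumes the GlobalID column already exists on dbo.TEST; the auto-generated default constraint name below is illustrative, so look yours up first.

use [FISH]
-- Swap the random-GUID default for a sequential one (constraint name varies per database):
ALTER TABLE [dbo].[TEST] DROP CONSTRAINT [DF__TEST__GlobalID__65FFA925];
ALTER TABLE [dbo].[TEST] ADD CONSTRAINT [DF__TEST__GlobalID__65FFA925]
    DEFAULT (newsequentialid()) FOR [GlobalID];
-- Flag GlobalID as the ROWGUID column that merge replication will use:
ALTER TABLE [dbo].[TEST] ALTER COLUMN [GlobalID] ADD ROWGUIDCOL;
GO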

If you don't set the GlobalID to be the ROWGUID column, a new ROWGUID column will be created for you when you create the publication, outside of the "ESRI stack", and that will cause problems when you attempt to edit this feature class (the new column was created without the participation and consent of SDE).

Microsoft SQL Server merge replication starts with a publication server, from which other SQL Servers can "subscribe" to the replication articles it publishes.

In SSMS, launch the New Publication Wizard and select the FISH database as the publication database. Select Merge publication as the publication type.


Since we're using the Geography storage type, we don't need to publish any table other than the "Test" base table. Select "Test" as the object to publish.


Instinctively you might think we need more tables than that, considering how ArcSDE manages data. The secret will be revealed in a few more posts. Don't filter any tables.


Configure the Snapshot Agent to run every hour (you can pick any schedule you like!).


Set the Snapshot Agent to use a domain account (ideal).


Then check Create the publication and Generate a script file. Not pictured, but one of the dialogs will ask where the snapshot files should go. Be sure to pick a network share that both servers can access. It's very important that you check the box asking if you want to create the script that generates the publication!


Make sure the publication and Snapshot Agent are working by launching Replication Monitor and starting the agent. The directory you picked to host the replication articles should start filling up with files.


You should see a status of [100%] in the last action column.


Double-click on the agent and select Action -> Start Agent. Wait a few minutes and select Action -> Refresh. You should see another successfully completed Agent job.
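
The same thing can be done in T-SQL: the Snapshot Agent runs as a SQL Server Agent job on the Distributor. A quick sketch; the job name below is hypothetical, so query msdb for yours first.

-- Find the snapshot agent job for this publication:
SELECT name FROM msdb.dbo.sysjobs WHERE name LIKE '%FISH_REPL%';
-- Then start it by name (the name here is illustrative):
EXEC msdb.dbo.sp_start_job @job_name = N'INPGRSMS04TC-FISH-FISH_REPL-1';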


Now review the publisher properties; it's good to see how all of the pieces are put together! In Replication Monitor, select the publication (FISH) in the left window pane and right-click -> Properties.

Note in Location of snapshot files that the files are going to a UNC network share to which the domain account used to run the agent has full access.

We need to make a few changes though.

Convert filestream to MAX data types is set to False by default. If you attempt to subscribe to this publication, any data in a Geography or Geometry column will be converted to a string during the replication process. Not good for GIS data! The only way to correct this is to delete the publication and recreate it programmatically. However, since you saved the script that created the publication, all you have to do is edit two lines of that script and re-run it!

use [FISH]
exec sp_replicationdboption @dbname = N'FISH', @optname = N'merge publish', @value = N'true'
GO
-- Adding the merge publication
use [FISH]
exec sp_addmergepublication @publication = N'FISH_REPL', 
@description = N'Merge publication of database ''FISH'' from Publisher ''INPGRSMS04TC''.', 
@sync_mode = N'native', @retention = 14, @allow_push = N'true', @allow_pull = N'true', 
@allow_anonymous = N'true', @enabled_for_internet = N'false', @snapshot_in_defaultfolder = N'true', 
@compress_snapshot = N'false', @ftp_port = 21, @ftp_subdirectory = N'ftp', @ftp_login = N'anonymous', 
@allow_subscription_copy = N'false', @add_to_active_directory = N'false', @dynamic_filters = N'false', 
@conflict_retention = 14, @keep_partition_changes = N'false', @allow_synctoalternate = N'false', 
@max_concurrent_merge = 0, @max_concurrent_dynamic_snapshots = 0, @use_partition_groups = null, 
@publication_compatibility_level = N'100RTM', @replicate_ddl = 1, @allow_subscriber_initiated_snapshot = N'false', 
@allow_web_synchronization = N'false', @allow_partition_realignment = N'true', @retention_period_unit = N'days',
 @conflict_logging = N'both', @automatic_reinitialization_policy = 0
GO

exec sp_addpublication_snapshot @publication = N'FISH_REPL', @frequency_type = 4, 
@frequency_interval = 14, @frequency_relative_interval = 1, @frequency_recurrence_factor = 0, 
@frequency_subday = 8, @frequency_subday_interval = 1, @active_start_time_of_day = 500, 
@active_end_time_of_day = 235959, @active_start_date = 0, @active_end_date = 0, 
@job_login = N'domain\account', @job_password = null, @publisher_security_mode = 1

use [FISH]
exec sp_addmergearticle @publication = N'FISH_REPL', @article = N'TEST', 
@source_owner = N'dbo', @source_object = N'TEST', @type = N'table', 
@description = null, @creation_script = null, @pre_creation_cmd = N'drop', 
-- Change 0x000000010C034FD1 to 0x000000000C034FD1
@schema_option = 0x000000000C034FD1, 
@identityrangemanagementoption = N'manual', 
@destination_owner = N'dbo', @force_reinit_subscription = 1, @column_tracking = N'false', 
@subset_filterclause = null, @vertical_partition = N'false', @verify_resolver_signature = 1, 
@allow_interactive_resolver = N'false', @fast_multicol_updateproc = N'true', @check_permissions = 0, 
@subscriber_upload_options = 0, @delete_tracking = N'true', @compensate_for_errors = N'false', 
-- Change true to false
@stream_blob_columns = N'false', 
@partition_options = 0
GO

In the script above, which was saved when the publication was first created, @schema_option has been changed to 0x000000000C034FD1 and @stream_blob_columns has been set to false.

Run the script, but note that you will have to re-generate the agent job that runs the article generation and synchronization process. Right-click on the publication in Replication Monitor, select Agent Security, and configure the agent to run using a domain service account.


Instead of re-creating the Snapshot Agent you could just:

sp_changemergearticle 'FISH_REPL', 'TEST', 'schema_option', '0x000000000C034FD1', 1, 1
GO
sp_changemergearticle 'FISH_REPL', 'TEST', 'stream_blob_columns', 'false', 1, 1
GO

And then re-initialize the snapshot.
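
Re-initializing can be scripted as well. A sketch using the merge reinit procedure, run at the Publisher in the publication database:

use [FISH]
-- Mark all subscriptions to the publication for re-initialization;
-- @upload_first = 'false' discards any pending Subscriber changes.
EXEC sp_reinitmergesubscription
    @publication = N'FISH_REPL',
    @subscriber = N'all',
    @upload_first = N'false';
GO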

But it's a good idea to learn how to create the Publication using SQL.

Real-time Geodatabase Replication? Part 1

Real-time Geodatabase Replication? Part 2

Real-time Geodatabase Replication? Part 3

Real-time Geodatabase Replication? Part 4

Real-time Geodatabase Replication? Part 5

Real-time Geodatabase Replication? Part 6

This is a personal blog and does not recommend, endorse, or support the methods described above. Alteration of data using SQL outside of the ESRI software stack, of course, is not supported and should not be applied to a production database without a thorough understanding and disaster recovery plan.

ThomasColson

Setting up a Subscription Server

On your remote database server, create a NEW SDE database called "FEMPFISH" (or something other than fish).

Once the database has been created, manually replicate the schema by right-clicking, selecting Import, and importing the TEST feature class definition from the FISH (publication) database. Make sure you select the Geography storage type. Everything about the two feature classes must be the same! If your feature class has attribute domains, the best way to do this is with an XML export/import; otherwise your feature class on the remote (subscription) server won't have attribute domain pick-lists for editors to use!

Note that I'm not using Feature Datasets here! More complexity....

On the remote SQL server activate the New Subscription Wizard.

Select the local SQL Server that hosts the FISH database as the Publisher.


I prefer to run all agents at the Distributor, as this (a) keeps the overhead on my server, which has the capacity for it, and (b) makes managing many subscriptions a lot easier (one interface).


Select the Subscription Database as the one you created on the remote server to host the replicated feature class.


For Agent Security, you SHOULD use the SAME domain account you used to create the Publisher agent (good luck with any other security model), and set the agent schedule to run continuously.

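For reference, the wizard's choices boil down to two stored procedure calls at the Publisher (for a push subscription). A hedged sketch; the Subscriber server name below is a placeholder:

use [FISH]
EXEC sp_addmergesubscription
    @publication = N'FISH_REPL',
    @subscriber = N'REMOTESERVER',      -- placeholder Subscriber name
    @subscriber_db = N'FEMPFISH',
    @subscription_type = N'Push',
    @sync_type = N'Automatic';
-- The agent job that performs the merge (@frequency_type 64 = run continuously):
EXEC sp_addmergepushsubscription_agent
    @publication = N'FISH_REPL',
    @subscriber = N'REMOTESERVER',
    @subscriber_db = N'FEMPFISH',
    @job_login = N'domain\account',
    @job_password = N'<password>',
    @frequency_type = 64;
GO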

Real-time Geodatabase Replication? Part 1

Real-time Geodatabase Replication? Part 2

Real-time Geodatabase Replication? Part 3

Real-time Geodatabase Replication? Part 4

Real-time Geodatabase Replication? Part 5

Real-time Geodatabase Replication? Part 6

This is a personal blog and does not recommend, endorse, or support the methods described above. Alteration of data using SQL outside of the ESRI software stack, of course, is not supported and should not be applied to a production database without a thorough understanding and disaster recovery plan.

ThomasColson

Making Sure the Agent is Working 

Before we dive into testing replicated CRUD operations on either SDE instance we need to make sure the Agent(s) are working.

In Replication Monitor, select the Agents tab, right-click on the Snapshot Agent, and select Start Agent. It should report "Running".

When it reports "Completed", double-click on it to review the agent history.


A successful run indicates that the agent is reading the local (source) table TEST, picking up any changes, and publishing change articles to the server share.

Select the All Subscriptions tab and double-click on the subscription to check its status.


Now create lots of new points in the TEST feature class on the local (publication) SDE instance. Wait a few minutes and refresh the Subscription status in Monitor.

You should see that inserts on the FISH database are being replicated to the FEMPFISH database. Success! Try some updates and deletes.
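
Under the hood, merge replication records every tracked change in MSmerge_* system tables in the publication database, so you can confirm your edits are being captured without opening Replication Monitor:

use [FISH]
-- Inserts and updates are tracked here:
SELECT COUNT(*) AS tracked_changes FROM dbo.MSmerge_contents;
-- Deletes are tracked here:
SELECT COUNT(*) AS tracked_deletes FROM dbo.MSmerge_tombstone;
GO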

However, it is very likely you'll see some errors related to our earlier issue of the Geography column being converted to text. While we did update the publication properties to correctly merge the spatial column, we need to reinitialize the subscription to recognize this change.


To resolve this:

Restart the Publication Agent.


After it has completed its job with 100% success, stop and restart the sync agent.


Real-time Geodatabase Replication? Part 1

Real-time Geodatabase Replication? Part 2

Real-time Geodatabase Replication? Part 3

Real-time Geodatabase Replication? Part 4

Real-time Geodatabase Replication? Part 5

Real-time Geodatabase Replication? Part 6

This is a personal blog and does not recommend, endorse, or support the methods described above. Alteration of data using SQL outside of the ESRI software stack, of course, is not supported and should not be applied to a production database without a thorough understanding and disaster recovery plan.

ThomasColson

Testing CRUD Operations on the Publisher

Let's make some edits on the Publisher (local) SDE and see if they show up on our remote SDE instance.

In order to edit non-versioned data you will need to set your editor options to allow unversioned editing.


In ArcMap, add one point to the local FISH feature class. In Replication Monitor you should see that insert fire to the remote database within 60 seconds.


If you add the remote FISH feature class to ArcMap you should see the exact same point.

Perform an attribute edit of the new point. In the Replication Monitor you should see that update fire to the remote database within 60 seconds.


Delete the point. In the Replication Monitor you should see that delete fire to the remote database within 60 seconds.

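A simple sanity check after each round of edits is to compare row counts on the two instances; after a successful merge they should match:

-- On the Publisher:
SELECT COUNT(*) AS publisher_rows FROM [FISH].[dbo].[TEST];
-- On the Subscriber:
SELECT COUNT(*) AS subscriber_rows FROM [FEMPFISH].[dbo].[TEST];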

Real-time Geodatabase Replication? Part 1

Real-time Geodatabase Replication? Part 2

Real-time Geodatabase Replication? Part 3

Real-time Geodatabase Replication? Part 4

Real-time Geodatabase Replication? Part 5

Real-time Geodatabase Replication? Part 6

This is a personal blog and does not recommend, endorse, or support the methods described above. Alteration of data using SQL outside of the ESRI software stack, of course, is not supported and should not be applied to a production database without a thorough understanding and disaster recovery plan.

ThomasColson

Schema Changes

In ArcCatalog, add a new field to the local instance of FISH.

In Replication Monitor you should see that schema change fire to the remote database within 60 seconds.

In the remote instance of TEST you should see "ANEWFIELD" in its properties in ArcCatalog. Make some attribute edits to this field on the local instance and watch them fire.

Here's why we created an empty version of TEST on the remote SDE instance and ONLY replicate the TEST table: in the SDE "stack" there are lots and lots of tables, functions, and stored procedures that handle things behind the scenes. Had we attempted to replicate TEST from the Publisher to the Subscriber without first creating the TEST feature class on the Subscriber, none of the SDE dependencies would have been created, and we wouldn't be able to use the remote TEST feature class in any GIS applications.

In this manner we can have a fully synchronized copy of TEST on both SDE instances, yet the SDE "stack" on each instance independently manages the feature class without any interference from replication.
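
To confirm the DDL actually replicated, you can check for the new column on both instances with a quick metadata query (column name from the example above):

SELECT COLUMN_NAME, DATA_TYPE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = 'dbo'
  AND TABLE_NAME = 'TEST'
  AND COLUMN_NAME = 'ANEWFIELD';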

Real-time Geodatabase Replication? Part 1

Real-time Geodatabase Replication? Part 2

Real-time Geodatabase Replication? Part 3

Real-time Geodatabase Replication? Part 4

Real-time Geodatabase Replication? Part 5

Real-time Geodatabase Replication? Part 6

This is a personal blog and does not recommend, endorse, or support the methods described above. Alteration of data using SQL outside of the ESRI software stack, of course, is not supported and should not be applied to a production database without a thorough understanding and disaster recovery plan.

ThomasColson

If you've used any component of ESRI enterprise technology, you've encountered the ubiquitous Global ID unique identifier.

Probably by right-clicking on a feature class, selecting "Add Global IDs", and forgetting about it.

This post is more of a rant and rave against random GUIDs than anything else. You certainly need a GUID column if you're replicating data or performing disconnected edits.

I just absolutely hate them! My problem with the Global ID column is how ESRI implements it. Not only is it unique (as it should be), it's random. I use SDE/SQL data for far more than showing dots on a map. We link to that data with other applications, such as those that manage the non-GIS data attached to that dot on the map. In some applications I have dozens of relationships and tables hanging off of that "GIS table", all accomplished with crafty use of the Global ID column as a way to relate non-GIS data to the GIS data. Sure, I could use something else, but then I wouldn't have such an attention-getting blog post title.

My problem with the Global ID column is its randomness. Imagine a feature class with 10,000 points (I have a few with 500,000). That's 10,000 random GUIDs. Now link to that table using that column, or sort it, or filter it. Throw a clustered (or non-clustered) index on the Global ID column. Those random GUIDs sure slow things down, don't they?
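
You can watch the fragmentation happen with the index physical-stats DMV. A sketch, assuming the FISH database from the replication posts; index names will vary:

use [FISH]
-- Fragmentation of every index on dbo.TEST; an index keyed on a random
-- GUID fragments quickly as inserts land on random pages.
SELECT i.name AS index_name, ps.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(
        DB_ID(N'FISH'), OBJECT_ID(N'dbo.TEST'), NULL, NULL, 'LIMITED') AS ps
INNER JOIN sys.indexes AS i
    ON i.object_id = ps.object_id AND i.index_id = ps.index_id;
GO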

Let's see how this works. When you do that right-clicking and forgetting, SDE adds a constraint to the Global ID field:

ALTER TABLE [dbo].[TEST] ADD  DEFAULT ('{00000000-0000-0000-0000-000000000000}') FOR [GlobalID]

Every new feature added to that feature class calls a stored procedure which takes a SQL NEWID() value (a GUID) and converts it to a string.

CREATE PROCEDURE [dbo].[next_globalid]
@guid NVARCHAR(38) OUTPUT
AS SET NOCOUNT ON
BEGIN
SELECT @guid =  '{' + CAST (NEWID() AS NVARCHAR(38)) + '}'
END
GO

But what if we could make the GUID sequential, not random, but still unique? Such a thing exists, and a search of the internet will reveal a very heated debate, with many opinions firmly rooted for or against sequential GUIDs. Not for here....

Trying to alter the SP to default to the SQL NEWSEQUENTIALID() doesn't work:

CREATE PROCEDURE [dbo].[next_globalid]
@guid NVARCHAR(38) OUTPUT
AS SET NOCOUNT ON
BEGIN
SELECT @guid =  '{' + CAST (NEWSEQUENTIALID() AS NVARCHAR(38)) + '}'
END
GO
This fails with: "The newsequentialid() built-in function can only be used in a DEFAULT expression for a column of type 'uniqueidentifier' in a CREATE TABLE or ALTER TABLE statement. It cannot be combined with other operators to form a complex scalar expression."

So here's my workaround: put a trigger between the delta and the base table and ignore the ESRI SP that writes the random GUID. We can do this with an INSTEAD OF INSERT trigger and by changing the default of the Global ID column.

First:

ALTER TABLE [dbo].[TEST] DROP CONSTRAINT [DF__TEST__GlobalID__65FFA925]
GO
ALTER TABLE [dbo].[TEST] ADD  CONSTRAINT [DF__TEST__GlobalID__65FFA925]  DEFAULT (newsequentialid()) FOR [GlobalID]
GO

Then:

CREATE TRIGGER [dbo].[TEST_GLOBAL_ID]
ON [dbo].[TEST]
INSTEAD OF INSERT NOT FOR REPLICATION
AS
BEGIN
  SET NOCOUNT ON;
  -- GlobalID is omitted from the column list, so the
  -- newsequentialid() default fills it in.
  INSERT [TEST] (
    OBJECTID, SOMEFIELD, SHAPE, LON, LAT, QuadName, Watershed, County, State, PARKDISTRICT, ELEVATION, StreamName, RiverOrder
  )
  SELECT
    OBJECTID, SOMEFIELD, SHAPE, LON, LAT, QuadName, Watershed, County, State, PARKDISTRICT, ELEVATION, StreamName, RiverOrder
  FROM inserted;
END
GO

Add a couple of new points and

SELECT 
[GlobalID]
FROM [FISH].[dbo].[TEST]

GlobalID
C10EB116-8B14-45AB-924D-0C734E5AB5B6
61AE23FA-F02D-45C1-991D-571B77592014
0695789D-35A7-4BE4-B5F6-5EAF68D2A50B
5A20B628-6048-4D48-8380-AC005A0E70EC
CF52E6DE-5F60-456E-9DEF-C006D9BBD348
58F80A07-F8A8-4D62-BBB3-D012EA781F0C
5E7B9C91-2891-E411-B57C-E41F134196DA
BE30E498-2891-E411-B57C-E41F134196DA
BF30E498-2891-E411-B57C-E41F134196DA
C030E498-2891-E411-B57C-E41F134196DA
C130E498-2891-E411-B57C-E41F134196DA
38C5A6F2-60FF-4C39-BF37-F7AFCBDFDE90

Note how my new points (rows 8 through 12) contain sequential IDs. I'm using an INSTEAD OF rather than an AFTER INSERT trigger here because I want this to occur BEFORE it hits the base table.

Unfortunately, this "alteration" is not very robust and can't be used if your data is versioned without moving edits to base, or not versioned at all (as you need when using Collector for ArcGIS). Also not discussed here is my very compelling reason for using the Global ID field in some of the non-GIS applications that use these data. Let's chalk this up to the "tinkering" category. ESRI, I'm sure, has a very good reason for implementing the Global ID default the way they do, and this post surely will foster some heated debate and comments.

This, added to the after insert/update trigger presented in Hey Neighbor? What's Your Value?, will result in a nice, neat feature class where almost all of the attributes are automatically populated and your Global IDs are ordered.

This is a personal blog and does not recommend, endorse, or support the methods described above. Alteration of data using SQL outside of the ESRI software stack, of course, is not supported and should not be applied to a production database without a thorough understanding and disaster recovery plan.

ThomasColson

In Rasters, Pixels and Code Oh My! I extended an automation trigger to populate an attribute field based on its location within a raster pixel. But I still want more data automation. I want it all!


Not only do we want to know some administrative attributes of a new data point (what county and state is it in? a spatial intersect) and how high it is, but, in an era of not-so-precise data points, what is nearest to this point?

In this particular application, a water quality monitoring dataset, I also want to know the name of the stream this point occurs on, as well as the stream order. I'm using NHD Stream Lines which have been run through Arc Hydro to add Strahler Stream Order as an attribute.

Using a nearest-neighbor query I can accomplish that, albeit with a little more complexity, because now I'm asking SQL to look for the "nearest" feature and not a simple intersect of two geography objects. The nearest-neighbor logic is in the two CROSS APPLY blocks at the end of the trigger.

ALTER TRIGGER [dbo].[TEST_GEOGRAPHY]
ON [dbo].[TEST]
/****** fire on inserts and updates ******/
/****** disable trigger when SQL replication or mirroring is enabled ******/
AFTER INSERT, UPDATE NOT FOR REPLICATION
AS
BEGIN
  SET NOCOUNT ON;

  UPDATE p SET
    /****** hypothetically we could enter the lat/lon as text and create a geography object ******/
    SHAPE = CASE WHEN i.SHAPE IS NOT NULL
      THEN p.SHAPE
      ELSE Geography::STPointFromText('POINT('
        + CAST(p.LON AS VARCHAR(20)) + ' '
        + CAST(p.LAT AS VARCHAR(20)) + ')', 4269) END,
    /****** usual case: point is created with Arc, and the LAT/LON are cast as text ******/
    /****** from the geography object ******/
    LON = CASE WHEN p.SHAPE IS NULL THEN p.LON ELSE p.SHAPE.Long END,
    LAT = CASE WHEN p.SHAPE IS NULL THEN p.LAT ELSE p.SHAPE.Lat END,
    QuadName = COALESCE(b.name, p.QuadName),
    Watershed = COALESCE(c.HUC_12_Name, p.Watershed),
    County = COALESCE(d.Name, p.County),
    State = COALESCE(e.Name, p.State),
    PARKDISTRICT = COALESCE(f.District, p.PARKDISTRICT),
    ELEVATION = (SELECT pdata.getValueByLoc(1, p.SHAPE.Long, p.SHAPE.Lat)
                 FROM [dbo].[DEM10MP]),
    StreamName = COALESCE(g.GNIS_Name, p.StreamName),
    RiverOrder = COALESCE(h.RiverOrder, p.RiverOrder)
  FROM TEST AS p
  /****** allow update of lat/lon on update ******/
  INNER JOIN inserted AS i
    ON i.globalid = p.globalid
  LEFT OUTER JOIN USGS_24K_TOPOMAP_BOUNDARIES AS b
    ON b.Shape.STIntersects(i.Shape) = 1
  LEFT OUTER JOIN WATERSHEDS AS c
    ON c.Shape.STIntersects(i.Shape) = 1
  LEFT OUTER JOIN GRSM_COUNTIES AS d
    ON d.Shape.STIntersects(i.Shape) = 1
  LEFT OUTER JOIN GRSM_States AS e
    ON e.Shape.STIntersects(i.Shape) = 1
  LEFT OUTER JOIN GRSM_SUB_RANGER_DISTRICTS AS f
    ON f.Shape.STIntersects(i.Shape) = 1
  /****** Nearest Neighbor: nearest flowline supplies the stream name ******/
  CROSS APPLY (SELECT TOP 1 GNIS_Name, shape
               FROM dbo.NHDFLOWLINE
               /****** force spatial index hint ******/
               WITH (INDEX ([S208_idx]))
               WHERE NHDFLOWLINE.Shape.STDistance(i.Shape) IS NOT NULL
               ORDER BY NHDFLOWLINE.Shape.STDistance(i.Shape) ASC) AS g
  /****** and again for the stream order ******/
  CROSS APPLY (SELECT TOP 1 RiverOrder, shape
               FROM dbo.NHDFLOWLINE
               /****** force spatial index hint ******/
               WITH (INDEX ([S208_idx]))
               WHERE NHDFLOWLINE.Shape.STDistance(i.Shape) IS NOT NULL
               ORDER BY NHDFLOWLINE.Shape.STDistance(i.Shape) ASC) AS h;
END
GO

Note the use of an index hint here. In SQL 2008, use of the spatial index is not honored and, in fact, doesn't work without a lot of tweaking. Here we're telling it to find the name of the nearest stream and its order, but to use the spatial index of the NHD Flowline feature class to optimize the query. Without the index hint this query takes forever!

If the stream name is null (as many are), the CROSS APPLY and ORDER BY statements stop the search at the nearest stream instead of looking for the nearest named stream.
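
If you did want the nearest named stream instead, filter the nulls out before ordering. A standalone sketch; the test coordinate is arbitrary, and the index hint is the one from the trigger above:

DECLARE @pt geography = geography::STPointFromText('POINT(-83.5 35.6)', 4269);
SELECT TOP 1 GNIS_Name, Shape.STDistance(@pt) AS meters_away
FROM dbo.NHDFLOWLINE WITH (INDEX ([S208_idx]))
WHERE Shape.STDistance(@pt) IS NOT NULL
  AND GNIS_Name IS NOT NULL  -- skip unnamed reaches
ORDER BY Shape.STDistance(@pt) ASC;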

I've found this works best when I enforce some data entry rules: add a new point only using snapping (to the NHD layer in the map interface). There is the danger that someone could erroneously add a new point "somewhat" between two streams, and the wrong name and order could get attributed to the point.

In this particular application we are using it in conjunction with the USGS HEM tools, which handle the snapping, assignment, and population of reachcode to the point data, so we're not too worried about the wrong-point-in-the-wrong-place scenario.

This is a personal blog and does not recommend, endorse, or support the methods described above. Alteration of data using SQL outside of the ESRI software stack, of course, is not supported and should not be applied to a production database without a thorough understanding and disaster recovery plan.

ThomasColson

In Repetitive Attribute Values Automated with SQL we added some automation to repetitive data entry using feature classes that use the Microsoft Geography storage type.

But what about other repetitive data points? Back to our "Data Standard": much of the data we collect also needs to be attributed with discrete or continuous values obtained from a raster, such as elevation. Sure, I could bang out some Python or C# code to handle this, but I skipped that class...

This assumes that you have a raster Digital Elevation Model in SDE that has FIRST been configured using ST_RASTER (ArcGIS Help 10.1) and then copied over into an ST_PIXEL table with SRID 4269.

In this example, I have a 10-m resolution DEM named DEM10MP. In my naming convention, the "P" indicates that the table is of ST_PIXEL type.
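
You can sanity-check the pixel table from SSMS before wiring it into the trigger; this is the same getValueByLoc call the trigger uses (band 1, then longitude and latitude; the test coordinate is arbitrary):

-- Elevation of a single test coordinate from the 10-m DEM:
SELECT pdata.getValueByLoc(1, -83.5, 35.6) AS elevation
FROM [dbo].[DEM10MP];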

Extending the trigger presented in Repetitive Attribute Values Automated with SQL, I simply add a get-value-by-location query using the LAT/LON value from the inserted or updated Geography column. The entire trigger looks like this; the ELEVATION assignment at the end of the SET list is the elevation query.

ALTER TRIGGER [dbo].[TEST_GEOGRAPHY]
ON [dbo].[TEST]
/****** fire on inserts and updates ******/
/****** disable trigger when SQL replication or mirroring is enabled ******/
AFTER INSERT, UPDATE NOT FOR REPLICATION
AS
BEGIN
  SET NOCOUNT ON;

  UPDATE p SET
    /****** hypothetically we could enter the lat/lon as text and create a geography object ******/
    SHAPE = CASE WHEN i.SHAPE IS NOT NULL
      THEN p.SHAPE
      ELSE Geography::STPointFromText('POINT('
        + CAST(p.LON AS VARCHAR(20)) + ' '
        + CAST(p.LAT AS VARCHAR(20)) + ')', 4269) END,
    /****** usual case: point is created with Arc, and the LAT/LON are cast as text ******/
    /****** from the geography object ******/
    LON = CASE WHEN p.SHAPE IS NULL THEN p.LON ELSE p.SHAPE.Long END,
    LAT = CASE WHEN p.SHAPE IS NULL THEN p.LAT ELSE p.SHAPE.Lat END,
    QuadName = COALESCE(b.name, p.QuadName),
    Watershed = COALESCE(c.HUC_12_Name, p.Watershed),
    County = COALESCE(d.Name, p.County),
    State = COALESCE(e.Name, p.State),
    PARKDISTRICT = COALESCE(f.District, p.PARKDISTRICT),
    /****** get value by location from the ST_PIXEL table ******/
    ELEVATION = (SELECT pdata.getValueByLoc(1, p.SHAPE.Long, p.SHAPE.Lat)
                 FROM [dbo].[DEM10MP])
  FROM TEST AS p
  /****** allow update of lat/lon on update ******/
  INNER JOIN inserted AS i
    ON i.globalid = p.globalid
  LEFT OUTER JOIN USGS_24K_TOPOMAP_BOUNDARIES AS b
    ON b.Shape.STIntersects(i.Shape) = 1
  LEFT OUTER JOIN WATERSHEDS AS c
    ON c.Shape.STIntersects(i.Shape) = 1
  LEFT OUTER JOIN GRSM_COUNTIES AS d
    ON d.Shape.STIntersects(i.Shape) = 1
  LEFT OUTER JOIN GRSM_States AS e
    ON e.Shape.STIntersects(i.Shape) = 1
  LEFT OUTER JOIN GRSM_SUB_RANGER_DISTRICTS AS f
    ON f.Shape.STIntersects(i.Shape) = 1;
END
GO

Now for every new point I add to this feature class, or edit, all but one field is automatically populated or updated.

Every time.

No user input.


This is a personal blog and does not recommend, endorse, or support the methods described above. Alteration of data using SQL outside of the ESRI software stack, of course, is not supported and should not be applied to a production database without a thorough understanding and disaster recovery plan.

ThomasColson

Have you ever been exceedingly annoyed by attribute domain values that aren't alphabetized or in some specific order? One could argue that it's your fault for not creating the domain values in the right order when you made the domain, but.....

There IS a way you can update the domain values to render to end users in any way you want!

Let's say you have a domain titled "District". To see what it looks like in XML:

SELECT DEFINITION FROM [dbo].[GDB_ITEMS]
WHERE NAME = 'District'

And click on the value in the definition field in the results.

<GPCodedValueDomain2 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:typens="http://www.esri.com/schemas/ArcGIS/10.0" xsi:type="typens:GPCodedValueDomain2">
  <DomainName>District</DomainName>
  <FieldType>esriFieldTypeString</FieldType>
  <MergePolicy>esriMPTDefaultValue</MergePolicy>
  <SplitPolicy>esriSPTDefaultValue</SplitPolicy>
  <Description>Desc</Description>
  <Owner>DBO</Owner>
  <CodedValues xsi:type="typens:ArrayOfCodedValue">
    <CodedValue xsi:type="typens:CodedValue">
      <Name>North District</Name>
      <Code xsi:type="xs:string">ND</Code>
    </CodedValue>
    <CodedValue xsi:type="typens:CodedValue">
      <Name>South District</Name>
      <Code xsi:type="xs:string">SD</Code>
    </CodedValue>
    <CodedValue xsi:type="typens:CodedValue">
      <Name>Cades Cove District</Name>
      <Code xsi:type="xs:string">CC</Code>
    </CodedValue>
  </CodedValues>
</GPCodedValueDomain2>

What I want is the values and codes sorted alphabetically, which they are not. First, a little Excel magic: using the Domain to Table tool, I've exported the domain out to an Excel spreadsheet, then performed my sort.


Then some field calculations. Here I'm using plain text for the XML element and cell references for the CODE and VALUE.

="<CodedValue xsi:type=""typens:CodedValue""><Name>"&B2&"</Name><Code xsi:type=""xs:string"">"&A2&"</Code></CodedValue>"

Note how the formula is all one line with no carriage returns. Otherwise when you paste the result, quotes will be added where you don't want them!


Now update the XML definition by pasting the Excel formula results to replace the XML elements (ONLY!):

UPDATE [dbo].[GDB_ITEMS]
SET DEFINITION = 
'
<GPCodedValueDomain2 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:typens="http://www.esri.com/schemas/ArcGIS/10.0" xsi:type="typens:GPCodedValueDomain2">
  <DomainName>District</DomainName>
  <FieldType>esriFieldTypeString</FieldType>
  <MergePolicy>esriMPTDefaultValue</MergePolicy>
  <SplitPolicy>esriSPTDefaultValue</SplitPolicy>
  <Description>Desc</Description>
  <Owner>DBO</Owner>
  <CodedValues xsi:type="typens:ArrayOfCodedValue">
<CodedValue xsi:type="typens:CodedValue"><Name>Cades Cove District</Name><Code xsi:type="xs:string">CC</Code></CodedValue>
<CodedValue xsi:type="typens:CodedValue"><Name>North District</Name><Code xsi:type="xs:string">ND</Code></CodedValue>
<CodedValue xsi:type="typens:CodedValue"><Name>South District</Name><Code xsi:type="xs:string">SD</Code></CodedValue>
  </CodedValues>
</GPCodedValueDomain2>
'
WHERE NAME = 'District'
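
After the update, you can verify the stored order (and that the XML still parses) straight from SQL. A sketch, assuming the DEFINITION column is of the xml data type, as it is in current geodatabase releases:

SELECT cv.value('(Name/text())[1]', 'NVARCHAR(255)') AS DomainValue,
       cv.value('(Code/text())[1]', 'NVARCHAR(255)') AS Code
FROM [dbo].[GDB_ITEMS]
CROSS APPLY DEFINITION.nodes('/GPCodedValueDomain2/CodedValues/CodedValue') AS t(cv)
WHERE NAME = 'District';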

I have found this very handy for large domains with dozens or hundreds of values that I either want sorted alphabetically or where I need to make some global update, such as converting the text in "Description" to camel case.

Combining Excel with an advanced text editor such as Notepad++ with the TextFX plugin can result in some very efficient management of attribute domains!

This is a personal blog and does not recommend, endorse, or support the methods described above. Alteration of data using SQL outside of the ESRI software stack, of course, is not supported and should not be applied to a production database without a thorough understanding and disaster recovery plan.

ThomasColson

Most (all, actually) of my GIS data has what I call the "Mickey Mouse" attribute value requirement: each feature class has a couple of attribute fields that contain boring information such as administrative boundaries (what state and county the point occurs in). In the past I've been very proud of the elaborate attribute domains I've created so data editors can choose which County, State, or USGS Topo Quad their data point occurs in.

No one uses them.

Data entry has a time and a labor cost, and like most of us, we need to focus on what's important (what happened at this point) and don't have a lot of time for administrative data entry.

Until a report is due that requires the partitioning of data values by (state, county, or watershed).

In Using SQL to Auto-Populate XY Values I described using a Trigger to automate the population of GPS coordinates using the well-known text pulled from the geography column. There's no reason why we can't extend that to handling a few more data maintenance tasks.

In this example, I have the following administrative boundary polygon feature classes, all configured with the "Geography" storage type and SRID 4269, in the same SDE database.

  1. GRSM_COUNTIES: County name is populated in the County field;
  2. GRSM_States: State name is populated in the State field;
  3. GRSM_Sub_Ranger_Districts: Ranger District name is populated in the PARKDISTRICT field;
  4. USGS_24K_TOPOMAP_BOUNDARIES: USGS Quad Name is populated in the QuadName field;
  5. WATERSHEDS: HUC 12 Watershed Name is populated in the Watershed field.

Our data policy is that each new data point will be attributed with the "Name" value of each of the 5 polygons it occurs in. Imagine the enthusiasm of data editors when told they have to choose one of many values from a pick list, 5 times, for each new point they add or edit!

Extending Using SQL to Auto-Populate XY Values, we can "join" the new or updated point spatially with each of the 5 polygons it occurs in, grab the required value from each polygon, and write it to the correct attribute field.

ALTER TRIGGER [dbo].[TEST_GEOGRAPHY]
ON [dbo].[TEST]
/****** fire on inserts and updates ******/
/****** disable trigger when SQL replication or mirroring is enabled ******/
AFTER INSERT, UPDATE NOT FOR REPLICATION
AS
BEGIN
  SET NOCOUNT ON;

  UPDATE p SET
    /****** hypothetically we could enter the lat/lon as text and create a geography object ******/
    SHAPE = CASE WHEN i.SHAPE IS NOT NULL
      THEN p.SHAPE
      ELSE Geography::STPointFromText('POINT('
        + CAST(p.LON AS VARCHAR(20)) + ' '
        + CAST(p.LAT AS VARCHAR(20)) + ')', 4269) END,
    /****** usual case: point is created with Arc, and the LAT/LON are cast as text ******/
    /****** from the geography object ******/
    LON = CASE WHEN p.SHAPE IS NULL THEN p.LON ELSE p.SHAPE.Long END,
    LAT = CASE WHEN p.SHAPE IS NULL THEN p.LAT ELSE p.SHAPE.Lat END,
    QuadName = COALESCE(b.name, p.QuadName),
    Watershed = COALESCE(c.HUC_12_Name, p.Watershed),
    County = COALESCE(d.Name, p.County),
    State = COALESCE(e.Name, p.State),
    PARKDISTRICT = COALESCE(f.District, p.PARKDISTRICT)
  FROM TEST AS p
  /****** allow update of lat/lon on update ******/
  INNER JOIN inserted AS i
    ON i.globalid = p.globalid
  LEFT OUTER JOIN USGS_24K_TOPOMAP_BOUNDARIES AS b
    ON b.Shape.STIntersects(i.Shape) = 1
  LEFT OUTER JOIN WATERSHEDS AS c
    ON c.Shape.STIntersects(i.Shape) = 1
  LEFT OUTER JOIN GRSM_COUNTIES AS d
    ON d.Shape.STIntersects(i.Shape) = 1
  LEFT OUTER JOIN GRSM_States AS e
    ON e.Shape.STIntersects(i.Shape) = 1
  LEFT OUTER JOIN GRSM_SUB_RANGER_DISTRICTS AS f
    ON f.Shape.STIntersects(i.Shape) = 1;
END
GO

There is no limit to the number of joins you can use to automate spatial queries that update attribute values. Just remember that both feature classes participating in the join need to use the same storage type.

But what if you want to apply this to existing data? After creating the trigger, you could run a one-time query to update existing values. This example uses the Geometry storage type:

use GRSM
update GRSM_TRAILS
set CONG_DIST = CONGRESSIONALDISTRICTS.NAMELSAD
from GRSM_TRAILS
inner join CONGRESSIONALDISTRICTS
/* join each trail to the district it overlaps along a line */
on CONGRESSIONALDISTRICTS.Shape.STIntersection(GRSM_TRAILS.Shape).STDimension() = 1;

This is a personal blog and does not recommend, endorse, or support the methods described above. Alteration of data using SQL outside of the ESRI software stack, of course, is not supported and should not be applied to a production database without a thorough understanding and disaster recovery plan.
