POST
This is the nth time I've heard this claim: feature classes in a feature dataset decrease performance. I'm wondering where (IT) folks are hearing that. I'm not debunking the notion that it happens, but like others, I've never seen it. If you look at a SQL trace of a draw request for something that is in a feature dataset, you do see some additional join statements, yet I've never detected any performance difference, even when looking for it with PerfQA. I think I may post a poll on this topic....
Posted 01-19-2015 11:42 AM

BLOG
It's very easy as a GIS administrator to add lots and lots of fields to a feature class, and even easier to naively think that folks are going to populate or edit them! One common theme for me is date fields. We have edit date, create date, FGDC edit and create date, year...blah blah blah. I wish there were just one, or even no, onus on editors to have to think about dates, and the database just...handled it. This can be so. With a database trigger.

Let's start with FGDC dates. If you're implementing some form of feature-level metadata, or some data mining that tags another metadata element with an FGDC source, create, or edit date, and you have editor tracking enabled, there's no reason why you should also have to populate the FGDC[whatever]DATE column by hand.

CREATE TRIGGER [dbo].[SOMETABLE_DATE]
ON [dbo].[SOMETABLE]
AFTER INSERT, UPDATE NOT FOR REPLICATION
AS BEGIN
    SET NOCOUNT ON;
    -- Recalculate the FGDC date strings from the datetime2 source columns.
    -- Note: as written this touches every row in the table; joining to the
    -- inserted pseudo-table on the table's key column would limit the update
    -- to just the edited rows.
    UPDATE [dbo].[SOMETABLE]
    SET
        SRCDATEFGDC = (convert(varchar(8), SOURCEDATE, 112)),
        CREATEDATEFGDC = (convert(varchar(8), CREATEDATE, 112)),
        EDITDATEFGDC = (convert(varchar(8), EDITDATE, 112))
END
GO

Here we're taking the EDITDATE and CREATEDATE values, which are SQL datetime2, and converting them to a string in yyyymmdd format. The 112 is the style code that controls the output format; see CAST and CONVERT (Transact-SQL) for a full list of date conversion styles. In addition, the user is also selecting a source date in this case, which may be different from the create date, and that is converted as well. If you have a year column you can also add

YEAR = CASE WHEN SOURCEDATE IS NULL THEN NULL ELSE YEAR(SOURCEDATE) END

with null-value handling thrown in. If you have editor tracking disabled for some reason (it often causes issues with Collector for ArcGIS), you could throw a default constraint of getdate() on your date column(s). This is a personal blog and does not recommend, endorse, or support the methods described above. Alteration of data using SQL outside of the ESRI software stack, of course, is not supported and should not be applied to a production database without a thorough understanding and disaster recovery plan.
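Footnote: a minimal sketch of that default constraint, assuming a CREATEDATE column on SOMETABLE with no default already bound to it:

-- new rows inserted without an explicit CREATEDATE get the current server time
ALTER TABLE [dbo].[SOMETABLE]
ADD CONSTRAINT [DF_SOMETABLE_CREATEDATE] DEFAULT (getdate()) FOR [CREATEDATE];
GO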
Posted 01-19-2015 10:51 AM

BLOG
Do you have someone in your organization that YELLS WITH THEIR KEYBOARD? Everything is upper case? What about the rogue all-lower-case folks? Or even worse, the First-word-is-proper-case-all-other-words-are-lower-case ninjas. I have a personal pet peeve (OCD): feature names in GIS should be Proper Case. Happy Valley Road. Not Happy valley road, not Happy valley Road, and definitely not HAPPY VALLEY ROAD. I'd like to enforce Proper Case naming of those features regardless of how the text is cased by the editor. This can be accomplished with a simple Function and Trigger in SQL.

First create the following Function:

create function [dbo].[ProperCase](@Text as varchar(8000))
returns varchar(8000)
as
begin
    declare @Reset bit;
    declare @Ret varchar(8000);
    declare @i int;
    declare @c char(1);
    select @Reset = 1, @i = 1, @Ret = '';
    -- walk the string one character at a time: upper-case any letter that
    -- follows a non-letter, lower-case every other letter
    while (@i <= len(@Text))
        select @c = substring(@Text, @i, 1),
               @Ret = @Ret + case when @Reset = 1 then UPPER(@c) else LOWER(@c) end,
               @Reset = case when @c like '[a-zA-Z]' then 0 else 1 end,
               @i = @i + 1
    return @Ret
end
GO

Then this Trigger:

CREATE TRIGGER [dbo].[NAME_UPDATE]
ON [dbo].[SOME_TABLE]
AFTER INSERT NOT FOR REPLICATION
AS BEGIN
    SET NOCOUNT ON;
    -- Note: as written this re-cases the NAME column for every row on each
    -- insert; joining to the inserted pseudo-table on the table's key column
    -- would touch only the new rows (and leave manually corrected names alone).
    UPDATE SOME_TABLE
    SET
        NAME = dbo.ProperCase(NAME)
END
GO

Note here that this only fires after an insert, not an update. There could be legitimate reasons why the YELLERS want something other than proper case, e.g. "ND Happy Valley Road" (ND for "North District"). This allows them (or me, after they YELL at me) to update that one feature without the trigger proper-casing the edit. This is a personal blog and does not recommend, endorse, or support the methods described above. Alteration of data using SQL outside of the ESRI software stack, of course, is not supported and should not be applied to a production database without a thorough understanding and disaster recovery plan.
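Footnote: a quick sanity check of the function from SSMS (a sketch, nothing more):

SELECT dbo.ProperCase('HAPPY VALLEY ROAD');
-- returns: Happy Valley Road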
Posted 01-19-2015 07:57 AM

POST
We are having the same issue. I have a hosted feature service on which I want editing restricted so the public can see facility closures. We're using the raw GeoJSON output of the feature service in a custom web map application, and unfortunately, hiding the FS from the public hides the GeoJSON as well. Enabling editing allows anyone who stumbles across the FS in AGOL to make edits. Making the editors administrators is a non-starter in our organization's security model.
Posted 01-17-2015 12:50 PM

IDEA
Currently, the only way to sync Collector for ArcGIS offline data is via wireless. For those organizations with no, limited, or restricted wireless capacity, Collector is a non-solution. Users should be able to connect Collector devices to a computer with the device's USB cable and sync to Portal or AGOL via some sort of ArcToolbox script or other desktop tool. In addition, when collecting features with attachments, not even the best wireless connection allows a sync once many dozens or hundreds of photo attachments have been collected.
Posted 01-07-2015 07:51 AM

POST
We don't have this service tied to credentials, so that is not the issue. I have also tried with a photo from the library, a photo from the camera, and just adding a text file. All give the same error.
Posted 01-07-2015 07:39 AM

POST
...although looking now at your error, that is not the attachment error. That error is related to the Collector app not being able to contact the server. Do you have background refresh turned on or off for Collector?
Posted 01-07-2015 07:38 AM

POST
This problem is also being discussed in Collector with Attachments will not synch. I think the only solution is to open up an incident with tech support at this point.
Posted 01-07-2015 07:36 AM

POST
This is not an answer but just me venting... I'd have a serious chat with organization leadership about what its mission is and how it wants to achieve those goals. As ESRI technology grows more complex and more reliant on "cloud" infrastructure and web-based data discovery, the disconnect between IT and GIS departments grows ever wider. I blame ESRI for some of this: most, if not all, of their newest desktop products require more and more local machine administrative access rights, yet IT departments are locking down more and more of the tasks that most GIS practitioners take for granted. Yes, as the representative for GIS, it's my job to educate IT (and leadership) on what the GIS program requirements are if the organization wants to achieve its mission. But I suspect you're coming from a large organization where institutional mentality and the compartmentalization of work divisions result in a situation like our current Congress. It's real tough to convince leadership that users need to be able to use Python when they're handing out awards for protecting the network from the "Python Virus".
Posted 01-07-2015 07:03 AM

POST
GDB versioning is enabled, per the ESRI documentation, which suggests it will not work otherwise; "track editor changes" is not enabled. Are you saying that disabling archiving makes feature attachments work? Why would a plain old feature service work with versioning enabled (per the documentation), but adding attachments (also with versioning) fail to sync in Collector? I can collect points and sync all day long in Collector, even on the FS with the attachments. As soon as I add an attachment (a small, less than 1 MB photo)...sync fails.
Posted 01-05-2015 09:43 AM

POST
Can you take a complete screen shot of the Geodatabase Administration screen showing your versions? When you click on a version it will report the parent of the version. In order to create a child of a SPECIFIC version your mouse pointer must BE ON that version when you right-click and select new version, as pictured below. Also, what version of Arc are you using, what version of SQL, what operating system, etc...?
Posted 01-02-2015 04:45 AM

POST
In Version Manager, did you make Case1 a child of Case? If Case1 is posting directly to Default, that means it's a child of Default.
Posted 01-01-2015 01:03 PM

BLOG
I have tried, and failed, many times to get geodatabase replication using ESRI tools to work, work reliably, or work at all. Perhaps it's all the unauthorized SQL tinkering I do. Even when I do get it working, I'm not a big fan of having to fire up ArcToolbox and push buttons to make replication happen. Not a big fan of writing Python code, either. With SQL Server, there IS a way to replicate an entire database, or parts of a database. The parts-of-a-database case is handled with Merge Replication, in which you define which tables to replicate and the criteria under which data is replicated. I like that option!

Consider this environment: I have a water quality geodatabase (in SQL) which is big, complex, and full of legacy stuff that I just don't feel like dealing with right now. Replicating or mirroring the entire database is not an option....because.....in a brilliant management decision, the "datacenter" was placed in a building very far from any modern telco hubs. On a good day I can actually download email attachments. The problem is, the users of this database are everywhere else but in this building. More drama: we're drinking the Portal for ArcGIS Kool-Aid and are high on Collector for ArcGIS. I can't really connect my database to either of those technologies using carrier pigeons. But I do have some very limited-capacity database and application servers in the "cloud". Certainly not the kind that can house a 7 GB water quality database, but here's an idea: what if I could replicate just the feature classes that are most needed operationally to these cloud platforms? Field users could get to their data, and I could consume it in my remote datacenter in other applications. All they need access to is the "dots on the map" for their mobile devices, desktop applications, and management access to web-based maps (what stream did that oil spill impact?).

Here's what we'll need:
An obnoxiously massive and complex SDE/SQL database called "FISH";
A feature class that's been around forever and has a ton of data called "TEST";
A remote SDE/SQL database called "FISHREPL" which is the same SQL/SDE version as "FISH";
A horrible internet connection.

This is a multi-part blog post; here are the steps:
Real-time Geodatabase Replication? Part 1
Real-time Geodatabase Replication? Part 2
Real-time Geodatabase Replication? Part 3
Real-time Geodatabase Replication? Part 4
Real-time Geodatabase Replication? Part 5
Real-time Geodatabase Replication? Part 6

And of course a few caveats: This only works with unversioned data, which places quite a limit on things. In some cases, though, the cons of having unversioned feature classes are not as great as the pros of having a real-time replication service. Please bear in mind that this method of replication is extremely unsupported. I just did it because I like tinkering under the hood of SDE and wanted to see if it would work. It "sort of" does! And in contrast to the out-of-the-box ESRI replication tools, this process sure resolves a lot of headaches for me. I can attach user-editors to a local SDE instance where "bulk" editing occurs, and SQL Merge Replication will fire those changes over to the "cloud" SDE instance, where the rest of my organization can access (and edit) the data through Portal for ArcGIS and my mobile clients can perform edits and updates using Collector for ArcGIS. And it works continuously. Full-time. Even with file attachments enabled and 2 MB photos being attached to a point, those changes sync within 60 seconds.

SQL Server 2012 replication is very efficient at compressing data and sending ONLY the deltas. When I generate a "snapshot" of a feature class with file attachments, SQL sees 50 MB of data that needs to be replicated; performing a versioned checkout of the same data using the Distributed Geodatabase tools results in a 220 MB file geodatabase. But the emphasis is on "experimental": there is a lot of overhead to managing SQL Merge Replication, and you can really screw up some operational data. If you get it working after following this blog series, I suggest you spend some time deliberately breaking things to see if you can recover your data integrity! Otherwise stick to the ESRI replication tools. They work. When you push the button..... This is a personal blog and does not recommend, endorse, or support the methods described above. Alteration of data using SQL outside of the ESRI software stack, of course, is not supported and should not be applied to a production database without a thorough understanding and disaster recovery plan.
Posted 01-01-2015 12:29 PM

BLOG
Setting up a Publication Server

First a little side-step to prepare our GIS table for replication. More on the caveats later! In order to participate in replication, a SQL table needs a column flagged as a ROWGUID column. In Dear GlobalID: I Hate You! Rants from an OCD data manager we talked about the GlobalID unique identifier. We're going to use it to tell the replication engine that this column is also used to manage changes between the Publisher database and any subscribers. There are two requirements for this column: it has to default to newsequentialid(), and it has to be flagged as a ROWGUID column. If you don't set the GlobalID to be the ROWGUID, a new ROWGUID column will be created for you when you create the publication, outside of the "ESRI stack", which will cause problems when you attempt to edit this feature class (as this new column has been created without the participation and consent of SDE).

In Microsoft SQL Merge Replication you first need a publication server, from which other SQL servers can "subscribe" to replication articles published by the publication server. In SSMS launch the New Publication Wizard and select the FISH database as the publication database. Select Merge Publication as the publication type. Since we're using the Geography storage type, we don't need to publish any table other than the "TEST" base table, so select "TEST" as the object to publish. Instinctively you'd think we need more tables than that, considering how ArcSDE manages data. The secret will be revealed in a few more posts. Don't filter any tables. Configure the Snapshot Agent to run every hour (you can pick any schedule you like!). Set the Snapshot Agent to use a domain account (ideal). Then check Create the publication and Generate a script file. Not pictured, but one of the dialogs will ask you where the Snapshot files will go; be sure to pick a network share that both servers will be able to access. It's very important that you check the box that asks if you want to create the script that will generate the Publication Agent!

Make sure the Publication and Snapshot Agent is working by launching Replication Monitor and starting the Agent. The directory you picked to host the replication articles should start filling up with files, and you should see a status of [100%] in the Last Action column. Double-click on the agent and select Action -> Start Agent. Wait a few minutes and select Action -> Refresh. You should see another successfully completed Agent job.

Now review the Publisher Properties. It's good that you see how all of the pieces are put together! In Replication Monitor, in the left window pane, select the publication (FISH) and right-click -> Properties. Note in the Location of Snapshot files that they are going to a UNC network share that the domain account used to run the agent has full access to.

We need to make a few changes, though. Convert filestream to MAX data types is set to False by default; if you attempt to subscribe to this publication, any column data in a Geography or Geometry column will be converted to a string during the replication process. Not good for GIS data! The only way to correct this is to delete the publication and recreate it programmatically. However, since you saved the script that created the publication, all you have to do is edit two lines of that script and re-run it:

use [FISH]
exec sp_replicationdboption @dbname = N'FISH', @optname = N'merge publish', @value = N'true'
GO
-- Adding the merge publication
use [FISH]
exec sp_addmergepublication @publication = N'FISH_REPL',
@description = N'Merge publication of database ''FISH'' from Publisher ''INPGRSMS04TC''.',
@sync_mode = N'native', @retention = 14, @allow_push = N'true', @allow_pull = N'true',
@allow_anonymous = N'true', @enabled_for_internet = N'false', @snapshot_in_defaultfolder = N'true',
@compress_snapshot = N'false', @ftp_port = 21, @ftp_subdirectory = N'ftp', @ftp_login = N'anonymous',
@allow_subscription_copy = N'false', @add_to_active_directory = N'false', @dynamic_filters = N'false',
@conflict_retention = 14, @keep_partition_changes = N'false', @allow_synctoalternate = N'false',
@max_concurrent_merge = 0, @max_concurrent_dynamic_snapshots = 0, @use_partition_groups = null,
@publication_compatibility_level = N'100RTM', @replicate_ddl = 1, @allow_subscriber_initiated_snapshot = N'false',
@allow_web_synchronization = N'false', @allow_partition_realignment = N'true', @retention_period_unit = N'days',
@conflict_logging = N'both', @automatic_reinitialization_policy = 0
GO
exec sp_addpublication_snapshot @publication = N'FISH_REPL', @frequency_type = 4,
@frequency_interval = 14, @frequency_relative_interval = 1, @frequency_recurrence_factor = 0,
@frequency_subday = 8, @frequency_subday_interval = 1, @active_start_time_of_day = 500,
@active_end_time_of_day = 235959, @active_start_date = 0, @active_end_date = 0,
@job_login = N'domain\account', @job_password = null, @publisher_security_mode = 1
use [FISH]
exec sp_addmergearticle @publication = N'FISH_REPL', @article = N'TEST',
@source_owner = N'dbo', @source_object = N'TEST', @type = N'table',
@description = null, @creation_script = null, @pre_creation_cmd = N'drop',
-- Changed from 0x000000010C034FD1
@schema_option = 0x000000000C034FD1,
@identityrangemanagementoption = N'manual',
@destination_owner = N'dbo', @force_reinit_subscription = 1, @column_tracking = N'false',
@subset_filterclause = null, @vertical_partition = N'false', @verify_resolver_signature = 1,
@allow_interactive_resolver = N'false', @fast_multicol_updateproc = N'true', @check_permissions = 0,
@subscriber_upload_options = 0, @delete_tracking = N'true', @compensate_for_errors = N'false',
-- Changed from N'true'
@stream_blob_columns = N'false',
@partition_options = 0
GO

In the script above, which was saved when the Publication Agent was first created, @schema_option has been changed to 0x000000000C034FD1 and @stream_blob_columns has been set to N'false'. Run the script, but note that you will have to re-generate the agent job that runs the article generation and synchronization process: right-click on the publication in Replication Monitor, select Agent Security, and configure the agent to run using a domain service account. Instead of re-creating the Snapshot Agent, you could just run:

sp_changemergearticle 'FISH_REPL', 'TEST', 'schema_option', '0x000000000C034FD1', 1, 1
GO
sp_changemergearticle 'FISH_REPL', 'TEST', 'stream_blob_columns', 'false', 1, 1
GO

And then re-initialize the snapshot. Still, it's a good idea to learn how to create the publication using SQL.

Real-time Geodatabase Replication? Part 1
Real-time Geodatabase Replication? Part 2
Real-time Geodatabase Replication? Part 3
Real-time Geodatabase Replication? Part 4
Real-time Geodatabase Replication? Part 5
Real-time Geodatabase Replication? Part 6

This is a personal blog and does not recommend, endorse, or support the methods described above. Alteration of data using SQL outside of the ESRI software stack, of course, is not supported and should not be applied to a production database without a thorough understanding and disaster recovery plan.
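Footnote: a sketch of that re-initialization step using the system procedures, assuming the FISH_REPL publication from this series:

-- mark subscriptions for re-initialization, uploading any pending subscriber changes first
EXEC sp_reinitmergesubscription @publication = N'FISH_REPL', @upload_first = N'true';
GO
-- then generate a fresh snapshot for the publication
EXEC sp_startpublication_snapshot @publication = N'FISH_REPL';
GO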
Posted 01-01-2015 12:28 PM

BLOG
Setting up a Subscription Server

On your remote database server, create a NEW SDE database called "FISHREPL" (or something other than FISH). Once the database has been created, manually replicate the schema: right-click, select Import, and import the TEST feature class definition from the FISH (publication) database. Make sure you select the Geography storage type. Everything about the two feature classes must be the same! If your feature class has attribute domains, the best way to do this is with an XML export/import; otherwise your feature class on the remote (subscription) server won't have attribute domain pick-lists for editors to use! Note that I'm not using Feature Datasets here! More complexity....

On the remote SQL server, activate the New Subscription Wizard and select the local SQL Server that hosts the FISH database as the Publisher. I prefer to run all agents at the Distributor, as this (a) keeps the overhead on my server, which has the capacity for it, and (b) makes managing many subscriptions a lot easier (one interface). Select the Subscription Database as the one you created on the remote server to host the replicated feature class. In the Agent Security settings, you SHOULD use the SAME domain account you used to create the Publisher Agent (good luck with any other security model....), and set the agent schedule to run continuously.

Real-time Geodatabase Replication? Part 1
Real-time Geodatabase Replication? Part 2
Real-time Geodatabase Replication? Part 3
Real-time Geodatabase Replication? Part 4
Real-time Geodatabase Replication? Part 5
Real-time Geodatabase Replication? Part 6

This is a personal blog and does not recommend, endorse, or support the methods described above. Alteration of data using SQL outside of the ESRI software stack, of course, is not supported and should not be applied to a production database without a thorough understanding and disaster recovery plan.
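Footnote: the wizard steps boil down to roughly the following, run at the publisher (a sketch; the subscriber server name is a placeholder):

-- a push subscription keeps the merge agent running at the distributor, as described above
EXEC sp_addmergesubscription
    @publication = N'FISH_REPL',
    @subscriber = N'REMOTESERVER',   -- placeholder: the remote SQL Server instance
    @subscriber_db = N'FISHREPL',
    @subscription_type = N'push',
    @subscriber_type = N'global';
GO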
Posted 01-01-2015 12:27 PM