We are running two geodatabases on one Microsoft SQL Server instance that are roughly copies of each other. When changed feature classes in the staging database are approved for promotion, the entire feature class is copied to the published database to replace the existing copy. This has been working, but it requires a lot of extra setup because the schema is destroyed and recreated, and it can require feature class/dataset changes in both databases.
Long term, I suspect we might be better off with something like versioning in a single database. For now, I'm thinking of using Python to copy changed records identified by Editor Tracking, either with Append or with arcpy.da cursors. Any suggestions on maximizing performance, either with this method or another? Also, any suggestions on tracking records that need to be deleted in the production database?
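For what it's worth, here is a minimal sketch of how the Editor Tracking filter could be built. It assumes a tracking field named `last_edited_date` and that you record the time of the previous sync (both names are hypothetical for your schema); the resulting clause could then feed something like `arcpy.da.SearchCursor(fc, fields, where_clause=clause)`:

```python
from datetime import datetime

def delta_where_clause(last_sync: datetime, field: str = "last_edited_date") -> str:
    """Build a where clause selecting rows edited since the previous sync.

    A plain quoted datetime literal works for SQL Server; other DBMSs
    (or file geodatabases) may need a different literal syntax.
    """
    return f"{field} > '{last_sync:%Y-%m-%d %H:%M:%S}'"

# Example: everything edited since the last nightly run
clause = delta_where_clause(datetime(2023, 5, 1, 2, 30))
print(clause)  # last_edited_date > '2023-05-01 02:30:00'
```

The same clause works as the definition query on an Append input, so either copy path can share it.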
Thanks for your time
How are you sharing the published data? Are you using desktop GIS, or a web server / web maps / ArcGIS Enterprise / something else?
I assume the published data is read only?
How much data are we talking about? How often do you need to re-sync things?
My thinking is that if the data is small, "copy everything" isn't bad compared with the work of setting up "copy changed". Also, if you only scan for changes you can miss deletions; the published dataset will still show the deleted features.
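One way to catch deletions is to compare keys: read a stable identifier (GlobalID, say) from both feature classes, e.g. with an `arcpy.da.SearchCursor`, and treat anything present only in the published copy as a delete. A rough sketch with plain Python sets (the GlobalID values here are made up):

```python
def find_deletes(staging_ids, published_ids):
    """Keys present in published but missing from staging are deletions."""
    return set(published_ids) - set(staging_ids)

# Illustrative values only
staging = {"{1A2B}", "{3C4D}"}
published = {"{1A2B}", "{3C4D}", "{5E6F}"}
print(find_deletes(staging, published))  # {'{5E6F}'}
```

Reading every key is a full scan of both tables, so it's not free, but it's simple and hard to get wrong.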
I am using Map Image Layers in a web map created with Web AppBuilder Developer Edition, and I publish a few layers as hosted layers on ArcGIS Server. The visible data is therefore copied from SQL Server to the ArcGIS Server datastore and supposedly has better performance. We are a small county, so performance has never been much of a problem; we have to pile on views, relational tables, etc. to notice any difference.
I've been thinking about abandoning the "hosted data" approach because it ends up being more of a pain to sync things. Also, when I use "overwrite web layer", it seems like half the time something random breaks, and I end up republishing the layer and editing the 10 maps that depend on it.
If you are talking about "branch versioning", that means you have to be using Portal, right? We are transitioning from "traditional versioning" soon. I doubt you want to bother learning "traditional" at this point, as it's last year's flavor and Esri will want you to move on.
I want to look at webhooks for the situation you describe, using one to trigger a script to run when an update occurs. I have no idea if this is practical; with webhooks, the "script to run" has to be fired from a web server, so it adds more complexity. Automatically running a script every night (or even more often) would probably be more sensible for me with our small datasets. But webhooks sound so cool...
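If you go the nightly-script route, the script mainly needs to remember when it last ran so it only pulls newer edits. A minimal sketch, assuming you're happy persisting that watermark in a small JSON file next to the script (the filename and layout are arbitrary):

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def read_last_sync(path: Path) -> datetime:
    """Return the stored watermark, or the epoch if the script has never run."""
    if path.exists():
        return datetime.fromisoformat(json.loads(path.read_text())["last_sync"])
    return datetime(1970, 1, 1, tzinfo=timezone.utc)

def write_last_sync(path: Path, when: datetime) -> None:
    """Record the watermark after a successful sync (and only after)."""
    path.write_text(json.dumps({"last_sync": when.isoformat()}))
```

Writing the watermark only after the copy succeeds means a failed run is simply retried from the old watermark on the next schedule, at the cost of re-copying some rows.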
Our datasets can be pretty large, especially compared to the deltas. The copy has been the default, but the problems it has are why I'm looking for alternatives.
There's a big checklist of things that need to be undone and redone correctly, and some of them can't even be directly verified. It also requires downtime that sending just the deltas wouldn't, and that extra downtime means updates sometimes have to wait. I have some ideas about how to handle deletes, but I agree that even under the best of circumstances it's more complicated than I'd like. For now, I'm mainly looking for a way to move the inserts/updates that's both fast and fits Esri best practices.
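For what it's worth, the insert/update split itself is cheap once you have the changed rows: anything whose key already exists in the published copy is an update, and the rest are inserts. A sketch in plain Python (the keys would be GlobalIDs or whatever stable key you use); the two lists could then drive an arcpy.da.UpdateCursor and InsertCursor respectively:

```python
def split_deltas(changed_keys, published_keys):
    """Partition changed rows into inserts and updates by key membership."""
    published = set(published_keys)
    inserts = [k for k in changed_keys if k not in published]
    updates = [k for k in changed_keys if k in published]
    return inserts, updates

ins, upd = split_deltas(["a", "b", "c"], {"b", "c", "z"})
print(ins, upd)  # ['a'] ['b', 'c']
```

Newer releases of the Append tool can reportedly do a keyed upsert in one step, which would collapse the two cursor passes into a single geoprocessing call; worth checking whether your version supports it before hand-rolling cursors.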
My other concern is that the copy is a new object, creating the possibility that not everything that depends on it will recognize it as the same thing. In my non-GIS database work, I would expect things like the "overwrite web layer" issue you describe to be risky for that reason. In that scenario, have you tried creating an ArcGIS Online alias to the uploaded service and pointing the maps to the alias instead? That way, you (theoretically) can repoint the alias without affecting the maps. I've wondered about doing that for similar situations that have come up recently, but I haven't gotten around to trying it.