
Allow standard database replication with Branch Versioned data

02-02-2022 10:57 AM
Status: Open
MichaelSnook
Occasional Contributor III

We have been using database replication with the older, traditional versioning for many years now. It provides a simple way to manage syncing data that may need to be disconnected from the original source -- pushing data out for external web app access, for instance. We would like to implement branch versioning on our master editing database to take full advantage of feature service editing capabilities, including version management and validation. It seems that branch versioning DOES NOT support the typical replication of old. It would be very helpful to see this implemented.

The (not desirable) workaround is to create a schema-only copy of the database and use scripting to truncate/append data out to these external databases -- this is a bit of a data management problem.
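The truncate/append workaround described above can be sketched as follows. This is a minimal, self-contained illustration of the pattern only: SQLite stands in for the source and target databases, and the table and field names are invented. In an actual geodatabase workflow, the equivalent steps would be arcpy's Truncate Table and Append geoprocessing tools run against the enterprise geodatabase connections.

```python
import sqlite3

def truncate_and_append(src: sqlite3.Connection, dst: sqlite3.Connection, table: str) -> int:
    """Replace all rows in dst's copy of `table` with the current rows from src."""
    rows = src.execute(f"SELECT id, name FROM {table}").fetchall()
    dst.execute(f"DELETE FROM {table}")  # "truncate" the target copy
    dst.executemany(f"INSERT INTO {table} (id, name) VALUES (?, ?)", rows)
    dst.commit()
    return len(rows)

# Demo: two in-memory databases standing in for the editing and publication geodatabases.
src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")
for conn in (src, dst):
    conn.execute("CREATE TABLE parcels (id INTEGER, name TEXT)")  # schema-only copy on both sides
src.executemany("INSERT INTO parcels VALUES (?, ?)", [(1, "A"), (2, "B")])
dst.execute("INSERT INTO parcels VALUES (9, 'stale')")  # stale row that the sync should discard

copied = truncate_and_append(src, dst, "parcels")
```

Unlike replication, this pattern moves every row on every run rather than just the deltas, which is part of why the post calls it a data management problem.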

Thanks!

6 Comments
RussellBrennan

Hi @MichaelSnook 

Are you looking for geodatabase replication or standard database replication? It would help us make sure we are understanding what you are looking for.

Would you be able to contact me via PM? I have some follow-up questions about your workflows that I would like to ask in order to get a better understanding of your constraints.

JerryOrnelas

Hello Michael/???

We have several national datasets that we replicate from on-premises to other locations, including AWS RDS.

We depend on traditional versioning to do replicas. Can you tell me whether branch versioning has some sort of source-to-target sync/replica functionality?

It doesn't have to be a two-way replica -- mainly from one parent to another disconnected child, using branch versioning on an SDE PostgreSQL geodatabase.

Any feedback would be appreciated.

VanessaSimps

I realize this is an old post, but I am hoping someone can help me out here. 

Our current setup is a production GDB where editors edit. Nightly, these edits are pushed to a publication GDB using one-way geodatabase replication. Not only is the publication database where we publish our data from, it is also where we have several SQL queries/views written against the GIS data that was replicated over from our production database.

We are currently in the process of migrating to Trace Network/Utility Network, which means we have to use branch versioning in our production database. BUT, I am reading here that we can't use branch versioning and geodatabase replication together.

What is the suggested workflow supposed to look like now? Am I missing something here? Am I supposed to use an ETL tool like FME or Python to move the data I need from one GDB to another?

Thanks in advance for any clarification that can be provided here!

Vanessa

DasheEbra

Hi @VanessaSimps ,

I've encountered a similar challenge with a previous employer, and while there were potential solutions, they didn't quite meet production standards. Let's explore some options together:

  1. Offline Editing:

    • While offline editing allows for data manipulation without direct network access, it's not ideal for Utility Network situations. Downloaded maps have a simplified schema, lacking full Utility Network functionality.
  2. FME/ArcGIS Data Interoperability:

    • This option supports editing in a disconnected environment, particularly with child branch versions. However, thorough testing is needed to ensure seamless synchronization with the production environment.

Mohamed El-Saket
Esri Certified Professional


VanessaSimps

@DasheEbra - thanks for the reply. I started to think about this a bit more after posting. I think the ETL/FME process might work, so that is what I will test out. I could pull the portal item from the branch versioned data down nightly using FME and continue with my existing publication database setup. This may not be the final process, but it will buy me and staff time to rework the SQL query views we have built to instead use the ArcGIS REST API.
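For anyone weighing the REST API approach mentioned above, here is a hedged sketch of what replacing a SQL view with a feature service query could look like. The service URL, layer index, and field names below are hypothetical placeholders; the request itself is just a standard ArcGIS REST API layer `/query` call built with the Python standard library.

```python
from urllib.parse import urlencode

# Hypothetical feature service layer endpoint -- substitute your own portal item's URL.
SERVICE = "https://services.example.com/arcgis/rest/services/Parcels/FeatureServer/0"

def query_url(where: str = "1=1", out_fields: str = "*", fmt: str = "geojson") -> str:
    """Build a /query request URL, roughly equivalent to a simple SQL SELECT on the layer."""
    params = {"where": where, "outFields": out_fields, "f": fmt}
    return f"{SERVICE}/query?{urlencode(params)}"

# A WHERE clause and field list that a former SQL view might have expressed:
url = query_url(where="STATUS = 'ACTIVE'", out_fields="PARCEL_ID,OWNER")
```

Fetching that URL (with `urllib.request` or `requests`) returns the features as GeoJSON, which a nightly ETL job can then write into the publication database.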

I am going to test this out over the next few months and see how things go. I will try to report back here if I remember to let folks know what we came up with. I am sure we aren't the only ones in this predicament. 

KellyAlfaro_Haugen

@MichaelSnook & @VanessaSimps - We use the same replication method for our data. We have an edit database and read-only databases for customers. Currently we use a combination of Python and ETL/FME to output the data into some layers within those databases -- for example, data built from other data, like Cities dissolved from Annexations, to reduce data editing and errors. Additionally, we have some data that we two-way replicate for off-site editing that is not conducive to map services.

Most of the data we push is via replication. I would love to hear how it is going for you, as we have over 500 data layers that we move, build, and host on a nightly basis. Replication is simple and quick; building data via Python and/or ETL/FME tends to take longer, and with as much data as we have, it could really slow down the works. We are getting ready to upgrade from 10.8.1 to 10.9.1, and one of the considerations is that some of our data will have to move to branch versioning (Parcel Fabric, Road Network). I know Esri is pushing hard for this new method, but they do not seem to appreciate how much we rely on traditional versioning and replication to make our data and databases work smarter, not harder.

I would love to hear how it's going for you. I am hoping I do not have to entirely redesign our database structures to support this move on top of all the other necessary work.