We have been running a test version of a parcel fabric dataset for our taxlot editing here in Oregon on an enterprise geodatabase (Microsoft SQL Server 2022). I want to update the data with the most current work the cartographers have completed on the real data, and I am looking for the best workflow to accomplish this. The update data has been moved from the working SQL Server geodatabase to a file geodatabase, manipulated by the tools created by the ORMAP folks, and is ready to bring into the enterprise environment. At this point I could overwrite the existing data, or I thought I could truncate and append the layers. Unfortunately, I keep running into errors trying to do this. I also tried Delete Rows instead of Truncate; that works, but it took over two days just to process one layer. Not great. Does anyone have suggestions for a better workflow?
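For reference, here is roughly what I was attempting; the paths and layer names below are placeholders, not our actual data:

```python
import arcpy

# Placeholder paths - substitute your own file geodatabase and
# enterprise geodatabase (.sde) connection.
source_gdb = r"C:\data\ORMAP_updates.gdb"
target_sde = r"C:\connections\parcels@egdb.sde"

# Layers to refresh; names are illustrative only.
layers = ["TaxlotPolygons", "TaxlotLines", "TaxlotPoints"]

for name in layers:
    target = f"{target_sde}\\{name}"
    # This is the step that errors out for me (see replies below
    # for why Truncate fails against branch versioned data).
    arcpy.management.TruncateTable(target)
    arcpy.management.Append(f"{source_gdb}\\{name}", target, "NO_TEST")
```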
An XML workspace document is useful when you want to create an identical schema in multiple environments (for example, for digital submission workflows) before the data is appended to a named version.
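As a sketch, a schema-only transfer with geoprocessing might look like this (paths are placeholders):

```python
import arcpy

# Placeholder paths - substitute your own geodatabases.
source_gdb = r"C:\data\ORMAP_updates.gdb"
target_sde = r"C:\connections\parcels@egdb.sde"
xml_doc = r"C:\data\parcel_schema.xml"

# Export only the schema (no rows) from the source geodatabase.
arcpy.management.ExportXMLWorkspaceDocument(source_gdb, xml_doc, "SCHEMA_ONLY")

# Recreate that schema in the enterprise geodatabase.
arcpy.management.ImportXMLWorkspaceDocument(target_sde, xml_doc, "SCHEMA_ONLY")
```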
In your case, you are correct - deleting the dataset and copying the updated data from the file geodatabase is the most straightforward approach.
The reason deleting rows is slow is the messaging that takes place between related tables (relationship classes), the need to update the dirty areas and error tables every time a feature is deleted, and potentially the firing of attribute rules as well.
Is the data in the SQL database already branch versioned and published to Enterprise for the parcel fabric?
Yes it is. I have turned off the service in Server Manager, and I did try turning off versioning on the SQL database; it still did not work. However, the Delete Rows script does work. Not sure why it does and Truncate does not.
For data in a SQL Server enterprise geodatabase that is branch versioned, you cannot use Truncate against the versioned data: https://pro.arcgis.com/en/pro-app/latest/tool-reference/data-management/truncatetable.htm
If you need to truncate, you must UNVERSION the data first in Pro. That is why Delete Rows works against the service.
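A minimal sketch of that sequence, assuming the service is already stopped, the data can be unregistered, and using placeholder paths:

```python
import arcpy

# Placeholder .sde connection and feature class; substitute your own.
target_fc = r"C:\connections\parcels@egdb.sde\Taxlots\TaxlotPolygons"

# Unregister as versioned first; Truncate is blocked on versioned data.
arcpy.management.UnregisterAsVersioned(target_fc)

# Now the truncate succeeds.
arcpy.management.TruncateTable(target_fc)

# ... load the updated rows here, then re-register as versioned.
# (For branch versioning, the connection file's versioning type
# must be set to Branch before registering.)
arcpy.management.RegisterAsVersioned(target_fc)
```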
You may also want to look at the Append tool: https://pro.arcgis.com/en/pro-app/latest/tool-reference/data-management/append.htm
I know that they are making improvements with each release.
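For example, loading from the file geodatabase with Append might look like this (schemas assumed to match; paths are placeholders):

```python
import arcpy

# Placeholder paths - substitute your own source and target.
source_fc = r"C:\data\ORMAP_updates.gdb\TaxlotPolygons"
target_fc = r"C:\connections\parcels@egdb.sde\Taxlots\TaxlotPolygons"

# "TEST" enforces matching schemas; use "NO_TEST" with a field
# mapping if the schemas differ.
arcpy.management.Append(source_fc, target_fc, "TEST")
```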
Hi @BobNoyes , I see that this is a test. We are about to venture into migrating our custom (Oracle) classic multi-editor versioned cadastral data (points, lines, polygons, history, relationship tables, etc.) to the latest Pro/Enterprise branch versioned parcel fabric. We understand the importance of using a file geodatabase as long as we can, since the parcel fabric tools are all available there and that format is the most flexible and the easiest in which to update the schema during testing (before publishing to branch versioned Enterprise). Can you share the reasons why you are not conducting edits directly in your "working sql server geodatabase" and are instead doing your edits in the file geodatabase? Again, I realize you are going through a test period, so any insight you can share would benefit other jurisdictions in similar situations.
The cartographers are not editing the file geodatabase data. It is part of the workflow for getting the data from the existing 10.9.1 SQL Server and putting it onto the new enterprise geodatabase as a parcel fabric. So the steps are: existing data --> file geodatabase --> run scripts that convert the data to a parcel fabric in a second file geodatabase --> copy the data (I was hoping I could truncate and append the test data) to the enterprise geodatabase (on a Microsoft SQL Server 2020) --> write or overwrite the web layer --> cartographers do their magic. The Python scripts were built as part of the ORMAP standards. My understanding is that a parcel fabric in a file geodatabase limits you to a single user, while SQL Server allows for multiple users and branch versioning.
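For the copy step, this is roughly what I have in mind (paths are placeholders for our actual data):

```python
import arcpy

# Placeholder paths - the fabric lives in a feature dataset.
source_dataset = r"C:\data\ORMAP_fabric.gdb\Taxlots"
target_dataset = r"C:\connections\parcels@egdb.sde\Taxlots"

# Copy the feature dataset (the parcel fabric and its classes)
# into the enterprise geodatabase in one operation.
arcpy.management.Copy(source_dataset, target_dataset)
```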
Thanks for the additional details!
You are correct about the Parcel Fabric deployment options for editing: https://pro.arcgis.com/en/pro-app/latest/help/data/parcel-editing/deploylicenseparcelfabric.htm
In general, we recommend performing large data manipulations in a file geodatabase. Why? Performing such manipulations in a named version increases the feature table size and might not perform as fast.
The reason you cannot use the Truncate tool is that the feature classes participate in a topology. If you want it to run faster, start by running the Disable Parcel Topology geoprocessing tool, and once you are done with the data manipulation, run Enable Parcel Topology.
You will notice that every command runs faster when the topology is turned off, as there is no need to update dirty areas. The price you pay is having to validate the entire data extent again: once you enable the topology, a dirty area is created for the entire data extent.
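A minimal sketch of that pattern, with a placeholder path to the fabric:

```python
import arcpy

# Placeholder path to the parcel fabric; substitute your own.
fabric = r"C:\data\ORMAP_fabric.gdb\Taxlots\ParcelFabric"

# Turn the parcel topology off before bulk loads or deletes.
arcpy.parcel.DisableParcelTopology(fabric)

# ... perform the large data manipulation here ...

# Re-enable when done; this marks the full extent as dirty,
# so plan to validate the topology afterwards.
arcpy.parcel.EnableParcelTopology(fabric)
```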
Personally, I would (sketched in code below):
1. Stop the service.
2. Delete the existing parcel fabric dataset in the enterprise geodatabase.
3. Copy the updated fabric over from the file geodatabase.
4. Register the data as branch versioned again.
5. Overwrite the web layer.
Such an operation should be carried out in the test environment first.
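Here is a minimal sketch of those steps with geoprocessing tools; names and paths are placeholders, and stopping the service and overwriting the web layer happen outside this script:

```python
import arcpy

# Placeholder paths - substitute your own.
file_gdb_dataset = r"C:\data\ORMAP_fabric.gdb\Taxlots"
egdb_dataset = r"C:\connections\parcels@egdb.sde\Taxlots"

# 1. (Service already stopped in Server Manager.)

# 2. Delete the existing parcel fabric dataset.
arcpy.management.Delete(egdb_dataset)

# 3. Copy the updated fabric from the file geodatabase.
arcpy.management.Copy(file_gdb_dataset, egdb_dataset)

# 4. Register as versioned again (the connection must be set to
#    branch versioning for a branch versioned fabric).
arcpy.management.RegisterAsVersioned(egdb_dataset)

# 5. Overwrite the web layer from ArcGIS Pro / Portal.
```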
I hope this helps.
I have set the data up in a file geodatabase; however, I was trying to find a shortcut for getting the updated data to the SQL Server. And yes, it does look like I need to go old school: delete the existing data, add the new data, reset the versioning, and then overwrite the web service. So, essentially, your outline matches my conclusion as well.