I'm assuming this is unsupported, but I wanted to make sure there's no workaround and that I'm not doing something wrong.
I am creating a table in my enterprise geodatabase, enabling global ids and archiving, and sharing as a referenced feature service with sync enabled to my Portal. I do this all with ArcGIS Pro / Python.
When I share this feature layer with my distributed collaboration configured to send as copies, a hosted feature layer is created in my ArcGIS Online instance reflecting the data in my enterprise geodatabase. That works as expected.
However, this table is periodically updated outside ArcGIS - that is, with native SQL. The process truncates the table and repopulates it with new data, generating new global IDs on load.
This is where the distributed collaboration breaks.
My logs say "failure in processing exports for Replica", "Failed to export data changes message for replica with Guid", "Failed to export data changes to replica", and "Invalid column value [globalid]".
So I'm assuming something is happening with the globalid. The values look to be the standard format {8}-{4}-{4}-{4}-{12}, where each number is the count of hex characters (e.g. 52B2EBC3-DBA2-46C1-93F1-0D6DD52A2F13).
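One thing that may be worth checking (this is an assumption on my part, not something the logs confirm): depending on the underlying DBMS, an enterprise geodatabase may store globalid values as 38-character strings with braces and uppercase hex, so a SQL load that writes bare or lowercase GUIDs can trip exactly this kind of "invalid column value" error. Below is a minimal Python sketch of producing values in that braced, uppercase form - `make_globalid` is my own helper name, and whether your column actually expects braces is something to verify against your own geodatabase schema:

```python
import uuid

def make_globalid() -> str:
    """Return a new GUID in the braced, uppercase {8-4-4-4-12}
    string form, e.g. {52B2EBC3-DBA2-46C1-93F1-0D6DD52A2F13}.
    uuid4() gives lowercase hex without braces, so both are adjusted.
    """
    return "{%s}" % str(uuid.uuid4()).upper()

gid = make_globalid()
print(gid)  # e.g. {3F2504E0-4F89-41D3-9A0C-0305E82C3301}
```

If your DBA's load script is generating GUIDs natively (e.g. with the database's own GUID function), comparing its output character-for-character against a globalid that ArcGIS itself wrote to the table would be a quick way to spot a format mismatch.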
So two questions:
1. Is it unsupported to maintain a distributed collaboration when the source table is maintained outside of ArcGIS?
2. If it is supported, is there a different process our DBA should follow so that the synchronization processes successfully?
We've got a somewhat similar process here, but we do not use a distributed collaboration to update the hosted copy, since we've run into various issues (though not your specific problem) in the past. Still, I don't see why a collaboration couldn't take the updates, regardless of where they happen.
Our update process is:
Not sure how feasible it would be for you to pursue a similar method, but it cuts out the middleman of setting up a collaboration.