We have been trying to get replication in place for a few years. For simple-model replication with limited datasets, replicating to a file geodatabase (FGDB) works very well. Replicating larger datasets with the full model and geometric networks has been tougher to manage.
The hurdle we are facing now is schema changes. We are a four-service utility with a huge database, and we change our schema fairly regularly. To avoid recreating the replicas every time the schema changes (which is very time-consuming), I have been trying the schema update tools: Export Replica Schema, Compare Replica Schema, and Import Replica Schema. For some changes this worked great: I dropped two fields and re-added them with the same names but a different data type, and the tools picked up the change and synced the schema. In another case, where I renamed one field and dropped three others, the renamed field came across in the replica as a brand-new field with the new name, the old field was still in the table, and the three deleted fields were never removed from the replica database.
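For reference, this is the tool sequence I've been running, scripted with arcpy. The paths, function name, and replica name are placeholders, and the arcpy import is deferred so the sketch parses even on a machine without ArcGIS installed:

```python
def push_schema_changes(parent_sde, child_gdb, replica, work_dir):
    """Sketch of the replica schema-update workflow (requires an ArcGIS
    install to actually run). All paths and the replica name are
    placeholders, not values from my environment."""
    import arcpy  # deferred: only needed when the tools actually run

    schema_file = work_dir + "/parent_schema.xml"
    changes_file = work_dir + "/schema_changes.xml"

    # 1. Export the schema of the parent replica to an XML file
    arcpy.management.ExportReplicaSchema(parent_sde, schema_file, replica)
    # 2. Compare that schema against the child and write the differences
    arcpy.management.CompareReplicaSchema(child_gdb, schema_file, changes_file)
    # 3. Apply the schema differences to the child geodatabase
    arcpy.management.ImportReplicaSchema(child_gdb, changes_file)
```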
My question is, can I simply manually sync the field schema between the databases? If I drop a field in the parent, then manually delete the field in the child, will the data still synchronize? If I rename a field in the parent, then manually rename it to match in the child, is that going to hose anything up?
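To make the manual-sync idea concrete, here is a rough sketch (plain Python; the field names are hypothetical) of how I would diff the parent and child field lists before touching the child. In practice both lists would come from `arcpy.ListFields()` on each workspace:

```python
def field_sync_plan(parent_fields, child_fields):
    """Return which fields to add to / delete from the child so its
    field list matches the parent's. Names are compared exactly."""
    parent, child = set(parent_fields), set(child_fields)
    return {
        "add_to_child": sorted(parent - child),      # new or renamed in parent
        "delete_from_child": sorted(child - parent), # dropped or renamed away
    }

# Hypothetical example: parent renamed OLDNAME -> NEWNAME and dropped FIELD_A
parent = ["OBJECTID", "NEWNAME", "FIELD_B"]
child = ["OBJECTID", "OLDNAME", "FIELD_A", "FIELD_B"]
plan = field_sync_plan(parent, child)
```

Note that a rename shows up as one add plus one delete; in the child I would apply it as an actual rename rather than a drop-and-re-add, since dropping the field would lose its data.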
I don't expect either the schema tools or the manual method to handle changes to a feature class itself (renaming one or adding a new one); I know the replica has to be recreated in those cases. I am only interested in field-level updates: renaming or deleting a field. I think adding a new field would also require the schema tools or recreating the replica.