The potential 1000-record issue mentioned in the bug report Asrujit pointed out is not about the total number of records a feature class holds, but about how many records need to be synchronized between the two replicas. For example, if you deleted over 1000 records in the parent, and those deletes now also need to be applied to the child (relative) replica, then you may run into this issue.
So the question is: did you delete over 1000 records from the layer that is having trouble synchronizing?
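If you're not sure, one rough way to check is to count the rows in that feature class's deletes delta table. This is only a sketch, assuming a versioned enterprise geodatabase; the connection path and registration ID below are placeholders, and the delta table name may need schema qualification depending on your DBMS:

```
import arcpy

# Placeholders -- substitute your own connection file and registration ID.
# Look up the registration ID for your feature class in sde.table_registry.
sde_conn_path   = r"C:\connections\parent.sde"   # assumed connection file
registration_id = 105                            # assumed; varies per feature class

sde_conn = arcpy.ArcSDESQLExecute(sde_conn_path)

# Versioned tables keep edits in adds (A<id>) and deletes (D<id>) delta tables;
# the row count of the deletes table is a rough proxy for how many deletes the
# next synchronize would have to move.
pending = sde_conn.execute("SELECT COUNT(*) FROM D{0}".format(registration_id))
print("Rows in deletes delta table: {0}".format(pending))
```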
Yes, it happens that more than 1000 records were deleted. I got the error shown in the screenshot below:
...
Is this issue solved in 10.2.1?
Jamal,
This bug has not yet been resolved at 10.2.1, as the link suggests.
If you are deleting and replacing the replica geodatabase entirely as a workaround, why not consider a different approach altogether? Depending on your workflow and requirements, you may not need replication at all. Would it be feasible to simply export the data you need at a set interval? Instead of performing a synchronization only to discover it fails when there are more than 1,000 deletes, you might save time by using the Export or Copy geoprocessing tools to produce the output file geodatabase, deleting the previous output before each re-export.

One of the reasons for using one-way replication is to avoid regenerating the entire geodatabase whenever changes are made; it lets you export only the deltas and apply them to the replica. If you find that you need to regenerate the entire replica each time anyway, using replication seems self-defeating.

If your requirements do demand replication, you could synchronize twice instead of once so that you never hit the 1,000-record deletion limit. For example, sync once after 500 deletes and again after the next 500.
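If you go the export route, a minimal sketch of that workflow in arcpy might look like this (all paths and feature class names below are placeholders, not anything from your setup):

```
import arcpy
import os

# Placeholders -- substitute your own connection file, folder, and layers.
sde_conn   = r"C:\connections\parent.sde"   # enterprise geodatabase connection
out_folder = r"C:\data"
out_name   = "extract.gdb"
out_gdb    = os.path.join(out_folder, out_name)

# Delete the previous output so each run starts from a clean file geodatabase.
if arcpy.Exists(out_gdb):
    arcpy.Delete_management(out_gdb)
arcpy.CreateFileGDB_management(out_folder, out_name)

# Export each feature class; Copy_management works similarly for whole datasets.
for fc in ["Parcels", "Roads"]:             # placeholder feature class names
    arcpy.FeatureClassToFeatureClass_conversion(
        os.path.join(sde_conn, fc), out_gdb, fc)
```

Scheduled through something like Windows Task Scheduler, that gives you a fresh file geodatabase at whatever interval you need, with no sync step to fail.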
• The only reason I replicate the enterprise geodatabase to a file geodatabase is performance. I gain considerable speed when my web mapping application reads from the file geodatabase instead of from the mdf.
• It is much more efficient to replicate the enterprise geodatabase to a file geodatabase than to copy it. When there are only light changes on the enterprise side, they can easily be reflected in the file geodatabase by synchronizing (see the sketch after this list), without the need to copy the ENTIRE geodatabase to the file geodatabase.
• The 1000-record deletion limitation is an issue that I expect ESRI to solve.
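For reference, that synchronize step can be scripted; a minimal sketch, assuming a one-way parent-to-child replica (the connection paths and replica name are placeholders):

```
import arcpy

# Placeholders -- substitute your own connection file, replica name, and FGDB.
parent_sde = r"C:\connections\parent.sde"
child_gdb  = r"C:\data\replica.gdb"
replica    = "MyReplica"                    # assumed replica name

# Push edits from the enterprise parent to the file geodatabase child.
arcpy.SynchronizeChanges_management(
    parent_sde, replica, child_gdb,
    "FROM_GEODATABASE1_TO_2",
    "IN_FAVOR_OF_GDB1",
    "BY_OBJECT",
    "DO_NOT_RECONCILE")
```

This is the step that fails once the pending deletes exceed 1000.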
Regarding your first bullet: exporting or copying the data as I suggested still results in a file geodatabase. You don't need replication to use or produce an FGDB, so you're not gaining anything by using replication for that reason alone.
Regarding your second bullet, I agree that it is more efficient to export and apply changes to a replica than to regenerate the entire file geodatabase; I said as much in my last post. That said, you have already stated that as a workaround you need to re-create the entire replica anyway, so in your case this advantage disappears.

Furthermore, in my experience replication comes with plenty of buggy behavior and a number of hard requirements that you would not have with a plain Export or Copy to a file geodatabase. If you are replicating any geometric networks, you will see huge wait times while the network is completely rebuilt on the child replica, especially when re-creating the entire replica as you are doing; you would not see this with a copy or export. If that is not an issue for you, or you are not working with geometric networks, it is probably not a big deal, as long as you are willing to muddle through the rest of the challenges with Esri replication. Hopefully you've had better luck than me.
Regarding your third bullet, I wouldn't hold your breath waiting for a fix, especially since it was reported against a recent release of 10.2. Even with a documented NIM, it could be months before it's fixed.