I'm running a Python script that uses an arcpy.da.UpdateCursor on a versioned feature class with 16,814 rows stored in a SQL Server SDE geodatabase. I use a for loop to go through each row, incrementing a count variable on each pass. Without calling the updateRow() method, things are as expected: my count equals the number of rows I have. However, when I do call updateRow(), the count comes out much higher and is inconsistent from one execution to the next, anywhere from 20K to 30K. I suspect this has something to do with the time it takes to manipulate so many records, similar to this question: arcpy - Scaling DA UpdateCursor to large datasets? - Geographic Information Systems Stack Exchange
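To make the symptom concrete, here's a pure-Python sketch of the diagnostic I have in mind (no arcpy; the helper name count_rows and the fake cursor data are mine): count loop iterations separately from distinct OBJECTIDs. If updateRow() is somehow causing the cursor to re-fetch rows, iterations will exceed the number of unique IDs.

```python
# Sketch: count iterations vs. distinct OBJECTIDs seen by the loop.
# With a real arcpy.da.UpdateCursor you would call cursor.updateRow(row)
# inside the loop; here the cursor is simulated with a plain list.
def count_rows(rows):
    """rows: iterable of (objectid, value) pairs, as a cursor would yield."""
    iterations = 0
    seen_oids = set()
    for oid, _value in rows:
        iterations += 1
        seen_oids.add(oid)
    return iterations, len(seen_oids)

# Simulated cursor that re-yields some rows, mimicking the inflated count:
fake_cursor = [(1, "a"), (2, "b"), (2, "b"), (3, "c"), (3, "c")]
iterations, distinct = count_rows(fake_cursor)
# iterations == 5 but distinct == 3: the loop saw some rows twice
```

Running something like this against the real cursor would at least tell me whether the extra iterations are duplicate rows or genuinely new ones.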
When I run the same script on a smaller subset of the data (~1,100 records), the count is consistent and correct. The same is true when I run the script on the full dataset exported to a local file geodatabase (instead of in the SDE).
Has anyone else run into this issue? Can anyone explain with more certainty what is happening? I'm not sure of the ideal workaround yet: either use SQL in a while loop to break up the records returned by the cursor, as suggested in the linked question, or run the script on local data and then copy the result back up to the SDE. If anyone has other ideas, I'm all ears as well.
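For the first workaround, what I'm picturing is something like the sketch below (the helper name oid_batches and the batch size are mine): split the table into OBJECTID ranges and open one cursor per range, so each cursor only ever touches a bounded number of rows.

```python
# Sketch of the batching workaround: generate SQL where clauses that
# cover the table's OBJECTID range in fixed-size chunks.
def oid_batches(min_oid, max_oid, batch_size):
    """Yield where clauses covering [min_oid, max_oid] in chunks."""
    start = min_oid
    while start <= max_oid:
        end = min(start + batch_size - 1, max_oid)
        yield "OBJECTID >= {} AND OBJECTID <= {}".format(start, end)
        start = end + 1

# Each clause would then be passed as the where_clause argument of
# arcpy.da.UpdateCursor, roughly:
#
# for clause in oid_batches(1, 16814, 1000):
#     with arcpy.da.UpdateCursor(fc, fields, where_clause=clause) as cur:
#         for row in cur:
#             ...
#             cur.updateRow(row)
```

I don't know yet whether bounding each cursor this way actually avoids the inflated counts on the versioned SDE data, which is partly why I'm asking.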