I maintain in SQL Server an ArcSDE geodatabase named "BC" that synchronizes twice weekly, via the attached Python script, to a remote geodatabase named "BCWA". The script has been running successfully for several years, which I know because it writes the pre- and post-compress state and lineage counts, and the time the compress took to complete, to a CSV file that I monitor in Excel.
I can see in Excel that there's no trouble on "BCWA", but the last time I could say the same for "BC" was February 26, when Excel shows it went from a state count of 67 before compress down to 15 afterward, and likewise from a lineage count of 163 to 51, and the compress task took just over 15 minutes.
Since that date, Excel shows every run of the Python script with pre- and post-compress state counts of 0, a blank lineage count, and a blank compress time. This made me think the compress task might be getting skipped, but when I ran compress manually to troubleshoot, it completed within 60 seconds and reported no error. The row counts in SDE_states and SDE_state_lineages, however, did not change.
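In case it helps to see the symptom concretely, here's a minimal sketch of the kind of check I do by eye in Excel, using hypothetical column names and dates (my actual CSV layout may differ) to flag runs where compress apparently did no work:

```python
import csv
import io

def flag_failed_runs(csv_text):
    """Return the run dates where compress apparently did no work:
    a post-compress state count of 0 or a blank compress time."""
    flagged = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        if row["post_state_count"] in ("", "0") or row["compress_time"] == "":
            flagged.append(row["run_date"])
    return flagged

# Hypothetical log excerpt mirroring what I described above.
log = """run_date,pre_state_count,post_state_count,lineage_count,compress_time
Feb-26,67,15,51,0:15:12
Mar-02,0,0,,
"""

print(flag_failed_runs(log))  # → ['Mar-02']
```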
Today in SSMS I viewed the SDE_states and SDE_state_lineages tables in both "BC" and "BCWA", and noted that the record count of SDE_states in "BCWA" matches the end_state_count shown in SDE_compress_log, just as I expected. On "BC", however, while I could view SDE_states and SDE_state_lineages, I found there is no SDE_compress_log to compare them to! Ack!
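If it's useful, the check I did in SSMS could be scripted. A minimal sketch of the idea (the table list here is hypothetical; in practice it would come from something like SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES over a database connection):

```python
# The SDE system tables I expect to find in each geodatabase.
EXPECTED = {"SDE_states", "SDE_state_lineages", "SDE_compress_log"}

def missing_sde_tables(table_names):
    """Return the expected SDE system tables absent from the given list."""
    return sorted(EXPECTED - set(table_names))

# Mirrors what I saw today in "BC": states and lineages present, log missing.
print(missing_sde_tables(["SDE_states", "SDE_state_lineages"]))
# → ['SDE_compress_log']
```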
Both "BC" and "BCWA" are 10.3.1 geodatabases, but Database Properties for "BCWA" says "database internals such as stored procedures can be upgraded", so the Upgrade Geodatabase button is enabled. Could it be that, at some point when I clicked that button for "BC", one of the database internals that was upgraded caused the removal of the SDE_compress_log table? I don't believe I pressed the button for "BC" after Feb 26, though, so this idea may be irrelevant.
Will someone please suggest what might be causing this problem and how I might fix it? I really need to get compress working again on the "BC" geodatabase.
Thanks,
Justin