I am working on a project to automatically display topology errors from a gdb via a Python script. Essentially, I need to validate topology on several datasets, then use the Export Topology Errors tool once the topology has been validated. I am running into an issue with one dataset where the Validate Topology process runs for hours (over 24 hrs) without succeeding or throwing an error that would indicate what the problem is. All other datasets - even those larger in disk size - validate successfully. For the dataset in question, I have iteratively removed data from each participating feature class and tested, and have determined that this seemingly infinite run time occurs only when data is present in one of the line feature classes.
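Since a hung run is the symptom, one practical guard in a batch script is to give each validate call a time limit and move on (or log the dataset) when it blows past it. This is only a sketch: `validate_with_timeout` and its arguments are hypothetical names, and the real call would be `arcpy.management.ValidateTopology`, ideally launched in a separate process so a hung run can actually be killed; the thread used here can only be abandoned.

```python
import concurrent.futures
import time

def validate_with_timeout(validate_fn, dataset, timeout_s):
    """Run a (possibly hanging) validate call with a time limit.

    Returns "ok" if validate_fn(dataset) finished in time, "timeout"
    otherwise. With the real tool you would wrap
    arcpy.management.ValidateTopology in a subprocess instead, since a
    Python thread that is stuck inside a geoprocessing call cannot be
    forcibly stopped, only left behind.
    """
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(validate_fn, dataset)
    try:
        future.result(timeout=timeout_s)
        return "ok"
    except concurrent.futures.TimeoutError:
        return "timeout"
    finally:
        # Don't block waiting for a hung worker to finish.
        pool.shutdown(wait=False)
```

In a loop over datasets, a `"timeout"` result would flag the problem dataset for manual investigation instead of stalling the whole script for 24+ hours.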
So far, I have tried the following:
Strangely, if I let the validate process run for a while (> 1 hour) and then force it to cancel, I can “generate summary” from the topology file, and the summary does contain errors for the whole dataset, including the problematic feature class that seems to stall everything.
Does anyone have any ideas about what in my data might be causing the validate topology process to stall? Thank you in advance for your help!
What type of GDB is the data stored in? Repair Geometry only works for File Geodatabase features. That said, I wonder if you have some null geometries that are causing the problem in that feature class.
Hi Joe, thanks for responding - it's in a file gdb, so it should work in theory. When I ran Repair Geometry on the problematic feature class I chose to delete null geometries, but no messages were produced in the results, so it seems there weren't any features with null geometry in that feature class.
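As a cross-check on Repair Geometry's silence, you could also scan the feature class yourself and list any rows with missing geometry. A minimal sketch, assuming the real scan would iterate an `arcpy.da.SearchCursor` over `["OID@", "SHAPE@"]` (where a null geometry comes back as `None`); here `rows` is a stand-in for that cursor so the logic can be shown without arcpy:

```python
def find_null_geometry_oids(rows):
    """Return the OIDs of rows whose geometry is missing.

    `rows` stands in for an arcpy.da.SearchCursor opened with
    ["OID@", "SHAPE@"]; with arcpy, a null shape is returned as None.
    """
    return [oid for oid, shape in rows if shape is None]
```

If this returns an empty list on the real cursor too, null geometries can be ruled out with more confidence than a message-less tool run.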
Well shoot: sounds like you've got your bases well covered.
You could create another line feature class that has the same schema as the problem child and use the new one in your topology. The idea is to append features from the problem child to the new one a dozen or so at a time and see if/when you hit your wall.
By appending a few at a time, you are basically sifting through the features. Depending on how many features the problem child has, this can/will be an arduous task. Nothing sexy about data clean up, but obviously it's essential.
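The sifting idea above can be scripted so it isn't entirely manual. This is a hedged sketch: `find_bad_batch` and `validates_ok` are hypothetical names, and `validates_ok(loaded_ids)` stands in for the real steps - `arcpy.management.Append` the batch into the clean copy, run Validate Topology (ideally with a timeout), and report whether it completed.

```python
def find_bad_batch(feature_ids, validates_ok, batch_size=12):
    """Load features into a clean copy a batch at a time and return the
    first batch after which validation no longer completes.

    feature_ids  - OIDs of the problem feature class, in any order
    validates_ok - callable taking the list of OIDs loaded so far and
                   returning True if Validate Topology still completes
                   (in practice: append the batch, validate, check)
    Returns the offending batch of OIDs, or None if all batches pass.
    """
    loaded = []
    for start in range(0, len(feature_ids), batch_size):
        batch = feature_ids[start:start + batch_size]
        loaded.extend(batch)
        if not validates_ok(loaded):
            return batch  # the wall was hit after adding this batch
    return None
```

Once a bad batch is isolated, the same routine can be re-run on just that batch with `batch_size=1` to pin down the individual feature.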