I have five types of issues with address ranges, due to irregularities in how the local communities assigned civic addresses. These include odd and even addresses on the same side of the street, the "high" end of the range actually being lower than the "low" end (because of the digitizing direction relative to the traffic flow direction), or all of the addresses on both sides being odd or all even, as in a cul-de-sac. For some of these cases, I need to find the connected segments of road that share the same street name, zero their ranges, and put the entire range onto a single segment to solve the issue.
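To illustrate the cul-de-sac case, here is a minimal sketch of what I mean by zeroing the ranges and putting the whole range on one segment. The field names (`low`, `high`) and the choice of carrying segment are invented for illustration:

```python
def collapse_ranges(segments):
    """segments: list of dicts with 'low' and 'high' address values for
    connected road segments sharing one street name. Returns a new list
    where the first segment carries the combined range and the rest are
    zeroed out."""
    lows = [s["low"] for s in segments if s["low"]]
    highs = [s["high"] for s in segments if s["high"]]
    merged = []
    for i, seg in enumerate(segments):
        seg = dict(seg)  # don't mutate the caller's data
        if i == 0:
            # First segment gets the full span of addresses.
            seg["low"], seg["high"] = min(lows), max(highs)
        else:
            # Remaining segments get zeroed ranges.
            seg["low"], seg["high"] = 0, 0
        merged.append(seg)
    return merged
```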
I need to export my street data for use by others, and their software doesn't tolerate any of these oddities, so we have to massage the data to make it work by adjusting the ranges so that they comply with the rules: "odds on one side, evens on the other" and "low to high" in the digitizing direction.
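The per-segment rules could be enforced along these lines; this is only a sketch, with invented field names (`l_low`, `l_high`, `r_low`, `r_high`) and an assumed "odds on the left" convention:

```python
def normalize_segment(seg):
    """Return a copy of a segment with its left/right address ranges
    fixed so each side runs low to high in the digitizing direction
    and odds/evens sit on opposite sides."""
    seg = dict(seg)
    # Rule 1: each side's range runs low to high; swap if the segment
    # was digitized "backwards" relative to the addresses.
    for side in ("l", "r"):
        lo, hi = seg[f"{side}_low"], seg[f"{side}_high"]
        if lo > hi:
            seg[f"{side}_low"], seg[f"{side}_high"] = hi, lo
    # Rule 2: odds on one side, evens on the other. Here odds go on
    # the left by convention; swap sides if the parities are reversed.
    if seg["l_low"] % 2 == 0 and seg["r_low"] % 2 == 1:
        for f in ("low", "high"):
            seg[f"l_{f}"], seg[f"r_{f}"] = seg[f"r_{f}"], seg[f"l_{f}"]
    return seg
```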
We have a tool written by others in .NET/ArcObjects that can process the entire dataset (>60,000 roads) in about a minute or two, even though only about 110 roads have issues. The existing tool has deficiencies that we want to correct, but we don't have access to the original source code. The fastest I've been able to manage is about 30 minutes looping through the data in Python using selection sets and cursors.
I'm trying to figure out how the author of the other tool did it so much faster. Any ideas? I'm wondering if I would have better luck loading all of the attribute data into Python dictionaries, processing it in memory, and writing all the changes back at the end, rather than looping over the data directly with a cursor.
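The pattern I have in mind looks something like this (I can't say this is what the original tool does). With arcpy the read pass would be an `arcpy.da.SearchCursor` and the write pass an `arcpy.da.UpdateCursor`; here a plain list of `(oid, low, high)` tuples stands in for the feature class so the sketch is runnable anywhere, and the "fix" is just the low/high swap as a placeholder:

```python
def load(rows):
    """Read pass: pull every row into memory once, keyed by OID.
    (With arcpy: one arcpy.da.SearchCursor over the feature class.)"""
    return {oid: [low, high] for oid, low, high in rows}

def fix(attrs):
    """Process entirely in memory; return the set of changed OIDs."""
    changed = set()
    for oid, (low, high) in attrs.items():
        if low > high:  # segment digitized against the address range
            attrs[oid] = [high, low]
            changed.add(oid)
    return changed

def write_back(rows, attrs, changed):
    """Write pass: touch only the rows that actually changed.
    (With arcpy: one arcpy.da.UpdateCursor, calling updateRow only
    for OIDs in 'changed'.)"""
    return [(oid, *attrs[oid]) if oid in changed else (oid, low, high)
            for oid, low, high in rows]
```

The point of the design is that the expensive per-row database round-trips happen exactly twice (one read, one write), and only ~110 of the >60,000 rows would need an actual update.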
I don't want to influence replies by posting my code; I want to abandon my current approach and find an alternative that is significantly faster. So can anyone think of a way to do it in only 2 minutes?