I have shapefiles with segments (representing roads, but they're just regular polylines, not a network dataset) and polygons (~14,000 polygons generated with service areas). I'm running Intersect to get the segments (and their lengths) within each polygon.

On the first run I got errors and the tool did not complete. I reduced the segments dataset (from lines covering the whole county to just those within the city) and ran Repair Geometry, which did find and fix some problems. When I ran Intersect again, the analysis took 12+ hours and completed. The output table looked normal, but the resulting feature set was suspicious: it contained exactly 64,000 segments, only a few tiny ones drew on the map, and the rest did not appear at all. Running Repair Geometry on the output had no effect.

I then ran the analysis again from Python. Again it took 12+ hours; this time there were exactly 70,000 segments in the results, and none of them drew on the map. So I don't have confidence that these are complete results.
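One check I can run without ArcMap is reading the raw record count straight out of the output shapefile's .dbf sidecar, to see whether the suspicious 64,000/70,000 figures are what's actually on disk or just what the table view reports. Per the dBASE format, bytes 4–7 of the .dbf header hold the record count as a little-endian unsigned 32-bit integer. This is a stdlib-only sketch (the path is a placeholder):

```python
import struct

def dbf_record_count(path):
    """Return the record count stored in a shapefile's .dbf header.

    Bytes 4-7 of a dBASE header are the number of records,
    as a little-endian unsigned 32-bit integer.
    """
    with open(path, "rb") as f:
        header = f.read(8)
    return struct.unpack("<I", header[4:8])[0]

# Example (hypothetical path to the Intersect output):
# print(dbf_record_count("intersect_output.dbf"))
```

If this count matches the attribute table but the features still don't draw, that would point at broken or empty geometries in the .shp rather than a truncated table.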
I ran a similar analysis with the same polygons shapefile and another set of segments from the same road system; that one had many, many more segments, and the analysis completed normally in about four hours on my university's machine.
It seems like the analysis may be maxing out computing resources, causing ArcMap to choke and produce bad results. But then why did it handle the previous, much larger analysis without trouble? Any advice?
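One workaround I'm considering, in case this really is a resource problem: run the Intersect in batches of polygons, so each run stays small and a failure only poisons one batch instead of the whole output. The sketch below only implements the batching helper (pure stdlib, so it's testable here); the per-batch arcpy calls are shown as comments and are my assumption about how to wire it up, not tested code.

```python
def oid_where_clauses(max_oid, batch_size, oid_field="FID"):
    """Yield SQL where-clauses covering OIDs 0..max_oid-1 in batches.

    Shapefile FIDs start at 0; adjust oid_field/start for geodatabase
    OBJECTIDs, which start at 1.
    """
    start = 0
    while start < max_oid:
        end = min(start + batch_size - 1, max_oid - 1)
        yield f'"{oid_field}" >= {start} AND "{oid_field}" <= {end}'
        start = end + 1

# Intended use inside ArcGIS (assumed, not runnable here):
# import arcpy
# for i, where in enumerate(oid_where_clauses(14000, 1000)):
#     lyr = arcpy.management.MakeFeatureLayer("polygons.shp", f"polys_{i}", where)
#     arcpy.analysis.Intersect([lyr, "segments.shp"], f"out_batch_{i}.shp")
# ...then merge the batch outputs and compare the total against one big run.
```

Batching ~14,000 polygons into chunks of ~1,000 would also show whether the bad output comes from one particular region of the data or from overall size.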