I agree 5 hours is far too long. I have a rule of thumb that states "If a single process takes longer than a cup of coffee, then interrupt it and find a better way". I will not be able to produce benchmarks with times longer than a cup of coffee; life is too short.
I now remember my recent 'better way' was a Workstation intersect, which took only a few minutes. I have to admit that an ArcGIS intersect failed first on the same data, and since I had a better way I was quick to take that rather than research the issue. I was already using a file geodatabase that was indexed and cleaned, and it was already on a local high-speed disk drive. It failed quickly.
The idea of looping through the features using a script seems likely to take a very long time because of the overhead of starting an intersect process for each candidate polygon/polyline combination. You would first have to search for which combinations to compare, which needs a spatial index to be efficient, and even then it would be slow. Python is supposed to be the 'glue' for fast optimised tools, not the tool itself, so alternative tools seem the best course.
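To make the spatial-index point concrete, here is a minimal pure-Python sketch of the candidate-pairing step such a script would need. It is not ArcGIS code; the function names, the toy coordinates and the grid cell size are all my own invention. It uses a uniform grid over bounding boxes so each polyline is only compared against polygons in nearby cells, instead of brute-forcing every polygon against every polyline:

```python
# A simple grid-based spatial index over bounding boxes. Without one,
# pairing n polygons with m polylines costs n*m comparisons; with it,
# only features whose boxes fall in shared grid cells become candidates.
from collections import defaultdict

def bbox(coords):
    """Axis-aligned bounding box of a list of (x, y) tuples."""
    xs = [x for x, y in coords]
    ys = [y for x, y in coords]
    return min(xs), min(ys), max(xs), max(ys)

def grid_cells(box, cell=10.0):
    """Yield the (gx, gy) grid cells a bounding box overlaps."""
    xmin, ymin, xmax, ymax = box
    for gx in range(int(xmin // cell), int(xmax // cell) + 1):
        for gy in range(int(ymin // cell), int(ymax // cell) + 1):
            yield gx, gy

def candidate_pairs(polygons, polylines, cell=10.0):
    """Return (polygon_id, polyline_id) pairs whose bounding boxes could
    overlap; only these would be handed to a real intersect test."""
    index = defaultdict(list)
    for pid, coords in polygons.items():
        for key in grid_cells(bbox(coords), cell):
            index[key].append(pid)
    pairs = set()
    for lid, coords in polylines.items():
        for key in grid_cells(bbox(coords), cell):
            for pid in index[key]:
                pairs.add((pid, lid))
    return pairs

# Two well-separated polygons and one polyline near each:
polygons = {"A": [(0, 0), (5, 0), (5, 5), (0, 5)],
            "B": [(50, 50), (60, 50), (60, 60), (50, 60)]}
polylines = {"road1": [(1, 1), (4, 4)],
             "road2": [(55, 55), (58, 59)]}
print(sorted(candidate_pairs(polygons, polylines)))
# Only 2 candidate pairs survive out of the 4 possible combinations.
```

Even with this filtering in place, you would still be launching an intersect per pair from an interpreted loop, which is exactly the overhead the dedicated overlay tools avoid.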
I have tried another intersect in ArcGIS 10.0 SP1 on an Acer laptop running Windows 7, with a 4-core Intel i5 processor and 4 GB of memory.
Polygons: 42,496; polylines: 221,896; tolerance: 1 metre; time: 5 min 43 sec. Nothing wrong with that.
So, to why your data is not performing:

- Clean the layers; maybe generalize a little for the purpose of the overlay.
- Are they both in a local projection?
- Are they in a local file geodatabase? Get out of SDE.
- Are they on a local disk? Output to a local file geodatabase.
- Strip off unnecessary fields.
- Is your scratch workspace a file geodatabase (not just a folder)?
- Do you have any unusually large features? Split them up.
- Multipart features are not good for geoprocessing; split them into singleparts.
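On that last point, a schematic of what the multipart-to-singlepart split does to the data (in ArcGIS the Multipart To Singlepart geoprocessing tool handles this for real feature classes). The data structure and names below are invented purely to illustrate why it helps: each part becomes its own record, so the overlay never has to chew through one enormous feature spanning the whole extent:

```python
# Illustrative only: flatten multipart features into one record per part.
def to_singlepart(features):
    """features: list of (feature_id, [part, part, ...]) where each part
    is a list of (x, y) tuples. Returns one (new_id, part) per part."""
    out = []
    for fid, parts in features:
        for i, part in enumerate(parts):
            out.append((f"{fid}_{i}", part))
    return out

# One multipart feature with two widely separated parts:
multipart = [("island_group", [[(0, 0), (1, 0), (1, 1)],
                               [(10, 10), (11, 10), (11, 11)]])]
singles = to_singlepart(multipart)
print(len(singles))  # the one multipart record becomes two singleparts
```

Each resulting singlepart has a small, tight bounding box, which is what lets the geoprocessing engine skip it cheaply when it is nowhere near the other layer's features.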