# Count overlapping polylines created by network analyst

02-25-2014 06:59 AM
New Contributor
Hi - I have created a service area using Network Analyst, with overlapping route polylines within 0.25 and 0.5 miles. I want to score those streets based on the number of times they overlap, and rank them for importance in a study network of infrastructure improvements. Is there a way to count the number of overlapping polylines and assign a count value to them?

A line density analysis is not preferable because I want to end up with a list of street names and a ranking. I have tried "Collect Events", but of course that yields separate points, which I can't use...

Pete
5 Replies
New Contributor

I did an intersect, but I don't see that that tool actually counts the number of lines that intersect. To answer your question - yes, I want to count all the lines that are exactly identical in exactly the same locations...

To explain a little further, the lines I have are exactly identical to each other (extracted from my network dataset multiple times, depending on how many routes they are a part of). They result from 86 frequently overlapping service areas calculated from 86 transit stops... so basically I want to know which streets are within the walksheds of transit stops with the highest frequency.

Thanks!
MVP Honored Contributor
I use a calculation to create a 70-character representation of my line from the From X/Y coordinates, the To X/Y coordinates, and the length.  It is virtually unheard of in my network for two lines that do not overlap to share these same exact values.  The Python calculation I use formats the numbers in a way that is compatible with state plane coordinates.  Other coordinate systems would require modification of the formatting settings.  Anyway, here is the calculation:

Parser:  Python

Use Codeblock:  Checked

Pre-Logic Script Codeblock:
```python
def Output(FirstPoint, LastPoint, Length):
    FPX = round(float(FirstPoint.X), 4)
    FPY = round(float(FirstPoint.Y), 4)
    TPX = round(float(LastPoint.X), 4)
    TPY = round(float(LastPoint.Y), 4)
    Lnth = round(float(Length), 4)
    return "{%(FX)012.4f}{%(FY)012.4f}{%(TX)012.4f}{%(TY)012.4f}{%(LN)012.4f}" % \
        {'FX': FPX, 'FY': FPY, 'TX': TPX, 'TY': TPY, 'LN': Lnth}
```

Expression:  `Output(!SHAPE.FIRSTPOINT!, !SHAPE.LASTPOINT!, !SHAPE.LENGTH!)`

A typical output that would be identical for identical lines is:

{6217207.2498}{2303876.6114}{6227963.0879}{2299683.2294}{0009453.2624}
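
To sanity-check the calculation outside the Field Calculator, the same codeblock can be exercised in plain Python with a mock point class standing in for the geometry objects ArcGIS passes in (the `Pt` namedtuple here is purely illustrative):

```python
from collections import namedtuple

# Stand-in for the point objects the Field Calculator supplies
# via !SHAPE.FIRSTPOINT! / !SHAPE.LASTPOINT!.
Pt = namedtuple("Pt", ["X", "Y"])

def Output(FirstPoint, LastPoint, Length):
    FPX = round(float(FirstPoint.X), 4)
    FPY = round(float(FirstPoint.Y), 4)
    TPX = round(float(LastPoint.X), 4)
    TPY = round(float(LastPoint.Y), 4)
    Lnth = round(float(Length), 4)
    # Each number is zero-padded to 12 characters with 4 decimals,
    # so identical lines always yield identical 70-character keys.
    return "{%(FX)012.4f}{%(FY)012.4f}{%(TX)012.4f}{%(TY)012.4f}{%(LN)012.4f}" % \
        {'FX': FPX, 'FY': FPY, 'TX': TPX, 'TY': TPY, 'LN': Lnth}

key = Output(Pt(6217207.2498, 2303876.6114),
             Pt(6227963.0879, 2299683.2294),
             9453.2624)
print(key)
# {6217207.2498}{2303876.6114}{6227963.0879}{2299683.2294}{0009453.2624}
```

Because the width and precision are fixed, the key is always exactly 70 characters, which is what makes the substring tricks mentioned later in the thread possible.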

I use this in a weekly script to detect geometry changes in my network and update the modified-date field, and it works very well.  It also supports a standard join between separate copies of the network that is effectively spatial, without using a Spatial Join.
MVP Honored Contributor
That sounds like an interesting solution; it may be the answer.

I was thinking more about this one and thought that some custom code may be needed.  You could loop through each line in each set of line features and use a spatial filter to select the other lines that are identical to it.  Of course, depending on the size of the road network, and even with a small set of lines, 86 different sets would take a long time to run, and probably considerable time to write the code too...

I still think there should be an out of the box solution for this problem, if not then it may be a good idea for a new tool or enhancement request.  It gets more complicated as the number of input layers increases, but it seems there should be a way to do this with ArcGIS.

My proposal works if lines are exactly identical.  The field lets me use traditional attribute summary tools, like Dissolve, on that spatial description.  Dissolve can return the minimum ObjectID of each set of identical lines along with the count, so you only need to deal with features whose count is 2 or more.  Join the Dissolve output back to the original lines on the calculated field, then select all features where the dissolve count is 2 or more and the line's ObjectID is higher than the set's minimum ObjectID.  That selection is your set of unneeded duplicates; it leaves exactly one copy of each duplicated line along with all the unduplicated lines.
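
Outside of ArcGIS, the Dissolve/join bookkeeping described above boils down to grouping rows on the calculated key, taking the count and the minimum ObjectID per group, and flagging every higher ObjectID in a group as a removable duplicate.  A minimal pure-Python sketch of that logic (the tuples stand in for table rows, and the short keys are just placeholders for the 70-character strings):

```python
from collections import defaultdict

def flag_duplicates(rows):
    """rows: iterable of (objectid, key) pairs, where key is the
    calculated From-XY/To-XY/Length string.  Returns (counts, removable):
    counts maps key -> number of identical lines (the overlap count),
    and removable is the set of ObjectIDs to discard, keeping the
    minimum ObjectID of each group as the one surviving copy."""
    groups = defaultdict(list)
    for oid, key in rows:
        groups[key].append(oid)
    counts = {key: len(oids) for key, oids in groups.items()}
    removable = set()
    for oids in groups.values():
        if len(oids) >= 2:
            keep = min(oids)  # the one copy that stays
            removable.update(o for o in oids if o != keep)
    return counts, removable

rows = [(1, "A"), (2, "B"), (3, "A"), (4, "A"), (5, "C")]
counts, removable = flag_duplicates(rows)
print(counts["A"])        # 3 -- three identical lines share key "A"
print(sorted(removable))  # [3, 4] -- ObjectID 1 is the kept copy
```

For the original question, `counts` is exactly the per-street overlap count to rank by; the removable set matters only if you also want to clean out the duplicates.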

My method lets me do this setup and analysis in under 10 minutes manually (modeled in ModelBuilder, the actual run time would be under about 3 minutes even for a large dataset).  Your cleanup time depends on what you want to do with the duplicates and the one copy that is left in the database.
MVP Honored Contributor
Richard, that's a really nice solution because it's fast and relatively easy. I really like that idea / solution. +1
I wonder if the Find Identical features suggestion I posted would work.
Maybe he will let us know if he figured it out and how.

My experience is that Select By Location operations involving options like Share a Line Segment With or Are Within, used in an effort to get identical lines, are extremely slow compared to calculating this value and using summaries and joins.

Another benefit of the field calculation I use is that I can write pure SQL statements against the field, using the Substring and Cast operators, to do things like compare the Euclidean distance between the end points with the reported line length.  This lets me detect lines that must be multi-part (the reported line length is shorter than the Euclidean distance between the end points) or select different ranges of sinuosity (a number that represents how much a line curves).  Once this field is calculated, SQL selections like this can be done in shapefiles, file geodatabases, and even standalone tables without any special spatial field types.
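
To illustrate what the Substring and Cast operators are doing, here is a Python sketch that slices the fixed-width key back into its five numbers and compares the Euclidean distance between the end points to the stored length.  The slice offsets assume the exact {...} layout shown above (five fields of 14 characters each); the function names are illustrative:

```python
import math

def parse_key(key):
    """Split the 70-character key '{FX}{FY}{TX}{TY}{LN}' (each number
    formatted as %012.4f) back into five floats.  Field i occupies
    characters 14*i+1 through 14*i+12, between its braces."""
    return [float(key[14 * i + 1: 14 * i + 13]) for i in range(5)]

def length_ratio(key):
    """Stored length divided by the straight-line (Euclidean) distance
    between the end points.  A ratio below 1.0 flags a likely
    multi-part line; values near 1.0 mean a nearly straight line;
    larger values mean greater sinuosity."""
    fx, fy, tx, ty, length = parse_key(key)
    straight = math.hypot(tx - fx, ty - fy)
    return length / straight

key = "{6217207.2498}{2303876.6114}{6227963.0879}{2299683.2294}{0009453.2624}"
print(parse_key(key))
print(length_ratio(key))
```

In SQL the same comparison would be written with Substring to pull out each 12-character run and Cast to turn it into a number, but the arithmetic is identical.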

I also use this field to do SQL selections that separate lines that are predominantly east/west lines from lines that are north/south lines.  Related SQL queries can select lines that are digitized against the expected direction of my addresses for me to examine.  Because it is SQL I can use definition queries to define layers that filter based on these different spatial characteristics of my lines.
MVP Honored Contributor
Yes, my thoughts exactly.  Brilliant!
SQL statements are much faster than a spatial selection using ArcObjects.  I have to remember this trick and apply it where possible.
Thanks for the added explanation 🙂

To extend the SQL possibilities further, using the end point coordinates you can select lines based on whether they point to different compass quadrants or within various angular degree ranges.
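
The quadrant idea can be sketched in Python (in SQL the same comparisons would run against the separate coordinate fields described below); the function names and the NE/SE/SW/NW binning here are illustrative assumptions, not part of the original workflow:

```python
import math

def bearing(fx, fy, tx, ty):
    """Compass bearing in degrees (0 = north, 90 = east) from the
    line's from-point to its to-point."""
    return math.degrees(math.atan2(tx - fx, ty - fy)) % 360.0

def quadrant(fx, fy, tx, ty):
    """Bin the bearing into the four compass quadrants:
    NE [0,90), SE [90,180), SW [180,270), NW [270,360)."""
    return ["NE", "SE", "SW", "NW"][int(bearing(fx, fy, tx, ty) // 90)]

def predominant_axis(fx, fy, tx, ty):
    """Classify a line as predominantly east/west or north/south
    by comparing the absolute coordinate deltas."""
    return "EW" if abs(tx - fx) > abs(ty - fy) else "NS"

print(quadrant(0, 0, 100, 5))          # NE: points east and slightly north
print(predominant_axis(0, 0, 100, 5))  # EW: mostly east/west
```

Narrower angular ranges work the same way: compute the bearing once and test it against whatever degree bounds you need.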

If you find yourself doing a lot of the kinds of selections mentioned in my last post and this one, it is best not only to calculate the field I suggested as a Join/Summary field for each line, but also to calculate separate individual numeric fields for each of the 5 numbers that make up that field (or at least the 4 coordinate values, if length is automatically maintained by the feature class).  The separate numeric fields eliminate the need for the Substring and Cast SQL operators in the kinds of selections I have described, and they result in more understandable SQL statements if your field names are easy to understand.  All 5 separate components of my Join field can be calculated using the Geometry Calculator.

The XY of the Centroid could be added to my join field, but I have not found that I have needed it.  However, it is useful to have the coordinates if you need to determine which direction a curved line (predominantly) bends away from its end points.