Just wondering if anyone else has seen this or has any quick suggestions. I'm currently using ArcGIS 10.0.
I am running the 'Split Line At Point' tool in a script. My inputs are in a feature dataset - I have a feature class of lines (streams) and a feature class of points. The points were created previously in the script using the 'Make Route Event Layer' and 'Feature Class to Feature Class' tools, so they are located along the lines. I'm using a search radius of '1 FEET' and for the output, creating a new feature class in the same feature dataset.
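For reference, that part of the script looks roughly like this (a minimal sketch; the workspace path, feature class names, and route event properties are placeholders, not my actual data):

```python
import arcpy

# Hypothetical workspace; the real inputs live in a feature dataset
arcpy.env.workspace = r"C:\data\hydro.gdb\streams_fd"

# Points were created earlier in the script from a route event table
# (field names here are placeholders)
arcpy.MakeRouteEventLayer_lr("streams_routes", "ROUTE_ID",
                             "event_table", "ROUTE_ID POINT MEAS",
                             "point_events")
arcpy.FeatureClassToFeatureClass_conversion("point_events",
                                            arcpy.env.workspace,
                                            "split_points")

# Split the stream lines at the event points with a 1-foot search radius
arcpy.SplitLineAtPoint_management("streams", "split_points",
                                  "streams_split", "1 FEET")
```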
When examining the output split lines and comparing them to the points, the lines are not always split where the points are. Sometimes a line is not split where a point exists; other times it is split where no point exists.
I can say that the majority of the time, the lines have been split correctly. It appears that the problem occurs where many points are near each other (within a few feet). I'm experimenting with different resolution/tolerance settings for the feature dataset, but so far this does not appear to be helping. Any ideas?
I'm including an image with two screenshots of this issue. The green points are where the lines are supposed to be split. The brown lines are the output, symbolized with a black dot at each end, showing where the lines actually have been split. [ATTACH=CONFIG]32699[/ATTACH]
Just a note about using Integrate and why it helps (I failed to explain some boring details in the other post :rolleyes:). The M tolerance, the Locate Features Along Route tool, and the Route Event Layer may create small imperfections between the point and the line that are nearly invisible. Additionally, if no pseudonode exists at the interpolated point event location, there is almost always a discrepancy between the coordinates of the point, which are established using the feature class resolution and tolerance settings, and the projected line passing by that location, which has not been adjusted by those factors.
I believe the Split Line at Point tool will always favor splitting the line at a vertex within the search tolerance rather than at the actual point location if there is any discrepancy. The tool probably reads along the line geometry and finds the closest vertex first; if that vertex falls within the search tolerance, it does not bother to locate the point more precisely and just uses that pseudonode. (Your second picture clearly shows this occurring at the lowest preexisting pseudonode, where the line bends slightly.) It is also possible that the tool first clusters points that fall within the search radius rather than using each point individually, which would result in a slightly random phantom split location.
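To make the suspected behavior concrete, here is a toy model in plain Python (not arcpy, and not the tool's actual internals): if the tool prefers an existing vertex within the search radius over the exact point location, two nearby points can collapse onto the same "phantom" split at a preexisting pseudonode.

```python
import math

def snap_split(point, line_vertices, search_radius):
    """Toy model of the suspected behavior: prefer an existing line
    vertex within the search radius over the exact point location."""
    nearest = min(line_vertices,
                  key=lambda v: math.hypot(v[0] - point[0], v[1] - point[1]))
    dist = math.hypot(nearest[0] - point[0], nearest[1] - point[1])
    return nearest if dist <= search_radius else point

# A line with a slight bend (preexisting pseudonode) at (10.3, 0.2)
line = [(0.0, 0.0), (10.3, 0.2), (20.0, 0.0)]

# Two event points within a few feet of each other, near the bend
p1 = (10.0, 0.0)
p2 = (10.6, 0.1)

# With a 1-foot search radius, both points snap to the same vertex, so
# the expected splits are "missing" and one split appears shifted.
print(snap_split(p1, line, 1.0))  # (10.3, 0.2)
print(snap_split(p2, line, 1.0))  # (10.3, 0.2)

# With a 0 search radius, each point splits at its own location.
print(snap_split(p1, line, 0.0))  # (10.0, 0.0)
```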
Integrate moves the features slightly so that they lie directly on top of each other and also inserts a pseudonode into the line at the integrated point location. This allows you to use a 0 search tolerance since a line pseudonode will always be found that matches the point location exactly at the integrated position, and this should eliminate the issue.
Thank you very much for the advice! I added the Integrate tool to my script, in addition to setting the environment XY Resolution (0.0001 Feet) and XY Tolerance (0.001 Feet) at the beginning of the script. Now everything appears to be working the way I wanted. I also found that keeping a tiny search radius of 0.001 Feet (instead of leaving the parameter blank) for the Split Line at Point tool yielded better results. Thanks again!
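In case it helps anyone else, the working sequence sketched in arcpy (a minimal sketch only; the workspace path and feature class names are placeholders, and this assumes ArcGIS 10.0):

```python
import arcpy

# Hypothetical workspace containing the feature dataset
arcpy.env.workspace = r"C:\data\hydro.gdb\streams_fd"
arcpy.env.XYResolution = "0.0001 Feet"
arcpy.env.XYTolerance = "0.001 Feet"

# Integrate snaps the points onto the lines and inserts a vertex
# (pseudonode) into each line at the point location. Note that
# Integrate modifies its inputs in place, so work on copies if you
# need to keep the original geometry.
arcpy.Integrate_management("streams #;split_points #", "0.001 Feet")

# With the geometry now coincident, a tiny search radius is enough
arcpy.SplitLineAtPoint_management("streams", "split_points",
                                  "streams_split", "0.001 Feet")
```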