
Locate Features Along Routes slow - how to speed things up

05-18-2010 06:18 AM
KoenVolleberg
Deactivated User
Hi all,
I've got a tool which calls the Locate Features along Route tool:
gp.LocateFeaturesAlongRoutes_lr(in_points, in_routes, "number", tolerance, out_events, "rid POINT MEASURE", "ALL", "DISTANCE")


For a dataset of 225 lines/routes (with an attribute index on "number") and 7,912 points on those lines to be converted to events, the process takes about 45 seconds to complete. However, I've also got some pretty large datasets (several thousand lines and >50,000 points), and those take quite a long time. Where can I look to speed this process up, or is Locate Features Along Routes simply a slow tool no matter what you do?

Thanks in advance!
Cheers, Koen
5 Replies
KeithPalmer
Emerging Contributor
Koen,

I know this is five months later, but maybe somebody else will be helped by the info.  I was having a problem with this command using 20,000+ routes and 36,000+ events.  I stopped the operation after 6 hours and really looked at the data.  That's when I realized I had a huge blank route.  I deleted it and reprocessed, and the total operation took 4 minutes.  Not speedy, but just a little faster than 6 hours 🙂
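
If it helps anyone else, a quick check along these lines (a rough sketch only, using the arcpy.da cursor API with a hypothetical feature class path) will flag empty or zero-length routes before you run the tool:

import arcpy

# Flag routes whose geometry is missing or zero length -- a single
# blank route was enough to stretch my run from minutes to hours.
with arcpy.da.SearchCursor(r"C:\data\inputs.gdb\routes", ["OID@", "SHAPE@"]) as cursor:
    for oid, shape in cursor:
        if shape is None or shape.length == 0:
            print("Route {} has empty or zero-length geometry".format(oid))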

-Keith
MaksimMazor1
Deactivated User
Let's say, for instance, I'm trying to locate 3 million records along 200,000 routes. I've checked to make sure the routes aren't empty. This usually takes ALL night to run; I was wondering if that is the norm.
RichardFairhurst
MVP Honored Contributor
Let's say, for instance, I'm trying to locate 3 million records along 200,000 routes. I've checked to make sure the routes aren't empty. This usually takes ALL night to run; I was wondering if that is the norm.


I would expect that you are hitting the point where physical memory runs out and virtual memory on the hard disk kicks in.  That slows things down tremendously.  Is there any way to process the operation in chunks, especially spatially grouped chunks?  Are there any attributes that would let you do that?  If so, use an iterator and append the results together at the end, as in the sketch below.  It would probably save you hours.
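
In script form, the chunk-and-append idea might look roughly like this (a sketch only, with hypothetical dataset names, a hypothetical COUNTY chunking field, a "rid" route ID, and a 50-meter tolerance; adjust all of those to your data):

import arcpy

# Split the job on an attribute, run the tool per chunk, and merge
# the per-chunk event tables at the end.
points = r"C:\data\inputs.gdb\addresses"
routes = r"C:\data\inputs.gdb\routes"
chunk_field = "COUNTY"

chunks = {row[0] for row in arcpy.da.SearchCursor(points, [chunk_field])}

outputs = []
for chunk in sorted(chunks):
    where = "{0} = '{1}'".format(chunk_field, chunk)
    pt_lyr = arcpy.MakeFeatureLayer_management(points, "pt_lyr", where)
    rt_lyr = arcpy.MakeFeatureLayer_management(routes, "rt_lyr", where)
    out_table = r"in_memory\events_{}".format(chunk.replace(" ", "_"))
    arcpy.LocateFeaturesAlongRoutes_lr(pt_lyr, rt_lyr, "rid", "50 Meters",
                                       out_table, "rid POINT MEASURE",
                                       "ALL", "DISTANCE")
    outputs.append(out_table)
    arcpy.Delete_management(pt_lyr)
    arcpy.Delete_management(rt_lyr)

# Bring the chunks back together in a single event table.
arcpy.Merge_management(outputs, r"C:\data\results.gdb\all_events")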

My largest run is 120K features on about 38K routes with the nearest flag unchecked.  That takes around 2 hours to run.   If I chunked it by area it would almost certainly run faster, but I am willing to wait 2 hours since it processes over the weekend using the Windows scheduler.  Anyway, my experience is that the wider the spatial extent of the features and routes and the more features in the run, the slower all LR tools perform.  I'm not sure at what point a severe drop-off occurs, but I know it exists.
MaksimMazor1
Deactivated User
Yeah, I could attempt grouping it by ZIP Code or something smaller than the entire state. But this has to be done for the entire USA: around 150 million address records to go along with every road in America.

8 GB of RAM on my machine.
RichardFairhurst
MVP Honored Contributor
Yeah, I could attempt grouping it by ZIP Code or something smaller than the entire state. But this has to be done for the entire USA: around 150 million address records to go along with every road in America.


How about by county?  Los Angeles County all by itself will push your limits.  I represent Riverside County, and I don't try every address in my jurisdiction at once with this tool.  A properly built iterator will loop and append.  A smaller-scale test comparing an attribute-grouped iterate-and-append process against processing a full state would give you an indication of what savings, if any, you could achieve on a large state like California or New York.

The basic process I use is first to create an empty table carrying only the schema of the output (run a single record through the tool, then delete that record so just the schema remains).  Your iterator could overwrite the tool output each time and append it to this table before the next iteration overwrites the output, as in the sketch below.  Alternatively, a parameter could uniquely name each iterator output so nothing is overwritten during any iteration, and a single Append could be done at the end.  The second option may be faster.
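
A rough script version of the first option (hypothetical paths and chunk queries; the master table holds only the output schema, as described above):

import arcpy

# Overwrite-and-append pattern: one scratch output reused per chunk,
# appended into a schema-only master table before the next iteration.
arcpy.env.overwriteOutput = True
points = r"C:\data\inputs.gdb\addresses"
routes = r"C:\data\inputs.gdb\routes"
master = r"C:\data\results.gdb\master_events"   # schema-only template
scratch = r"in_memory\chunk_events"

chunk_queries = ["COUNTY = 'Riverside'", "COUNTY = 'San Bernardino'"]

for where in chunk_queries:
    lyr = arcpy.MakeFeatureLayer_management(points, "chunk_lyr", where)
    arcpy.LocateFeaturesAlongRoutes_lr(lyr, routes, "rid", "50 Meters",
                                       scratch, "rid POINT MEASURE")
    arcpy.Append_management(scratch, master, "NO_TEST")
    arcpy.Delete_management(lyr)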

Iterators are designed exactly for breaking large sets into smaller, more manageable ones, and the Append and Merge tools are designed to bring it all back together in the end.