Bulk calibration of gapped local roads

03-02-2019 11:25 AM

I wrote this script in an attempt to reduce the NaN routes on gapped local roads for our Roads and Highways (RH) implementation.

pydev106/CalibrateRouteParts.py at master · KDOTGIS/pydev106 · GitHub 

It basically end-dates about 100,000 calibration points and appends about 140,000 new ones.

I ran it from ArcGIS Desktop 10.6.1 against the Default version in an edit session; it's a SQL Server versioned geodatabase. Processing began when I saved edits...a few days ago. The delta table counts are roughly 100K deletes and 260K adds, and it has been processing for days. There are probably about 70,000 miles / 45,000 routes being recalibrated. The status message in ArcGIS Desktop switches between "timeslicing and recalibrating routes" and "updating events", yet there are no events to update in this database right now, just networks. Was this a bad idea?

I'm thinking I should have approached this in batches of a few thousand records at a time, with version management happening between the batches, or maybe tried SQL methods against the versioned view. Even if I let this run for however long it takes, I fear the regenerate routes and compress steps are also going to take forever. I'm going to see where it stands Monday, but how would I even back out of this processing at this point? Restore my database to the state it was in before this started? Should I be second-guessing this approach so much? Is any of this even necessary?
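
For anyone curious, here is a minimal sketch of the batched approach I'm describing, assuming hypothetical connection paths, field names, and batch size (none of this is lifted from CalibrateRouteParts.py):

import arcpy
import datetime

sde = r"C:\connections\edit_version.sde"            # hypothetical connection to a named edit version
cal_points = sde + r"\LRS.DBO.Calibration_Point"    # hypothetical calibration point feature class
batch_size = 2000

# collect the OIDs of the active (not yet end-dated) calibration points
oids = [row[0] for row in arcpy.da.SearchCursor(cal_points, ["OID@"], "TODATE IS NULL")]

for start in range(0, len(oids), batch_size):
    batch = oids[start:start + batch_size]
    where = "OBJECTID IN ({0})".format(",".join(str(oid) for oid in batch))

    # one short versioned edit session per batch keeps the delta tables small
    edit = arcpy.da.Editor(sde)
    edit.startEditing(False, True)
    edit.startOperation()
    with arcpy.da.UpdateCursor(cal_points, ["TODATE"], where) as cursor:
        for row in cursor:
            row[0] = datetime.datetime.now()
            cursor.updateRow(row)
    edit.stopOperation()
    edit.stopEditing(True)

    # reconcile and post the batch before starting the next one
    arcpy.management.ReconcileVersions(sde, "ALL_VERSIONS", "sde.DEFAULT",
                                       "DBO.EditVersion", with_post="POST")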

3 Replies
RyanKoschatzky
Occasional Contributor III

Kyle,


I would have done any work in a version, not on Default, so you have an easy way to back out. I agree that smaller batches would be better to try. While the whole job might take longer overall, the risk is minimized with smaller data sets.


I know that when we run our statewide spatially derived tool, our adds and deletes are over 350K each for the three different event feature classes. I run this in an "edit" version which, as you may recall, is two levels removed from Default (default, lockroot, edit) for NCDOT. Between each run I post to lockroot; others post further, but before they do, they run analyze, reconcile, post, sync replicas, analyze, and compress before the next run.
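
In arcpy terms, that between-run cycle looks roughly like this (the connection file, version names, and replica name are placeholders, not our actual setup):

import arcpy

admin_sde = r"C:\connections\sde_admin.sde"   # hypothetical admin connection

# analyze before reconciling so the optimizer has fresh statistics
arcpy.management.AnalyzeDatasets(admin_sde, "SYSTEM")

# post the edit version up to lockroot, then lockroot up to Default
arcpy.management.ReconcileVersions(admin_sde, "ALL_VERSIONS", "DBO.LockRoot",
                                   "DBO.Edit", with_post="POST")
arcpy.management.ReconcileVersions(admin_sde, "ALL_VERSIONS", "sde.DEFAULT",
                                   "DBO.LockRoot", with_post="POST")

# sync replicas, analyze again, and compress before the next run
arcpy.management.SynchronizeChanges(admin_sde, "DBO.MyReplica",
                                    r"C:\connections\child.sde",
                                    "FROM_GEODATABASE1_TO_2")
arcpy.management.AnalyzeDatasets(admin_sde, "SYSTEM")
arcpy.management.Compress(admin_sde)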


If you have to end the process, I would restore to before the process started. Your last known good.


Some questions I would ask: do you need that many gaps? For our data we find it best practice to renumber the other side of the gap. For us the route name is not part of the route ID; that is handled as an event (street name). Do you have an index, and are there any null values in those indexed fields that could be slowing things down? Are you watching your log files? As I recall, with our spatially derived process I once filled up the log files, and that crashed the process.
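
One quick way to check the index question from Python is to list the attribute indexes and count NULLs in the key fields (the path and field names here are made up for illustration):

import arcpy

routes = r"C:\connections\edit_version.sde\LRS.DBO.Routes"  # hypothetical path

# list the attribute indexes and the fields they cover
for idx in arcpy.ListIndexes(routes):
    print("{0}: {1}".format(idx.name, [f.name for f in idx.fields]))

# count NULLs in fields that are likely indexed
for field in ("ROUTEID", "FROMDATE", "TODATE"):  # hypothetical field names
    nulls = sum(1 for (val,) in arcpy.da.SearchCursor(routes, [field]) if val is None)
    print("{0} NULL count: {1}".format(field, nulls))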

I stopped the processing Monday morning and compressed; the A/D table counts went back down to zero, so it appears no harm (or benefit) was done.

Ryan, I will take your advice and try to uniquely ID the gapped segments more thoroughly. The route name is not part of the route ID, but the route ID is based on an enumerated list of street names and cities in a county, so it shouldn't be too difficult to enumerate the dissolved-unsplit gapped segments to uniquely identify the parts.
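
Something along these lines is what I have in mind: dissolve on the base route ID with unsplit lines so each gapped piece stays a separate feature, then number the parts (all paths and field names below are hypothetical):

import arcpy
from collections import defaultdict

local_roads = r"C:\connections\edit_version.sde\LRS.DBO.LocalRoads"  # hypothetical
dissolved = r"in_memory\dissolved_roads"

# UNSPLIT_LINES merges segments only where they share endpoints,
# so each gapped piece comes out as its own single-part feature
arcpy.management.Dissolve(local_roads, dissolved, "BASEROUTEID",
                          multi_part="SINGLE_PART", unsplit_lines="UNSPLIT_LINES")

# append a sequential part number to make each piece's route ID unique
arcpy.management.AddField(dissolved, "UNIQUEROUTEID", "TEXT", field_length=24)
part_counter = defaultdict(int)
with arcpy.da.UpdateCursor(dissolved, ["BASEROUTEID", "UNIQUEROUTEID"]) as cursor:
    for row in cursor:
        part_counter[row[0]] += 1
        row[1] = "{0}_{1:03d}".format(row[0], part_counter[row[0]])
        cursor.updateRow(row)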

I assume the database log files associated with MSSQL are what I should be checking, and that the LRS edit logging is out of my control.
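
For the record, one way to watch the log size from Python (the connection path is a placeholder; ArcSDESQLExecute just runs raw SQL over an SDE connection):

import arcpy

conn = arcpy.ArcSDESQLExecute(r"C:\connections\sde_admin.sde")  # hypothetical connection

# SQL Server stores file size in 8 KB pages; divide by 128 for MB
sql = """SELECT name, size / 128 AS size_mb
         FROM sys.database_files
         WHERE type_desc = 'LOG'"""
for name, size_mb in conn.execute(sql):
    print("{0}: {1} MB".format(name, size_mb))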

Kyle

RyanKoschatzky
Occasional Contributor III

Kyle, 

Yes, I was referring to the database log file. We topped 200 GB on the log file that day. Since we have not implemented temporal polygons, we have given up on event temporality for a few select events. We truncate those events before running the process when we have new polygons; that way we only have the 350K adds, as the deletes are already taken care of in a statewide run.
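
For reference, the truncate step is essentially this (path is a placeholder). Note that TruncateTable only works on unversioned, non-archive-enabled data, so a versioned event class would need DeleteRows instead:

import arcpy

event_fc = r"C:\connections\default.sde\LRS.DBO.StreetName_Event"  # hypothetical event class

# TruncateTable is fast but only works on unversioned, non-archive-enabled data
arcpy.management.TruncateTable(event_fc)
# arcpy.management.DeleteRows(event_fc)  # slower, versioned-safe alternative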

Hope that helps. 
