Using OD Cost Matrix for shortest routes - but having speed issues

Discussion created by TomGeo on Jul 22, 2013
Hi, I have some speed issues with the datasets I am using to find the shortest route.

I have a road network, the points of origin, and the destination points in one file geodatabase. There are 2,500 origin points and a bit more than 74,000 destination points! The script I wrote works, but the process is really slow, and while it runs my hard drive fills up with temporary data. Below is the script I'm using to process the data; it would be great if some of you could have a look at it and point out how to speed up the process.
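For a sense of scale (using the counts above; plain Python, no arcpy required): an OD cost matrix solve pairs each origin with every reachable destination, so the Lines sublayer can hold up to origins × destinations records per full run, which goes some way toward explaining both the runtime and the temp-file volume:

```python
# Rough scale check: every full OD solve pairs each origin
# with every destination (counts taken from the post).
origins = 2500
destinations = 74000

pairs = origins * destinations
print(pairs)  # 185000000 candidate origin-destination pairs
```

The 5000 Meters cutoff trims that number in practice, but the upper bound shows why the intermediate output is so large.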

import arcpy, gc
arcpy.env.workspace = 'D:/Projects/gis/processed/Network/Distance.gdb/'
arcpy.env.overwriteOutput = True

network = 'DM/DMnetwork_ND'
origin = 'data/Start_N2500'
destination = arcpy.MakeFeatureLayer_management('data/rIntersect_0Tolerance', 'in_memory')

fail = []
okay = []

OD_Cost_Matrix = arcpy.MakeODCostMatrixLayer_na(network, 'OD Cost Matrix', 'Length', '', '', '',
                                                'ALLOW_UTURNS', '', 'NO_HIERARCHY', '', 'STRAIGHT_LINES', '')
# Load all destinations once; they stay in the layer for every solve ('APPEND').
arcpy.AddLocations_na(OD_Cost_Matrix, 'Destinations', destination, 'Name Areas #', '5000 Meters', '',
                      'DMnetwork SHAPE;DMnetwork_ND_Junctions NONE', 'MATCH_TO_CLOSEST', 'APPEND',
                      'NO_SNAP', '5 Meters', 'INCLUDE', 'DMnetwork #;DMnetwork_ND_Junctions #')


for row in arcpy.da.SearchCursor(origin, ['Respondent']):
    expr = 'Respondent = ' + str(int(row[0]))
    # Select just the current origin point via the expression.
    originLyr = arcpy.MakeFeatureLayer_management(origin, 'originLyr', expr)
    # 'CLEAR' replaces the previous origin so each solve runs one origin
    # against all destinations.
    arcpy.AddLocations_na(OD_Cost_Matrix, 'Origins', originLyr, 'Name Respondent #', '5000 Meters', '',
                          'DMnetwork SHAPE;DMnetwork_ND_Junctions NONE', 'MATCH_TO_CLOSEST', 'CLEAR',
                          'NO_SNAP', '5 Meters', 'INCLUDE', 'DMnetwork #;DMnetwork_ND_Junctions #')
    try:
        arcpy.Solve_na(OD_Cost_Matrix, 'SKIP', 'TERMINATE', '')
        output = 'Min_Output_' + str(int(row[0]))
        tmpLyr = arcpy.MakeFeatureLayer_management('OD Cost Matrix/Lines', str(int(row[0])), '', '', 'ObjectID ObjectID HIDDEN NONE;Shape Shape HIDDEN NONE;Name Name VISIBLE NONE;OriginID OriginID VISIBLE NONE;DestinationID DestinationID VISIBLE NONE;DestinationRank DestinationRank HIDDEN NONE;Total_Length Total_Length VISIBLE NONE')
        arcpy.Statistics_analysis(tmpLyr, output, 'Total_Length MIN', 'OriginID;DestinationID')
        okay.append(output)
        print str(int(row[0])) + ' processed'
    except arcpy.ExecuteError:
        fail.append(int(row[0]))

arcpy.Merge_management(okay, 'finaloutput')
for item in okay:
    arcpy.Delete_management(item)  # clean up the per-origin statistics tables

As I mentioned, my hard drive is getting flooded, and it all happens in 'C:\Users\TomGeo\AppData\Local\Temp'. I'm talking about more than 100 GB written to that directory, and I wonder why such an amount of data is created.

Thanks in advance for your comments!

Best, Thomas