
Why does this code get slower and slower?

09-30-2014 02:29 AM
by TomGeo, Frequent Contributor

Dear all,

The code below is getting slower and slower the longer it runs, and I cannot see where or why.

Background: I have about 14,000 buffers in one feature class and a road network of almost 1,000,000 features in another feature class. Both are in the same file geodatabase.

For each buffer I have to clip the road network, do some calculation on the clipped road pieces, and store the sum in a separate table.

Except for the result of the statistics analysis, which is written to file, all other results are kept in memory. At the end of each loop iteration I empty the in_memory workspace with arcpy.Delete_management.

Can somebody tell me why the process gets continuously slower the longer it runs?

import arcpy

# Set environmental parameters
arcpy.env.workspace = r'C:/Path/to/Projects/Project.gdb'
arcpy.env.scratchWorkspace = r'C:/Path/to/Projects/scratchGDB/scratch.gdb'
arcpy.env.overwriteOutput = True

arcpy.MakeFeatureLayer_management('Cohort_noXY_buffer300', 'buffer')
arcpy.MakeFeatureLayer_management('roads', 'road')

expression = "TrafficLoad(!AGTRAF2DIR!, !AADT!, !TB_Length!)"
codeblock = """def TrafficLoad(Road2, ADT, Rlength):
    if Road2 == 2:
        x = ADT * 2 * Rlength
    else:
        x = ADT * Rlength
    return x"""

# , """ "OBJECTID" >= 2442"""
with arcpy.da.SearchCursor('buffer', ['OID@']) as SCursor:
    for row in SCursor:
        out = r'C:/Path/to/Projects/noXY.gdb/Export_noXY_300_Buffer_' + str(row[0])
        arcpy.SelectLayerByAttribute_management('buffer', 'NEW_SELECTION', """"OBJECTID" = {0}""".format(row[0]))
        arcpy.Clip_analysis('road', 'buffer', 'in_memory/clip')
        arcpy.AddField_management('in_memory/clip', 'TB_ADT', 'DOUBLE', '', '', '', '', 'NULLABLE', 'NON_REQUIRED', '')
        arcpy.CalculateField_management('in_memory/clip', 'TB_Length', '!shape.length!', 'PYTHON_9.3')
        arcpy.CalculateField_management('in_memory/clip', 'TB_ADT', expression, 'PYTHON_9.3', codeblock)
        arcpy.Statistics_analysis('in_memory/clip', out, "TB_ADT SUM", '')
        arcpy.Delete_management('in_memory')
        print "{0} processed".format(out)

Best regards,

Thomas

- We are living in the 21st century.
GIS has moved on, and nobody needs a format consisting of at least three files! No, nobody needs shapefiles, not even as an exchange format. Folks, use GeoPackage to exchange data with other GIS!
2 Replies
BrandonKeinath1
Deactivated User

Hi Thomas,

I stumbled on your question while looking for another answer. Have you been able to make your code work? My initial observation is that you're running arcpy.Statistics_analysis for each row in your SearchCursor, which, if I'm reading it right, means about 14,000 times. Is that your intention?

Best,


Brandon
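If those 14,000 Statistics_analysis calls do turn out to be the cost, one alternative worth sketching is to accumulate the per-buffer sums in a plain Python dict and write the results out once at the end, instead of creating one output table per buffer. The snippet below is only an illustration of that idea with made-up sample rows (the traffic_load function mirrors the TrafficLoad code block from the question; the field order buffer_id, AGTRAF2DIR, AADT, length is an assumption):

```python
def traffic_load(road2, adt, rlength):
    # Same logic as the TrafficLoad code block in the question:
    # two-directional roads (code 2) count the traffic twice.
    return adt * 2 * rlength if road2 == 2 else adt * rlength

# Hypothetical sample rows: (buffer_id, AGTRAF2DIR, AADT, clipped length)
rows = [
    (1, 2, 1000, 50.0),
    (1, 1, 500, 20.0),
    (2, 1, 800, 10.0),
]

# Accumulate one SUM per buffer in memory instead of writing
# a separate statistics table for every buffer.
sums = {}
for buffer_id, road2, adt, rlength in rows:
    sums[buffer_id] = sums.get(buffer_id, 0.0) + traffic_load(road2, adt, rlength)

print(sums)
```

In the real script the rows would come from a search cursor over each clip result, and the final dict could be written to a single table in one pass.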

JoshuaBixby
MVP Esteemed Contributor

After how many records does it start to slow down?  It could be a memory management issue caused by looping over all buffers with one search cursor.  I realize 14,000 records isn't really large, but it may be large enough to cause an issue.  What about chunking the records up and only having search cursors of, say, 1,000 records each; does that keep the speed up?
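The chunking idea could look something like this sketch, which only shows how to split the OBJECTIDs into ranges and build a where clause per chunk (the chunk helper and the chunk size of 1,000 are illustrative; the arcpy cursor calls are omitted so the chunking logic itself is runnable anywhere, and the "OBJECTID" SQL syntax follows the question):

```python
def chunk(seq, size):
    # Yield consecutive slices of `seq` with at most `size` items each.
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

# Stand-in for the 14,000 buffer OBJECTIDs from the question.
oids = list(range(1, 14001))

# One where clause per chunk; each would feed a fresh, short-lived
# arcpy.da.SearchCursor instead of one cursor over everything.
clauses = ['"OBJECTID" >= {0} AND "OBJECTID" <= {1}'.format(c[0], c[-1])
           for c in chunk(oids, 1000)]

print(len(clauses))
print(clauses[0])
```

Opening a fresh cursor per chunk would at least tell you whether the slowdown tracks the lifetime of the single cursor or something else in the loop.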
