The problem is that each loop of my ArcPy script takes longer to finish than the one before. What I want to achieve with the script is to count how many polylines cross each polygon square (basically to create a density map). The polyline feature class has 215,560 rows and the polygon feature class has 1,878 rows. I have the 3D Analyst and Data Interoperability extensions on a Standard license, so I cannot use the density geoprocessing tools. I also tested the Intersect tool, but that seemed to perform even worse.
CPU: Xeon E5-1620
RAM: 16 GB
GPU: NVIDIA Quadro K2000
OS: Windows 8.1 Pro
Python: 3.6.6 64-bit
ArcGIS Pro: 2.3.2
The script logic goes like this:

1. Loop through the polygon feature class using a unique ID.
2. Select the polygon square with that ID (SelectLayerByAttribute_management) and copy it to an in_memory polygon feature class.
3. Select the polylines that intersect the in_memory polygon square (SelectLayerByLocation_management) and copy them to an in_memory polyline feature class.
4. Get a count of how many polylines are in the in_memory polyline feature class.
5. Update the polygon square row (the one selected in step 2) with the count value.
6. Move to the next polygon square by unique ID.
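For clarity, steps 1–5 boil down to counting, for every grid square, the polylines whose geometry intersects it. Outside of arcpy, that counting logic looks roughly like this pure-Python sketch (the data shapes and function names are mine, and collinear/touching edge cases are glossed over):

```python
def segments_intersect(p1, p2, q1, q2):
    """True if segment p1-p2 strictly crosses segment q1-q2."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1 = cross(q1, q2, p1)
    d2 = cross(q1, q2, p2)
    d3 = cross(p1, p2, q1)
    d4 = cross(p1, p2, q2)
    # proper crossing: the endpoints of each segment lie on opposite
    # sides of the other segment (collinear cases omitted for brevity)
    return ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0))

def polyline_hits_square(vertices, xmin, ymin, xmax, ymax):
    """True if a polyline (list of (x, y) vertices) intersects an
    axis-aligned square."""
    inside = lambda p: xmin <= p[0] <= xmax and ymin <= p[1] <= ymax
    if any(inside(v) for v in vertices):
        return True
    corners = [(xmin, ymin), (xmax, ymin), (xmax, ymax), (xmin, ymax)]
    edges = list(zip(corners, corners[1:] + corners[:1]))
    for a, b in zip(vertices, vertices[1:]):
        if any(segments_intersect(a, b, c, d) for c, d in edges):
            return True
    return False

def density_counts(squares, polylines):
    """For each square (xmin, ymin, xmax, ymax), count how many
    polylines intersect it."""
    return [sum(polyline_hits_square(pl, *sq) for pl in polylines)
            for sq in squares]
```

This is only meant to pin down what the loop computes; the real workload obviously goes through arcpy selections rather than raw coordinate math.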
    arcpy.env.outputCoordinateSystem = arcpy.SpatialReference("Estonia 1997 Estonia National Grid")
    for grid_oid in range(1, grid_oid_max + 1):
        # a new UpdateCursor over the whole grid is opened on every iteration
        with arcpy.da.UpdateCursor("density_map_grid_vaina", ["New_oid", "All_count"]) as cursor:
            for row in cursor:
                ...
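Per-loop timings like the ones plotted below can be collected with `time.perf_counter()` around each iteration; a minimal sketch of that pattern (the grid work is stubbed out with a sleep):

```python
import time

loop_times = []
for i in range(5):  # stand-in for the grid loop
    t0 = time.perf_counter()
    # selection / counting work for one grid square would go here
    time.sleep(0.01)  # placeholder so each loop takes measurable time
    loop_times.append(time.perf_counter() - t0)

# each entry is the wall-clock duration of one loop, in seconds
print(["%.3f s" % t for t in loop_times])
```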
The next plot shows the total time taken by each loop (y-axis, in minutes) and the time each process took within that loop. The loop was set to stop at loop/New_oid 500 (x-axis). The time taken per loop grows exponentially.
Does someone have an idea of what I'm doing wrong, what might be causing this, or is there another way to achieve my goal? Could it be a Python issue? I've read that using arcpy.SetLogHistory(False) should improve performance, but it had little to no effect; I've also read that ArcGIS has to keep track of things, so the increasing time with every loop is inevitable (https://community.esri.com/thread/206440-loop-processing-gradually-slow-down-with-arcpy-script).