Dear colleagues,

The script below seems to work fine, but it slows down gradually and considerably after a few hundred datasets have been added to the geodatabase. The speed comes back when the script is restarted, which makes me believe the problem is unrelated to the workspace being on an external drive. The dataset and geodatabase are also too large to move to my internal hard disk, so I cannot test that directly. The memory used by the script's process does not increase as more datasets are imported (checked in Task Manager). Questions: why does this happen, and how can I avoid it? (The only workaround I can think of is restarting the process in batches; a sketch of that follows the script.)

import arcpy
from arcpy import env
import random
# Check out any necessary licenses
arcpy.CheckOutExtension("spatial")
arcpy.CheckOutExtension("3D")
# Set environment settings
env.workspace = "G:/My_workspace"
fullset = arcpy.ListDatasets("*", "ALL")
fcCount = len(fullset)
print fcCount
# randomize list so that any proportion of the full workload
# already delivers a random sample
random.shuffle(fullset)
for tiff in fullset:
    try:
        # copy the raster into the file geodatabase, then remove the source
        arcpy.RasterToGeodatabase_conversion(tiff, "big.gdb")
        arcpy.Delete_management(tiff)
        fcCount -= 1
        print fcCount
    except:
        fcCount -= 1
        print "failed at " + str(fcCount)