I have a script that processes roads in different counties and builds a dictionary of road lengths within polygons. Because the datasets are large, I build the dictionary in the "in_memory" workspace.

The first time I ran the script, it processed fairly quickly. The second time I ran the same script on the same data, the processing time doubled. I ran it a third time (on different data of roughly the same size) and the time doubled again from the second run.

I delete the "in_memory" workspace as one of the first steps of my script to ensure a clean workspace. Any ideas why it gets slower with each run?