I'm writing a validation script to check the URLs of each tile in a cached map service. The REST endpoint for a tile is formatted as "https://&lt;map service URL&gt;/tile/&lt;level&gt;/&lt;row&gt;/&lt;column&gt;", and the cache gives me the start and end row and column for each level of detail cached for the service. Using those values, I iterate with nested while loops through each cell in the grid formed by those rows and columns, build the URL for each tile, and ping it to check that it doesn't respond with an error.
The cache for the largest scale contains 52,398,060 individual tiles, and PyScripter throws an "out of memory" error when I try to iterate through that range. The attached code shows what I'm talking about.
Is there a better way to write this, or some kind of memory release I could do to make it work?
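In case the attachment doesn't come through, here's a simplified sketch of the kind of loop I mean. The service URL, level, and row/column bounds below are placeholders, not my real values; the idea is that the URLs are yielded one at a time by a generator rather than built from a materialized list of the whole grid.

```python
from itertools import product

def tile_urls(service_url, level, start_row, end_row, start_col, end_col):
    """Lazily yield one tile URL at a time; the row/column grid is
    never materialized as a list, so memory use stays constant."""
    for row, col in product(range(start_row, end_row + 1),
                            range(start_col, end_col + 1)):
        yield "{0}/tile/{1}/{2}/{3}".format(service_url, level, row, col)

# Placeholder service URL and tiny bounds for illustration only.
for url in tile_urls("https://example.com/arcgis/rest/services/Demo/MapServer",
                     3, 0, 1, 0, 1):
    print(url)  # real script would ping each URL here instead
```

In my real script the loop body would issue the HTTP request and record any error responses; the print is just a stand-in.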