start "scratch_file_path\subset_scratch_folder\pythonscript1.py > "scratch_file_path\log.txt"
start "scratch_file_path\subset_scratch_folder\pythonscript2.py >> "scratch_file_path\log.txt"
start "scratch_file_path/subset_scratch_folder/pythonscript18.py >> "scratch_file_path\log.txt"
Obviously, this is a very in-depth approach and involves some upfront work (and perhaps some learning of new things). However, for large jobs like yours, using scripts to do the processing can really reduce the amount of processing time (and headaches).
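For reference, each of the subset scripts could look something like the sketch below. Everything in it (the paths, the "Length" impedance attribute, the layer names) is a placeholder assumption, not my actual code:

import arcpy

arcpy.CheckOutExtension("Network")

# Placeholder paths -- substitute your own
network = r"C:\data\streets.gdb\streets_nd"
origins_subset = r"C:\data\od.gdb\origins_part1"   # this script's slice of origins
destinations = r"C:\data\od.gdb\destinations"      # the full destination set
out_table = r"C:\data\results.gdb\od_lines_part1"

# Build the OD cost matrix layer and look up its sublayer names
od_layer = arcpy.na.MakeODCostMatrixLayer(network, "od_part1", "Length").getOutput(0)
sublayers = arcpy.na.GetNAClassNames(od_layer)

arcpy.na.AddLocations(od_layer, sublayers["Origins"], origins_subset)
arcpy.na.AddLocations(od_layer, sublayers["Destinations"], destinations)

arcpy.na.Solve(od_layer)

# Copy this subset's results out so nothing depends on the layer afterwards
lines = arcpy.mapping.ListLayers(od_layer, sublayers["ODLines"])[0]
arcpy.management.CopyRows(lines, out_table)

The 18 scripts would differ only in which slice of origins they load and where they write their results.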
Hopefully some of this is helpful. I think it is very beneficial when analysts learn a bit of scripting. It definitely has its benefits.
I'm not sure if this is the problem, but when I manually load the locations and solve, the process used to lose a lot of locations.
If I reload the locations again after loading, then the solve runs over all origins and destinations.
I don't know what the problem is, but it seems that the SOURCEID is not loaded on the first run, meaning the points don't get linked to the network. The OD matrix really seems to use the position on the road as the location. When I reload, the SOURCEID appears in the attribute table.
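Pre-computing the network location fields before loading should get the SOURCEID populated up front. A rough sketch of that (the paths, the tolerance, and the "Streets" source name are placeholders for your own data):

import arcpy

arcpy.CheckOutExtension("Network")

points = r"C:\data\od.gdb\cities"            # placeholder input points
network = r"C:\data\streets.gdb\streets_nd"  # placeholder network dataset

# Writes SourceID, SourceOID, PosAlong and SideOfEdge onto the points,
# so loading them as locations no longer relies on the on-the-fly search
arcpy.na.CalculateLocations(points, network, "5000 Meters",
                            [["Streets", "SHAPE"]])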
My problem is 37,256 x 37,256 (USA cities), so I need to iterate a model with a small batch of origins against all destinations. A run of 1 origin against all 37,256 destinations takes 5 minutes to load and 1:40 min to compute all the routes.
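The iteration I have in mind is roughly the sketch below; everything in it (the paths, the ORIG_ID field, the chunk size, the "Length" impedance) is a placeholder assumption:

import arcpy

arcpy.CheckOutExtension("Network")

network = r"C:\data\usa_streets.gdb\streets_nd"
cities = r"C:\data\od.gdb\cities"      # the 37,256 points
out_gdb = r"C:\data\results.gdb"
chunk = 100                            # origins per solve

od_layer = arcpy.na.MakeODCostMatrixLayer(network, "od", "Length").getOutput(0)
sublayers = arcpy.na.GetNAClassNames(od_layer)

# Destinations are loaded once; only the origins change per iteration
arcpy.na.AddLocations(od_layer, sublayers["Destinations"], cities)

for start in range(1, 37257, chunk):
    where = "ORIG_ID >= {0} AND ORIG_ID < {1}".format(start, start + chunk)
    subset = arcpy.management.MakeFeatureLayer(cities, "subset", where).getOutput(0)
    # append="CLEAR" drops the previous chunk's origins before loading the next
    arcpy.na.AddLocations(od_layer, sublayers["Origins"], subset, append="CLEAR")
    arcpy.na.Solve(od_layer)
    lines = arcpy.mapping.ListLayers(od_layer, sublayers["ODLines"])[0]
    arcpy.management.CopyRows(lines, "{0}\\od_{1}".format(out_gdb, start))
    arcpy.management.Delete(subset)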
Any idea like Chris's would be appreciated.
I'm facing the same issue with massive OD matrix calculations, though not as massive as yours: 34,000+ incidents x 79 destinations, which is even smaller than your subsets. Yet my workstation seems to have run out of memory after running my Python script for the first time on such *big* data (it works OK on smaller datasets).
What are/were your desktop's hardware specs (RAM, CPU, ...)?
Thanks for your hints,