If you are happy to convert your models into Python code, you can take advantage of multi-core processors, as described in this blog post. It does mean you will have to stop working in the ModelBuilder environment.
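As a minimal sketch of that idea: Python's standard `multiprocessing` module can farm independent partitions out to worker processes. The worker below is a hypothetical stand-in; in a real workflow each worker would import arcpy and run a geoprocessing tool on its own partition.

```python
from multiprocessing import Pool

def process_partition(pid):
    """Stand-in worker: a real one would call a geoprocessing
    tool (e.g. via arcpy) on the partition identified by pid."""
    return pid * pid  # placeholder result

if __name__ == "__main__":
    # Each partition is handled by a separate process, so the
    # work spreads across CPU cores.
    with Pool(processes=4) as pool:
        results = pool.map(process_partition, range(8))
    print(results)
```

Note that each worker process pays its own start-up cost, so this only pays off when the per-partition work is substantial.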
You need to share more of your workflow for us to comment. My initial thought is: why do you need to run many parallel processes? Why not do them all at once? It feels like you are 'reinventing GIS'. By this I mean that the spatial tools are designed to run on whole datasets, so running a tool once per feature, or even per group of features, is very inefficient, if that is what you are doing; I can't tell. Partitioning is a good strategy when the data sometimes overloads the tool, and in theory you could run the partitions in parallel, but I find partitioning so effective that simply running the tool in a loop over a few partitions (not thousands) is good enough. My goal is for each run of the tool to take only a few minutes, so the total time stays reasonable.
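The loop-over-a-few-partitions approach can be sketched like this. The `process_partition` function is a hypothetical stand-in for one run of a geoprocessing tool; the partitioning itself is just a plain chunking of the input features.

```python
def partition(features, n_parts):
    """Split a feature list into n_parts roughly equal chunks."""
    size = -(-len(features) // n_parts)  # ceiling division
    return [features[i:i + size] for i in range(0, len(features), size)]

def process_partition(chunk):
    """Stand-in for running the tool once on a whole partition
    (not once per feature)."""
    return sum(chunk)  # placeholder result

features = list(range(100))  # pretend these are feature IDs
# A handful of sequential tool runs, one per partition:
results = [process_partition(p) for p in partition(features, 4)]
print(results)
```

The point is that the loop runs four times, not a hundred: the tool still works on whole chunks of data, which is where these tools are efficient.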
I personally do not use ModelBuilder because it does not give me enough control over intermediate results; they are always written out to a scratch geodatabase. In Python you can hold selections as layers or views, use SQL queries, store sets in Python dictionaries (which are hashed arrays), use SpatiaLite, which is much faster for some operations via SQL, and generally avoid some of the elegant but unscalable standard tools. For example, avoid any processing on a joined table.
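To illustrate the dictionary point: instead of joining a lookup table to a feature class and processing the join, read the lookup table into a dictionary once and do O(1) hashed lookups per feature. The data below is made up for illustration; in practice the rows would come from a cursor over the attribute tables.

```python
# Rows read once from the lookup table: (key, attribute)
lookup_rows = [(1, "road"), (2, "rail"), (3, "canal")]

# Build the hashed lookup a single time
kind_by_id = dict(lookup_rows)

# Rows read from the feature class: (feature_id, foreign_key)
features = [(101, 1), (102, 3), (103, 1)]

# Per-feature lookup replaces the table join entirely
joined = [(fid, kind_by_id[key]) for fid, key in features]
print(joined)  # [(101, 'road'), (102, 'canal'), (103, 'road')]
```

This scales linearly with the number of features, whereas tools driven through a live join can slow down dramatically as the tables grow.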
I think a change of approach could make your process run in the time it takes to have a cup of coffee.