People seem to be having some serious problems geocoding a small file of one million records!
You should never have to split a table for geocoding, since it is a serial process. The one exception is batch geocoding against a server geocoder, where the application posts every record across the network and back again; that round trip is unlikely to finish before a timeout.
As a benchmark, I expect to geocode addresses at a rate of at least 1M/hour, and sometimes up to 6M/hour, on my desktop machine. My typical file contains half a million addresses. I have the reverse performance experience with single-line versus batch processing: single-line slows to one tenth the speed because zone indexes cannot kick in.
Therefore I hope you can keep looking for the non-obvious error that is causing the problem for you.
My laptop has no trouble geocoding a file of any size, so the problem is not the size of the machine.
Indexes are essential for normal use. Do you have parts of your address, such as state, ZIP, or city, that can be indexed so the geocoder can break up the search?
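As a sketch of building those indexes, assuming a local file geodatabase table and field names State, Zip and City (yours will differ), arcpy's AddIndex tool does the job:

    import arcpy

    # Assumed table path and field names; substitute your own.
    table = r"C:\Work\Geocoding.gdb\Addresses"

    # One attribute index per field the geocoder can use to narrow its search.
    for field, index_name in [("State", "idx_state"),
                              ("Zip", "idx_zip"),
                              ("City", "idx_city")]:
        arcpy.management.AddIndex(table, field, index_name)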
Any geoprocessing across a network drive is a mistake, and that includes geocoding! Networks are simply not fast enough, with all the packing and unpacking, to send each request in a reasonable time. Put your geocoder, source and output on the same local machine on a local drive. I understand this will break "corporate policies" nearly everywhere, but run a demo to show the problem to your administrator. You will have to copy-process-replace. The same applies if you are using a remote database through SDE.
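A minimal copy-process-replace sketch in arcpy; every path, the locator, and the field map here are assumptions you would adapt to your own environment:

    import arcpy

    # Assumed names throughout; adjust for your setup.
    sde_table   = r"C:\Connections\prod.sde\owner.Addresses"  # remote source
    local_gdb   = r"C:\Work\Geocoding.gdb"                    # local scratch workspace
    local_table = local_gdb + r"\Addresses"
    locator     = r"C:\Work\Locators\Streets.loc"             # local locator file
    result_fc   = local_gdb + r"\Geocoded"

    # 1. Copy: pull the source table onto a local drive.
    arcpy.management.CopyRows(sde_table, local_table)

    # 2. Process: geocode entirely on the local machine.
    #    The field map pairs each locator input with a table field.
    field_map = "Address Address;City City;State State;ZIP Zip"
    arcpy.geocoding.GeocodeAddresses(local_table, locator, field_map, result_fc)

    # 3. Replace: push the finished points back to the enterprise database.
    arcpy.management.CopyFeatures(result_fc,
                                  r"C:\Connections\prod.sde\owner.Geocoded")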
I personally copy any source, whether a spreadsheet, shapefile or text file, to a local file geodatabase table. That enables indexing and unlimited table sizes, and makes it easy to put the results back in an enterprise database. You get complete control over the field types, null values and other unexpected data that will trip up the geocoder. Null values are sure to cause indexing to fail.
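For example, here is a quick pass to replace nulls with empty strings before building indexes; the table path and field names are assumptions:

    import arcpy

    table  = r"C:\Work\Geocoding.gdb\Addresses"    # assumed local copy
    fields = ["Address", "City", "State", "Zip"]   # assumed field names

    # Replace NULLs with empty strings so indexing does not fail on them.
    with arcpy.da.UpdateCursor(table, fields) as cursor:
        for row in cursor:
            if any(value is None for value in row):
                cursor.updateRow(["" if value is None else value for value in row])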
It is very hard to set up a custom locator properly; I have yet to make one work myself after trying to tweak the defaults. The default timeout in my locator is 100 seconds, but who wants to wait that long for each failure? If it takes even a millisecond it is too long, so I set it to 1 second. If timing out is a problem, then indexing is not kicking in.
If indexing is not running, why not? Either the reference data does not have suitable indexes built, or you are throwing addresses at it that lack indexable components, e.g. no state when the locator expects a state.
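A quick way to check the second case is to count records missing each indexable component before you geocode; another sketch with assumed paths and field names:

    import arcpy
    from collections import Counter

    table  = r"C:\Work\Geocoding.gdb\Addresses"    # assumed
    fields = ["State", "Zip", "City"]              # assumed indexable components

    # Count how many records are missing each component the locator can index on.
    missing = Counter()
    with arcpy.da.SearchCursor(table, fields) as cursor:
        for row in cursor:
            for field, value in zip(fields, row):
                if value is None or str(value).strip() == "":
                    missing[field] += 1

    for field in fields:
        print(f"{field}: {missing[field]} records with no value")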