I haven't geoprocessed that many features since v9, but right away I would lean toward using raster. You might be looking at overnight runs to let the machine churn through that many records.
A factor to consider is how precise your data must be. Weigh the number of raster cells you would need against the number of polygons you'll process: if you need 10 million grid cells but only half a million polygons, the latter might work better. Are your slope values from a 30m DEM? Then there's little point in grid cells finer than 30m. If they come from LIDAR with centimeter accuracy, your needs are vastly different.
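To put some (purely hypothetical) numbers on that: a 30 km x 30 km study area at 30 m resolution is 1,000 x 1,000 = 1 million cells, while the same extent at 10 m is 3,000 x 3,000 = 9 million cells. Cutting the cell size to a third multiplies the cell count by nine, so let the source data's resolution drive that choice.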
Raster processing would be simple, really. Reclassify Input A to values 1-6. Reclassify Input B to multiples of 10 (10, 20, 30, ...) and add the two together using the Raster Calculator. Because the value ranges don't overlap, every combination of A and B has a unique sum in the output: a cell value of 23, for example, can only mean B category 2 plus A category 3.
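A minimal sketch of that reclass-and-add step in arcpy, assuming the Spatial Analyst extension is licensed and two categorical rasters named slope_cat and erosion_cat with values 1-6 (the names and remap tables are hypothetical placeholders):

```python
import arcpy
from arcpy.sa import Reclassify, RemapValue

arcpy.CheckOutExtension("Spatial")

# Input A: slope categories kept as 1-6
a = Reclassify("slope_cat", "Value",
               RemapValue([[1, 1], [2, 2], [3, 3], [4, 4], [5, 5], [6, 6]]))

# Input B: erosion zones shifted to multiples of 10 (10, 20, ..., 60)
b = Reclassify("erosion_cat", "Value",
               RemapValue([[1, 10], [2, 20], [3, 30], [4, 40], [5, 50], [6, 60]]))

# Map algebra addition: each (A, B) pair yields a unique sum,
# e.g. 23 = erosion zone 2 + slope category 3
combined = a + b
combined.save("combo_ras")
```

From there the output raster's attribute table gives you the cell count, and hence the area, of every combination.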
I'd go the most direct route: the Intersect tool followed by the Summary Statistics tool to calculate area. The Statistics field would be SUM of Shape_Area, and your Case fields would be both the slope category and the erosion zone. No need to convert to raster.
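For reference, a sketch of that two-step workflow in arcpy; the feature class and field names (slope_polys, erosion_polys, SLOPE_CAT, EROSION_ZONE) are placeholders for whatever yours are actually called:

```python
import arcpy

# Step 1: intersect the two polygon layers; every output polygon
# carries the attributes of both parents.
arcpy.analysis.Intersect(["slope_polys", "erosion_polys"], "slope_x_erosion")

# Step 2: total area per combination of slope category and erosion zone.
arcpy.analysis.Statistics(
    "slope_x_erosion", "area_by_combo",
    statistics_fields=[["Shape_Area", "SUM"]],
    case_field=["SLOPE_CAT", "EROSION_ZONE"],
)
```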
Since you've got a large number of polygons, before running Intersect, turn off background processing and exit any other applications that chew up memory (like Firefox).
Vector overlay should work just fine, but on re-reading your original post, performance is an issue, and raster overlay will be faster than pure vector overlay. If this is just a one-off query, I'd still go vector. If instead you're building some sort of service that will be hit frequently, rasterizing your static data (the parcels) is a reasonable approach, though you'll have to deal with the resolution issues others have raised. I'm not a raster guy, but I believe you'd just rasterize your land parcels and then run the Zonal Statistics tool.
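Something along these lines, assuming Spatial Analyst, a parcels feature class with a PARCEL_ID field, and a slope raster called slope_ras (all hypothetical names; the 30 m cell size is only a placeholder, so match it to your source DEM):

```python
import arcpy
from arcpy.sa import ZonalStatisticsAsTable

arcpy.CheckOutExtension("Spatial")

# One-time cost: rasterize the static parcel polygons.
arcpy.conversion.PolygonToRaster("parcels", "PARCEL_ID", "parcels_ras",
                                 cellsize=30)

# Per-query cost: zonal statistics of the slope raster within each parcel.
ZonalStatisticsAsTable("parcels_ras", "Value", "slope_ras",
                       "parcel_slope_stats", "DATA", "ALL")
```

The point is that the rasterization happens once, so each subsequent query only pays for the zonal statistics pass.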