I regularly load 2M, 3M, 5M, 8M, and 12M row tables into PostgreSQL, and occasionally go as high as 60M rows. I use either FeatureClassToFeatureClass / TableToTable (as appropriate) or a nested arcpy.da.SearchCursor / arcpy.da.InsertCursor pair. Performance is usually about the same either way, taking 20-240 minutes depending on the size of the data and the target database (effectively the same rate for each DB).
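For reference, the cursor pair looks roughly like this; a minimal sketch, with paths and field handling that are illustrative rather than taken from my actual scripts, and which assumes the target table already exists with a matching schema:

```python
import arcpy

src_fc = r"C:\data\source.gdb\parcels"             # hypothetical source FGDB feature class
tgt_fc = r"C:\connections\pg.sde\gisdata.parcels"  # hypothetical PostgreSQL target

# Copy every non-OID, non-geometry field, plus the geometry token.
fields = [f.name for f in arcpy.ListFields(src_fc)
          if f.type not in ("OID", "Geometry")] + ["SHAPE@"]

# Stream rows from the source and insert them into the target one at a time.
with arcpy.da.SearchCursor(src_fc, fields) as s_cur, \
     arcpy.da.InsertCursor(tgt_fc, fields) as i_cur:
    for row in s_cur:
        i_cur.insertRow(row)
```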
I have seen multi-day loads, but only when trying something sub-optimal, like loading into an AWS RDS across three routers shared by hundreds of users. My best performance has been on a high-IOPS EC2 instance in the same region as a high-IOPS RDS, using ArcPy from the Linux EC2 AGS host, where a cascade of FC2FC commands (some 40-odd tables) populated 35-40M rows in ~2 hours. I have cheated on occasion by parallelizing my load scripts, so one PuTTY session loaded the odd tables while another loaded the even ones (with load order sequenced by size, so they finished at about the same time).
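The odd/even split itself is trivial to script; a minimal sketch of how the two load lists might be balanced (table names and row counts below are made up for illustration):

```python
# Approximate row counts per table (hypothetical values).
row_counts = {
    "roads": 12_000_000, "parcels": 8_000_000, "zoning": 5_000_000,
    "permits": 3_000_000, "addresses": 2_000_000,
}

# Sort largest first, then deal tables alternately to the two sessions
# so both finish at roughly the same time.
ordered = sorted(row_counts, key=row_counts.get, reverse=True)
session_a, session_b = ordered[0::2], ordered[1::2]
print("session A:", session_a)
print("session B:", session_b)
```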
You'd need to provide a great deal more information about the network architecture, the capabilities of the loading and database hosts, and the location of the source FGDB before specific advice could be offered.
- V