The sde schema shouldn't be involved at all in any cross-DBMS geodatabase migration, only the table owner(s) at first.
I actually prefer to use SQL to define the tables *exactly* as desired, then Append from the source (or even use FeatureClassToFeatureClass to generate a near-clone staging table), and populate the contents of the new table(s) via INSERT INTO newtableN(columnlist) SELECT columnlist FROM temptableN.
This latter approach gives you the flexibility to restructure the table, to optimize physical row order via an ORDER BY, or both.
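A minimal sketch of the CREATE-then-INSERT-SELECT pattern, using Python's sqlite3 as a stand-in for the target DBMS; the table and column names here are hypothetical, purely for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Staging table as loaded from the source (e.g. via Append / FC2FC)
cur.execute("CREATE TABLE temptable1 (objectid INTEGER, name TEXT, region TEXT)")
cur.executemany(
    "INSERT INTO temptable1 VALUES (?, ?, ?)",
    [(3, "C", "west"), (1, "A", "east"), (2, "B", "east")],
)

# Target table defined *exactly* as desired (restructured: region first)
cur.execute("CREATE TABLE newtable1 (region TEXT, objectid INTEGER, name TEXT)")

# Populate, restructuring columns and controlling physical insert order
cur.execute(
    "INSERT INTO newtable1 (region, objectid, name) "
    "SELECT region, objectid, name FROM temptable1 "
    "ORDER BY region, objectid"
)
con.commit()

rows = [r[0] for r in cur.execute("SELECT objectid FROM newtable1")]
print(rows)  # insertion order follows the ORDER BY: [1, 2, 3]
```

The same INSERT INTO ... SELECT works in any enterprise DBMS; in practice the ORDER BY would target the clustering/partitioning key so that rows land on disk in query-friendly order.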
Of course, any versioned edits would need to be reconciled and posted before starting the transfer, and the feature classes should only be registered as versioned in the target database after confirming a successful transfer.
I recently did this with 120 million rows across a score of tables, optimizing the partitioning and adding timestamp columns for insert and update. I used Python to transfer the data, and added a parallel table with a SHA-1 hash to fingerprint the new contents, then compared it against a hash of the source data to confirm a successful transfer.
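The fingerprinting idea can be sketched like this: hash a canonical text rendering of each row, then roll the sorted per-row hashes into one digest, so source and target compare equal regardless of physical row order. The serialization format (tab-joined values) is an assumption for illustration, not the exact scheme I used:

```python
import hashlib

def row_sha1(row):
    """SHA-1 of one row, serialized as tab-joined text (assumed format)."""
    payload = "\t".join("" if v is None else str(v) for v in row)
    return hashlib.sha1(payload.encode("utf-8")).hexdigest()

def table_fingerprint(rows):
    """Order-independent digest: SHA-1 over the sorted per-row hashes."""
    h = hashlib.sha1()
    for digest in sorted(row_sha1(r) for r in rows):
        h.update(digest.encode("ascii"))
    return h.hexdigest()

source_rows = [(1, "A", "east"), (2, "B", "east"), (3, "C", "west")]
target_rows = [(3, "C", "west"), (1, "A", "east"), (2, "B", "east")]

# Same contents in a different physical order fingerprint identically
match = table_fingerprint(source_rows) == table_fingerprint(target_rows)
print(match)  # True
```

Storing the per-row hashes in a parallel table also lets you pinpoint exactly which rows differ when the overall digests don't match, instead of re-comparing everything.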
- V