I tested this on my machine and everything appears to work fine. Images of my model are shown below.
Parent Model
Notes: I need this model to create the new geodatabase only once, so I've placed the iteration logic within a submodel. The model grabs the new geodatabase name from the source geodatabase; in your case the source is a personal geodatabase (*.mdb) and this model outputs a file geodatabase (*.gdb). It then feeds the newly created file geodatabase to the Iterate Datasets submodel as the output workspace and passes the personal geodatabase as the input workspace.
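The "grab the new geodatabase name from the source" step is really just path arithmetic, so here's a minimal sketch of what the model's inline Python would do (the function name and folder argument are my own, hypothetical choices):

```python
import os

def derive_output_gdb(source_mdb, output_folder):
    # Take the personal geodatabase's base name (e.g. "Parcels"
    # from "Parcels.mdb") and reuse it for the file geodatabase.
    name = os.path.splitext(os.path.basename(source_mdb))[0]
    return os.path.join(output_folder, name + ".gdb")
```

So a source of `Parcels.mdb` yields an output workspace named `Parcels.gdb` in whatever folder you point the Create File GDB tool at.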
Sub Model to Create Feature Datasets
Notes: This model iterates through the input workspace and locates each of its feature datasets. For each feature dataset found, the model uses Python to determine the dataset's spatial reference, grabs its name from the iterator, and creates a new copy of it in the specified output workspace. Warning: because the model contains an iterator, the entire model repeats for each dataset found.
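If you'd rather script this step than model it, the equivalent logic in arcpy might look something like the sketch below (untested here; assumes an arcpy license is available, and the function name is mine):

```python
import arcpy

def copy_feature_datasets(in_ws, out_ws):
    # Point the environment at the source workspace so ListDatasets
    # sees its feature datasets.
    arcpy.env.workspace = in_ws
    for ds_name in arcpy.ListDatasets("", "Feature"):
        # Read the spatial reference off the source dataset...
        sr = arcpy.Describe(ds_name).spatialReference
        # ...and create an empty dataset of the same name in the
        # output workspace with that spatial reference.
        arcpy.CreateFeatureDataset_management(out_ws, ds_name, sr)
```

This mirrors the submodel: one pass per feature dataset, spatial reference pulled via Describe, and Create Feature Dataset writing into the new workspace.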
I'm not sure whether you want to use this model to recreate the entire geodatabase, but if so, you would continue this workflow. I'd expect you to need three more models to handle the other data types (i.e. tables, rasters, and feature classes). Because a model can contain only a single iterator, I'd estimate a minimum of 6 models to complete this task, of which I've provided the first two.
The end result would vary depending on how complex you want the logic to be. In the above model I return the newly created feature datasets as a list, and I would build additional models to iterate through each of these workspaces. Alternatively, you could implement logic that determines whether a given item is stored within a feature dataset and then locates and creates the path to the matching feature dataset in the new workspace.
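That "locate the matching path in the new workspace" idea can also be sketched as plain path arithmetic, since a geodatabase path behaves like a folder path for this purpose (hypothetical helper; names are mine):

```python
import os

def matching_path(item_path, old_ws, new_ws):
    # Express the item's location relative to the old workspace
    # (e.g. "Hydro/Rivers" for a feature class inside the Hydro
    # feature dataset, or just "Rivers" if it sits at the root)...
    rel = os.path.relpath(item_path, old_ws)
    # ...then rebuild that same relative location under the new one.
    return os.path.join(new_ws, rel)
```

An item inside a feature dataset keeps its dataset prefix, while a root-level item maps straight across, which is exactly the branch you'd otherwise have to model explicitly.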