I have been looking through this, and while it is promising, the one key item I was really hoping for is still not addressed:
- Advanced symbols are supported only if they are defined in the original service. Any overrides with advanced symbols will result in empty symbols in an offline map.
- Geometries that cross the dateline are not currently supported.
- If more than one feature layer in a map refers to the same feature service endpoint, only one feature layer will be taken offline. The other feature layers will raise an error.
This still seems to be the holy grail of the new replication model.
The new features are nice and would work well in a situation where (1) the service area can be broken out into small areas and (2) the workforce is relatively small. I like the idea of being able to download only the schema, which could be very useful in situations where the need is for a field inventory that can then be compared to the existing GIS back at the enterprise. I also think this gives the ability to develop a custom application with a Collector-like workflow.
However, for our purposes there is a significant flaw: there is no included functionality (that I observed) to register the offline map onto another machine. The workflow seems to be to use the OfflineMapTask to download all the services and then sync with the OfflineMapSyncTask, which syncs all the layers in the map. This means every user needs to do a full download of the map, including creating the replicas, which is a performance nightmare for a large service area that cannot be broken out. Combine that with 500+ users and it really is not a possible deployment workflow.

The offline map package looks to be basically just a folder that holds the offline replicas along with some information about the WebMap configuration. So, a possible workaround would be to side load the map package and then loop through the services using the existing register replica methods. A downside here is that with a large service area and a number of layers, the map package (even zipped) is going to be rather large. Plus you run into the limitation highlighted above: if you want two layers pointing to the same class, you need to come up with another method to get those layers set up on the client and included in the map.
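The side-load workaround could be sketched roughly as follows. This is only a minimal, hypothetical outline: `register_geodatabase` stands in for whatever register-replica call the SDK exposes (e.g. `GeodatabaseSyncTask.RegisterGeodatabaseAsync` in the .NET Runtime) and is supplied by the caller rather than implemented here.

```python
# Hypothetical sketch of side-loading an offline map package: find the
# replica geodatabases inside the package folder and register each one
# with its service, instead of generating a fresh replica per user.
from pathlib import Path


def find_replicas(package_dir):
    """Return the .geodatabase files inside a side-loaded map package."""
    return sorted(Path(package_dir).rglob("*.geodatabase"))


def register_package(package_dir, register_geodatabase):
    """Register every replica in the package; returns the files touched.

    `register_geodatabase` is an assumed callback wrapping the SDK's
    register-replica call; it is not implemented in this sketch.
    """
    replicas = find_replicas(package_dir)
    for gdb in replicas:
        register_geodatabase(gdb)  # SDK call supplied by the caller
    return replicas
```

The point of keeping the register call injectable is that the discovery/loop logic is the same regardless of which SDK or REST surface ultimately performs the registration.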
A goal I have been trying to achieve is to give the user a similar experience both in portal and when disconnected, and to have all configuration done in portal. I have written a server tool that offers similar functionality to the new classes, though certainly more limited. It loops through the layers and does a download on each layer, then uses the REST API to call /data on the item and grab all the popup configuration, capabilities, etc. This is bundled up and, with a bit of extension to JSON.net, the configuration can be rehydrated into the popup definition classes and associated with the layers when they load on the client. Using the capabilities property, it can be determined which layers are editable on the client.

Because I download each service as its own autonomous geodatabase, there is no limitation on having multiple layers pointing to the same feature classes as long as they are not in the same service. Granted, this does mean we sync the same data multiple times because those duplicated layers are synced individually; however, there is really no way (that I see) around that.

Once generated on the server, the layers are zipped up and placed in portal. The client downloads, unzips, and calls register, so we avoid having to generate a replica for each client, and because the replicas are small, downloading updates that might result from a schema change or because something went awry performs well. On the server, performance is good because the replication is all done in parallel, so it only takes as long as the largest replica to generate and upload the entire map. On the client, a similar approach makes even the initial deployment download perform well.
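The server-side parallelism described above could be sketched like this. It assumes a `create_replica(layer_url)` helper that wraps the feature service's createReplica REST operation; that helper is hypothetical and passed in by the caller, since only the fan-out pattern is the point here.

```python
# Sketch of parallel replica generation: fan the createReplica calls out
# across a thread pool so total wall-clock time is roughly that of the
# slowest (largest) replica rather than the sum of all of them.
from concurrent.futures import ThreadPoolExecutor


def generate_all(layer_urls, create_replica, max_workers=8):
    """Run create_replica for every layer concurrently.

    `create_replica` is an assumed helper wrapping the service's
    createReplica operation; results come back in input order.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(create_replica, layer_urls))
```

Because `ThreadPoolExecutor.map` preserves input order, the results can be zipped back against the layer list when packaging the geodatabases for upload.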