Take Mobile Map offline

07-03-2017 08:01 AM
JoeHershman
MVP Regular Contributor

I have been looking through this and, while promising, the one key item I was really hoping for is not resolved:

Limitations

  • Advanced symbols are supported only if they are defined in the original service. Any overrides with advanced symbols will result in empty symbols in an offline map.
  • Geometries that cross the dateline are not currently supported.
  • If more than one feature layer in a map refers to the same feature service endpoint, only one feature layer will be taken offline. The other feature layers will raise an error.

This still seems to be the holy grail of the new replication model.

The new features are nice and would work well in a situation where (1) the service area can be broken out into small areas and (2) the workforce is relatively small.  I like the idea of being able to download only the schema, which could be very useful where the need is for a field inventory that can then be compared to the existing GIS back at the enterprise.  I also think this gives the ability to develop a custom application with a Collector-like workflow.
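The schema-only idea can be expressed at the REST level through the createReplica operation's layerQueries parameter, where a queryOption of "none" asks for each layer's schema without any features. This is a rough Python sketch of just the request payload; posting it to a sync-enabled FeatureServer endpoint is omitted, and the exact parameter mix is an assumption to verify against the REST API documentation.

```python
import json

def schema_only_replica_params(layer_ids):
    """Build a createReplica payload that requests schema only.

    queryOption "none" (ArcGIS REST API createReplica) asks the
    service to include a layer's schema with no features. The
    remaining parameters are one plausible combination, not the
    only valid one.
    """
    return {
        "f": "json",
        "layers": ",".join(str(i) for i in layer_ids),
        "layerQueries": json.dumps(
            {str(i): {"queryOption": "none"} for i in layer_ids}
        ),
        "returnAttachments": "false",
        "syncModel": "perLayer",
        "dataFormat": "sqlite",
    }

params = schema_only_replica_params([0, 1, 2])
```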

However, for our purposes there is a significant flaw: there is no included functionality (that I observed) to register the offline map onto another machine.  The workflow seems to be to use the OfflineMapTask to download all the services and then sync with the OfflineMapSyncTask, which syncs all the layers in the map.  This means every user needs to do a full download of the map, including creating the replicas, which is a performance nightmare for a large service area that cannot be broken out.  Combine that with 500+ users and it really is not a feasible deployment workflow.

The offline map package does look to be basically just a folder that holds the offline replicas along with some information about the web map configuration.  So a possible workaround would be to side-load the map package and then loop through the services using the existing register-replica methods.  A downside here is that with a large service area and a number of layers, the map package (even zipped) is going to be rather large.  Plus you run into the limitation highlighted above: if you want two layers pointing to the same class, you need to come up with another method to get those layers set up on the client and included in the map.
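The side-load workaround could be sketched roughly like this: ship the pre-generated package to the device, extract it, and register each replica geodatabase individually. The `register` callback here is hypothetical; it stands in for whatever register-replica call is used in practice (e.g. the Runtime's geodatabase sync task register method or its REST equivalent).

```python
import zipfile
from pathlib import Path

def sideload_package(zip_path, dest_dir, register):
    """Extract a pre-generated offline map package and register each
    replica geodatabase against its service.

    `register` is a caller-supplied callback (hypothetical here);
    in a real app it would wrap the SDK's register-replica call so
    the server never has to generate a per-client replica.
    """
    dest = Path(dest_dir)
    with zipfile.ZipFile(zip_path) as z:
        z.extractall(dest)
    registered = []
    for gdb in sorted(dest.rglob("*.geodatabase")):
        register(gdb)  # one register call per replica geodatabase
        registered.append(gdb.name)
    return registered
```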

A goal I have been trying to achieve is to give the user a similar experience both in Portal and when disconnected, and to have all configuration done in Portal.  I have written a server tool that offers functionality similar to the new classes, though certainly more limited.  It loops through the layers and downloads each one, then uses the REST API to call /data on the item and grab all the popup configuration, capabilities, etc.  This is bundled up, and with a bit of extending JSON.NET the configuration can be rehydrated into the popup definition classes and associated with the layers when they load on the client.  Using the capabilities property, it can be determined which layers are editable on the client.

Because I download each service as its own autonomous geodatabase, there is no limitation on having multiple layers pointing to the same feature classes, as long as they are not in the same service.  Granted, this does mean we sync the same data multiple times, because those duplicated layers are synced individually; however, there is really no way (that I see) around that.

Once generated on the server, the layers are zipped up and placed in Portal.  The client downloads, unzips, and calls register, so we avoid having to generate a replica for each client, and because the replicas are small, downloading updates that might result from a schema change or because something went awry performs well.  On the server, performance is good because the replication is all done in parallel, so generating and uploading the entire map only takes as long as the largest replica.  On the client, a similar approach makes even the initial deployment download perform well.
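The "use the capabilities property to decide editability" step could look something like the sketch below, applied to the JSON that /data returns for a web map item. The operationalLayers and popupInfo names follow the web map specification; the assumption that a capabilities string sits alongside each layer entry is mine (in practice it may need to be fetched from the service itself).

```python
# Editing-related entries in a feature service capabilities string.
EDIT_CAPS = {"Create", "Update", "Delete", "Editing"}

def summarize_layers(webmap_json):
    """For each operational layer, keep the popup definition and
    derive an editability flag from its capabilities string.

    Assumes each layer dict carries a comma-separated `capabilities`
    string; layers without one are treated as read-only.
    """
    out = []
    for lyr in webmap_json.get("operationalLayers", []):
        caps = set((lyr.get("capabilities") or "").split(","))
        out.append({
            "title": lyr.get("title"),
            "popup": lyr.get("popupInfo"),   # rehydrated client-side
            "editable": bool(caps & EDIT_CAPS),
        })
    return out
```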

Cheers

-Joe

Michael Branscomb

AnttiKajanus1
Occasional Contributor III

Hi Joe,

Thanks for the feedback. This is actually something we are working on now, and it sounds like you have built a very similar system yourself to what we have been doing. In the first rollout of the OfflineMapTask we targeted support for Collector-style ad hoc or on-demand workflows; in the next phase we are working to get proper support for the pre-planned workflow. Where the ad hoc or on-demand workflow generates all the needed packages and creates the local copy of the map after the task is called, the pre-planned workflow will have a management UI built into ArcGIS Online which handles the generation before the data needs to be downloaded to the clients. On the Runtime side, we are adding functionality to query and download these areas. If you can, could you share the requirements for your workflow with me? You can send them to my email (akajanus@esri.com).

The pre-planned workflow aims to provide a framework you can use in ArcGIS to define the areas you want to take offline, create the packages, update them, and ultimately download them to the clients. This makes the workflow a bit like the desktop workflow (if you have seen our presentations or documentation describing these patterns before), but with support for synchronization. We know that if 500+ users create on-demand offline packages at 9 on a Monday morning, the servers won't be very pleased. Instead we can create the needed packages ahead of time, and the only thing users need to do is download the needed offline map (area).
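The client side of that pre-planned pattern reduces to "download a pre-generated package if you don't already have it." A minimal sketch, assuming the server exposes a mapping of area IDs to package URLs and a caller-supplied `download` function (both hypothetical):

```python
from pathlib import Path

def ensure_area(area_id, areas, cache_dir, download):
    """Fetch a pre-generated offline map area once and cache it.

    `areas` maps area_id -> package URL (assumed shape);
    `download(url, dest)` is a caller-supplied fetch. Because the
    package was generated ahead of time, no replica creation
    happens at download time.
    """
    dest = Path(cache_dir) / f"{area_id}.zip"
    if not dest.exists():
        download(areas[area_id], dest)
    return dest
```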

As you said, the offline map is nothing really special; it's just a folder that contains the definition files (.info / .mmap) and the data they need. It actually uses the same specification as mobile map packages, just in an exploded format. At the moment you can use the underlying sync tasks to register the geodatabases with the services, as you mentioned. I think we will still have an issue with large packages if the areas are large, contain a lot of layers (with attachments), and you want everything downloaded to the client. Using vector tiles will help, but ultimately it's a map design question: if the area is large and contains a lot of data and tons of attachments, the size of the package will reflect that, no matter how the offline map is created. We are trying to build a good mix of ease of use, versatility, and performance into the tasks so you have a good range of settings to optimize the package for your needs.
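Since the offline map is just an exploded folder, inspecting one is straightforward: separate the map definition files (.info / .mmap, per the mobile map package spec mentioned above) from the geodatabases they reference. A small sketch; any extensions beyond those three are lumped under "other":

```python
from pathlib import Path

def inspect_offline_map(folder):
    """Inventory an exploded offline map folder.

    Splits files into the map definition (.info / .mmap) and the
    replica geodatabases (.geodatabase); anything else (e.g. tile
    caches) lands in "other".
    """
    kinds = {"definition": [], "geodatabase": [], "other": []}
    for p in sorted(Path(folder).rglob("*")):
        if p.is_dir():
            continue
        if p.suffix in (".info", ".mmap"):
            kinds["definition"].append(p.name)
        elif p.suffix == ".geodatabase":
            kinds["geodatabase"].append(p.name)
        else:
            kinds["other"].append(p.name)
    return kinds
```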

"Granted this does mean we sync the same data multiple times because those duplicated layers are synced individually, however, there is really no way (that I see) around that."

Yes, this is why we omitted support for multiple layers pointing to the same feature service endpoint for now. This is one of the points where we want more user feedback. When we create the offline map, we create only one geodatabase per feature service, and that doesn't allow us to have multiple such layers, which would be handy in many cases. Do you think it's a reasonable hit to download the same data multiple times, if needed, to get support for this? It would work well in cases where, for example, you define an expression on the web map to take only part of the data offline.

At the moment you can point to the same underlying data, for example by using a 'View' in ArcGIS Online (which generates a new REST endpoint but references the same data as the other service), but if you use a layer-level override (which only references the same REST endpoint), it won't work. The key here is the different URL endpoint.
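That "one geodatabase per service endpoint" rule amounts to keying layers by their parent FeatureServer URL rather than the full layer URL. A minimal sketch of that grouping (the URLs are made up); a View shows up as its own key because its endpoint differs, while two layers on one service collapse into a single group:

```python
def service_endpoint(layer_url):
    """'.../FeatureServer/0' -> '.../FeatureServer'.

    A URL without a trailing numeric layer index is returned
    unchanged.
    """
    path = layer_url.rstrip("/")
    base, _, last = path.rpartition("/")
    return base if last.isdigit() else path

def group_layers(layer_urls):
    """Group layer URLs by parent endpoint; each group would map to
    a single replica geodatabase in the offline map."""
    groups = {}
    for url in layer_urls:
        groups.setdefault(service_endpoint(url), []).append(url)
    return groups
```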


Hopefully the OfflineMapTask and OfflineMapSyncTask come in handy for the many users who are building offline apps, and as mentioned earlier, we welcome all feedback. What should be supported? Do the classes actually make your life easier? Do you have workflows that cannot be solved with the current APIs? And so on; the more feedback we get, the better calls we can make about what is actually needed in the field.