All People > NFurness-esristaff > Geeknixta's Blog

Our instructions for manually installing the ArcGIS Runtime SDK for iOS discuss linking the Static Framework, but the 100.x version also includes a Dynamic Framework that you can use with your projects. Apple requires Dynamic Frameworks if you're, for example, building an iOS Extension (including WatchKit Extensions), or using Playgrounds.

 

Using the Dynamic Framework is straightforward. Simply browse to the iOS/Frameworks/Dynamic folder in the SDK install location and drag ArcGIS.framework into the General -> Embedded Binaries section of your target's settings.

 

 

There's one last step you need to take to ensure that your app will be accepted to the App Store: making sure the simulator components are stripped from your App Store submission. This is a well-understood workaround created by the smart folks at Realm.

 

First, under the Build Phases section of your target's settings, click the + and choose New Run Script Phase.

 

 

 

Then paste in the following:

 

bash "${BUILT_PRODUCTS_DIR}/${FRAMEWORKS_FOLDER_PATH}/ArcGIS.framework/strip-frameworks.sh"

 

So you end up with something like this:

 

 

Now you're ready to start working with the Runtime SDK's Dynamic Framework.

 

Why pick Dynamic or Static? That's a much broader topic, but in general linking against a Dynamic Framework means your actual executable will be smaller since it won't need to include the Static Framework at link time - instead the Dynamic Framework is seamlessly submitted to Apple alongside the executable and linked at runtime. However, if your project includes many Dynamic Frameworks, that could impact your application's startup time, so every project is different. Just be aware of the pros and cons and do what works for you.

 

Note, there is now also an ArcGIS.framework.dSYM file provided alongside the dynamic framework that can be used to symbolicate crash reports from apps built with the dynamic ArcGIS.framework.

The ArcGIS Platform runs on JSON. WebMaps are JSON. Service Definitions are JSON. Feature records are JSON. Our SDKs and APIs make use of this to make your life easier as a developer, of course, but this is also a great way to exchange objects (for example, geometries or graphics) between apps/devices/etc.

 

The iOS Runtime makes this easy with the AGSJSONSerializable protocol (other Runtime flavors offer the same functionality, but I'll talk about iOS here). 

 

Here's some sample Swift 3 code to serialize a map view's current viewpoint and restore it later…

 

1. When your app gets backgrounded, you might call this:

 

if let viewpoint = mapView.currentViewpoint(with: .centerAndScale) {
    do {
        let json = try viewpoint.toJSON()
        // Store the JSON somewhere, e.g. in UserDefaults.
    } catch(let jsonEncodeError) {
        print("Error: \(jsonEncodeError)")
    }
}

 

2. Later, when your app is reopened, read that JSON to create a new AGSViewpoint:

 

let json = // Read the JSON from wherever you stored it...
let map = // Load or create a map...

do {
    if let restoredViewpoint = try AGSViewpoint.fromJSON(json) as? AGSViewpoint {
        map.initialViewpoint = restoredViewpoint
    }
} catch {
    print("Error: \(error)")
}

mapView.map = map

 

If you already have a map associated with the map view, you can instead call AGSMapView.setViewpoint().
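The underlying pattern is just a JSON round-trip. Here's a plain-JavaScript sketch of the same save/restore flow (the `store` object and the viewpoint shape below are illustrative stand-ins, not SDK types):

```javascript
// Plain-JavaScript sketch of the save/restore flow. `store` stands in for
// whatever persistence you use (UserDefaults, localStorage, a file...), and
// the shape of the viewpoint object is illustrative.
var store = {};

function saveViewpoint(viewpoint) {
  try {
    store.viewpoint = JSON.stringify(viewpoint);
  } catch (err) {
    console.log("Error: " + err);
  }
}

function restoreViewpoint() {
  if (!store.viewpoint) { return null; }
  try {
    return JSON.parse(store.viewpoint);
  } catch (err) {
    console.log("Error: " + err);
    return null;
  }
}
```

The try/catch on both sides mirrors the Swift version: serialization and parsing can each fail independently, so each step handles its own errors.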

Often you'll want to extend existing functionality on an API class. You could write stand-alone functions, or even a helper object, but occasionally this functionality might nicely complement the existing functionality on the class and would make most sense as part of the class itself.

 

Here's a case in point. The 4.0 MapView and SceneView have a UI overlay that you can add HTML elements and Dojo widgets to. I just wanted to be able to turn this on or off entirely. After digging around a bit I was able to build some functions to do just that:

 

function hideUI(view) {
  _setUIVisible(view, false);
}

function showUI(view) {
  _setUIVisible(view, true);
}

function _setUIVisible(view, visible) {
  var items = view.ui._components;

  for (var i=0; i<items.length; i++) {
    var component = view.ui.find(items[i]);
    setComponentVisible(component, visible);
  }

  function setComponentVisible(component, visible) {
    var widget = component.widget;
    if (widget) {
      widget.visible = visible;
    } else {
      component.node.style.display = visible ? "" : "none";
    }
  }
}

 

But this code isn't very portable. You have to either create global functions or embed these in your own functions and limit their accessibility. That leads to bad things like copy-and-paste duplication, which means more maintenance.

 

No global variables, ever!

So, to add this to the appropriate JS API class, I had to wrap a little Dojo around it, and of course do some renaming. Using Dojo's extend() function you can add behavior to existing Dojo classes. So the following code gets hold of the UI class and adds my functionality to it.

 

require([
  "esri/views/ui/UI"
], function(UI) {
  var extension = {
    hide: function() {
      _setUIVisible(this, false);
    },
    show: function() {
      _setUIVisible(this, true);
    }
  };

  UI.extend(extension);

  function _setUIVisible(ui, visible) {
    var items = ui._components;

    for (var i=0; i<items.length; i++) {
      var component = ui.find(items[i]);
      setComponentVisible(component, visible);
    }

    function setComponentVisible(component, visible) {
      var widget = component.widget;
      if (widget) {
        widget.visible = visible;
      } else {
        component.node.style.display = visible ? "" : "none";
      }
    }
  }
});

 

You'll notice I've renamed the showUI() and hideUI() functions to show() and hide() because they now exist on the UI class itself. So you would call view.ui.hide() and view.ui.show(). And the method that does all the work doesn't even make its way onto the UI class, reducing clutter and the risk of stepping on other methods and properties.

 

Danger!

And that raises a very important point. If the JS API team add this functionality down the line, my code might trample all over it, or worse! This is of course a "very bad thing". So, to be a little safer and politer, let's add logic to make sure we're not overwriting anything.

 

require([
  "esri/views/ui/UI"
], function(UI) {
  /// View UI Helper Functions.
  var extension = {
    hide: function() {
      _setUIVisible(this, false);
    },
    show: function() {
      _setUIVisible(this, true);
    }
  };

  safeExtend(UI, extension);

  ...

  function safeExtend(classToExtend, extension) {
    var existing = Object.getOwnPropertyNames(extension).filter(function (item) {
      return classToExtend.prototype[item] !== undefined;
    });

    for (var i=0; i < existing.length; i++) {
      console.warn("'" + existing[i] + "' already exists on class " + classToExtend.prototype.declaredClass + ": Skipping…");
      delete extension[existing[i]];
    }

    classToExtend.extend(extension);

    return existing;
  }
});

 

The above code takes the extension we want to make to the UI class, but first removes anything the UI class already implements, logging a warning to the browser console. Nothing needs to be done with safeExtend()'s return value, but it lets callers check whether there was a conflict if they want to.
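The same guard works outside Dojo too. Here's a plain-JavaScript version of the idea that extends a constructor's prototype directly (the Widget class below is just an illustration):

```javascript
// Plain-JavaScript version of safeExtend: copy only the properties
// that the target class's prototype doesn't already define.
function safeExtend(classToExtend, extension) {
  var existing = Object.getOwnPropertyNames(extension).filter(function (item) {
    return classToExtend.prototype[item] !== undefined;
  });

  existing.forEach(function (name) {
    console.warn("'" + name + "' already exists: skipping");
    delete extension[name];
  });

  Object.getOwnPropertyNames(extension).forEach(function (name) {
    classToExtend.prototype[name] = extension[name];
  });

  return existing;
}

// Illustrative target class with a pre-existing show() method.
function Widget() {}
Widget.prototype.show = function () { return "original show"; };

var skipped = safeExtend(Widget, {
  show: function () { return "clobbered!"; },
  hide: function () { return "hidden"; }
});
```

Here safeExtend() leaves Widget's original show() untouched, adds hide(), and returns ["show"] so the caller knows what was skipped.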

 

Here's the complete code.

 

Don't go crazy

I don't recommend doing this often. You need to be thoughtful (scroll down to the "Considerations" section of this article on doing similar stuff in iOS). But this code is small, nicely compartmentalized, and, even though it leans on the internal _components variable, not very intrusive. More commonly it's appropriate to subclass or create a helper object. However, this highlights a couple of neat ways to interact with Dojo and the JS API, so I figured I'd share it.

 

Enjoy!


A user recently asked whether there was a blank tiled layer anywhere that could save them temporarily after they pushed a build to the Apple App Store. I wasn't aware of one, but knowing that the Runtime SDKs and JS APIs would be OK with a tiled layer that was only partially generated, it struck me that ArcGIS Online could support a pretty efficient workflow for creating one.

 

So, here it is.

 

  1. Log in at https://developers.arcgis.com and create a new layer:
    01.png
  2. Set up the layer (no harm allowing attachments but since this layer will never hold any data I uncheck that option):
    2016-07-07_12-24-37.png
  3. Open the new layer in ArcGIS Online:
    2016-07-07_12-28-35.png
  4. Click Publish to create a tiled layer from the feature layer:
    2016-07-07_12-29-32.png
  5. Wait for it to spin up, then give the tiled layer a name and make sure it's visible across all ranges (don't worry - we won't actually create all those tiles!), and click OK:
    2016-07-07_12-30-53.png
  6. Now generate only the smallest scale set of tiles and leave all other levels blank. Select the scale level with the fewest tiles and click Create Tiles:
    2016-07-07_12-32-41.png
  7. You'll be given a warning dialog. Make sure the numbers make sense and you're not about to accidentally burn through all your credits, then click "Yes, create tiles":
    2016-07-07_12-33-16.png
  8. ArcGIS Online will tell you things are baking… You can close the tile management panel now.
    2016-07-07_12-33-32.png
    and ArcGIS Online will still tell you things are baking…
    2016-07-07_12-33-50.png
  9. When it's done, you'll see 0% of the tiles are available. Don't worry. 24 out of a few hundred million tiles really is close to 0%. Click Manage Tiles to review what was created. Just make sure that 100% of the tiles you actually requested for that one scale level were created:
    2016-07-07_12-35-35.png
  10. If you need to share the tiled layer, go ahead, but otherwise that's it - you've generated a blank tiled service layer that the APIs and SDKs can use.
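As a sanity check on the numbers in step 7: in the standard web map tiling scheme a full-world cache has one tile at level 0 and quadruples at every level, so tile counts explode quickly. A quick sketch of that arithmetic (assuming a full-world extent; a smaller extent means fewer tiles per level):

```javascript
// Tile counts for a standard web-map tiling scheme, assuming a
// full-world extent: one tile at level 0, quadrupling at each level.
function tilesAtLevel(level) {
  return Math.pow(4, level);
}

function totalTilesThroughLevel(maxLevel) {
  var total = 0;
  for (var level = 0; level <= maxLevel; level++) {
    total += tilesAtLevel(level);
  }
  return total;
}
```

By level 19 that's over 360 billion tiles in total, which is why generating just the coarsest level is so cheap by comparison.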

 

You generally won't need this hack, but maybe you'll find yourself in a bind and this can help out.

This post is based on a talk I gave at the 2015 Esri User Conference about Building Geospatial apps for the Apple Watch (slides). Many of the concepts are applicable to wearables in general, but I’ll be focusing on how Apple have consciously decided to implement these concepts.

 

Big, medium and small

Desktop software has many luxuries, not only in the resources available to it (CPU, RAM, Storage and Network) but also in user attention. When you sit at your laptop, you could be about to engage in work that’ll take a few hours, although it may be more like tens of minutes.

 

In contrast, when you pick up your phone, interactions take seconds or minutes. There’s limited CPU and RAM. Network speed and even availability is always a concern. And you need to be easy on the battery.

 

Now comes the watch. There is little space for interaction (which has serious implications for mapping apps) and short battery life. And a user’s attention is seconds at most. Even 3 seconds is often too long.

 

Apple’s take

These are early days for Apple Watch developers, and Apple wants their users to have a good experience, so here’s some key advice:

 

  • Be ready: Have your information waiting for the user. Don’t make them raise their wrist then wait for something to load.
  • Be specific: Space is at a premium. Don’t include things the user doesn’t want or need to see.
  • Be helpful: This is a philosophy intertwined with “Be ready” and “Be specific”. The user will appreciate it.
  • Do not annoy: While implied by the above, it’s critical on its own. A nagging app will get deleted from your wrist quicker than anything.

 

You need to be really conscious of your user then, with a clear focus on what they want. Get your data early, even before the user wants it, and use a really clean UI.

 

Apple’s recognition of these limitations is codified very carefully into these components of any installed app:

 

Notifications: You alert your user with a Notification. Initially, it’s a short-look notification. If the user decides they can deal with it later, they drop their wrist and carry on with their day. But if they keep looking, you provide a long-look notification with custom UI and optional response buttons. Tapping on either type of notification will open the app.

 

Glances: If the user instead wants to request information, there are “Glances”. These are available with a swipe from the watch face and provide a summary of your app data. E.g. GPS location, current scores, flight status. Glances are not interactive. They’re a single-page billboard of information. Tapping one will open the app.

 

Apps: The interactive part. An app can do almost anything, but the UI is limited by screen-size and battery considerations.

 

Complications: These are tiny pieces of information integrated into the watch face itself (the term comes from the traditional watch industry, and I don’t like it). In watchOS 2, you create your own complications and the user may not even need to swipe to see your app’s Glance.

 

Architecture

In watchOS 1, Apple Watch apps run on the phone while the UI is on the watch. This means they're dependent on the phone being nearby, and it also means there's a lot of chatter over Bluetooth between the interface on the watch and the interface controller on the phone. This can lead to poor app experiences, but it saves battery.

 

With watchOS 2, you are able to write apps that run fully on the watch (I suspect Apple wanted real-world battery usage metrics before they allowed this). A new framework is provided to communicate to the phone should you still need to (and to use the ArcGIS Runtime, you will).

 

This gives you:

 

  • More responsive UI
  • Phone-independent execution
  • Access to watch hardware (Heartrate monitor, accelerometer, microphone, etc.)

 

watchOS1-2.png

 

Limitations

Not all iOS frameworks have made it over to the watch (e.g. OpenGL, so there’s no ArcGIS Runtime SDK for Apple Watch). Some frameworks have come over in limited form (e.g. CoreLocation).

 

This means, for example, that to use the ArcGIS Runtime SDK you still need to communicate with a component of the app running on the phone. It's worth considering how and when you do this, and whether you need frequent access to the Runtime once you've done some work. That depends on the nature of your app and whether it needs to be able to operate without the paired phone nearby, and many apps don't need that. For example, if your watch app depends on point-in-polygon tests, an ArcGIS Polygon could be represented as a UIBezierPath.
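For illustration, here's what such a lightweight check might look like as a ray-casting point-in-polygon test, sketched in plain JavaScript (on watchOS you'd more likely lean on UIBezierPath's contains method; the square polygon below is just a sample):

```javascript
// Simple ray-casting point-in-polygon test - the kind of lightweight
// spatial check that can run without the full Runtime nearby.
// `point` is [x, y]; `polygon` is an array of [x, y] vertex pairs.
function pointInPolygon(point, polygon) {
  var inside = false;
  for (var i = 0, j = polygon.length - 1; i < polygon.length; j = i++) {
    var xi = polygon[i][0], yi = polygon[i][1];
    var xj = polygon[j][0], yj = polygon[j][1];
    // Does a ray cast to the right of `point` cross this edge?
    var intersects = ((yi > point[1]) !== (yj > point[1])) &&
      (point[0] < (xj - xi) * (point[1] - yi) / (yj - yi) + xi);
    if (intersects) { inside = !inside; }
  }
  return inside;
}
```

Precompute the polygon on the phone, hand it to the watch once, and the watch can answer "am I inside?" on its own for as long as the polygon stays valid.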

 

Even though Apple Watch doesn't have a power-hungry GPS unit, it does have WiFi and with watchOS 2 it can geolocate itself with limited accuracy and frequency using WiFi triangulation, without contacting the paired phone. There are so many caveats involved that testing in the field is the only way to know if this will be good enough for your app.

 

Think different® strikes again

So at last we get to what this all means for a mapping/geo/location app on the watch.

 

More than anything else, it means you probably have to rethink what your app does for the user. Displaying a map on the watch is of limited use. Interacting with it is even trickier since fingers are always in the way.

 

Crimeglance.png
The first step is to ask whether you need a map at all. What problem are you solving for the user? Perhaps “TURN LEFT AHEAD”, “3 Indian Restaurants close by”, “You’ve walked 2 miles”, or “High risk of mugging ahead” are more appropriate to present than a map. Or maybe a chart based off the user's location would be useful.

 

Even if you're not using a map, you might still need to do some spatial work. You might be calculating routes or buffers, performing proximity searches, figuring out drive-times. Using the Runtime, as I've already mentioned, means those have to happen on the iPhone (and that's unlikely to be a problem), but if you can think about your work ahead of time then hand off enough information to the watch to monitor progress, then you can build a largely decoupled app. For example, your home's location doesn't change often, nor do locations of restaurants. So if that data is relevant to your app, one set of analyzed data loaded to the watch is probably valid for at least a week and your watch app can function with minimal or no communication to the iPhone in that time.

 

Then consider what subset of your iPhone app’s functionality it makes sense to use a watch for. It’s unlikely to be all of it. Maybe there are only a couple of use-cases out of many that your app covers. Entering an address probably doesn’t make sense. But picking from that list of 3 Indian Restaurants and selecting a time to book a table might.

 

Be personal and personable

Another consideration is what Apple refer to a lot as “intimacy”. You should make efforts to ensure your app feels natural to use. Don’t say something is 47 seconds away, but say it’s “close” or “coming up”. Instead of saying “There are 43 Indian Restaurants within 5 miles” perhaps say “There are lots of Indian Restaurants close by”. And use context sensibly too. The wearer of the watch may be walking, not driving, and while 5 miles might be “close by” in a car, half a mile might be more suitable for walking, so adjust appropriately to “There are 3 Indian Restaurants close by”.

 

Fuzziness is a key component to this. Half a mile might be a sensible threshold for a search, but if there are another 6 restaurants only another minute’s walk further, you’re not giving the user the best answer. Your app might notice this and instead say “There are 9 Indian Restaurants within walking distance”.
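A toy sketch of that kind of fuzzy, context-aware phrasing (the thresholds and wording here are entirely illustrative, and would be tuned for real users):

```javascript
// Toy example of fuzzy, context-aware phrasing. Thresholds and wording
// are illustrative - tune them for your app and your users.
function describeNearby(distancesInMiles, travelMode) {
  // "Close by" means something different on foot than in a car.
  var closeBy = (travelMode === "walking") ? 0.5 : 5; // miles
  var count = distancesInMiles.filter(function (d) {
    return d <= closeBy;
  }).length;

  if (count === 0) {
    return "No Indian Restaurants nearby";
  }
  if (count <= 3) {
    return "There are " + count + " Indian Restaurants close by";
  }
  return "There are lots of Indian Restaurants close by";
}
```

The point isn't the exact numbers; it's that the app, not the user, absorbs the difference between "47 seconds", "0.4 miles" and "close".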

 

Lastly, you should be prepared for what the user wants to know. Remember, they won’t wait, staring at their watch while you load some information to show them. Try to know ahead of time what information will be useful to them, and have it ready. This might involve reconsidering some workflows and use cases, but it’s for a good cause, right?

 

Another way to look at that is that a really good Watch App is passive. It trundles along in the background, making sure it's on top of things so that it can show the user what they want to see when they want to see it, and very occasionally tapping them gently on the wrist to tell them what they need to know but haven't thought to look for. In fact, the more the user can avoid thinking, the more they'll appreciate the app telling them something relevant at the right time. Earning the user's trust is also a very personal behavior.

 

Feeding back to the iPhone app

What’s perhaps most interesting about the demands made of you as a Watch App developer are the things it forces you to understand about your iPhone app. Many of the above examples would be well served making their way back to the phone, even if there is still a map involved.

 

But the clearest takeaway to me is that to make a successful app for the watch, you need to take a deep breath, take a step back, and take another look at how your app helps someone with their day.

 

These are early days for building apps for wearables, full of exciting opportunities.

 

Further reading

Read up on WatchKit and what's new in watchOS 2.

 

Check out the following videos from Apple's 2015 WWDC:

 


Think Different is a trademark of Apple Inc.

Swift is a new language introduced by Apple to ease writing iOS and OS X applications. And at the Speedgeeking session during the Esri UC last week, I showed some examples of working with Swift and our Runtime SDKs. Here's my very-much-in-flux repo that formed the basis of that.

 

Swift and ArcGIS today

You can use Swift today with the 10.2.3 Runtime. It takes a tiny bit of configuration of your installed SDK.

 

The first and only step is to add a module definition file to your installation of the Runtime, as described in this Gist.

 

framework module ArcGIS {
  umbrella header "ArcGIS.h"

  export *
  module * { export * }
}

 

Credit goes to Eric Ito who sat down with the Apple guys at WWDC. It took them all of 20 seconds to work out as I took notes looking over their shoulders. I know this would have taken me an age to figure out.

 

Note also, the above is only necessary in a pure Swift project. An Objective-C project that you add Swift to doesn't even need that.

 

Supporting Swift

Quite a few questions I got at the User Conference last week revolved around the ArcGIS Runtime SDKs for iOS/OS X and Esri's support of Swift. The truth is, Apple have done such a good job of building bridges between Objective-C and Swift that we don't actually need to do anything to let you get full use of our SDK out of the box. And the above module definition will be included with our next SDK release, making it even simpler.

 

import UIKit
import ArcGIS

var tileUrl = "http://services.arcgisonline.com/ArcGIS/rest/services/World_Topo_Map/MapServer"
var nycZipsUrl = "http://services.arcgis.com/OfH668nDRN7tbJh0/arcgis/rest/services/NYC_ZipCodes/FeatureServer/0"
var zipCodeLayerName = "Zipcodes"

class ViewController: UIViewController, AGSMapViewTouchDelegate {
                            
    @IBOutlet var mapView:AGSMapView
    
    override func viewDidLoad() {        
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.
        
        var basemap = AGSTiledMapServiceLayer(URL: NSURL(string: tileUrl))
        mapView.addMapLayer(basemap, withName: "Basemap")
        
        var zipcodes = AGSFeatureLayer(URL: NSURL(string: nycZipsUrl), mode: AGSFeatureLayerModeOnDemand)
        mapView.addMapLayer(zipcodes, withName: zipCodeLayerName)
        
        mapView.touchDelegate = self
    }

    func mapView(mapView: AGSMapView, didClickAtPoint screen:CGPoint, mapPoint mappoint:AGSPoint, features touchedFeatures:Dictionary<String,[AGSFeature]>) {
        if touchedFeatures[zipCodeLayerName]?.count > 0 {
            println("\(touchedFeatures[zipCodeLayerName]![0]) was a zipcode!")
        }
    }
}

 

That doesn't mean we aren't thinking about ways to make things more "Swifty" when the time comes, but I wouldn't imagine it'll be a priority until at least such a time as the language settles down. The Runtime team have a ton more exciting things to bring you first.

 

One other thing that was news to some folks in San Diego was that Swift apps can target iOS7 and Mavericks just fine. No need for iOS8 or Yosemite, but remember: you can't even release anything written in Swift yet, and only Apple will tell you when that'll be ("the fall" is all the community knows so far).

 

Working with Swift

Swift is currently in beta. Unusually for Apple, the typical NDAs barely apply and many people are actively discussing it and writing about it publicly. You can find entire sites dedicated to it, and StackOverflow threads abound. Apple's own engineers seem actively involved in discussions around the web.

 

My own experience has been that it's a little touch-and-go at the moment. With each beta release, certain aspects of the language break as others are fixed, so prepare yourself for some frustration until the initial release around iOS 8 "in the fall". To be clear, these breaks and fixes haven't affected using the Runtime SDK; they've affected the way the language compiles and how the static analyzer tells you things are wrong, when perhaps they're not. I describe it as Apple untying their shoelaces to tie them up properly.

 

One area I'd love to see Apple improve is being able to use third-party frameworks (e.g. ours!) in an interactive Swift Playground. Right now, you simply cannot. I've had no luck with our framework in the Swift REPL either (probably by design, and for the same reasons, but I'm going to keep trying).

 

I'd love to hear your experiences of playing with Swift and the ArcGIS Runtimes.