
Applications Prototype Lab



The map above shows some spider diagrams. These diagrams are useful for presenting spatial distribution, for example, customers for a retail outlet or the hometowns of university students. The lab was recently tasked with creating an automated spider diagram tool without using Business Analyst or Network Analyst. The result of our work is the Spider Diagram Toolbox, for use with either ArcGIS Pro or ArcGIS Desktop.


Installation is fairly straightforward. After downloading the zip file, decompress it and place the following files on your desktop:

  • SpiderDiagram.pyt,
  • SpiderDiagram.pyt.xml,
  • SpiderDiagram.Spider.pyt.xml, and
  • SpiderDiagramReadme.pdf

In ArcGIS Pro or ArcMap, you can connect a folder to this desktop location so that you can access these files.


Running the tool is also easy. The tool dialog will prompt you for the origin and destination feature classes as well as the optional key fields that will link destination points to origin points. In the example below, the county seats are related to state capitals by the FIPS code.





Leave one or both key fields blank to connect each origin point to every destination point.




Which is the origin and which is the destination feature class?  It really doesn’t matter for this tool – either way will work.  If you want to symbolize the result with an arrow line symbol, know that the start point of each line is the location of points in the origin feature class.
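If you prefer scripting over the tool dialog, the toolbox can also be called from Python with arcpy. The sketch below is illustrative only; the tool name, parameter order, and paths are assumptions, so check SpiderDiagramReadme.pdf for the actual signature.

import arcpy

# Load the Python toolbox from wherever you placed it (the post suggests the desktop).
arcpy.ImportToolbox(r"C:\Users\me\Desktop\SpiderDiagram.pyt")

# Hypothetical call: origin points, destination points, optional key fields, output lines.
# The tool name and parameter names here are placeholders, not the toolbox's documented API.
arcpy.Spider_SpiderDiagram(
    r"C:\data\demo.gdb\state_capitals",    # origin feature class
    r"C:\data\demo.gdb\county_seats",      # destination feature class
    "STATE_FIPS",                          # origin key field (optional)
    "STATE_FIPS",                          # destination key field (optional)
    r"C:\data\demo.gdb\spider_lines")      # output line feature class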


Script and article written by Mark Smith.


Please direct comments to Bob Gerlt.


Amazon recently released a deep learning enabled camera called the DeepLens. The DeepLens allows developers to build, train, and deploy deep learning models to carry out custom computer vision tasks. The Applications Prototype Lab obtained a DeepLens and began to explore its capabilities and limitations.


One of the most common tasks in computer vision is object detection. This task is slightly more involved than image classification since it requires localization of the object in the image space. This summer, we explored object detection and how we could expand it to fit our needs. For example, one use case could be to detect and recognize different animal species in a wildlife preserve. By doing this, we can gather relevant location-based information and use this information to carry out specific actions.


Animal species detection was tested using TensorFlow’s Object Detection API, which allowed us to build a model that could easily be deployed to the DeepLens. However, we needed to scale down our detection demo to make it easier to test on the Esri campus. For this, we looked at face detection.


The DeepLens comes with sample projects, including those that carry out object and face detection.  For experimentation purposes, we decided to see if we could expand the face detection sample to be able to recognize and distinguish between different people.


Services and Frameworks

Over the course of the summer, we built a face recognition demo using various Amazon and Esri services. These services include:

  • The AWS DeepLens Console (to facilitate deployment of the projects onto the DeepLens)
  • Amazon Lambda (to develop the Python Lambda functions that run the inference models)
  • Amazon S3 (to store the trained database, as well as the models required for inference)
  • The AWS IoT Console (to facilitate communication to and from the DeepLens)
  • A feature service and web map hosted on ArcGIS Online (to store the data from the DeepLens’ detections)
  • Operations Dashboard (one for each DeepLens, to display the relevant information)


To carry out the inference for this experiment, we used the following machine learning frameworks/toolkits:

  • MxNet (the default MxNet model trained for face detection and optimized for the DeepLens)
  • Dlib (a toolkit with facial landmark detection functionality that helps in face recognition)



The MxNet and Dlib models are deployed on the DeepLens along with an inference lambda function. The DeepLens loads the models and begins taking in frames from its video input. These frames are passed through the face detection and recognition models to find a match from within the database stored in an Amazon S3 bucket. If the face is recognized, the feature service is updated with relevant information, such as name, detection time, DeepLens ID, and DeepLens location, and the recognition process continues.
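As a rough illustration of that last step (this is not the demo's actual Lambda code), a recognition event like the one described above could be appended to the hosted feature service with the ArcGIS API for Python. The layer URL, credentials, and field names below are placeholders and would need to match your own feature service schema.

# A minimal sketch: add one recognition event to a hosted feature layer.
from datetime import datetime, timezone
from arcgis.gis import GIS
from arcgis.features import FeatureLayer

gis = GIS("", "your_username", "your_password")
layer = FeatureLayer(
    "https://services.arcgis.com/<org>/arcgis/rest/services/Detections/FeatureServer/0", gis)

detection = {
    "attributes": {
        "name": "Jane Doe",                                              # recognized person
        "detection_time": int(datetime.now(timezone.utc).timestamp() * 1000),  # epoch ms
        "deeplens_id": "deeplens-office-01",                             # which camera saw them
    },
    # DeepLens location as longitude/latitude (WGS84)
    "geometry": {"x": -117.1956, "y": 34.0564, "spatialReference": {"wkid": 4326}},
}

result = layer.edit_features(adds=[detection])
print(result["addResults"])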


If there is no match to the database, or if the recognition model is unsure, a match is still returned with “Unidentified” as the name. When this happens, we trigger the DeepLens for training. For this, we have a local application running on the same device as the dashboard. When encountering an unidentified person, the application prompts the person to begin training. If training is triggered, the DeepLens plays audio instructions and grabs the relevant facial landmark information to train itself. The database in the S3 bucket is then updated with this data, and the recognition process resumes.

Demo Workflow


The face recognition model returns a result if its inference has an accuracy of at least 60%. The model has been able to return the correct result, whether identified or unidentified, during roughly 80% of the tests. There have been a few cases of false positives and negatives, but this has been reduced significantly by tightening the threshold values and only returning consistent detections (over multiple frames). The accuracy of the DeepLens detections can be increased with further tweaking of the model parameters and threshold values.
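The “consistent detections over multiple frames” idea amounts to a simple smoothing filter. The sketch below is an illustration of that logic; the 60% threshold comes from the text, while the window size is an assumed value.

from collections import deque, Counter

THRESHOLD = 0.60   # minimum model confidence, as described above
WINDOW = 5         # number of recent frames that must agree (assumed value)

recent = deque(maxlen=WINDOW)

def report(name, confidence):
    """Return a name only when recent frames consistently agree, otherwise None."""
    recent.append(name if confidence >= THRESHOLD else "Unidentified")
    if len(recent) < WINDOW:
        return None
    label, count = Counter(recent).most_common(1)[0]
    return label if count >= WINDOW - 1 else None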


The model was initially trained and tested on a database containing facial landmark data for four people. Since then, the database has grown to contain data for ten people, which has led to a reduction in the occurrence of false positives. The model is more likely to be accurate as more people are trained and stored in the database.


Object or animal species detection can be implemented on the DeepLens if we have a model that is trained on enough data (related to the object(s) we intend to detect). This data should consist of positives (images containing the object to be detected) as well as negatives (images that do not contain the object) for the best accuracy. Once we have the models ready, the process to get it running on the DeepLens is very similar to the one used to run the face detection demo.



The DeepLens currently only supports Lambda functions written in Python 2.7, but Amazon seems to be working on building support for Python 3.x.


The model optimizer for the DeepLens only supports optimizing certain MxNet, TensorFlow, and Caffe models for the built-in GPU. Other frameworks and models can still be used on the DeepLens, but the inference speed is drastically reduced.


Future Work

We discovered that the DeepLens has a simple microphone. Although the DeepLens is primarily designed for computer vision tasks, it would be interesting to run audio analysis alongside the computer vision tasks, for example, to know when a door has been opened or closed.

CityEngine Station-Hand model


Every now and then a really unique and out-of-the-box idea comes our way that expands our conceptions about the possible applications of the ArcGIS platform. This was one of those ideas. Could GIS be used to map the human body? More specifically, could we use CityEngine to visualize the progress of physical therapy for our friend and Esri colleague Pat Dolan of the Solutions team? Pat was eager to try this out, and he provided a table of measurements taken by his physical therapist to track his ability to grip and extend his fingers over time. With the help of the CityEngine team, we developed a 3D model of a hand, and used CityEngine rules to apply Pat's hand measurements to the model. We thought it would be fun to show a train station in a city that would magically transform into a hand. Our hand model is not quite anatomically correct, but it has all the digits and they are moveable!


Click the image above to view a short video of this project. Pat and I showed this application, and others, at the 2017 Esri Health and Human Services GIS Conference in Redlands. Click here to view a video of that presentation.

The graph theory concept of Centrality has gained popularity in recent years as a way to gain insight into network behavior. In graph or network theory, Centrality measures are used to determine the relative importance of a vertex or edge within the overall network. There are many types of centrality. Betweenness centrality measures how often a node or edge lies along the optimum path between all other nodes in the network. A high betweenness centrality value indicates a critical role in network connectivity. Because there are currently no Centrality tools in ArcGIS, I created a simple ArcGIS Pro 2.1 GP toolbox that uses the NetworkX Python library to make these types of analyses easy to incorporate in ArcGIS workflows.  

Figure 1 Centrality Analysis Tools (CAT)

The terms network and graph will be used interchangeably in this blog. Here, network does not refer to an ArcGIS Network dataset. It simply means a set of node objects connected by edges. In ArcGIS these nodes might be points, polygons or maybe even other lines. The edges can be thought of as the polylines that connect two nodes. The network could also be raster regions connected by polylines traversing a cost surface using Cost Connectivity.
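To make that concrete, the core of the Node Centrality computation looks roughly like the sketch below. This is an illustration rather than the toolbox's source code: the table path and the start/end node field names are placeholders, and the cost field follows the Cost Connectivity output discussed further down.

import arcpy
import networkx as nx

# Build an undirected graph from an edge table, then compute betweenness centrality.
G = nx.Graph()
fields = ["FROM_NODE", "TO_NODE", "Shape_Length"]   # placeholder start, end, and cost fields
with arcpy.da.SearchCursor(r"C:\data\demo.gdb\neighbor_paths", fields) as cursor:
    for start, end, cost in cursor:
        G.add_edge(start, end, weight=cost)

# Betweenness centrality: how often each node lies on the optimum (lowest-cost) path
# between every other pair of nodes.
centrality = nx.betweenness_centrality(G, weight="weight")
print(sorted(centrality.items(), key=lambda kv: kv[1], reverse=True)[:5])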



Figure 2 A few mid-western urban areas connected by major roads. Cost Connectivity was used to find "natural" neighbors and connect the towns via the road network.


As it turns out, the output from Cost Connectivity (CC) is perfect input for the Centrality Analysis tools. Let’s take a look at the CC output table.

Figure 3 Cost Connectivity output with "out_neighbors_paths" option selected.


Now let’s see how this lines up with CAT Node Centrality input parameters.


 Figure 4 Node Centrality tool parameters 


There are a couple of things worth mentioning here. The Starting Node Field and Ending Node Field do not indicate directionality; in fact, the tool assumes cost is the same to move in either direction. I used Shape_Length but could have used PathCost or some other field indicating the cost to move from node to node. This table and its associated feature class are created by Cost Connectivity when you select the “out_neighbor_paths” option. While the minimum spanning tree option will work, the Neighbor output seems more reasonable for centrality analysis. It is also important to make sure you do not have links in your graph that connect a node to itself and that all link costs are greater than zero.


Figure 5 Options for Node Centrality type


Both the Node and Edge Centrality tools require “connected” graphs, which means all the nodes in the graph must be connected to the rest of the network. If you have nodes that are not connected or reachable by all the other nodes, some functions will not work. This can happen when you have nodes on islands that are unreachable for some reason. If this happens, you will have to either make a connection and give it a really high cost or remove those nodes from the analysis.


Because these tools require some specific input, I included a Graph Info tool so that users could get information about the size and connectedness of their input data before trying to run either the Node or Edge centrality tools.


Figure 6 Graph Info tool provides critical information about potential input data without having to run one of the tools first.
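A minimal sketch of the kind of checks the Graph Info tool performs is shown below, again using NetworkX and the graph G from the earlier sketch; this is illustrative rather than the tool's actual code.

import networkx as nx

print("nodes:", G.number_of_nodes())
print("edges:", G.number_of_edges())
print("connected:", nx.is_connected(G))            # the centrality tools require a connected graph
print("self loops:", nx.number_of_selfloops(G))    # links from a node to itself are not allowed
bad_costs = [(u, v) for u, v, w in G.edges(data="weight") if w is None or w <= 0]
print("non-positive edge costs:", len(bad_costs))  # all link costs must be greater than zero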


One last thing to keep in mind -- many of the centrality measures available within these tools require the optimum path between all nodes in the network to be calculated. This is quite compute intensive, and execution time and computer resource requirements grow rapidly with network size. It is best to try the tool out on a fairly small network of 1000 nodes and maybe 5000 connectors before trying to run on larger datasets, just to get a feel for time and resource requirements. The example shown above runs in less than five seconds, but there are only 587 nodes and 1469 connectors.


Please download the toolbox, try it out, and let me know what you think.  I would like to hear about your use cases.

At the 2018 Esri DevSummit in Palm Springs, Omar Maher demonstrated how to predict accident probability using artificial intelligence (AI). The Applications Prototype Lab (APL) has built an iOS application allowing drivers to route around accident prone areas, suggesting the safest route available. The safest route has the lowest probability of an accident occurring on it.


Imagine that you could drive to your destination on a route that is the fastest, shortest and safest path available, knowing that the path will be clear of potential accidents, knowing that you will not become part of an accident study that month. 

Parents could choose the safest route for their teen drivers, to avoid common issues and spots on the road network that are potentially dangerous. 


This demo does not compute the AI prediction results itself; rather, it consumes a probability prediction that was computed using a gradient boosting algorithm in Azure with 7 years of historical data. The demo shows a routing engine that takes accident probability as an input and tries to route around areas with high accident probability.

In the future, the prediction will use real time inputs in its probability prediction.
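The demo itself relies on the ArcGIS Runtime routing engine and barriers (described further below), but the underlying idea of a "safest route" can be sketched independently: fold the accident probability into the edge cost so that a shortest-path solver prefers low-probability segments. The weighting below is an illustration, not the demo's actual cost model.

import math
import networkx as nx

# Illustrative only: minimize the chance of encountering an accident along the route.
# If each road segment i has an independent accident probability p_i, maximizing the
# probability of a clear trip, prod(1 - p_i), is the same as minimizing sum(-log(1 - p_i)).
def safest_route(G, origin, destination, prob_attr="p_accident"):
    def weight(u, v, data):
        return -math.log(1.0 - data[prob_attr])
    return nx.shortest_path(G, origin, destination, weight=weight)

# Tiny example network with per-segment accident probabilities.
G = nx.Graph()
G.add_edge("A", "B", p_accident=0.05)
G.add_edge("B", "C", p_accident=0.30)   # risky but direct
G.add_edge("B", "D", p_accident=0.02)
G.add_edge("D", "C", p_accident=0.02)   # longer detour, much safer
print(safest_route(G, "A", "C"))         # -> ['A', 'B', 'D', 'C']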


Compare the screenshots below with 2 different probability inputs. The first image shows the routing information for a chosen accident probability of about 38%.



Reducing the chances of an accident to about 23% will cause the route to be longer in time and length.



Here is a video showing how decreasing the probability of accidents will return a safer route:



The application uses the ArcGIS Runtime SDK for iOS routing engine `AGSRouteTask` that considers different barriers in the form of lines and polygons. Using the generated probability lines as barriers, the routing task will generate a route around the areas of the specified acceptable accident probability. All code is available upon request, but the feature service and credentials are private at this time.


In summary, we have shown the impact of accident probability on routing computations. Using prediction models trained by artificial intelligence on historical and current road conditions, we hope that in the near future accident probabilities will become an additional input for all routing engines.


The Applications Prototype Lab was asked to create an app that would collect and record cellular signal strength at various locations around the new Jack and Laura Dangermond Preserve. Two of us did just that: my colleague Al Pascual wrote the iOS version, and I wrote the Android version.


Though we used very different approaches—some of them dictated by the differences between the two mobile platforms—the app performs the same basic function on each operating system. It very simply gathers the device's location and the strength of its cellular connection at specified time and space intervals, and saves those observations to a feature service layer hosted on ArcGIS Online.


Once the app was done, we realized that it could be adapted to collect more than just cell signal strength; it can save just about anything a mobile device is capable of detecting. So we’re making the source available to those interested in modifying it.


Note: it’s built to save results to a feature layer hosted on ArcGIS Online. Unless you want to modify it to save to a different back-end storage mechanism, you’ll need to create and publish your own hosted feature layer in your ArcGIS Online organization.

Preparation: Create a hosted feature layer to hold the results

You'll need a hosted feature layer to hold the collected data. We’ve provided an empty template database to hold location, cell signal strength, and a few device details.

  1. Download the template file geodatabase here:
  2. Follow the instructions to publish it here:


iOS: how to use it

1. Requirements

- Fork and then clone the repo. Don't know how? Get started here.

- Build and run the project to create a single app containing all of the samples.

2. Settings

Go to the device settings and find the app CellSignal in the list to change the feature service layer you've created and hosted. The User ID and password settings are for using secured services.

3. Features

The app needs to be running in the foreground to work. It measures the cell coverage and sends that information to your feature service or, when offline, stores it on the device until the connection to the feature service is restored. The user does not need to interact with the app beyond making sure it is running in the foreground.

The chart will show a historical view of the measurements. The scale is from 0 to 4, depending on the cell bars received. A custom map can show the intended extent as well as a simple rendering of the data.

To change how we capture the cell service information, please refer to this function.

private func getSignalStrengthiOS11() -> Int {
    let application = UIApplication.shared
    if let statusBarView = application.value(forKey: "statusBar") as? UIView {
        for subview in statusBarView.subviews {
            if isiPhoneX() {
                return getSignalStrengthiPhoneX()
            } else if subview.classForKeyedArchiver.debugDescription ==
                    "Optional(UIStatusBarForegroundView)" {
                for subview2 in subview.subviews {
                    if subview2.classForKeyedArchiver.debugDescription ==
                            "Optional(UIStatusBarSignalStrengthItemView)" {
                        // Read the bar count that the status bar view exposes via key-value coding.
                        let bars = subview2.value(forKey: "signalStrengthBars") as! Int
                        return bars
                    }
                }
            }
        }
    }
    return 0 // NO SERVICE
}

4. Source code

For more information and for the source code, see the GitHub repository here:


Android: how to use it

1. Requirements

  • Android Studio 3+
  • An Android device running Android 4.3 (Jelly Bean, API level 18) or above and having GPS hardware

2. Installing and sideloading

This app will run on devices that are running Android 4.3 (the last version of "Jelly Bean") or above. It will only run on Google versions of Android--not on proprietary versions of Android, such as the Amazon Kindle Fire devices. If you're running Android 4.3 or later on a device that has the Google Play Store app, you should be able to run this. (Oh, you'll need a functional cell plan as well.)


One way to run the app is to build and run the source code in Android Studio and deploy it to a device connected to the development computer.


If you don’t want to build it, you can download the precompiled .apk available in the GitHub releases section; you'll need to install this app through an alternative process called "sideloading".


3. Settings

Tap the Feature Service URL item and enter the address of the feature service layer you've created and hosted. There are two settings affecting the logging frequency. You can set a distance between readings in meters and you can set a time between readings in seconds. Readings will be taken no more often than the combination of these settings. For example, a setting of ten meters and ten seconds means that the next reading won't be taken until the user has moved at least ten meters and at least ten seconds have passed. If you want to only limit readings by distance, you can set the seconds to zero. Please don't set both time and distance to zero.
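The distance-and-time throttle described above amounts to a simple guard before each reading. A minimal sketch of that logic is shown here in Python for brevity (the app itself implements it in its Android code).

import time

# Take a new reading only when the device has moved at least min_distance_m AND at least
# min_time_s has passed. Setting either value to zero disables that half of the check.
def should_log(distance_moved_m, last_log_time_s, min_distance_m, min_time_s):
    elapsed_s = time.time() - last_log_time_s
    return distance_moved_m >= min_distance_m and elapsed_s >= min_time_s

# Example with 10 m / 10 s settings: a reading attempted 4 s after the last one is skipped
# even though the device has already moved 25 m.
print(should_log(25.0, time.time() - 4, 10.0, 10.0))   # False
print(should_log(25.0, time.time() - 12, 10.0, 10.0))  # True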


The User ID and password settings are for using secured services. If you are using your own ArcGIS Enterprise portal (not ArcGIS Online), and you want to log to a secured service, you'll need to enter your own portal's token generator URL into the Token Generator Service URL setting.


Start logging by tapping the switch control at the top of the settings page. You should see a fan-shaped icon (a little like the wifi icon) in the notification bar. That tells you that the app is logging readings in the background.


It will continue logging until you tap the switch control again to turn logging off. The notification item also displays the number of unsynchronized local records. As the app gathers new readings, it will update a chart on the main activity showing the last fifteen signal strength readings.


You can turn the screen off or use other apps during logging, since it runs as a background service. An easy way to get back to the settings screen is to pull down the notification bar and tap the logger notification item. Features are logged to a local database, and then sent to the feature service when the internet is available.



4. Synchronization

There are three events that cause a synchronization:

  • There is a setting for the synchronization interval; the app will sync whenever that many minutes have passed;
  • When internet connectivity has been lost and then restored;
  • When the logging switch is turned off

5. Source code

For more information and for the source code, see the GitHub repository here:

I built this app to show some of the capabilities of the recently released ArcGIS Quartz Runtime 100.2.1 SDK for Android—specifically its 3D capabilities. The 3D and runtime teams have put in a lot of work to make 3D data and analyses run smoothly on the latest mobile devices.


What does it do?

Esri’s I3S specification covers three kinds of scene layers: 3D Objects, Integrated Meshes, and Point Clouds. Currently, 3D Objects and Integrated Meshes can be displayed in the Quartz runtime. The web scene this app loads by default shows examples of both those layer types.

The app uses a web scene ID to load a list of scene layers, background layers, and slides (3D bookmarks). You’ll find that web scene’s ID in the identifiers.xml source code file. If you want to open a different web scene than the default one, use the Open button in the toolbar to enter your credentials. It will then find out which web scenes your account owns and let you open one of those instead. Note that it will only show the web scenes you have created—not all the web scenes that others have created and made available to you.

The Bookmarks button will show a list of slides in the web scene; tapping one will take you to the slide location. The Layers button shows a checkbox list of all scene layers defined in the web scene.


Standard navigation

First, get familiar with panning, zooming, rotating, and tilting the display. The SDK uses the device’s GPU to accelerate graphics computation and make navigation smoother. You can find more information on supported out-of-the-box gestures and touches here:

This app’s tools can all be found under the rightmost toolbar icon; tap it and you will see a pop-up menu. Standard Navigation will disable any currently chosen tool and return the view to its standard, out-of-the-box navigation gestures as documented in the link above.


Measure tool

This tool is straightforward to use; activate it and tap a location. It calculates a distance and heading from your observation point in space to the location you tapped on the ground. The location and bearing are simple Pythagorean and trigonometric calculations; the point here was not about the calculations, but about using 3D graphics and symbols to display the results.
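For reference, the Pythagorean and trigonometric part is tiny; the sketch below shows the math in Python purely as an illustration (the app computes this in its own code on the device), assuming both points share a projected coordinate system in metres.

import math

def measure(observer, target):
    """Straight-line distance and compass heading from an (x, y, z) observer to a tapped point."""
    dx, dy, dz = (t - o for o, t in zip(observer, target))
    distance = math.sqrt(dx * dx + dy * dy + dz * dz)    # 3D Pythagorean distance
    heading = math.degrees(math.atan2(dx, dy)) % 360.0   # 0 deg = north, measured clockwise
    return distance, heading

d, h = measure(observer=(0.0, 0.0, 120.0), target=(300.0, 400.0, 0.0))
print(round(d, 1), round(h, 1))   # about 514.2 m at a heading of 36.9 deg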


Line of Sight

Line of Sight and Viewshed are two new onscreen visibility analysis tools; there is detailed information on what that means here:

Line of Sight is simple to implement; just set a start point and an end point, and add the analysis overlay to the scene. Updating the analysis is no more difficult than updating the end point location.



The Viewshed analysis does some extra work beyond what the SDK provides. First, each analysis is limited to a 120° arc; each tap invokes three analyses for complete 360° coverage. I also wanted to put the user right in the middle of the analysis, as if they’re standing on the ground, and that’s what the zoom floating action button does. Once the camera moves down into the scene, the floating action button becomes a return button, which will take the camera back to its original point in space. There’s also a slider in the lower left of the screen which lets you interactively change the viewshed distance. You can use it to explore different visibility scenarios in different scenes; you might want to use a smaller value for dense urban areas or a much larger value for unimpeded rural landscapes.


I wanted to make the analysis experience more interactive by letting you watch the analysis move as you drag your finger around the screen. This can be an interesting exercise, but it uses the same gesture that’s normally used to pan the view. You may reach a point where you want to pan the view without having to go back into standard navigation mode first. If you long-press—tap and hold a finger down without moving it for a second or so—you should see a four-arrows icon show up underneath the compass. That means the view is now in pan mode, and the display will pan (instead of re-running the viewshed) until you lift your finger.


Sensor Navigation mode

Once I was in the shoes of an observer in the middle of a viewshed, I thought it would be fun if I could tilt and rotate the device itself to move the view—kind of like a physical viewport into a virtual scene. And that’s what Sensor Navigation mode does. It listens to the device’s gyroscopic sensors to know when you’ve moved the device, and it moves the scene accordingly. The downside with this mode is that it can request so much scene data that the device, network connection, or scene service may not be able to keep up.


Pivot lock

If you see a building or other feature of special interest, you can use Pivot Lock to focus on that location and rotate around it. Activate the tool, then tap or drag a point, and the view will begin to rotate around it. Return to standard navigation by tapping the floating action button. You can stop the rotation by tapping anywhere on the display; then you can tap or drag a new point to start again. This tool uses the SDK’s OrbitCameraController to provide this functionality without a lot of custom code.


Technical notes

All the tools extend a common Scene View touch listener class. When one is selected, it’s just one line of code to set the new touch listener on the Scene View and let it take over responsibility for all touch gestures until a new tool is chosen.

While the manifest requires OpenGL ES 3.0 or above, that’s not a strict requirement of the runtime SDK (although that could possibly become a requirement in a future release). This will run on devices using OpenGL ES 2, but those devices are generally older and don’t have the GPU, memory, or processor power to run 3D apps smoothly anyway.

I did use a couple of open-source libraries that are licensed under the Apache 2.0 license.


The source code for this app is available in a public Github repo; find it at

Feel free to clone or fork the repo and use it as you like. Also, I’ll probably be making a one-time major update for the next release of the Esri SDK, as that release will probably make obsolete much of the custom web scene parsing code in the app.


This is an experimental project to test the effectiveness of using a Microsoft Xbox controller to navigate in 3D web applications built using Esri's ArcGIS API for JavaScript. This work was inspired by a customer who described the difficulty of navigating underwater in a custom web application.


Click here for the live application.

Click here for the source code.


To date we have only tested the app on Windows 10 desktops. We suspect that drivers for both Xbox 360 and Xbox One controllers are bundled with Windows 10.


How Do I Fly?

  • Left Axis: Horizontal movement. Adjust to move the observer forward, back, left and right.
  • Right Axis: Look. Adjust to change the horizontal and vertical angle of observation.
  • Left Trigger: Descend.
  • Right Trigger: Ascend.
  • Left Bumper: Zoom to previous web scene slide.
  • Right Bumper: Zoom to next web scene slide.
  • A Button (green): Perform identify on the currently selected scene layer object.
  • B Button (red): Hide identify window.
  • Menu Button: Show controller button map.
  • Start Button: Reset controller. This is used to reset the "at rest" values for the controller.


Don't Like This Map?

By default, the application loads a San Diego web scene. This can be customized with a webscene URL argument.


Known Issues

  • When the app starts, the camera may spontaneously creep without any controller interaction. Occasionally it may be an erratic spin. To correct this, after a few seconds press the start button. This will reset the controller.
  • Occasionally when the app starts, scene layers (e.g. buildings) may not fully load. To correct this, refresh the browser and wait 5-10 seconds before using the controller.



  • The app is experimental. The app is based on draft implementations of the gamepad API in modern browsers (see W3C and MDN for details).
  • The app has not been tested with a Sony PlayStation controller.


The most common technique for indoor location, determining an observer's position inside an enclosed space, is the blue dot tracking approach. A client-side algorithm actively tracks signals in its environment to determine the observer’s location in the context of the received signals. The received signals can range from radio signals such as WiFi (802.11.x) and Bluetooth to magnetic anomalies. This method is considered an active client-side location approach.

A different method is to perform the positioning server side. The environment itself is configured to seek out surrounding signals and to correlate the matching signals from various points within the environment. This is called a passive server-side approach.

We (the Applications Prototype Lab) wanted to explore the passive approach a little further as it allows for greater flexibility in the types of devices that can be recorded. Since no additional software needs to be installed on a device of interest, we can detect new hardware in our in-situ environment. However, since we must receive multiple recordings from our environment, a proper hardware layout is required to guarantee an adequate amount of coverage.

We do see potential for the server-based location services in the context of determining the digital footprint and traffic flow within a given location. For a business, this approach could be helpful for planning and design efforts as well as to provide on-demand information in contingency situations.


Prototype Layout

Here is the general strategy we implemented. The blue dot in the diagram represents a scanning device (blue box) actively seeking out signals. For this prototype we focused on detecting smart watches, wireless routers, cell phones, and laptops.

Detectable devices by wireless scanning

Using multiple blue boxes, we built out an environment keeping track of the signals in our office area. The blue boxes submit signals that are recorded by a central service in the cloud. In addition to providing a central collection service, the cloud service keeps us informed about the current state of the blue box hardware and provides a software update mechanism.

General layout of blue boxes and cloud service.



In building our blue box prototype, we used a Raspberry Pi Zero W board running Raspbian Jessie 4.9.24. The Zero hardware is nice as it already has a Bluetooth and WiFi chip onboard. Since we are using the onboard chip for communication with the cloud service, we need one more wireless adapter (seen as the dongle) to act as the scanner module.

For simplicity, we distributed the blue boxes around our office area and kept them connected to power outlets for continuous 24-hour data collection.

To give the blue boxes a spatial identity, we wrote an ArcGIS Runtime based application that allows us to place the blue box in the context of the building.

Closed blue box case. Blue box open with Raspberry Pi board exposed.



When the Raspberry Pi starts up, it registers itself with the central cloud service. Upon registration, the blue box is assigned a unique identifier based on the MAC address, and client-side scripts ensure that the existing software is in sync with the version provided by the cloud environment.

After the initial handshake, the blue box assumes its scanning role and is ready to receive WiFi MAC addresses and record the RSSI (received signal strength indicator) for Bluetooth and WiFi devices. This information is sent to the cloud service from where we can use a trilateration algorithm to position the recorded signals. The location information is stored as a time-enabled point feature in ArcGIS Online.
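A rough sketch of the positioning step is shown below: convert each blue box's RSSI reading into an approximate distance with a log-distance path-loss model, then solve for the device position by least squares. The path-loss constants and box coordinates are made-up values, and the cloud service's actual algorithm may differ.

import numpy as np

def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=2.5):
    """Log-distance path-loss model: rough distance in metres from an RSSI reading.
    tx_power_dbm is the expected RSSI at 1 m; both constants are assumed values."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exp))

def trilaterate(boxes, distances):
    """Least-squares position estimate from three or more blue boxes.
    boxes: (n, 2) array of box x/y coordinates; distances: length-n array in metres."""
    boxes = np.asarray(boxes, dtype=float)
    d = np.asarray(distances, dtype=float)
    x0, y0, d0 = boxes[0, 0], boxes[0, 1], d[0]
    # Subtract the first circle equation from the rest to get a linear system A p = b.
    A = 2.0 * (boxes[1:] - boxes[0])
    b = (d0 ** 2 - d[1:] ** 2
         + boxes[1:, 0] ** 2 - x0 ** 2
         + boxes[1:, 1] ** 2 - y0 ** 2)
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    return position

# Example: three boxes at known office coordinates hear the same phone.
boxes = [(0.0, 0.0), (20.0, 0.0), (0.0, 15.0)]
dists = [rssi_to_distance(r) for r in (-62.0, -71.0, -75.0)]
print(trilaterate(boxes, dists))   # approximate (x, y) of the device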



The screen capture below shows the distribution and the location of received signals. The blue dots are recorded Bluetooth signals and the amber colored dots are WiFi signals. The red squares show the location of the blue boxes in the context of the building with their associated unique identifier. Using the time awareness of the feature service, we can show the live data as a layer in ArcGIS Pro or in a web map.

Time enabled device collection visualized by ArcGIS Pro. Time enabled device collection visualized in ArcGIS Online.


We also developed an ArcGIS Pro add-in to view the archived content distribution by date and device type. We can see the start and the end of a work day as the number of devices increases throughout the day. Another interesting observation is the drop-off of Bluetooth devices during the nights and the weekends.

Analyzing archived data of collected devices by date and type in ArcGIS Pro.


We prototyped a server-based location service and we integrated our solution into ArcGIS Enterprise. For our blue box prototype, we used a low-cost hardware approach that has the potential to scale beyond our testing environment. We have written helper applications for the ArcGIS Runtime (iOS) and the ArcGIS Pro application to facilitate the setup and analysis of the recorded information. With the described approach, we see the potential for ubiquitous presence detection offering an indoor accuracy of about 8 – 20m / 24 – 60 ft.


Among the best resources for learning the ArcGIS API for Python are the sample notebooks at the developers website. A new sample notebook is now available that demonstrates how to perform a network analysis to find the best locations for new health clinics for amyotrophic lateral sclerosis (ALS) patients in California. To access the sample, click on the image at the top of this post.


I originally developed this notebook for a presentation that my colleague Pat Dolan and I gave at the Esri Health and Human Services GIS Users conference in Redlands, California in October. Although network analysis is available in many of Esri's offerings, we chose the Jupyter Notebook, an open-sourced browser-based coding environment, to show the attendees how they could document and share research methodology and results using the ArcGIS API for Python.  This sample notebook provides a brief introduction to network analysis and walks you through our methodology for siting new clinics, including accessing the analysis data, configuring and performing analyses, and displaying the results in maps. 

This blog posting was first published in August 2013 on the previous blog infrastructure.


In the 2008 article ‘Where Did Water Flow on Mars? Modeling Mars’ surface in search of ancient rivers and oceans’ Witold Fraczek demonstrated how GIS can furnish support for the theory that at some time in the past, water did flow on the Martian surface. By utilizing NASA’s available Martian DEM and other supporting data layers, a hydrologic network was created by running a series of hydro functions. For this analysis, a selected section of the Martian DEM was treated in exactly the same way that a DEM from Earth would have been handled. A series of cylindrical projections were then exported from ArcMap and wrapped around 3D spheres to represent Mars. These 3D planet models were then imported into CityEngine as Collada where small selectable domes were added to represent the many probes that have successfully landed on Mars. Finally this model was exported as a 3D Web Scene and uploaded to ArcGIS online to easily share with the public. Since 3D Web Scenes are based on WebGL technology, no plug-in is required for most browsers.


To read more about how GIS helped to derive the Martian Ocean, click here.


Exporting to a 3D Web Scene is currently available for CityEngine, ArcGlobe and ArcScene. 3D scenes and the ability to publish directly on the web is revolutionizing the way we share, collaborate, and communicate analysis results or design proposals with decision makers or the public. After all, our world is in 3D.


ArcMap is used to analyze the digital terrain model for Mars’ hydrological network.


The cylindrical projection is then wrapped around a 3D sphere and imported into CityEngine as Collada.


Motion Mapper

Posted by rcarmichael-esristaff Employee Nov 6, 2017

First published on 14 January, 2013.


Motion Mapper is an application built using Esri’s ArcGIS Runtime for WPF and Microsoft’s Kinect for Windows SDK. The application uses Kinect’s audio and motion recognition to interact with the map and exploit Landsat satellite imagery without the use of a keyboard or mouse.


The source code is available here.


The video embedded in this post shows a person gesturing and speaking to a desktop mapping application. The text within the black banner represents voice commands available to the user. Below is a detailed description of the operations being performed by the operator in the video (spoken commands in bold):


  1. The user activates the pan tool and navigates from the Middle East to Europe by pointing in the intended direction of travel.
  2. The user activates the zoom tool and moves his hands away from the screen to zoom out. Pointing directly at the screen with either (or both) hands will zoom in.
  3. The user displays the bookmark menu and then zooms to the Dubai preset extent.
  4. The user activates the swipe tool and selects the year 2005. As his hands move across the screen, Landsat imagery from 2005 clearly shows the impressive Palm Jebel Ali and Palm Jumeira archipelagos.
  5. Then the user selects 2000 to reveal that these engineering marvels did not exist five years earlier!
  6. The user zooms out to a smaller scale and activates the Landsat tool that commences a download of all individual Landsat scenes that overlap the map display. Details about each image appear in the upper left hand corner of the screen whenever his hand hovers over an image. Information boxes are colored blue and yellow to represent images selected with the left and right hands respectively.
  7. The rotate tool is activated so that the map can be pivoted in three dimensions revealing the chronological order of imagery. Older imagery is located at the bottom close to the map and newer imagery is located near the top.
  8. Lastly, the user places his hand over a single image and says open to view the image at full resolution. The image is traversed using the same panning technique described in (1) above.

Just over a year ago we published an add-in for ArcGlobe that allowed a user to navigate in three dimensions using hand gestures. When observing other people using this app we quickly realized that the hand and arm rules were too complicated and clearly not as intuitive as they could be. Based on these observations and recommendations from Microsoft we researched alternative techniques of Kinect integration.


Inspired by Netflix and other apps for the Xbox 360 gaming console, we decided that speech was the key to compartmentalizing mapping tools. Rather than using complicated gestures to differentiate between mapping operations, we chose to use speech to switch between panning, zooming and other tools. Overall this meant that hand gesturing could be much simpler, but at the cost of a slightly more time-consuming experience.


The Kinect sensor features a directional four-microphone audio array, ideal for noise cancellation. Within our offices, speech recognition works very well, but we have yet to test its proficiency in a noisy environment such as an exhibition hall at a large conference.


The stacked temporal view of Landsat Imagery is achieved using WPF’s Viewport3D and Esri’s Map hosted in a Viewport2DVisual3D visual. This works well with no significant performance degradation but coding in three dimensional space is considerably more difficult than 2D! One must define texture coordinates, vertex mapping and odd things like ambient lighting. Something that needs additional work is better management of 2D scaling of the map in the 3D viewport.


In summary, developing Kinect-based apps is both challenging and rewarding. Challenging because Microsoft technology does not natively support “motion”. Developers must interpret and present raw video, depth and skeleton feeds for themselves. A developer’s job would be a lot easier if Microsoft extended the Kinect SDK to support fundamental gestures like “swipe left” and include fingers in the skeleton model. It is unlikely our trusted keyboard and mouse will be redundant anytime soon but it is very rewarding to experiment with technology that may augment our lives in the near future.


Landsat Viewer

Posted by rcarmichael-esristaff Employee Sep 7, 2017

Landsat Viewer Demonstration

The lab has just completed an experimental viewer designed to sort, filter and extract individual Landsat scenes. The viewer is a web application developed using Esri's JavaScript API and a three.js-based external renderer.


Click here for the live application.

Click here for the source code.


The application has a wizard-like workflow. First, the user is prompted to sketch a bounding box representing the area of interest. The next step defines the imagery source and minimum selection criteria for the image scenes. For example, in the screenshot below the user is interested in any scene taken over the past 45+ years, but those scenes must have 10% or less cloud cover.



Finally, once preview scenes have been downloaded the user can advance to the final step of sorting, filtering and interrogating individual Landsat images. In the screenshot below the images have been sorted by cloud cover with cloudless images located at the top of the stack. Also, on the right hand side of the screenshot below one image has been identified. From the identify window one can naturally peruse the image's attribution but also add the image to the map as a normal image layer.



For more information about Landsat imagery hosted by the USGS and Esri and associated apps, please visit:

One of the great things about working in the Lab is you get to experiment with the new goodies from our core software developers before they are released.  When I heard that version 1.2 of the ArcGIS API for Python would include a new module for raster functions, I could not wait to give it a try.  Now that v.1.2 of the API is released, I can finally show you a Jupyter Notebook I built which has an example of a weighted overlay analysis implemented with raster functions.   The following is a non-interactive version of that notebook which I exported to HTML.  I hope it will give you some ideas for how you could use the ArcGIS API for Python to perform your own raster analysis.



Finding Natural and Accessible Areas in the State of Washington, USA

The weighted overlay is a standard GIS analysis technique for site-suitability and travel cost studies. This notebook leverages the new "arcgis.raster.functions" module in the ArcGIS API for Python 1.2 to demonstrate an example of a weighted overlay analysis.  This example attempts to identify areas in the State of Washington that are "natural" while also being easy to travel within based on the following criteria:

  • elevation (lower is better)
  • steepness of the terrain (flatter is better)
  • degree of human alteration of the landscape (less is better)

The input data for this analysis includes a DEM (Digital Elevation Model), and a dataset showing the degree of human modification to the landscape.

In general, weighted overlay analysis can be divided into three steps:

  1. Normalization: The pixels in the input raster datasets are reclassified to a common scale of numeric values based on their suitability according to the analysis criteria.
  2. Weighting: The normalized datasets are assigned a percent influence based on their importance to the final result by multiplying them by values ranging from 0.0 - 1.0. The sum of the values must equal 1.0.
  3. Summation: The sum of the weighted datasets is calculated to produce a final analysis result.


We'll begin by connecting to the GIS and accessing the data for the analysis.

Connect to the GIS

In [1]:
# import GIS from the arcgis.gis module
from arcgis.gis import GIS

# Connect to the GIS.
web_gis = GIS("", 'djohnsonRA')
print("Successfully connected to {0}".format(web_gis.properties.portalName))
Enter password:········ 
Successfully connected to ArcGIS Enterprise A

Search the GIS for the input data for the analysis

Human Modified Index

In [2]:
# Search for the Human Modified Index imagery layer item by title
item_hmi ='title:Human Modified Index', 'Imagery Layer')[0]
item_hmi
Human Modified Index 
A measure of the degree of human modification, the index ranges from 0.0 for a virgin landscape condition to 1.0, for the most heavily modified areas. Imagery Layer by djohnsonRA 
Last Modified: July 06, 2017 
0 comments, 2 views


In [3]:
# Search for the DEM imagery layer item by title
item_dem ='title:USGS NED 30m', 'Imagery Layer')[0]
item_dem
The National Elevation Dataset (NED) is the primary elevation data product of the USGS. This version was resampled to 30m from source data at 1/3 arc-second resolution and projected to an Albers Equal Area coordinate system. Imagery Layer by djohnsonRA 
Last Modified: July 06, 2017 
0 comments, 8 views

Study area boundary and extent

In [4]:
# Search for the State of Washington feature layer item by title
item_studyarea ='title:State of Washington, USA',
                                       'Feature Layer')[0]
item_studyarea
State of Washington, USA 
State of Washington, USA. Feature Layer Collection by djohnsonRA 
Last Modified: July 07, 2017 
0 comments, 2 views
In [5]:
# Get a reference to the feature layer from the portal item
lyr_studyarea = item_studyarea.layers[0]

Get the coordinate geometry of the study area

In [6]:
# Query the study area layer to get the boundary feature
query_studyarea = lyr_studyarea.query(where='1=1')
# Get the coordinate geometry of the study area.
# The geometry will be used to extract the Elevation and Human Modified Index data.
geom_studyarea = query_studyarea.features[0].geometry
# Set the spatial reference of the geometry.
geom_studyarea['spatialReference'] = query_studyarea.spatial_reference

Get the extent of the study area

In [7]:
# Import the geocode function
from arcgis.geocoding import geocode
# Use the geocode function to get the location/address of the study area
geocode_studyarea = geocode('State of Washington, USA',
out_sr= query_studyarea.spatial_reference)
In [8]:
# Get the geographic extent of the study area
# This extent will be used when displaying the Elevation, Human Modified Index,
# and final result data.
extent_studyarea = geocode_studyarea[0]['extent']
{'xmax': -1451059.3770040546,  
'xmin': -2009182.5321227335, 
'ymax': 1482366.818700374, 
'ymin': 736262.260048952}

Display the analysis data

Human Modified Index

In [9]:
# Get a reference to the imagery layer from the portal item
lyr_hmi = item_hmi.layers[0]
# Set the layer extent to geographic extent of study area and display the data.
lyr_hmi.extent = extent_studyarea
lyr_hmi


In [10]:
# Get a reference to the imagery layer from the portal item
lyr_dem = item_dem.layers[0]
# Set the layer extent to the geographic extent of study area and display the data.
lyr_dem.extent = extent_studyarea
lyr_dem

Slope (derived from elevation via the Slope raster function)

In [11]:
# Import the raster functions from the ArcGIS API for Python (new to version 1.2!)
from arcgis.raster.functions import *
In [12]:
# Derive a slope layer from the DEM layer using the slope function
lyr_slope = slope(dem=lyr_dem,slope_type='DEGREE', z_factor=1)
# Use the stretch function to enhance the display of the slope layer.
lyr_slope_stretch = stretch(raster=lyr_slope, stretch_type='StdDev', dra='true')
# Display the stretched slope layer within the extent of the study area.
lyr_slope_stretch.extent = extent_studyarea
lyr_slope_stretch

Extract the data within the study area geometry

Use the Clip raster function to extract the analysis data from within the study area geometry

Human Modified Index

In [13]:
# Extract the Human Modified Index data from within the study area geometry
hmi_clipped = clip(raster=lyr_hmi, geometry=geom_studyarea)


In [14]:
# Extract the Elevation data from within the study area geometry
elev_clipped = clip(raster=lyr_dem, geometry=geom_studyarea)


In [15]:
# Extract the Slope data from within the study area geometry
slope_clipped = clip(raster=lyr_slope, geometry=geom_studyarea)
# Apply the Stretch function to enhance the display of the slope_clipped layer.
slope_clipped_stretch = stretch(raster=slope_clipped, stretch_type='StdDev',
                                dra='true')

Perform the analysis

Step 1: Normalization

Use the Remap function to normalize each set of input data to a common scale of 1 - 9, where 1 = least suitable and 9 = most suitable.

In [16]:
# Create a colormap to display the analysis results with 9 colors ranging 
# from red to yellow to green.
clrmap = [[1, 230, 0, 0], [2, 242, 85, 0], [3, 250, 142, 0], [4, 255, 195, 0],
          [5, 255, 255, 0], [6, 197, 219, 0], [7, 139, 181, 0], [8, 86, 148, 0],
          [9, 38, 115, 0]]
In [17]:
# Normalize the elevation data
elev_normalized = remap(raster=elev_clipped,
                        input_ranges=[0,490, 490,980, 980,1470, 1470,1960, 1960,2450,
                                      2450,2940, 2940,3430, 3430,3700, 3920,4100],
                       output_values=[9,8,7,6,5,4,3,2,1], astype='U8')

# Display color-mapped image of the reclassified elevation data
colormap(elev_normalized, colormap=clrmap) 
In [18]:
# Normalize the slope data
slope_normalized = remap(raster=slope_clipped,
                        input_ranges=[0,1, 1,2, 2,3, 3,5, 5,7, 7,9, 9,12, 12,15,
                                      15,90],   # upper bound of last class assumed
                        output_values=[9,8,7,6,5,4,3,2,1],  astype='U8')

# Display a color-mapped image of the reclassified slope data
colormap(slope_normalized, colormap=clrmap)
In [19]:
# Normalize the Human Modified Index data
hmi_normalized = remap(raster=hmi_clipped,                 
                      input_ranges=[0.0,0.1, 0.1,0.2, 0.2,0.3, 0.3,0.4, 0.4,0.5,
0.5,0.6, 0.6,0.7, 0.7,0.8, 0.8,1.1],                 
                      output_values=[9,8,7,6,5,4,3,2,1],  astype='U8')

# Display a color-mapped image of the reclassified HMI data
colormap(hmi_normalized, colormap=clrmap)

Step 2: Weighting

Use the overloaded multiplication operator * to assign a weight to each normalized dataset based on their relative importance to the final result.

In [20]:
# Apply weights to the normalized data using the overloaded multiplication 
# operator "*".
# - Human Modified Index: 60%
# - Slope: 25%
# - Elevation: 15%
hmi_weighted = hmi_normalized * 0.6
slope_weighted = slope_normalized * 0.25
elev_weighted = elev_normalized * 0.15

Step 3: Summation

Add the weighted datasets together to produce a final analysis result.

In [21]:
# Calculate the sum of the weighted datasets using the overloaded addition 
# operator "+".
result_dynamic = colormap(hmi_weighted + slope_weighted + elev_weighted,
                          colormap=clrmap, astype='U8')

The same analysis can also be performed in a single operation

In [22]:
result_dynamic_one_op = colormap(
      # Human modified index layer
      0.60 * remap(raster=clip(raster=lyr_hmi, geometry=geom_studyarea),
                   input_ranges=[0.0,0.1, 0.1,0.2, 0.2,0.3, 0.3,0.4, 0.4,0.5,
                                 0.5,0.6, 0.6,0.7, 0.7,0.8, 0.8,1.1],
                   output_values=[9,8,7,6,5,4,3,2,1])
      +
      # Slope layer
      0.25 * remap(raster=clip(raster=lyr_slope, geometry=geom_studyarea),
                   input_ranges=[0,1, 1,2, 2,3, 3,5, 5,7, 7,9, 9,12, 12,15,
                                 15,90],   # upper bound of last class assumed
                   output_values=[9,8,7,6,5,4,3,2,1])
      +
      # Elevation layer
      0.15 * remap(raster=clip(raster=lyr_dem, geometry=geom_studyarea),
                   input_ranges=[-90,250, 250,500, 500,750, 750,1000, 1000,1500,
                                 1500,2000, 2000,2500, 2500,3000, 3000,5000],
                   output_values=[9,8,7,6,5,4,3,2,1]),
      colormap=clrmap, astype='U8')

Generate a persistent analysis result via distributed server based raster processing.

Portal for ArcGIS has been enhanced with the ability to perform distributed server based processing on imagery and raster data. This technology enables you to boost the performance of raster processing by processing data in a distributed fashion, even at full resolution and full extent.

You can use the processing capabilities of ArcGIS Pro to define the processing to be applied to raster data and perform processing in a distributed fashion using their on premise portal. The results of this processing can be accessed in the form of a web imagery layer that is hosted in their ArcGIS Organization.

For more information, see Raster analysis on Portal for ArcGIS

In [23]:
# Does the GIS support raster analytics?
import arcgis
arcgis.raster.analytics.is_supported(web_gis)
In [24]:
# The .save() function invokes generate_raster from the arcgis.raster.analytics
# module to run the analysis on a GIS server at the source resolution of the
# input datasets and store the result as a persistent web imagery layer in the GIS.
result_persistent ="NaturalAndAccessible_WashingtonState")
result_persistent
Analysis Image Service generated from GenerateRaster. Imagery Layer by djohnsonRA 
Last Modified: July 07, 2017 
0 comments, 0 views
In [25]:
# Display the persistent result
lyr_result_persistent = result_persistent.layers[0]
lyr_result_persistent.extent = extent_studyarea
lyr_result_persistent
Data Credits:
Human Modified Index: 
A measure of the degree of human modification, the index ranges from 0.0 for a virgin landscape condition to 1.0 for the most heavily modified areas. The average value for the United States is 0.375. The data used to produce these values should be both more current and more detailed than the NLCD used for generating the cores. Emphasis was given to attempting to map in particular, energy related development. Theobald, DM (2013) A general model to quantify ecological integrity for landscape assessment and US Application. Landscape Ecol (2013) 28:1859-1874 doi: 10.1007/s10980-013-9941-6
USGS NED 30m:  
Data available from the U.S. Geological Survey. See USGS Visual Identity System Guidance for further details. Questions concerning the use or redistribution of USGS data should be directed to: or 1-888-ASK-USGS (1-888-275-8747). NASA Land Processes Distributed Active Archive Center (LP DAAC) Products Acknowledgement: These data are distributed by the Land Processes Distributed Active Archive Center (LP DAAC), located at USGS/EROS, Sioux Falls, SD.
State of Washington: Esri Data & Maps

Experimental Water Effects

At last year's Developer Summit, Jesse van den Kieboom demonstrated how realistic water effects can be applied to a JavaScript-based web application (see slides, demo and source). The Prototype Lab modified Jesse's code to work with coastal inundation areas hosted in an AGOL feature service. This sample is based on version 4.3 of the ArcGIS API for JavaScript and three.js.


Click here for the live application.

Click here for the source code.

By date: By tag: