
Applications Prototype Lab


 

The map above shows some spider diagrams. These diagrams are useful for presenting spatial distribution, for example, the customers of a retail outlet or the hometowns of university students. The lab was recently tasked with creating an automated spider diagram tool without using Business Analyst or Network Analyst. The result of our work is the Spider Diagram Toolbox, for use with either ArcGIS Pro or ArcGIS Desktop.

 

Installation is fairly straightforward. After downloading the zip file, decompress it and place the following files on your desktop:

  • SpiderDiagram.pyt
  • SpiderDiagram.pyt.xml
  • SpiderDiagram.Spider.pyt.xml
  • SpiderDiagramReadme.pdf

In ArcGIS Pro or ArcMap you can connect a folder to this desktop folder so that you can access these files.

 

Running the tool is also easy. The tool dialog will prompt you for the origin and destination feature classes as well as the optional key fields that will link destination points to origin points. In the example below, the county seats are related to state capitals by the FIPS code.

 

 

Result:

 

Leave one or both key fields blank to connect each origin point to every destination point.

 

Result:

 

Which is the origin and which is the destination feature class?  It really doesn’t matter for this tool – either way will work.  If you want to symbolize the result with an arrow line symbol, know that the start point of each line is the location of points in the origin feature class.

 

Script and article written by Mark Smith.

 

Please direct comments to Bob Gerlt.

 

This is an experimental project to test the effectiveness of using a Microsoft Xbox controller to navigate 3D web applications built using Esri's ArcGIS API for JavaScript.  This work was inspired by a customer who described the difficulty of navigating underwater in a custom web application.

 

Click here for the live application.

Click here for the source code.

 

To date we have only tested the app on Windows 10 desktops.  We suspect that drivers for both Xbox 360 and Xbox One controllers are bundled with Windows 10.

 

How Do I Fly?

  • Left Axis: Horizontal movement. Adjust to move the observer forward, back, left and right.
  • Right Axis: Look. Adjust to change the horizontal and vertical angle of observation.
  • Left Trigger: Descend.
  • Right Trigger: Ascend.
  • Left Bumper: Zoom to previous web scene slide.
  • Right Bumper: Zoom to next web scene slide.
  • A Button (green): Perform identify on the currently selected scene layer object.
  • B Button (red): Hide identify window.
  • Menu Button: Show controller button map.
  • Start Button: Reset controller. This is used to reset the "at rest" values for the controller.
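
For context, here is a minimal sketch of how a browser application can poll a controller through the W3C Gamepad API that this app relies on. The deadzone helper and the axis/button indices follow the standard gamepad mapping but are illustrative assumptions, not the app's actual source; resetting the "at rest" values (the Start button behavior above) amounts to re-sampling the sticks' resting positions.

```js
// Illustrative sketch only: poll the first connected controller each frame.
// Axis/button indices follow the W3C "standard" gamepad mapping.
function applyDeadzone(value, threshold) {
  // Ignore small stick deflections so the camera does not creep at rest.
  return Math.abs(value) < threshold ? 0 : value;
}

function pollGamepad() {
  var pads = navigator.getGamepads ? navigator.getGamepads() : [];
  var pad = pads[0];
  if (pad) {
    var strafe = applyDeadzone(pad.axes[0], 0.15);  // left stick: left/right
    var advance = applyDeadzone(pad.axes[1], 0.15); // left stick: forward/back
    var heading = applyDeadzone(pad.axes[2], 0.15); // right stick: look left/right
    var tilt = applyDeadzone(pad.axes[3], 0.15);    // right stick: look up/down
    var descend = pad.buttons[6].value;             // left trigger
    var ascend = pad.buttons[7].value;              // right trigger
    // ...apply these values to the SceneView camera here...
  }
  window.requestAnimationFrame(pollGamepad);
}
window.requestAnimationFrame(pollGamepad);
```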

 

Don't Like This Map?

By default, the application loads this San Diego web scene.  This can be customized with a webscene URL argument, for example:
https://richiecarmichael.github.io/gamepad/index.html?webscene=f85419bfd3414e1696c389dd9b6e9360
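
A hedged sketch of how such an argument might be read and applied, assuming version 4.x of the ArcGIS API for JavaScript; the container id "viewDiv" is a placeholder, and the fallback item id is the default San Diego scene above.

```js
// Sketch: load the web scene given by the "webscene" query-string argument,
// falling back to the default San Diego scene.
require(["esri/WebScene", "esri/views/SceneView"], function (WebScene, SceneView) {
  var params = new URLSearchParams(window.location.search);
  var id = params.get("webscene") || "f85419bfd3414e1696c389dd9b6e9360";
  var view = new SceneView({
    container: "viewDiv", // placeholder element id
    map: new WebScene({ portalItem: { id: id } })
  });
});
```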

 

Known Issues

  • When the app starts, the camera may spontaneously creep without any controller interaction. Occasionally it may spin erratically. To correct this, press the Start button after a few seconds; this resets the controller.
  • Occasionally when the app starts, scene layers (e.g. buildings) may not fully load. To correct this, refresh the browser and wait 5-10 seconds before using the controller.

 

Caveats

  • The app is experimental; it is based on draft implementations of the Gamepad API in modern browsers (see W3C and MDN for details).
  • The app has not been tested with a Sony PlayStation controller.

This blog posting was first published in August 2013 on the previous blog infrastructure.

 

In the 2008 article ‘Where Did Water Flow on Mars? Modeling Mars’ surface in search of ancient rivers and oceans’ Witold Fraczek demonstrated how GIS can furnish support for the theory that at some time in the past, water did flow on the Martian surface. By utilizing NASA’s available Martian DEM and other supporting data layers, a hydrologic network was created by running a series of hydro functions. For this analysis, a selected section of the Martian DEM was treated in exactly the same way that a DEM from Earth would have been handled. A series of cylindrical projections were then exported from ArcMap and wrapped around 3D spheres to represent Mars. These 3D planet models were then imported into CityEngine as Collada models, where small selectable domes were added to represent the many probes that have successfully landed on Mars. Finally, this model was exported as a 3D Web Scene and uploaded to ArcGIS Online for easy sharing with the public. Since 3D Web Scenes are based on WebGL technology, no plug-in is required for most browsers.

 

To read more about how GIS helped to derive the Martian ocean, click here.

 

Exporting to a 3D Web Scene is currently available for CityEngine, ArcGlobe and ArcScene. 3D scenes and the ability to publish directly on the web are revolutionizing the way we share, collaborate, and communicate analysis results or design proposals with decision makers or the public. After all, our world is in 3D.

 

ArcMap is used to analyze the digital terrain model for Mars’ hydrological network.

 

The cylindrical projection is then wrapped around a 3D sphere and imported into CityEngine as Collada.


Motion Mapper

Posted by rcarmichael-esristaff Employee Nov 6, 2017

First published on 14 January, 2013.

 

Motion Mapper is an application built using Esri’s ArcGIS Runtime for WPF and Microsoft’s Kinect for Windows SDK. The application uses Kinect’s audio and motion recognition to interact with the map and exploit Landsat satellite imagery without the use of a keyboard or mouse.

 

The source code is available here.

 

The video embedded in this post shows a person gesturing and speaking to a desktop mapping application. The text within the black banner represents voice commands available to the user. Below is a detailed description of the operations being performed by the operator in the video (spoken commands in bold):

 

  1. The user activates the pan tool and navigates from the Middle East to Europe by pointing in the intended direction of travel.
  2. The user activates the zoom tool and moves his hands away from the screen to zoom out.
    Pointing directly at the screen with either (or both) hands will zoom in.
  3. The user displays the bookmark menu and then zooms to the Dubai preset extent.
  4. The user activates the swipe tool and selects the year 2005. As his hands move across the screen, Landsat imagery from 2005 clearly shows the impressive Palm Jebel Ali and Palm Jumeirah archipelagos.
  5. Then the user selects 2000 to reveal that these engineering marvels did not exist five years earlier!
  6. The user zooms out to a smaller scale and activates the Landsat tool that commences a download of all individual Landsat scenes that overlap the map display. Details about each image appear in the upper left hand corner of the screen whenever his hand hovers over an image. Information boxes are colored blue and yellow to represent images selected with the left and right hands respectively.
  7. The rotate tool is activated so that the map can be pivoted in three dimensions revealing the chronological order of imagery. Older imagery is located at the bottom close to the map and newer imagery is located near the top.
  8. Lastly, the user places his hand over a single image and says open to view the image at full resolution. The image is traversed using the same panning technique described in (1) above.


Just over a year ago we published an add-in for ArcGlobe that allowed a user to navigate in three dimensions using hand gestures. When observing other people using this app we quickly realized that the hand and arm rules were too complicated and clearly not as intuitive as they could be. Based on these observations and recommendations from Microsoft we researched alternative techniques of Kinect integration.

 

Inspired by Netflix and other apps for the Xbox 360 gaming console, we decided that speech was the key to compartmentalizing mapping tools. Rather than using complicated gestures to differentiate between mapping operations, we chose to use speech to switch between panning, zooming and other tools. Overall this meant that hand gesturing could be much simpler, but at the cost of a slightly more time-consuming experience.

 

The Kinect sensor features a directional four-microphone audio array, ideal for noise cancellation. Within our offices, speech recognition works very well but we have yet to test its proficiency in a noisy environment such as an exhibition hall at a large conference.

 

The stacked temporal view of Landsat imagery is achieved using WPF’s Viewport3D and Esri’s Map hosted in a Viewport2DVisual3D visual. This works well with no significant performance degradation, but coding in three-dimensional space is considerably more difficult than in 2D! One must define texture coordinates, vertex mapping and odd things like ambient lighting. Something that needs additional work is better management of 2D scaling of the map in the 3D viewport.

 

In summary, developing Kinect-based apps is both challenging and rewarding. Challenging because Microsoft technology does not natively support “motion”. Developers must interpret and present raw video, depth and skeleton feeds for themselves. A developer’s job would be a lot easier if Microsoft extended the Kinect SDK to support fundamental gestures like “swipe left” and include fingers in the skeleton model. It is unlikely our trusted keyboard and mouse will be redundant anytime soon but it is very rewarding to experiment with technology that may augment our lives in the near future.


Landsat Viewer

Posted by rcarmichael-esristaff Employee Sep 7, 2017

Landsat Viewer Demonstration

The lab has just completed an experimental viewer designed to sort, filter and extract individual Landsat scenes. The viewer is a web application developed using Esri's JavaScript API and a three.js-based external renderer.
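
For orientation, a minimal external renderer skeleton is sketched below, using the documented esri/views/3d/externalRenderers module from the 4.x JavaScript API. Only the setup/render contract is shown; the SceneView named view and the renderer's name are assumptions, and the viewer's actual three.js wiring is considerably more involved.

```js
// Skeleton of an external renderer (assumes an existing SceneView named "view").
require(["esri/views/3d/externalRenderers"], function (externalRenderers) {
  var landsatRenderer = {
    setup: function (context) {
      // Called once: create the three.js renderer and scene against context.gl.
    },
    render: function (context) {
      // Called every frame: draw the three.js scene, then ask the SceneView
      // to keep rendering while an animation is in progress.
      externalRenderers.requestRender(view);
    }
  };
  externalRenderers.add(view, landsatRenderer);
});
```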

 

Click here for the live application.

Click here for the source code.

 

The application has a wizard-like workflow. First, the user is prompted to sketch a bounding box representing the area of interest. The next step defines the imagery source and minimum selection criteria for the image scenes. For example, in the screenshot below the user is interested in any scene taken over the past 45+ years, provided the scene has 10% or less cloud cover.

 

 

Finally, once preview scenes have been downloaded, the user can advance to the final step of sorting, filtering and interrogating individual Landsat images. In the screenshot below the images have been sorted by cloud cover, with cloudless images located at the top of the stack. Also, on the right hand side of the screenshot one image has been identified. From the identify window one can peruse the image's attributes and also add the image to the map as a normal image layer.

 

 

For more information about Landsat imagery hosted by the USGS and Esri and associated apps, please visit:

Experimental Water Effects

At last year's Developer Summit, Jesse van den Kieboom demonstrated how realistic water effects can be applied to a JavaScript-based web application (see slides, demo and source).  The Prototype Lab modified Jesse's code to work with coastal inundation areas hosted in an AGOL feature service.  This sample is based on version 4.3 of the ArcGIS API for JavaScript and three.js.

 

Click here for the live application.

Click here for the source code.

(click here for complete video)

 

We are happy to announce the availability of the HoloLens Terrain Viewer tutorial. This tutorial provides step-by-step instructions for creating a HoloLens application that can construct holographic terrains dynamically from voice commands. The tutorial describes how to configure the preset list of named locations, and it includes scripts that automate the conversion of AGOL content (imagery and elevation) to Unity terrain objects.

GitHub - Esri/hololens-terrain-viewer: Holographic mapping powered by ArcGIS 


Landsat Lens 2

Posted by rcarmichael-esristaff Employee Apr 24, 2017

Landsat Lens

Landsat Lens is a touch and mouse friendly application for browsing past and present Landsat satellite imagery hosted by Esri.

Click here for the live app.

Click here for the source code.

 

Using a mouse, a lens can be moved around the map with a standard left mouse click and drag operation. Scrolling the mouse wheel will enlarge or shrink a lens depending on the direction.

 

With a touch device like an iPad, a lens can be moved with an intuitive press and drag. To resize, pinch or expand two or more fingers within a lens. Likewise, rotating a lens is achieved by twisting two or more fingers. Unlike with a mouse, touch screens allow the user to manipulate two or more lenses concurrently.

 

By default, the app starts with a lens dated 2017 located close to the Palm Jebel Ali in Dubai. To pick a preset location, choose one of the entries from the Bookmarks dropdown menu. Alternatively, you can pan or zoom to any area of interest.

 

For one of the preset locations, or your own area of interest, you may want to view changes over time. To do so, use the Windows dropdown menu to add a window showing 2002, 2005, 2010, 2015 or 2017 imagery. By swiping lenses over the basemap and one another you can easily see changes in vegetation, coastlines, rivers and human activity. Use the last option in the dropdown menu to remove all lenses from the map.

 

Known Issues:

  • Support for W3C's touch events varies considerably across browsers and operating systems. The author notes that the most consistent behavior has been with the Chrome browser.
  • The Esri-hosted Landsat image services (MS and PS) currently contain imagery from the year 2000 until the present. However, the imagery is not uniformly distributed over time; imagery prior to 2014 is fairly sparse. This will likely change over time.

Animation of building interaction sample

This developer sample demonstrates how to interact with a multi-level building in three dimensions using Esri's ArcGIS API for JavaScript version 4.3. With a simple click/tap and drag operation a building can be intuitively and smoothly repositioned on the Earth's surface. Note that each floor is a separate graphic, but the floors are treated as a single entity when manipulated.

 

Click here for the live application.

Click here for the source code.

 

It is important to note that the sample uses a few undocumented API calls to achieve this behavior. As such we caution against using these calls in a production environment as they are unsupported and will very likely change in the near future.

 

The code may seem unnecessarily complicated; below is a summary of what it does (a simplified sketch follows the list):

  • Building Construction
    For portability reasons, building floors are constructed from a set of hardcoded values. It is important to note that each floor is an individual graphic attributed with a building identifier. This identifier is used to group all adjacent floors for highlighting and spatial translation.
  • Dragging
    The code uses the SceneView's drag event to respond to the three phases of a pointer's drag operation, namely start, update and end. During the "start" phase, a hit test is performed to identify the building floor (if any) under the pointer. If a floor is found, adjacent floors are identified and highlighted. In the "update" phase, two undocumented methods, SceneView._stage.pick and SceneView._computeMapPointFromIntersectionResult, are used to track pointer displacement in real world coordinates. These methods return the map location directly beneath the pointer, ignoring features and graphics.
  • Disabling/Enabling Pointer Interaction
    When a building is dragged to a new location, it is necessary to disable map interaction so that the map does not pan. To achieve this we used another undocumented member, SceneView.inputManager, to add and then restore handlers.
  • Moving
    At present, it is not possible to update the geometry of an existing graphic. In order to show a graphic moving, it must be deleted and then re-added. This may seem somewhat cumbersome but the performance hit is negligible.
  • Throttling
    It is likely that the rate at which the drag event fires will exceed the display frame rate. If the display is performing at 60 frames per second (or better), excess drag events are ignored.
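
To make the flow concrete, here is a simplified sketch that uses only documented 4.3-era calls (the drag event, hitTest and toMap); startDrag, moveBuilding, endDrag and the buildingId attribute are hypothetical stand-ins, and the actual sample substitutes the undocumented methods described above to obtain the ground point while ignoring graphics.

```js
// Simplified drag workflow (documented calls only; the helper functions are
// hypothetical stand-ins for the sample's building-management code).
view.on("drag", function (event) {
  if (event.action === "start") {
    event.stopPropagation(); // prevent the map from panning during the drag
    view.hitTest({ x: event.x, y: event.y }).then(function (response) {
      var result = response.results[0];
      if (result && result.graphic) {
        // Group and highlight all floors sharing this building identifier.
        startDrag(result.graphic.attributes.buildingId);
      }
    });
  } else if (event.action === "update") {
    // toMap returns the map point beneath the pointer. Unlike the sample's
    // undocumented calls, it does not ignore the graphics being dragged.
    var point = view.toMap({ x: event.x, y: event.y });
    if (point) {
      moveBuilding(point); // delete and re-add the floor graphics here
    }
  } else if (event.action === "end") {
    endDrag(); // restore the map's interaction handlers
  }
});
```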

 

Once again I would like to stress that this is a developer sample specifically for version 4.3 of the Esri JavaScript API. It is very likely that some of the code used in this application will be either obsolete or redundant (or both) in future releases.

 

Special thanks to Johannes Schmid for his technical expertise!

ArcticDEM is a prototype app developed a few years ago in conjunction with President Barack Obama's executive order calling to "enhance coordination of national efforts in the Arctic". With a small preliminary dataset from the Polar Geospatial Center we created this proof of concept. Our intention was to experiment with the dynamic rendering of ArcGIS image services. For example, the first two sliders define the sun's position, which the image service uses to dynamically generate a hillshade from the elevation dataset. The second group of sliders is used to highlight the subset of elevation pixels that satisfy the height, slope and aspect criteria. Likewise, this rendering is performed dynamically using out-of-the-box rendering functions.
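
As an illustration of this kind of dynamic, server-side rendering, the sketch below applies a hillshade raster function to an imagery layer with the 4.x ArcGIS API for JavaScript; the service URL is a placeholder, and the original app may have been written against an earlier API version.

```js
// Sketch: drive a server-side hillshade from the sun-position sliders.
// The URL is a placeholder, not the app's actual image service.
require([
  "esri/layers/ImageryLayer",
  "esri/layers/support/RasterFunction"
], function (ImageryLayer, RasterFunction) {
  var layer = new ImageryLayer({
    url: "https://example.com/arcgis/rest/services/ArcticDEM/ImageServer",
    renderingRule: new RasterFunction({
      functionName: "Hillshade",
      functionArguments: {
        Azimuth: 315, // sun direction slider
        Altitude: 45  // sun elevation slider
      }
    })
  });
  // map.add(layer); // re-apply a new renderingRule whenever a slider changes
});
```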

 

Easter Egg: Click the "hillshade" label to toggle between the standard hillshade function and a multi-directional hillshade custom function. Please click here to access a global multi-directional hillshade.

 

For a detailed description of the data and a user guide please click the orange buttons in the lower left hand corner of the application.

 

Click here for the live application.

Click here for the source code.

 

For the production application please visit the ArcticDEM Explorer and read the associated press release. Special thanks to David Johnson for preparing and republishing this service.

Welcome to the new Applications Prototype Lab GeoNet group!  Our blog, discussions and other updates are now hosted on the GeoNet Community for better integration with industry, product and developer forums.  For your convenience in the future many of our more popular historic blog postings will be ported to, and redirected to, our new GeoNet home.

 

A little bit about us: we are a small group of twelve geospatial professionals engaged in pre-sales activities, corporate assignments and applied research and development.  We are based at Esri's headquarters in Redlands, California.

 

The Prototype Lab

Left to right, front to back: Lenny K., John Grayson, Bob Gerlt, David Johnson, Carol Sousa, Richie Carmichael, Al Pascual, Mark Smith (Manager), Witold Fraczek, Mark Deaton, Thomas Emge and Hugh Keegan.

 

Please be sure to click "follow" in the top right corner of the overview page to be alerted to new announcements, apps, snippets or postings.

 

If you're new to GeoNet, we also encourage you to check out the GeoNet Help group for tips and FAQs on how to get started and get the most out of your community experience.

 

Thanks for joining us and we look forward to seeing your contributions!


The parallel fences generated in the X and Y directions by two separate runs of the Parallel Fences tool.



Note:  The 3D Fences Toolbox was updated Feb 28, 2017.  

This article introduces and discusses a geoprocessing toolbox that can perform geostatistics on vertical slices of three-dimensional point clouds. Click here to download the toolbox.

For years, users of ArcGIS and its Geostatistical Analyst extension have been able to perform sophisticated geostatistical interpolation of data samples in two dimensions, making it feasible to create continuous maps of any phenomenon that was measured only at selected locations. With the growing popularity of 3D analysis and the availability of 3D data, demand has grown for an option to perform geostatistics on 3D data. Studies of atmospheric cross-sections, geologic profiles, and bathymetric transects have become an integral part of GIS. In each of these three classical elements (air, earth and water), we can measure natural phenomena such as the gradient of air temperature, the content of an ore at various depths of the geologic strata, or the salinity of the ocean along adjacent vertical transect lines.

In GIS we analyze not only natural phenomena, but also those that are man-made. Human impacts on air, earth, and water are also measured. Among the human contributions to the environment that could be analyzed in 3D are plumes of air pollution and oil spills. The latter may migrate down into the ground and be moved by the groundwater drift, or migrate from the bottom of an ocean up to its surface, sometimes dragged by ocean currents. Having an insight into such occurrences can greatly increase our understanding of the phenomena. This was the motivation for creating the 3DFences Toolbox.

A slice is a vertical subset of the 3D data, analogous to a slice of bread from a loaf. While typically narrow in one dimension, it is still a 3D object. A fence is a 2D representation of a slice of 3D data: all points which belong to a slice are projected, or pressed, onto a 2D plane. The term fence diagram, or fence, is used in geology to illustrate a cross section of geologic strata generated from an interpolation of the data coming from a linear array of vertical drillings. The equivalent of a fence in the atmospheric sciences is usually referred to as a curtain.

Top view of a slice of points shown in purple, and the resulting fence presented as a colored line in the center of the input points.


Side view of the input slice points located on one side of the resulting fence. The points are the measurements of oil in sea water after an oil spill.



To be explicit, the tools in the 3DFences Toolbox do not perform 3D interpolation, and they do not implement geostatistical analysis directly on the vertical slices of the 3D data. Instead, the tools transform a slice of the 3D data, with its X, Y, Z, and phenomenon-measure components, by rotating it 90° onto a horizontal 2D plane. The geostatistical interpolation method of Empirical Bayesian Kriging (EBK) is performed on these points, producing either the geostatistical surface of Prediction, which is a continuous map of the concentration or intensity of something, or a map of the Prediction Standard Error, which can be explained as a map of the degree of confidence in the Prediction map at each location. The resulting output is converted to a point dataset where the points represent the interpolated value at the center of the raster cells. The points are then placed back into the original coordinate space as a regular matrix of points resembling a fence. The fence is positioned in the center of the selected points of the initial slice when displayed in ArcScene or ArcGIS Pro. The raster is converted to a point dataset because ArcGIS does not currently support display of raster data as a vertical plane, and point symbology options provide added flexibility in displaying results.

Any of the ArcGIS standard or geostatistical point interpolation tools could have been used in these tools. For the prototype, we chose the Empirical Bayesian Kriging (EBK) geostatistical method, known for its best-fitting default parameters and accurate predictions.

The 3DFences toolbox consists of three separate tools to support different methods of generating fences. The Parallel Fences tool can generate sets of parallel fences in directions related to longitude, latitude or depth.  In other words, the output sets of parallel fences stretch from N to S, W to E, or through the Z dimension.  The number of fences in each set is determined by the user. All of the tools support selection sets to create fences from a subset of sample point features.

A fence created along digitized "S" shape.



The Interactive Fences tool can generate fences based on lines digitized on the map. The user sets the buffer distance from the digitized line; all points that are located within the buffer will be used for the geostatistical analysis. The user may digitize multiple lines, and even self-intersecting lines with many vertices. The third tool, called Feature based Fences, creates fences based on existing features in a polyline feature class. In this case the fence shape is determined by the existing feature(s) and extends through the Z dimension of the selected sample points. As an example, it might be applied to investigate for oil leaks above an oil pipeline placed on the sea floor.

All of the tools contain options enabling the user to determine the minimum number of sample points and fence size required to generate a reasonable geostatistical surface. The tools are also time-aware. If the sample data contains a date-time field and the option is enabled, a fence will be generated for each time interval if the samples for that interval and location meet the minimum requirements set by the user. The resulting fences representing consecutive time windows are positioned at the same locations. Thus, to enable better visual analysis, these should be displayed as time animations.

An example of an atmospheric curtain created from backscatter data acquired by NASA from its Calipso satellite orbiting at 32 km above the Earth's surface.

Contributed by Bob G. and Witold F.


Ürümqi in China is the remotest location on Earth, geographically speaking of course. This discovery and the analysis behind it are discussed in a recently published story map entitled Poles of Inaccessibility.

Vilhjalmur Stefansson, an Icelandic explorer, introduced the world to the concept of inaccessibility with his 1920s computation of the Arctic's pole of inaccessibility. Story map authors Dr Witold Frączek and Mr Lenny Kneller recomputed this Arctic location as well as inaccessible locations on six continents. Frączek and Kneller computed and compared remote locations using geodesic and planar computations. Differences were small, with the exception of the Eurasian continent, which has a variation of approximately 11 kilometers.

The authors discovered that South America is essentially bi-polar with respect to inaccessibility: while both locations are in Brazil, separated by 1,400 km, the difference in their inaccessibility is less than a kilometer. It is conceivable that the order of remoteness may change with the interpolation and generalization of the coastline.

Lastly, I would encourage you to take a moment to read and view Frączek and Kneller's Poles of Inaccessibility.

[Animated preview (hierarchy.gif): upstream and downstream rivers highlighted in real time.]

Hydro Hierarchy is an experimental web application for visualizing the US river network.

Click here to view the live application.

Source code is available on agol and github.

There are approximately a quarter of a million rivers in the United States, but only 2,500 are displayed in this application.  This subset represents streams with a Strahler stream order classification of four or greater.  The stream data used by this application is derived from the USGS's National Hydrographic Dataset and has undergone significant spatial editing.  Stream geometries have been adjusted to ensure connectivity, generalized for small scale mapping and re-oriented in the direction of flow.

River flow data was acquired from the USGS's WaterWatch website.  Each river segment is attributed with the average flow for each month in 2014 and the ten year monthly average.  Computed values, in cubic feet per second, represent the flow at the downstream end of each river.  Flow data is displayed as a column chart on the left hand side of the browser window whenever the user's mouse passes over a stream.

The preview animated image at the beginning of this post may look sped up.  It is not.  Upstream and downstream rivers are highlighted in real time as the user moves his or her cursor over the hydrologic network.  This performance is achieved using connectivity information loaded from this file when the application first starts.  The file was created in ArcMap from a network dataset that included the river feature class.  Using this script, connectivity information for each network node was extracted and arranged into a hierarchical data structure.
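
As an illustration of why a pre-built hierarchy makes this mouse-over tracing cheap, here is a small self-contained sketch; the adjacency shape and segment ids are invented for the example and do not reflect the app's actual file format.

```js
// Invented example data: for each segment id, the ids that flow into it.
var upstream = {
  "101": ["102", "103"],
  "102": [],
  "103": ["104"],
  "104": []
};

// Depth-first walk toward the headwaters, collecting every upstream segment.
function collectUpstream(id, visited) {
  visited = visited || new Set();
  (upstream[id] || []).forEach(function (parentId) {
    if (!visited.has(parentId)) {
      visited.add(parentId);
      collectUpstream(parentId, visited);
    }
  });
  return visited;
}

// Highlight everything upstream of segment "101" on mouse-over.
console.log(collectUpstream("101")); // Set {"102", "103", "104"}
```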

The radial and column charts on the left hand side of the application are generated using the D3 graphics library.  The column chart displays 2014 flow data for any river segment that is directly below the user's mouse.  The horizontal red line represents the ten year mean monthly flow.  Note that for most river segments, only one or two months ever exceeded the ten year average.  This is indicative of 2014's drought, at least with respect to river flows over the past decade.

Contributed by Richie C. and Witold F.


Solar Eclipse Finder is a JavaScript-based web application that displays past and future solar eclipses that pass through a user-defined location.  The source data is published as an AGOL hosted service with the paths of 905 solar eclipses from 1601 to 2200.  The eclipse paths were prepared by Michael Zeiler from data courtesy of Xavier Jubier.

The live application is available here.

Source code is available on agol and github.

This app was originally developed 2½ years ago as a Silverlight-based web application (see blog posting here); we wanted to confirm that the same performance and advanced symbology are achievable today with HTML5/JavaScript in modern browsers.

jQuery & Bootstrap

jQuery is a JavaScript framework for DOM manipulation.  It is important to note that jQuery is not a prerequisite for mapping apps using Esri's ArcGIS API for JavaScript. It is, however, a prerequisite of many third party JavaScript libraries like Bootstrap, a popular user interface framework.  This application uses Bootstrap's popover tooltips in the fly-out attribute window and its modal dialog during start-up.

D3


The tapered symbol used by eclipse shadow paths is achieved using a linear gradient fill.  Linear gradient fills are not supported by the ArcGIS API for JavaScript.  However, they are supported by SVG, the underlying technology used by Esri's JavaScript API for rendering vectors.  We used Mike Bostock's D3.js JavaScript library to insert and apply linear gradient fills directly to the map's embedded SVG node.
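
A minimal sketch of the technique follows; the selector for the map's SVG node, the gradient id and the target path class are illustrative, not taken from the app's source.

```js
// Sketch: define a horizontal linear gradient in the map's SVG node and use
// it as a fill. Selectors and ids are illustrative.
var svg = d3.select("#map svg");
var gradient = svg.append("defs").append("linearGradient")
  .attr("id", "eclipse-taper")
  .attr("x1", "0%").attr("y1", "0%")
  .attr("x2", "100%").attr("y2", "0%");
gradient.append("stop")
  .attr("offset", "0%").attr("stop-color", "black").attr("stop-opacity", 0);
gradient.append("stop")
  .attr("offset", "100%").attr("stop-color", "black").attr("stop-opacity", 0.6);

// Any path in the same SVG can now reference the gradient as its fill.
d3.selectAll("path.eclipse").attr("fill", "url(#eclipse-taper)");
```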

Conclusion


Updating this application was a two-step process.  First the eclipse dataset was republished as an AGOL hosted feature service and, second, the app was rewritten in HTML/JS.  Both tasks were relatively effortless and only took a couple of days in total.
