
Imagery and Remote Sensing


One of the most important components of a supervised image classification is excellent training sites. Training an accurate classification model requires training samples that represent distinct spectral responses recorded by the remote sensing platform: a training sample for vegetation should not include pixels with snow or pavement, and samples for water classification should not include pixels with bare earth. Using the spectral profiles chart, you can evaluate your training samples before you train your model.

If you use the Training Samples Manager, creating the chart takes one simple step. If you created your training samples separately, with each polygon or point as a different record in the feature class, it takes just one quick geoprocessing tool before creating the chart if you want to view the average spectral profiles for each class on one graph.

The purpose of this blog is not to walk through the entire image classification workflow end to end, but simply to show you how to use spectral profiles to guide you in creating training samples. For example, the spectral profile example below tells you that the Water training sites are clearly distinct, but that Golf Course and Healthy Vegetation may be too similar to yield an accurate result.

 

Example of spectral profile

Of course, remotely sensed imagery with large-ish pixel sizes (e.g. Landsat with 30m resolution) is bound to have multiple land cover categories within a single pixel. Still, it’s important to create good training samples in regions where pixels are easily identifiable as a given land cover type, and these samples become even more important when working with lower resolution data or when trying to identify more land cover categories.

In this example, I used image classification to get an understanding of the amount of land used for agriculture in the Imperial Valley in Southern California, a region situated in the Colorado Desert with high temperatures and very little rainfall.

Imperial Valley in Imperial County, California

 

Scenario 1: With the Training Samples Manager

Using the Training Samples Manager in ArcGIS Pro to generate training samples allows you to create a feature class that’s already organized by class name and class ID according to a schema.

In this analysis, I’m using a schema made up of five land cover types: Barren, Planted/Cultivated, Shrubland, Developed, and Water. Using the drawing tools, I’ve created several training samples for each category. Each time I draw a new training sample, a new record is added to the list in the Training Samples Manager. If I tried to create a Spectral Profile Chart with that many training samples, I’d have to select every record for each land cover class. Instead, I’ll use the Collapse tool to combine all the training samples for a given class into a single record. Then I’ll click the Save button to save my training samples as a feature class.

 

Collapse training samples for each category

Scenario 2: Without the Training Samples Manager

If you have a feature class with training samples that you created outside of the Training Samples Manager, where each training site is a separate record in the feature class, you need to run the Dissolve geoprocessing tool before creating a chart if you want to see the average spectral profiles for all your training samples at once. Use the class name or class value as the Dissolve field to combine all records associated with a given land cover class into a single multi-part polygon.
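
If you script your data preparation, this step is a one-liner with arcpy. Below is a minimal sketch, assuming a polygon feature class named "training_samples" with Classname and Classvalue fields (adjust these hypothetical names to match your schema):

import arcpy

# Combine all records that share a class into a single multi-part polygon,
# so each land cover class becomes one row in the output.
arcpy.management.Dissolve(
    in_features="training_samples",
    out_feature_class="training_samples_dissolved",
    dissolve_field=["Classname", "Classvalue"],
    multi_part="MULTI_PART",
)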

To view the spectral profile for one training sample at a time interactively (e.g. to view each individual training site for Developed), skip this step entirely and start working with your chart.

Use Dissolve to "collapse" records in the training samples feature class

Charting the Spectral Profiles

At this point, using your imagery and the training samples feature class, you can create your spectral profiles chart:

  1. Right-click the image to be classified in the Contents pane.
  2. Select Create Chart > Spectral Profile.
  3. In the Chart Properties pane, choose Mean Line as the Plot Type.
  4. Use the Feature Selector tool to select one of the polygons. Remember that because we used Collapse or Dissolve, selecting one polygon selects all the training sites for the land cover category represented by that polygon.
  5. Symbolize the profile lines to match the colors of the land cover types and change the label names so you can easily assess the chart.
    **  Pro Tip: To change the label of a profile, type the name in the Label field on the Chart Properties pane and hit TAB  **
  6. Try out different chart types to see the kinds of information you can glean from them – do you see outliers? Consistent trends? Similar profiles? Distinct categories?

Below is the spectral profile chart I created using the imagery and training samples for the Imperial Valley study. I used the “Medium” (grey) theme in the chart to make it easier to view the profiles.

Spectral profile of land cover training samples in Imperial Valley study

Assessment of Spectral Profiles

At first glance, I can tell that the Planted/Cultivated, Water, and Barren land cover classes have profiles that are distinct enough that I can expect good initial results for classification of these classes. However, the Developed and Shrubland profiles are a little too close for comfort: they have the same general shape and the average reflectance values are similar at each wavelength. From this, I can choose whether I want to re-create my training samples or simply combine the two categories into a single class. Theoretically, combining the Shrubland and Developed into one class shouldn’t impact my analysis because my main focus is an accurate estimate of Planted/Cultivated land cover.

Before making my decision, I’ll take a deeper look at the data. The chart below is the same data in a Boxes plot, and I can hover my mouse over the boxes to get the statistics for each land cover class at each wavelength band.

From the Boxes chart, I can see that the Developed and Shrubland land cover classes have similar average values and similar distributions. However, the Developed land cover type has much higher maximum reflectance values across all wavelengths, and Shrubland has lower minimum values. This makes sense – I would expect developed areas (buildings, roads, parking lots, etc.) to be brighter in general than shrubby areas.

Since the Boxes chart tells me that the minimum and maximum values vary so much between the classes, combining these two classes into a single class could potentially confuse my classification model and impact the overall accuracy. Instead, I'm going to re-create the training samples for the Developed class to capture those higher reflectance values.

The charts below include the spectral profiles for my modified training samples.

Now, in the visible and near infrared bands especially, you can see distinctly higher reflectance values for the Developed land cover training sample data compared to the Shrubland spectral response. With these results, I would be comfortable moving forward with my classification workflow by training my model with all my training samples.

Extra Credit

For bonus points, I used the Multispectral Landsat image service from the Living Atlas to quickly visualize NDVI in the Imperial Valley area. Then I used a spectral profile chart to compare NDVI averages in different areas of interest for vegetation health assessment. Use the steps below to try it yourself:
  1. In ArcGIS Pro, open the Map tab and select Add Data.
  2. From the menu on the left, expand the Portal option and select Living Atlas. Use the Search box to search for “Multispectral Landsat.”
  3. Select the Multispectral Landsat image service and click OK.
  4. Zoom to Imperial Valley or your area of interest.
  5. Make sure the Multispectral Landsat service is highlighted in the Contents pane. In the Image Service contextual tab set, select the Data tab.
  6. In the Processing group, click the Processing Templates drop-down.
  7. Scroll down to NDVI Colorized. Select this template to display the colormap for NDVI.
  8. Right-click on the Multispectral Landsat image service in Contents and select Create Chart > Spectral Profile.
  9. Use the drawing tools to select multiple small areas of interest to compare NDVI distribution throughout the region.
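
The NDVI Colorized template computes (NIR - Red) / (NIR + Red) on the fly. If you'd rather produce a persistent NDVI raster yourself, here is a minimal sketch using Spatial Analyst map algebra, assuming a local Landsat 8 scene at a hypothetical path (on Landsat 8, bands 5 and 4 are near infrared and red):

import arcpy
from arcpy.sa import Raster, Float

arcpy.CheckOutExtension("Spatial")

nir = Raster("C:/data/landsat8_scene.tif/Band_5")  # near infrared
red = Raster("C:/data/landsat8_scene.tif/Band_4")  # red
# Float() avoids integer division truncating the index values.
ndvi = (Float(nir) - Float(red)) / (Float(nir) + Float(red))
ndvi.save("C:/data/ndvi.tif")

Values near +1 indicate dense, healthy vegetation; values near 0 or below indicate bare earth, built surfaces, or water.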

 

Want to know more?

Try the Image Classification Wizard tutorial

Learn more about the Training Samples Manager

Learn more about image classification

Learn more about charting tools

Given the growing number of people using commercial drones these days, a common question is: “What do I do with all this imagery?”

 

The simple answer is that it depends on what you’re trying to accomplish.

 

If you just want to share the imagery as-is, and aren’t worried about making sure it’s georeferenced to be an accurate depiction of the ground, Oriented Imagery is probably your answer. If you’re capturing video, Full Motion Video in the Image Analyst extension for ArcGIS Pro is your best bet. Ultimately, though, many users plan to turn the single frame images acquired by drones into authoritative mapping products—orthorectified mosaics, digital surface models (DSMs), digital terrain models (DTMs), 3D point clouds, or 3D textured meshes.

 

Esri has three possible solutions for producing authoritative mapping products from drone imagery, each targeted for different users— (1) Drone2Map for ArcGIS, (2) the ortho mapping capability of ArcGIS Pro Advanced, and (3) the Ortho Maker app included with ArcGIS Enterprise. Read on to get an overview of all three solutions, and to figure out which one is best for your application.

 

Drone2Map for ArcGIS

For individual GIS users, Drone2Map is an easy-to-use, standalone app that supports a complete drone-processing workflow.

 

Drone2Map includes guided templates for creating orthorectified mosaics and digital elevation models. It’s also the only ArcGIS product that creates 3D products from drone imagery, including RGB point clouds and 3D textured meshes. Once you’ve processed your imagery, it’s easy to share the final products—2D web maps and 3D web scenes can be easily published on ArcGIS Online with a single step. ArcGIS Desktop isn’t required to run Drone2Map, but products created with Drone2Map are Desktop-compatible. That’s important, because it gives you the option to use ArcGIS Pro as an image management solution, or to serve your imagery products as dynamic image services using ArcGIS Image Server.

 

Ortho mapping capability of ArcGIS Pro Advanced

For GIS professionals, the ortho mapping capability of ArcGIS Pro Advanced enables you to create orthomosaics and digital elevation models from drone images (as well as from modern aerial imagery, historical film, and satellite data) in the familiar ArcGIS Desktop environment.

 

There are added benefits to processing your drone imagery in ArcGIS Pro. For users with very large imagery collections, Pro’s image management capabilities are especially valuable. Managing drone imagery using mosaic datasets makes it easy to query images and metadata, mosaic your imagery, and build footprints. Image management and processing workflows in ArcGIS Pro can also be automated using Python or Model Builder. Finally, sharing your imagery is straightforward. While you can publish your products to ArcGIS Online, you can also use ArcGIS Pro in conjunction with ArcGIS Image Server to publish drone products as dynamic image services.  
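
As a rough illustration of that automation, here is a minimal arcpy sketch that builds a mosaic dataset, loads a folder of processed drone orthos, and builds footprints (the geodatabase, names, and paths are hypothetical):

import arcpy

gdb = "C:/project/imagery.gdb"

# Create an empty mosaic dataset to manage the image collection.
arcpy.management.CreateMosaicDataset(gdb, "DroneImagery",
                                     coordinate_system=arcpy.SpatialReference(4326))

md = gdb + "/DroneImagery"

# Load every raster in the folder, then build the image footprints.
arcpy.management.AddRastersToMosaicDataset(md, "Raster Dataset", "C:/project/orthos")
arcpy.management.BuildFootprints(md)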

 

Ortho Maker app in ArcGIS Enterprise 10.6.1+

For ArcGIS Enterprise users, the Ortho Maker app offers a solution for organizations with multiple users who want simple, web-based workflows to create orthomosaics and DEMs from drone imagery.

 

Ortho Maker provides an easy-to-use web interface for uploading drone imagery and managing the ortho mapping workflow, while behind the scenes it uses the distributed processing and storage capability of Enterprise and ArcGIS Image Server to quickly process even very large collections of drone imagery. (That also means it requires ArcGIS Image Server configured for raster analysis.) The ArcGIS API for Python can be used to automate the ortho mapping process. Sharing Ortho Maker products is virtually automatic—they become imagery layer items accessible in your Enterprise portal, easily shared with users throughout your organization.

 

What do typical users say?

Things typical users of each ArcGIS option for processing imagery might say

Next steps

Now that you have a better idea which solution makes sense for your application, it’s time to take one for a test drive. Drone2Map offers a free 15-day trial, plus a hands-on Learn lesson to get started. You can try ArcGIS Pro Advanced free for 21 days, and read more about getting started with ortho mapping for drone imagery.  For users with Enterprise 10.6.1+ and raster analysis enabled, Ortho Maker is included—find out how to get started.  Other Enterprise users should contact their administrator to see about getting access. If you still have questions, contact Esri for more product information.

In Part I of this blog series, we explained what an ortho mapping workspace is and how to create one for digital aerial imagery. At this point, the imagery has been organized and managed so that we can access all the necessary metadata, information, tools and functionality to work with our imagery, but we haven’t yet performed a bundle block adjustment.

 

Ortho Mapping blog series part 2

 

Block adjustment is the process of adjusting the parameters in the image support data to get an accurate transformation between the image and the ground. The process is based on the relationships between overlapping images, control points, the camera model, and topography, and computes a transformation for the group of images (a block). With aerial digital data, it consists of three key components:

  • Tie points – Common points that appear in overlapping images, tying the overlapping images to each other to minimize misalignment between the images. These are automatically identified by the software.
  • Ground control points – These are usually obtained with ground survey, and they provide references from features visible in the images to known ground coordinates.
  • Aerial triangulation – Computes an accurate camera model, ground position (X, Y, Z), and orientation (omega, phi, kappa) for each image, which are necessary to transform the images to match the control points and the elevation model.

When we created our workspace, we provided the Frames and Cameras tables, which contain the orientation and camera information needed to make up our camera model and to establish the relationship between the imagery and the ground. We also provided an elevation model which we obtained from the Terrain image service available through the Living Atlas of the World. Now we’re ready to move on to the next step in the ortho mapping process.

Performing a Block Adjustment for Digital Aerial Data

 

  1. In the ortho mapping workspace, open the Ortho Mapping tab and select Adjustment Options from the Adjust group. This is where we can define the parameters used in computing the block adjustment, which includes computing tie points. For more information on each parameter, check out the Adjustment Options help documentation.

Ortho Mapping Adjustment Options and GCP Import

 

 

  2. Next, we want to add Ground Control Points (GCPs) to our workspace to improve the overall georeferencing and accuracy of the adjustment. To do this, select the Manage GCPs tool in the Ortho Mapping tab and choose Import GCPs. We have a CSV table with X, Y and Z coordinates and accuracy to be used for this analysis.
    • If you have an existing table of GCPs, use this Import option and map the fields in the Import GCPs dialog for the X, Y, and Z coordinates, GCP label, and accuracy fields in your table. You may have photos of each GCP location for reference – if so, you can import the folder of photos for reference when you are measuring (or linking) the GCPs to the overlapping images.
    • You may also have secondary GCPs, or control points that were not obtained in a survey but from an existing orthoimage with known accuracy. You can import those here as well, or you can manually add them using the GCP Manager.
    • Once you have added GCPs to the workspace, use the GCP Manager to add tie points to the associated locations on each overlapping image. Select one of the GCPs in the GCP Manager table, then iterate through the overlapping images in the Image list below and use your cursor to place a tie point on the site that is represented by the GCP.

 

Add tie points for each GCP and change some to check points

A few notes:

Check Points: Be sure to change some of your GCPs to check points (right-click on the GCP in the GCP Manager and select "Change to Check Point") so you can view the check point deviation in the Adjustment Report after running the adjustment. This essentially changes the point from a control point that facilitates the adjustment process to a control point that assesses the adjustment results. The icon in the GCP table will change from a circle to a triangle, and the check points appear as pink triangles in the workspace map.

Drone imagery: If you are performing a block adjustment with drone imagery, you must run the Adjust tool before adding GCPs. In this blog, we’re focusing on aerial digital data.

 

  3. Finally, we click the Adjust tool to compute the block adjustment. This will take some time – transforming a number of images so that they align with each other and the ground is complicated work – so get up, maybe do some stretches or get yourself a cup of coffee. The log window will let you know when the process is complete. When the adjustment is finished, you'll see new options available in the Ortho Mapping tab that enable you to assess the results of the adjustment.
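
The adjustment can also be scripted, since the workspace's image collection is a mosaic dataset that works with the block adjustment geoprocessing tools. A minimal sketch, assuming the image collection lives at a hypothetical path (see each tool's documentation for the full parameter lists):

import arcpy

md = "C:/project/ortho.gdb/ImageCollection"

# Compute tie points between overlapping images.
arcpy.management.ComputeTiePoints(md, "C:/project/ortho.gdb/TiePoints")

# Compute the adjustment; "Frame" applies the frame camera model used for aerial imagery.
arcpy.management.ComputeBlockAdjustment(md, "C:/project/ortho.gdb/Solution", "Frame")

# Apply the computed solution back to the mosaic dataset.
arcpy.management.ApplyBlockAdjustment(md, "ADJUST", "C:/project/ortho.gdb/Solution")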

 

Assessing the Block Adjustment

 

  1. Run the Analyze Tie Points tool to generate QA/QC data in your ortho mapping workspace. The Overlap Polygons feature class contains control point coverage in areas where images overlap, and the Coverage Polygons feature class contains control point coverage for each image in the image collection.  Inspect these feature classes to identify areas that need additional control points to improve block adjustment results.

QA/QC outputs in the ortho mapping workspace

 

  2. Open the Adjustment Report to view the components and results of the adjustment. Here you will find information about the number of control points used in the adjustment, the average residual error, tie point sets, and connectivity of overlapping imagery. In our case, the Mean Reprojection Error of our adjustment is 0.38 pixels.

Now what?

The block adjustment tools allow for an iterative computation, so that you can check on the quality of the adjustment, modify options, add or delete GCPs, or recompute tie points before re-running the adjustment. If you are unsatisfied with the error in the Adjustment Report, try adding GCPs in the Manage GCPs pane, or try modifying some of the Adjustment Options. You can also change some of your check points back into GCPs, and choose a few other GCPs to be your check points. Re-run the adjustment and see how this impacts the shift.

Once you are satisfied with the accuracy of your adjusted imagery, it’s time to make ortho products! Check out the final installment in our blog series to see how it’s done.

Any remote sensing image, whether it's a drone image, aerial photograph, or data from a satellite sensor, will inherently be impacted by some form of geometric distortion. The shape of local terrain, the sensor angle and altitude, the motion of the sensor system, and the curvature of the Earth all make it difficult to represent three-dimensional ground features accurately in a two-dimensional map. Image orthorectification corrects for these types of distortion so you can have a measurable, map-accurate image.

 

Distortion caused by camera tilt and terrain displacement

 

Everyone working with GIS data to make well-informed decisions needs up-to-date information about the natural, man-made, and cultural features on the ground: roads, land cover types, buildings, water bodies, and other features that fill the landscape. Much of the vector data that describes these features was actually created from orthorectified imagery, and can be combined with new imagery to update your landbase.

 

A landbase is a layer or combination of layers making up the backdrop of your GIS

 

Esri’s Ortho Mapping suite enables you to orthorectify your remote sensing imagery to make it map-accurate. It also makes it easy to create other products like orthomosaics (mosaicked images corrected for distortion) and digital elevation models (terrain or surface models) which can be used as basemaps, part of a landbase, or for further analysis in stereo models, 3D analysis and feature extraction.

 

The workflow to create ortho mapping products will be presented in a three-part blog series, each with a short video:

  • Creating a workspace
  • Performing a block adjustment
  • Creating ortho mapping products

 

Let's get started!

 

Creating an Ortho Mapping Workspace

 

The first step in any project is getting organized – and creating an Ortho Mapping Workspace in ArcGIS Pro makes this easy to do.

The Ortho Mapping Workspace is a sub-project in ArcGIS Pro; it’s the interface you work with when interacting with ortho mapping workflows. The workspace is defined by the type of imagery you are working with (drone, aerial or satellite). In turn, the workspace is integrated with the tools and wizards to properly guide you through each step in the workflow. When you create a new workspace, an Ortho Mapping folder appears in your project folder structure in Catalog, and a new table of contents list view allows you to List By Ortho Mapping Entities. Again, the types of feature classes and tables you see in the Contents pane depend on the type of imagery you are working with.

Similar to Maps or Scenes within a project, a workspace is an object stored in the folder structure of a project and it can be accessed by other projects. All the feature classes and tables needed to orthorectify your imagery are created and managed in the workspace.

5 Simple Steps

Step 1: Open the Imagery tab in your ArcGIS Pro project. This is where you can analyze and manage any raster data you want to work with in Pro. In the Ortho Mapping group, you’ll see the New Workspace menu that allows you to create a New Ortho Mapping Workspace, add an existing Ortho Mapping Workspace with a reference to that workspace, or import an Ortho Mapping Workspace by creating a copy of an existing workspace and storing the new copy in your project. Select New Workspace.

Step 2: The New Ortho Mapping Workspace wizard appears. Here you’ll give your workspace a name (required) which identifies your project in the Contents and Catalog panes. You can also provide a description (optional) and you’ll select the type of imagery you want to import. In our workflow, we’re using aerial imagery acquired by Vexcel Imaging covering an area over Hollywood, California, so we’ll select Aerial – Digital as the type. Click Next.

Step 3: The Image Collection page opens. Here you’ll enter specific information about the type of sensor used to collect your imagery. You can choose from MATCH-AT, ISAT, or Applanix, or you can select the Generic Frame Camera, which requires you to provide the exterior and interior orientation information with the Frames tables and Cameras tables, respectively. Entering the Frames and Cameras information will provide the information necessary to correct for sensor-related distortion.

The Frames table has a specific schema that is required in the ortho mapping workspace for aerial imagery. It contains the exterior orientation and other information specific to each image in your image collection. The Cameras table contains all the camera calibration information for computing the interior orientation; you can add the camera information manually in the wizard or as a table. To edit the Camera parameters, you can hover over the Camera ID and click the Edit Properties button. You'll also need to specify the Frame Spatial Reference, which should be provided with your data.

In this workflow, we used the exterior orientation information that was provided along with our source imagery to create the Frames table in the necessary schema. We then pointed to a table that has the information for one camera, with CameraID = 0 (see the screen shot below - there's a check mark next to the 0 under Cameras). 

 

*We are updating this for ArcGIS Pro 2.3 to be more user-friendly for a better experience!  

 

 

Step 4: To correct for terrain displacement, you need to include an elevation source. The cool thing about working with the ArcGIS platform is that you can access the thousands of maps, apps and data layers available in the ArcGIS Living Atlas, so if you don't have your own elevation data you can search for one and use it in your project. Here's what we did:

  1. In our ArcGIS Pro project, zoom to the area of interest in Hollywood.
  2. On the Map tab, click Add Data.
  3. Select the Living Atlas option under the Portal group and search for "Terrain." Add the Terrain imagery layer. At first, you might not be able to see much variation in the terrain. Click on the Appearance tab under the Image Service Layer group and select DRA (Dynamic Range Adjustment) to stretch the terrain imagery in the extent you are viewing.
  4. In the Contents pane, right-click on the Terrain imagery layer and select Data > Export Raster. 
  5. In the Export Raster settings, specify the output raster dataset and set the Clipping Geometry to the Current Display Extent. 
  6. Click Export.
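
The Export Raster dialog also has a scriptable counterpart in the Clip geoprocessing tool. A minimal sketch, assuming the Terrain layer is in your map and using an illustrative Hollywood-area extent (xmin ymin xmax ymax):

import arcpy

arcpy.management.Clip(
    in_raster="Terrain",
    rectangle="-118.40 34.05 -118.25 34.15",  # illustrative display extent
    out_raster="C:/project/hollywood_dem.tif",
)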

 

Now we can add our new DEM to the workspace. To do this, open the Data Loader Options pane in the Image Collection page. Click the browse button to navigate to the DEM created above, or use your own DEM.

Step 5: Finally, we left all the other values as default and clicked Finish.

 

             

 

 

The Log window will tell you how the creation of the workspace is coming along, and if there are any problems, an error message will be displayed. When it's complete, you'll see the new Ortho Mapping Entities in your Contents pane: various control points including Ground Control Points, Check Points and Tie Points, the mosaic dataset that was created using your source data, and placeholders for Data Products, Solution Data, and QA/QC Data that haven't been created yet.

 

Make sure to zoom and pan around the map to check out your Image Collection. With the Image Collection selected in the Contents pane, you can open the Data tab from the Mosaic Layer context menu. Here you can change the Sort and Overlap options for your mosaic dataset. We recommend using the Closest to Center or Closest to Nadir options for viewing.

 

Now that you have all your ortho mapping components organized in your workspace, the next step is to block adjust your data to make sure it’s map-accurate. Stay tuned for the next part of this blog series, Ortho Mapping with Aerial Data Part II: Getting Adjusted, where we’ll show you how to perform a block adjustment to make sure your data is ready for product generation and stereo compilation!

 

 

In this post, we showed you how to set up an ortho mapping workspace for aerial imagery. For an example of how to set up an ortho mapping workspace for satellite data, check out this short video!

 

Many thanks to Jeff Liedtke for co-authoring this article!

ArcGIS Enterprise configured for Raster Analytics enables large and small organizations to distribute and scale raster processing, storage and sharing to meet requirements for unique projects. This flexibility and elasticity also allows you to pursue projects that were previously out of reach due to hardware, software, personnel, or cost constraints. An overview of Raster Analytics concepts and advantages is described in the article Imagery Superpowers – Raster analytics expands imagery use in GIS.

Raster Analytics Processing Workflow

To help you become familiar with the benefits of Raster Analytics, Esri is offering a new Learn Lesson for ArcGIS Enterprise users. The lesson guides you through the process of configuring your Enterprise system for Raster Analytics and shows you how to use raster processing tools and functions to assess potential landslide risk associated with wildfire. The analysis is run on your distributed processing system, and the results are published to your Enterprise portal for ease of sharing across your organization. The lesson is a practical guide for implementing a Raster Analytics deployment, demonstrating how standard ArcGIS Pro tools and functionality can be used to run distributed processes behind your firewall and in the cloud, with results shared with stakeholders across your enterprise. Check out this story map, which gives you a more detailed overview of what the lesson involves.

Drag and drop tools into the function editor to create raster function chains.

Ready to try it out? If you want to extend your capabilities with Raster Analytics for increased productivity, test out the lesson and see why users are excited about the opportunity to address demanding projects in a more effective and efficient manner.

 

Many Thanks to Katy Nesbitt (knesbitt@esri.com) for co-authoring this article.

For FMV in ArcGIS (ArcGIS Pro 2.2 with the Image Analyst extension, or ArcMap 10.x with the FMV add-in) to display videos and place the footprint in the proper location on the map, the video must include georeferencing metadata multiplexed into the video stream.  The metadata must be in MISB (Motion Imagery Standards Board) format, originally designed for military systems.  Information is here: http://www.gwg.nga.mil/misb/index.html, but drone users do not need to study this specification.  For non-MISB datasets, Esri has created a geoprocessing tool called the Video Multiplexer that will process a video file with a separate metadata text file to create a MISB-compatible video.  This is described more completely (e.g. the format for the metadata about camera location, orientation, field of view, etc.) in the FMV Manual at http://esriurl.com/FMVmanual.
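
In recent releases of ArcGIS Pro, the multiplexer can also be run from Python via the Image Analyst module. A minimal sketch with hypothetical file paths (the metadata CSV must follow the field layout described in the FMV manual, and the tool's parameters have evolved between releases, so check the tool reference for your version):

import arcpy

arcpy.CheckOutExtension("ImageAnalyst")

# Arguments, in order: input video, metadata file, output MISB-compatible video
# (parameter order assumed here; verify against your release's tool reference).
arcpy.ia.VideoMultiplexer("C:/fmv/flight.ts",
                          "C:/fmv/flight_metadata.csv",
                          "C:/fmv/flight_misb.ts")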

 

For those with DJI drones, the challenge then becomes “where is the required metadata?”.  DJI drones write a binary formatted metadata file with extension *.dat (or possibly *.srt, depending on drone and firmware) for every flight.  There is a free utility called “DatCon” at this link https://datfile.net/DatCon/downloads.html which will reportedly convert the DJI files to ASCII format. 

 

Key points:

  • Esri has not tested and cannot endorse this free utility. If you choose to use it, as with any download from the internet, you should check it for viruses etc.
  • DJI has changed the format of the metadata in this file on multiple occasions, so depending on your drone and date of its firmware, you will find differences in the metadata content. Esri does not have a specification for this metadata at any version, so cannot advise you what to expect to be included in (or missing from) this file.
  • Another key point is that the DJI *.dat file was created for the purpose of troubleshooting. It was not created with the intent of supporting geospatial professionals seeking a complete metadata record for the drone, gimbal, and camera.  As a result, users will typically find temporal gaps in the metadata, and processing this metadata through the FMV Multiplexer will likely generate inaccurate results, unless you are willing to apply manual effort (requiring trial and error, and substantial time) to identify the temporal gaps and fill in your own estimated or interpolated values for the missing times and missing fields.
  • IMPORTANT: This blog was written in September 2018, and it is very possible that DJI will make firmware changes in the future to change the readability and completeness of their metadata.

 

There is an alternative to this, but it is not an Esri solution.  CompassDrone, an Esri business partner and DJI authorized distributor, has built a flight planning and flight control application called CIRRUAS using the DJI API.  This application has access to the DJI metadata in flight, and (among other features) is explicitly designed to capture complete metadata as defined by Esri for FMV support.  If you are using the CIRRUAS app, a metadata file will be captured and exported from the drone, and this will feed directly into the FMV multiplexer. 

 

The CIRRUAS app is available here https://compassdrone.com/software/dji2fmv-cirruas-app/.  For further discussion, please refer to the blog on this topic written by CompassDrone:  https://compassdrone.com/dat-srt-vs-cirruas/

 

A few final notes:

  • Our testing of the CIRRUAS app has yielded good results, but Esri does not provide technical support for the app.
  • Note that the CIRRUAS app must be used to plan and fly the mission, and this will initiate the recording of complete metadata. It cannot be applied to video that was previously recorded, since the metadata records will not be complete.
  • It is not known if there are other alternatives which provide a solution for processing video from DJI drones for ArcGIS FMV.

 

Check back in this blog for updates as more capabilities are developed.

#EsriFMVDJI

With the Image Analyst extension in ArcGIS Pro 2.1, non-orthorectified and suitably overlapping images with appropriate metadata can be viewed in stereo!  This stereoscopic viewing experience can enable 3D feature extraction.  See more information at http://esriurl.com/stereo.

 

If your organization has a collection of images and you’d like to use the stereo viewing capability in ArcGIS Pro, where do you start?   The key questions are: 

  1. What type of sensor collected the data, and
  2. What orientation data do you have along with the images?

 

In order to display images as stereo pairs, ArcGIS must have detailed information about the location of the sensor (x,y,z) as well as its orientation – and this is unique information for every image.  Information about the sensor (typically called a camera model or sensor model) is also required. 

Graphic Showing Geometry of One Stereo Image Pair

 

There are a few conceptually simple cases, although each has important details to follow within its own workflow and documentation.

 

  • If you have two overlapping satellite images, you can go directly to stereo viewing.
  • If you have a collection of satellite images, you can build a mosaic dataset and ingest the images using the specific raster type for that satellite, run the Build Stereo Model geoprocessing tool, then proceed to the stereo view (see the sketch after this list).  The raster type for the satellite reads the required orientation data.
  • If your imagery came from a professional aerial camera system:
    • If you have an output project file from aerotriangulation (AT) software (e.g. Match-AT or ISAT), ArcGIS includes raster types which ingest the orientation data for you, so this is similar to the satellite case: build a mosaic dataset with the proper raster type, Build Stereo Model, and proceed to stereo viewing.
    • If you have a project file from AT software that is not currently supported, Python raster types are under development for additional sensors, e.g. the Vexcel UltraCam. For more information, watch for announcements on GeoNet or on http://esriurl.com/ImageryWorkflows.  Alternatively, if you have a table of camera and frame orientation values, see the next bullet.
    • If you have a table of data values representing the exterior orientation as well as a camera model (interior orientation), you will build a mosaic dataset and ingest the images using the “Frame camera” raster type. 
    • If you have scanned film but without the results of AT software, refer to the FrameCameraDetailedWorkflow. With ArcGIS Pro 2.1, some values may have to be estimated, and the positional accuracy may not be optimum.  ArcGIS Pro 2.2 will support fiducial measurement.
  • If your imagery was captured using a drone, you will need to use photogrammetric software to generate the camera model and orientation data.   
    • If you process your drone imagery using Ortho Mapping in ArcGIS Pro (see http://esriurl.com/OrthoMappingHelp), after the Adjust step is completed, the Image Collection mosaic dataset will be ready for viewing in stereo (after Build Stereo Model).
    • If you are using Drone2Map, please see this item ArcGIS Online http://esriurl.com/D2Mmanagement to download a geoprocessing tool which can ingest the images into a mosaic dataset.
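
For the frame camera case, the mosaic dataset route sketched in the list above might look like this in arcpy (names, paths, and the spatial reference are illustrative, and for the Frame Camera raster type the input is typically your frames table):

import arcpy

gdb = "C:/project/stereo.gdb"
arcpy.management.CreateMosaicDataset(gdb, "ImageCollection",
                                     coordinate_system=arcpy.SpatialReference(32611))

md = gdb + "/ImageCollection"

# Ingest the images with the Frame Camera raster type, which reads the
# exterior/interior orientation from the frames and cameras tables.
arcpy.management.AddRastersToMosaicDataset(md, "Frame Camera",
                                           "C:/project/frames_table.csv")

# Pair up overlapping images so the collection can be viewed in stereo.
arcpy.management.BuildStereoModel(md)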

 

For those interested in trying an example, a downloadable sample is available in this item on ArcGIS Online: http://esriurl.com/FrameCameraSample

Raster analytics using ArcGIS Enterprise is a flexible raster processing, storage, and sharing system that employs distributed computing and storage technology. Use raster analytics to apply the rich set of raster processing tools and functions offered in ArcGIS, build your own custom functions and tools, or combine multiple tools and functions into raster processing chains to execute your custom algorithms on large collections of raster data. Source data and processed results are stored, published and shared across your enterprise accordingly.

 

This extensive capability can be further expanded by leveraging cloud computing capabilities and resources.  The net result: image processing and analysis jobs that used to take days or weeks can now be done in minutes or hours, and jobs that were impossibly large or too daunting are now within easy reach.

 

What can raster analytics do?

By leveraging ArcGIS Enterprise, raster analytics enables you to:

  • Quickly process massive imagery or raster datasets in a scalable environment
  • Execute advanced, customized raster analysis
  • Share results with individuals, departments, and organizations within or outside your enterprise

 

Raster analytics is ArcGIS Image Server configured for raster analysis in a processing and storage environment that maximizes processing speed and efficiency.  Built-in tools and functions cover preprocessing, orthorectification and mosaicking, remote sensing analysis, and an extensive range of math and trigonometry operators; your custom functions can extend the platform’s analytical capabilities even further.

 

Fully utilize your existing ArcGIS Image Server on-site, or exploit the elastic processing and storage capacity of cloud computing and storage platforms such as Amazon Web Services and Microsoft Azure to dynamically increase or reduce your capacity depending on the size and urgency of your projects.  The scalable environment of raster analytics empowers you to implement computationally intensive image processing that used to be out of reach or cost-prohibitive. This implementation saves you time, money, and resources.

 

Raster analytics is also designed to streamline and simplify collaboration and sharing. Users across your enterprise can contribute data, processing models, and expertise to your imagery project, and share results with individuals, departments, and organizations in your enterprise.

 

Finally, Raster analytics using ArcGIS Enterprise integrates your image processing and analysis with the world’s leading GIS platform, and allows users to seamlessly draw on the world’s largest collection of online digital maps and imagery.

 

How does raster analytics work?

ArcGIS Image Server configured for the role of raster analytics provides software and user interfaces to organize and manage your processing, storage, and sharing of raster and feature data, maps, and other geographic information on a variety of devices. This integrated system manages the dissemination of processing and storage of results (1) on-premises and behind the firewall for classified deployments, (2) in cloud processing and storage environments, or (3) a combination of both environments.

 

The foundation of raster analytics is ArcGIS Enterprise, which includes an Enterprise GIS Portal, ArcGIS Data Store, Image Server configured for raster analytics, raster data store and ArcGIS Web Adaptor. ArcGIS Enterprise integrates the components of the raster analytics system to support scalable, real-world workflows.

 

Scale your powerful processing and storage capabilities by deploying ArcGIS Enterprise in the cloud via Microsoft Azure or Amazon Web Services (AWS). For example, you can automatically scale capacity up and down according to conditions you define, or automatically dispense application traffic across multiple instances for better performance. ArcGIS Enterprise makes deployment easier by providing Cloud Builder for Microsoft Azure or AWS CloudFormation with sample templates to configure and deploy your system in the cloud.

 

Develop, test and optimize your raster processing chains using Esri’s rich set of more than 200 functions and tools in the familiar ArcGIS Desktop or web map viewer. Once verified and optimized in the dynamic on-the-fly processing environment, submit your processing chain to ArcGIS Portal, which manages the distribution of processing, storage, and publication of results.
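
A minimal sketch of that pattern with the ArcGIS API for Python, assuming a portal configured for raster analytics (the URL, credentials, and search string are hypothetical):

from arcgis.gis import GIS
from arcgis.raster.functions import ndvi

gis = GIS("https://myportal.example.com/portal", "analyst", "password")

# Find a hosted multispectral imagery layer (search string is illustrative).
item = gis.content.search("Multispectral Landsat", "Imagery Layer")[0]
landsat = item.layers[0]

# Chain a raster function; nothing is processed yet - the chain is just defined.
ndvi_chain = ndvi(landsat, band_indexes="5 4")  # NIR band 5, red band 4

# save() submits the chain to the raster analysis server, which runs it as a
# distributed job and publishes the persisted result to your portal.
result = ndvi_chain.save("NDVI_result")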

 

The ideal deployment of raster analytics comprises three server sites performing the primary roles of portal hosting server, raster analysis server, and image hosting server. Two licenses are required for raster analytics: ArcGIS Enterprise and ArcGIS Image Server.

Raster Analytics System Diagram

The hosting server is your portal's server for standard portal administration and operations, such as managing and distributing processing, storage, and publication of results to raster analysis servers, image servers, and data stores.  It also hosts the ArcGIS Data Store for GIS data and allows users to publish data and maps to a wider audience as web services.

 

Raster analytics jobs are processed by image servers dedicated to raster analytics, consisting of one or more servers, each with multiple processing cores. The image processing and raster analytics tasks are distributed at the tile level or scene level depending on the tools and functions used. Raster analytics directs the processing results either to the ArcGIS Data Store on the hosting server for feature data products, or to the raster data store for imagery and raster data products. The raster data store can be implemented using distributed file share storage or using cloud storage such as Amazon S3 or Microsoft Azure blob storage.

 

The image hosting server hosts all the image services generated by the raster analysis server. It includes the raster data store configured with the Image Server Manager, which manages distributed file share storage and cloud storage of image services using Amazon S3 or Microsoft Azure blob storage. The image hosting server stores and returns results requested by members of your enterprise.

 

System configuration apps assign the roles of the servers and data stores, and also set the permission structure for all the users across your enterprise. This facilitates optimal flexibility in configuring and implementing your raster analytics system to address specific projects. Multiple servers can be scaled up for raster analytics processing and storage as required.

 

See the tutorial to set up a base ArcGIS Enterprise deployment.

 

More Information

To learn more about raster analytics using ArcGIS Enterprise and ArcGIS Image Server, check out this video.

Explore the help topics to get started with raster analytics.

To see how raster analytics is being used, check out the Chesapeake Conservancy and Distributed Image Processing presentation, or attend the Plenary session at the 2017 Esri User Conference in San Diego to hear about Chesapeake Conservancy’s experience processing and sharing the entire Chesapeake watershed using raster analytics.

 

Please plan to attend a few presentations addressing raster analytics at the 2017 Esri User Conference:

Raster Analytics at Esri UC2017

The June 2017 update of ArcGIS Online includes some useful capabilities for displaying imagery served by your image services. These capabilities give you greater control for visualizing the information contained in your image services. When we talk about rendering, we’re not talking about making soap out of fat. Here at Esri, rendering is the process of displaying your data. How an image service is rendered depends on what type of data it contains and what you want to show.

 

Once you search for and add a layer, and your image is displayed in Map Viewer, click the More Options icon, then Display, to open the Image Display pane.

Image Display Options

You see a new category named Image Enhancement. This is where the real fun begins.

Image Enhancement pane

The Symbology Type options include Unique Values, Stretch and Classify. Unique Values and Classify renderers work with single-band image services, while the Stretch renderer works on both single and multiple band images.

 

Unique Values Renderer

Unique Values symbology symbolizes each value in the raster layer individually and is supported on single-band layers with a raster attribute table. The symbology can be based on one or more attribute fields in the dataset. The colors are read from the raster attribute table; if they are not available, the renderer assigns a color to each value in your dataset. This symbology type is often used with single-band thematic data, such as land cover, because of its limited number of categories. It can also be used with continuous data if you choose a color ramp that is a gradient.

Unique Values Renderer

  1. Use the Field drop-down to select the field you want to map. The field is displayed in the table.
  2. Click the Color Ramp drop-down and click on a color scheme. If your image service already has a color ramp, such as the NLCD service in this example, it is displayed by default.
  3. The colors in the Symbol column and Labels can be edited as required.
  4. Click Apply to display the rendering in the layer.

 

Stretch

The stretch parameters improve the appearance of your image by adjusting the image histogram to control brightness and contrast. Either single- or multiple-band images can be stretched. For multiple-band images, the stretch is applied to the band combination previously chosen in the RGB Composite options. The stretch options enhance various ground features in your imagery to optimize information content.

1.   Click the Stretch Type drop-down arrow and choose the stretch type to use. The following contrast enhancements determine the range of values that are displayed.

  • None – No additional image enhancement will be performed
  • Minimum and Maximum – Displays the entire range of values in your image. Additional changes can be made by editing the values in the Min-Max grid (available only when Dynamic range adjustment is turned off.)
  • Standard Deviation – Displays values within a specified number of standard deviations of the mean
  • Percent Clip – Set a range of values to display. Use the two text boxes to edit the top and bottom percentages.

2.   If the Stretch type is set to an option other than None, the following additional image enhancement options will be available.

  • Dynamic range adjustment – Performs one of the selected stretches, but limits the range of values to what is currently in the display window. This option is always turned on if the imagery layer does not have global statistics.
  • Gamma – Stretches the middle values in an image but keeps the extreme high and low values constant.

3.   For single-band layers, you can optionally choose a new color scheme from the Color Ramp drop-down menu after applying a stretch method on the layer.

4.   Click Apply to display the rendering in the layer.
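
The same enhancements can be applied programmatically with the stretch raster function in the ArcGIS API for Python. A minimal sketch, assuming img is an ImageryLayer object and using illustrative percent-clip and gamma values:

from arcgis.raster.functions import stretch

# Percent clip stretch with a mild gamma boost (one gamma value per displayed
# band) and dynamic range adjustment enabled.
enhanced = stretch(img,
                   stretch_type="PercentClip",
                   min_percent=0.5,
                   max_percent=0.5,
                   gamma=[1.2, 1.2, 1.2],
                   dra=True)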

Here’s a WorldView-2 natural color image of Charlotte, NC, using the default no stretch:

Multispectral Image, No Stretch

And here is the same imagery layer with the top 2% and bottom 20% of the histogram omitted:

Multispectral Imagery, Percent Stretch

Classify Renderer

Classify symbology is supported by single band layers. It allows you to group pixels together in a specified number of classes. The following are the different settings available with the Classify symbology.

  • Field – Represents the values of the data.
  • Method – Refers to how the break points are calculated:
    • Defined Interval – You specify an interval to divide the range of pixel values, and the number of classes is calculated automatically.
    • Equal Interval – The range of pixel values is divided into equally sized classes, where you specify the number of classes.
    • Natural Breaks – The class breaks are determined statistically by finding adjacent feature pairs between which there is a relatively large difference in data value.
    • Quantile – Each class contains an equal number of pixels.
  • Classes – Sets the number of groups.
  • Color Ramp – Allows you to choose the color ramp for displaying the data.

Classify symbology works with single band layers that have either a Raster Attribute Table or Histogram values. If a histogram is absent, it is generated when you select the symbology type.

 

Here’s the classified map of Charlotte, specifying 15 classes and using the Natural Breaks method for determining class breaks:

Class Map

Summary

These new Map Viewer image rendering capabilities are similar to what you are used to in ArcMap and ArcGIS Pro. As of this release, Scene Viewer also supports imagery layers; however, we are still working on bringing the new Map Viewer image rendering capabilities into Scene Viewer. Check out these new imagery capabilities in ArcGIS Online and see how they can enhance the stories behind your data.

 

Please leave us comments below for any future enhancements you’d like to see. And check back in a few months; we have a lot of other cool stuff planned for imagery in upcoming releases.

Imagery can add valuable information and context to a wide array of GIS projects. For example, you can detect impervious surfaces for storm water management, map and manage riparian corridors, or track what’s changing in your county. Sometimes, though, incorporating imagery into your GIS can feel overwhelming—how can your system handle that much data?

 

Enter raster analytics, a distributed processing, storage, and sharing system designed to quickly process large collections of aerial, drone, or satellite imagery, then extract and share meaningful information for critical decision support. Raster analytics can be run locally, but you can also pair it with distributed cloud computing to maximize efficiency. Image processing and analysis jobs that used to take days or weeks can be completed in minutes or hours, bringing imagery projects that were impossibly large or daunting within reach.

 

Raster analytics leverages ArcGIS Enterprise, expanded with ArcGIS Image Server configured for distributed raster analysis, to integrate the components of the raster analytics system and support scalable, real-world workflows.

 

What can raster analytics do?

By leveraging ArcGIS Enterprise with ArcGIS Image Server, raster analytics enables you to:

  • Quickly process massive imagery or raster datasets in a scalable environment
  • Execute advanced, customized raster analysis
  • Share results with individuals, departments, and organizations within or outside your enterprise

The scalable environment of raster analytics empowers you to perform computationally intensive image processing that would otherwise be out of reach or cost-prohibitive. When implemented on-site, raster analytics uses distributed processing to improve efficiency. You can also maximize efficiency by exploiting cloud platforms such as Amazon Web Services or Microsoft Azure, which allow you to dynamically increase or reduce your capacity based on the size and urgency of your projects.  Either implementation can save you time, money, and resources.

 

Raster analytics uses all the advanced image processing and analysis capabilities of ArcGIS Pro to maximum advantage. Built-in raster functions cover preprocessing, orthorectification and mosaicking, remote sensing analysis, and an extensive range of math and trigonometry operators, while your custom functions can extend the platform’s analytical capabilities even further.

Raster Analytics System Diagram

Raster analytics is also designed to streamline collaboration and sharing. Users across your enterprise can contribute data, processing models, and expertise to your imagery project, then share results with individuals, departments, and organizations in your enterprise.

 

Finally, raster analytics integrates your image processing and analysis with the world’s leading GIS platform, and allows users to seamlessly draw on Living Atlas of the World, the world’s largest collection of online digital maps and imagery.

 

How is raster analytics used today?

The Chesapeake Conservancy, working with the University of Vermont and WorldView Solutions, was tasked by the Chesapeake Bay Program to produce one-meter-resolution land cover maps covering 100,000 square miles of the Chesapeake Bay watershed. These high-resolution land cover maps, which classify natural and man-made landscape features, are crucial for supporting watershed and storm water management and conservation, and for reducing pollution flowing into the bay.

 

To produce this essential dataset, the Chesapeake Conservancy needed to process over 20 terabytes of raster data and categorize it into twelve land cover types. This project took a daunting 18 months to complete using their local machine resources. As a result, Chesapeake Conservancy is now working with raster analytics in the cloud to make this timeline more efficient and cost-effective going forward.

As a proof of concept, they used raster analytics to produce a persistent one-meter land cover dataset of Kent County, Delaware (798 square miles). The Kent County project—more than 30 GB of raster data comprising 3.8 billion pixels—ran on a ten-machine cluster, each machine with twenty cores, and completed in less than 5 minutes. This same job took days to process on their local machines.

 

The Chesapeake Conservancy is now engaged in reprocessing the entire Chesapeake watershed to benchmark time and cost savings using raster analytics for the project. Using raster analytics for projects in the future will mean that the Chesapeake Conservancy can accomplish ambitious projects in a timely and cost-effective manner, without having to spend resources to acquire, configure, and maintain a large computing and storage infrastructure.

See the Chesapeake Conservancy and Distributed Image Processing presentation for more details, or check out the Plenary session at the 2017 Esri User Conference in San Diego to hear about Chesapeake Conservancy's experience processing and sharing the entire Chesapeake watershed using raster analytics.

 

More Information:

To learn more about raster analytics using ArcGIS Enterprise and ArcGIS Image Server, check out this video.

Explore the help topics to get started with raster analytics.

Please plan to attend a couple of presentations addressing raster analytics at the 2017 Esri User Conference:

Raster Analytics at Esri UC2017

Esri and Garmin are pleased to announce that Garmin’s VIRB Action Cameras (VIRB Ultra 30, VIRB X, VIRB XE, and VIRB Elite) have full support for Full Motion Video (FMV) for ArcGIS!  

 

To leverage this feature, users can download the latest version of VIRB Edit software (version 5.1.1 and above) from http://www8.garmin.com/support/download_details.jsp?id=6591 or simply search for "VIRB Edit software" at http://www.garmin.com.

 

The Full Motion Video add-in is a free download for ArcMap 10.3 through 10.5, and will be coming in ArcGIS Pro version 2.1 by the end of 2017.  Current users of ArcMap can find information on FMV at http://esri.com/FMV.  The Full Motion Video add-in allows users to manage, display, and analyze geospatially enabled videos within their GIS.  Feature data can be digitized from video frames, and GIS features can be overlaid onto the video during playback.  The video search tool provides a powerful data management capability, enabling users to quickly find archived videos based on attribute data or a simple geographic search. 

 

The Garmin VIRB cameras record GPS and camera orientation data with the video.  This position and orientation metadata enables FMV for ArcGIS to locate the sensor on the map, and if the camera footprint (field of view) is aimed toward the ground, the moving video footprint can also be displayed in ArcGIS. 

 

Instructions for extracting the VIRB position and orientation metadata and then processing with the Full Motion Video Multiplexer are available in this document:  http://esriurl.com/GarminVirbFMV
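If you prefer to script the metadata step, the sketch below writes a MISB-style CSV of the general kind the Multiplexer consumes. The field names follow MISB 0601 conventions, but the authoritative template, field set, and units are defined in the document linked above; every value here is invented purely for illustration.

```python
# Illustrative only: write a MISB-style metadata CSV for use with the FMV
# Multiplexer. Consult the linked document for the exact template.
import csv

fields = [
    "Precision Time Stamp", "Sensor Latitude", "Sensor Longitude",
    "Sensor Ellipsoid Height", "Platform Heading Angle",
    "Platform Pitch Angle", "Platform Roll Angle",
    "Horizontal Field of View", "Vertical Field of View",
]

# One made-up sample per time step (a real file has a row per video frame
# or metadata packet).
rows = [{
    "Precision Time Stamp": 1499083200000000,  # microseconds since epoch
    "Sensor Latitude": 32.7157, "Sensor Longitude": -117.1611,
    "Sensor Ellipsoid Height": 120.0,
    "Platform Heading Angle": 90.0, "Platform Pitch Angle": -15.0,
    "Platform Roll Angle": 0.0,
    "Horizontal Field of View": 60.0, "Vertical Field of View": 45.0,
}]

with open("virb_metadata.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fields)
    writer.writeheader()
    writer.writerows(rows)
```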

Imagery and lidar are an indispensable part of your GIS. From background imagery to change detection to feature extraction and more, they are transforming the geospatial world. At the Esri Imaging & Mapping Forum (Saturday, July 8 – Tuesday, July 11) and the Esri User Conference (Monday, July 10 – Friday, July 14) in San Diego next month, check out the following events to learn the latest about Esri’s imagery and lidar capabilities. (And if you haven’t already, don’t forget to register online for the Esri UC and IMF!)

 

 

Esri Imaging & Mapping Forum

Before the UC kicks off, you can dive into the world of imagery, lidar, and 3D at the Esri Imaging & Mapping Forum in San Diego (Saturday, July 8 – Tuesday, July 11). At this unique forum, you'll get a close, hands-on look at capturing and mapping technologies that integrate imaging, lidar, 3D, drone technology, multidimensional analysis, and modeling to meet organizational challenges. Register online today.

 

Imagery@UC

At the 2017 Esri UC, Imagery@UC sessions offer a valuable opportunity to hear about the latest developments in imagery from Esri’s imagery leaders. Check them out Tuesday morning:

 

Room 29C, San Diego Convention Center

Tuesday, July 11

8:30 am–9:45 am    Modernizing Remote Sensing with the Science of Where in Esri’s New Imagery Products

10:15 am–11:30 am    Expanding the ArcGIS Platform with Advanced Image Processing and Analytics

 

Imagery Showcase

Check out the Imagery Showcase in the Esri Expo, where you can:

  • Learn the latest best practices for managing, analyzing, and sharing your imagery and lidar using the ArcGIS platform
  • Explore how Esri helps you leverage imagery and lidar to support scalable, real-world workflows
  • Connect with Esri professionals who can answer your imagery and lidar questions and help you get started

 

The Imagery Showcase will be open:

Exhibit Hall B1, San Diego Convention Center

Tuesday, July 11                9:00 am–6:00 pm

Wednesday, July 12         9:00 am–6:00 pm

Thursday, July 13              9:00 am–1:30 pm

 

2017 Esri UC Imagery and Lidar Sessions

Finally, don’t forget to explore the imagery and lidar demo theaters and technical workshops offered this year at Esri UC. See something interesting? Use the links below to add the session to your online 2017 Esri UC Agenda.

 

Still have more questions about imagery at the UC? Check out the Esri UC Q&A for Imagery and Remote Sensing.

 

Tuesday, July 11

8:30 am–9:45 am    Modernizing Remote Sensing with the Science of Where in Esri’s New Imagery Products    SDCC Room 29C
9:30 am–10:15 am    Automating Imagery Workflows with Python Scripting    SDCC Demo Theater 14
9:30 am–10:15 am    Geoprocessing Sample Tools for LiDAR    SDCC Demo Theater 07
10:15 am–11:30 am    Drone Technology and Solutions with the ArcGIS Platform    SDCC Room 27B
10:15 am–11:30 am    Expanding the ArcGIS Platform with Advanced Image Processing and Analytics    SDCC Room 29C
10:15 am–11:30 am    Archaeology - Remotely Sensed Aerial Imagery    SDCC Room 23A
10:30 am–11:15 am    Building Python Raster Functions    SDCC Demo Theater 14
10:30 am–11:15 am    Creating a Hydrologically Conditioned DEM    SDCC Demo Theater 07
10:30 am–11:15 am    Working with Elevation Services    SDCC Demo Theater 05
11:00 am–11:30 am    Producing Ortho Imagery in ArcGIS    SDCC Tech Theater 19, Exhibit Hall A
11:30 am–12:15 pm    Creating Story Maps with Imagery    SDCC Demo Theater 14
12:00 pm–1:00 pm    3D Mapping from Lidar and Imagery Special Interest Group    SDCC Room 24A
1:30 pm–2:45 pm    2D and 3D Feature Extraction from New and Historical Lidar / Imagery for Change Detection with ArcGIS    SDCC Room 29C
1:30 pm–2:45 pm    Applying Elevation in your Analytic Workflows    SDCC Room 09
1:30 pm–2:15 pm    Best Practices for Managing and Serving Processed Ortho Imagery    SDCC Demo Theater 14
1:30 pm–2:45 pm    Drone2Map: An Introduction    SDCC Ballroom 06C
2:30 pm–3:15 pm    Enterprise: Building Raster Analytics Workflows    SDCC Demo Theater 14
3:15 pm–4:30 pm    Empowering Your Organization with Time Enabled Imagery and Bathymetry    SDCC Room 29C
3:15 pm–4:30 pm    Imagery Analysis and Use in Desktop    SDCC Room 07A
3:30 pm–4:15 pm    Drone2Map: Workflows for Processing a Dataset    SDCC Demo Theater 14
3:30 pm–4:15 pm    LiDAR Analysis in ArcGIS: An Introduction    SDCC Demo Theater 07
4:30 pm–5:15 pm    Enterprise: Sharing Imagery in Portal    SDCC Demo Theater 14
5:30 pm–6:15 pm    Workflows for Frame Cameras    SDCC Demo Theater 14

 

Wednesday, July 12

8:30 am–9:45 am    Drone2Map: An Introduction    Hilton Sapphire Ballroom I
8:30 am–9:45 am    Imagery Modernization Best Practices for Organizational Sharing and Management with ArcGIS    SDCC Room 28C
8:30 am–9:45 am    LiDAR and GIS: Applications and Examples    SDCC Room 08
8:30 am–9:45 am    Scientific and Multidimensional Raster Support in ArcGIS    SDCC Room 17A
9:30 am–10:15 am    ArcMap and Pro: Working with FMV Data using the Multiplexer    SDCC Demo Theater 14
9:30 am–10:15 am    Working with Elevation Services    SDCC Demo Theater 10 Online
10:15 am–11:30 am    LiDAR and ArcGIS Pro: What’s New    SDCC Room 16A
10:30 am–11:15 am    Enterprise: Managing Imagery in the Cloud    SDCC Demo Theater 14
11:30 am–12:15 pm    Enterprise: Standing Up NAIP and Landsat Image Services as a Processing Resource    SDCC Demo Theater 14
11:30 am–12:15 pm    Refining 3D Buildings Extracted from LiDAR    SDCC Demo Theater 13
12:00 pm–1:00 pm    Imagery in Electric Transmission Special Interest Group    SDCC Room 24C
12:00 pm–1:00 pm    Statistics (Imagery Focus) Special Interest Group    SDCC Room 26B
12:30 pm–1:15 pm    Point Clouds and 3D Mesh    SDCC Demo Theater 13
12:30 pm–1:15 pm    Working with Historical Aerial Imagery    SDCC Demo Theater 14
1:30 pm–2:15 pm    Enterprise: Building Multi-Modal Image Services    SDCC Demo Theater 14
2:00 pm–2:30 pm    Producing Ortho Imagery in ArcGIS    SDCC Tech Theater 19, Exhibit Hall A
2:30 pm–3:15 pm    Pro: Introduction to Stereo Imagery    SDCC Demo Theater 14
3:00 pm–3:30 pm    FMV Support in ArcGIS    SDCC Tech Theater 18, Exhibit Hall A
3:15 pm–4:30 pm    Raster Analytics in Image Server: An Introduction    SDCC Room 15B
3:15 pm–4:30 pm    Using Living Atlas Elevation Layers in Your GIS Workflows    SDCC Room 01A
3:30 pm–4:15 pm    Web AppBuilder Imagery Widgets    SDCC Demo Theater 14
4:30 pm–5:15 pm    Workflows for Managing and Serving Elevation Data    SDCC Demo Theater 14
5:30 pm–6:15 pm    Workflows for Sharing Oblique Imagery    SDCC Demo Theater 14

 

Thursday, July 13

8:30 am–9:45 am    Image Management Using Mosaic Datasets and Image Services    SDCC Room 03
10:15 am–11:30 am    Desktop ArcMap and ArcGIS Pro: Exploiting Imagery    SDCC Room 16A
10:15 am–11:30 am    Imagery Analysis and Use in Desktop    SDCC Room 07A
10:15 am–11:30 am    LiDAR and GIS: Applications and Examples    SDCC Room 03
10:30 am–11:15 am    Raster Function Processing    SDCC Demo Theater 14
11:30 am–12:15 pm    Image Segmentation and Classification    SDCC Demo Theater 14
12:30 pm–1:15 pm    Enterprise: Building Mosaic Datasets    SDCC Demo Theater 14
12:30 pm–1:15 pm    Using the National Water Model to Inform Flood Preparedness and Response    SDCC Demo Theater 16
1:30 pm–2:45 pm    Imagery Sources and Uses in ArcGIS    SDCC Room 01A
3:15 pm–4:30 pm    Imagery Support for Emergency Management    SDCC Room 29D
3:15 pm–4:30 pm    Image Segmentation and Classification in ArcGIS Pro    SDCC Room 15A

 

SDCC = San Diego Convention Center

With the largest release of data to date, the Polar Geospatial Center (PGC) has significantly expanded the high-resolution coverage of the Arctic Elevation dataset. The new data is available to ArcGIS users as a part of Esri’s ready-to-use Arctic DEM and Arctic Elevation layers, as well as Esri’s ArcticDEM Explorer and ArcticDEM Change web apps.

The Arctic Elevation dataset provides two-, five-, and eight-meter elevation data for land north of 60°N; to date, 65% of the Arctic is covered—over 51 million square kilometers. As the PGC releases new digital elevation models (DEMs) throughout 2017, high-resolution, two-meter DEMs will gradually expand and replace the coverage of older, eight-meter data. This release alone expands Esri’s Arctic Elevation dataset by an additional six terabytes of two-meter data and one terabyte of five-meter data covering Canada, Greenland, Alaska, and more. The result is more detail than ever—see the improvement for the New Siberian Islands (left), Ellesmere Island (center), and Wrangel Island (right).

Improvement using high-resolution elevation data

Arctic Elevation, offered as easy-to-access layers, maps, and apps, has numerous applications. Users can deploy these elevation layers in ArcGIS Pro, ArcMap, and custom web apps, viewing and analyzing them on the fly using dynamic functions like slope, aspect, hillshade, multi-directional hillshade, and others. Check out the Columbia Glacier below, for example, visualized in the ArcticDEM Explorer web app using an elevation-tinted hillshade.

Columbia Glacier in ArcticDEM Explorer web app
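If you would rather script the same kind of visualization, here is a minimal sketch using the ArcGIS API for Python; the search term is a placeholder, and which layer the search returns is an assumption.

```python
# A hedged sketch: apply an on-the-fly hillshade to an elevation image
# service. The function is evaluated server side at request time, so no
# elevation data is downloaded or duplicated.
from arcgis.gis import GIS
from arcgis.raster.functions import hillshade

gis = GIS()  # anonymous connection to ArcGIS Online
item = gis.content.search("ArcticDEM", item_type="Imagery Layer",
                          outside_org=True)[0]  # placeholder search
dem = item.layers[0]

shaded = hillshade(dem, azimuth=315, altitude=45)
```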

Users can also explore Arctic elevation by acquisition date to analyze how Arctic topography changes in different seasons or years. Ever wonder how quickly a particular glacier is receding? Now you can find out for yourself.

Interested in learning more? Dive into Esri’s ArcticDEM Explorer web app to interactively explore the latest high-resolution elevation data. Or, if you’re ready to start using Esri’s Arctic Elevation in your own applications, get started today with the Arctic DEM or with a free 60-day trial of ArcGIS Desktop Advanced.

Explore the planet more deeply with Esri’s new Landsat Explorer web app! Use the app to both visualize our planet and understand how the Earth has changed over time. Now you can instantly analyze more than 500,000 Landsat 8 and GLS scenes, which offer global coverage and grow by over 500 new scenes every day. The app is free, with no download or installation required.

 

Landsat Explorer enables you to use different spectral bands to go beyond what the eye can see. But the app isn’t just about viewing images with different band combinations or enhancements. Its analysis tools let you perform change detection, create custom masks, build your own indexes, generate spectral and temporal profiles, and more, all on the fly. Use the app to instantly access multispectral and temporal Landsat imagery and reveal how the Earth's surface has changed over the last forty years.

 

Curious to see how your hometown has expanded since you were a kid? Zoom to it and use the time slider to compare before-and-after images.  Want to quantify areas of agricultural use or forest burn? Use the Mask tool to identify specific types of land cover, interactively setting thresholds. Want to measure the extent of a flood, like the one in Allahabad, India, below? Select two points in time and use the Change Detection tool to highlight the affected areas.
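Conceptually, the index and change-detection tools come down to simple band arithmetic. The NumPy sketch below reproduces the idea on synthetic, co-registered band arrays; the 0.2 threshold is an arbitrary assumption for illustration, not the app’s internal logic.

```python
# Illustrative NDVI-based change mask on synthetic data.
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index, in [-1, 1]."""
    return (nir - red) / np.maximum(nir + red, 1e-6)

rng = np.random.default_rng(0)
nir_t0, red_t0 = rng.random((2, 512, 512))  # stand-ins for a "before" scene
nir_t1, red_t1 = rng.random((2, 512, 512))  # stand-ins for an "after" scene

# Flag pixels whose vegetation index dropped sharply between the two
# dates, e.g. flooded or burned areas (threshold chosen arbitrarily).
change_mask = (ndvi(nir_t0, red_t0) - ndvi(nir_t1, red_t1)) > 0.2
print(f"{change_mask.mean():.1%} of pixels flagged as changed")
```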

With Landsat Explorer, you can easily and dynamically investigate questions about geology, vegetation, agriculture, and cities anywhere in the world, including the places that matter most to you. And if you want to share your discoveries, you can save your results to ArcGIS Online or as local files. The app is driven by publicly accessible image services hosted on AWS that are directly usable in a wide range of applications, including ArcGIS Desktop.

 

Ready to get started? Open the Landsat Explorer app and click the Tutorial icon for a guided tutorial, or check out the Unlock Earth’s Secrets page to learn more about all our Landsat apps.

Planet’s high-resolution satellite constellations image the entire world every single day. If you rely on imagery from Planet, managing this wealth of available data can be a challenge. That’s why Esri has added tools for managing PlanetScope and RapidEye imagery to its array of free, open-source tools that simplify image management in ArcGIS.

 

Planet satellite image of Key West

 

With the Python toolbox for managing Planet imagery, users can create mosaic datasets to organize Planet scenes within the familiar ArcGIS environment. With these scripts, data managers can streamline or automate the creation of scalable mosaic datasets, which can then be shared as image services with users inside or outside the organization.
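For readers new to mosaic datasets, the arcpy sketch below shows the two core geoprocessing calls this kind of toolbox builds on; the paths and spatial reference are placeholders, and the Planet toolbox itself layers Planet-specific raster handling and metadata on top.

```python
# A minimal arcpy sketch: create a mosaic dataset, then load a folder of
# scenes into it. Paths are hypothetical.
import arcpy

gdb = r"C:\data\planet.gdb"        # an existing file geodatabase
name = "planetscope_md"
sr = arcpy.SpatialReference(3857)  # e.g. Web Mercator

arcpy.CreateMosaicDataset_management(gdb, name, sr)
arcpy.AddRastersToMosaicDataset_management(
    gdb + "\\" + name,
    raster_type="Raster Dataset",
    input_path=r"C:\data\planet_scenes",  # folder of downloaded scenes
)
```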

Imagery from Planet’s PlanetScope and RapidEye satellite constellations supports numerous applications—including mapping, deep learning, disaster response, precision agriculture, and temporal image analysis—that generate rich information products. Once the imagery is managed in mosaic datasets, it’s straightforward for GIS and image analysts to exploit its temporal dimension via an image service, analyze the spectral data to draw actionable conclusions, use the imagery to provide context within a GIS, and more.

The Python tools work by drawing on the Planet API to query, activate, and download PlanetScope or RapidEye Basic scenes over a given timeframe and area of interest. The imagery is then made accessible to users via the Planet Explorer app or the Planet API.  The Planet toolbox is supported in ArcMap 10.5, and can also be used in ArcGIS Pro.
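For a sense of what “query, activate, and download” looks like, here is a hedged sketch of a search request against Planet’s v1 Data API; the item type, filter values, and PL_API_KEY environment variable are illustrative, and Planet’s documentation remains the authority on current endpoints and item types.

```python
# A hedged sketch of a Planet Data API quick-search (v1). Requires a
# Planet API key supplied as the HTTP basic-auth username.
import os
import requests

search = {
    "item_types": ["PSScene4Band"],  # PlanetScope 4-band scenes
    "filter": {
        "type": "AndFilter",
        "config": [
            {"type": "DateRangeFilter", "field_name": "acquired",
             "config": {"gte": "2017-06-01T00:00:00Z",
                        "lte": "2017-06-30T00:00:00Z"}},
            {"type": "GeometryFilter", "field_name": "geometry",
             "config": {"type": "Point", "coordinates": [-81.78, 24.55]}},
        ],
    },
}

resp = requests.post("https://api.planet.com/data/v1/quick-search",
                     auth=(os.environ["PL_API_KEY"], ""), json=search)
resp.raise_for_status()
for feature in resp.json()["features"]:
    print(feature["id"])  # scene IDs to activate and download
```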

Ready to get started? Download the Python toolbox for managing Planet imagery, learn more about Planet’s high-resolution imagery, or try ArcGIS’s imagery capabilities with a free 60-day trial of ArcGIS Desktop.