
Imagery and Remote Sensing


In the aftermath of a natural disaster, response and recovery efforts can be drastically slowed down by manual data collection. Traditionally, insurance assessors and government officials have to rely on human interpretation of imagery and site visits to assess damage and loss. But depending on the scope of a disaster, this necessary process could delay relief to disaster victims.

Article Snapshot: At this year’s Esri User Conference plenary session, the United Services Automobile Association (USAA) demonstrated the use of deep learning capabilities in ArcGIS to perform automated damage assessment of homes after the devastating Woolsey fire. This work was a collaborative prototype between Esri and USAA to show the art of the possible in doing this type of damage assessment using the ArcGIS platform.

The Woolsey Fire burned for 15 days, consuming almost 97,000 acres and damaging or destroying thousands of structures. Deep learning within ArcGIS was used to quickly identify damaged structures within the fire perimeter, fast-tracking the time it took for adjusters to process insurance claims for impacted residents and businesses.

The process included capturing training samples, training the deep learning model, running inferencing tools and detecting damaged homes – all done within the ArcGIS platform. In this blog, we’ll walk through each step in the process.

Step 1: Managing the imagery

Before the fires were extinguished, DataWing flew drones in the fire perimeter and captured high resolution imagery of impacted areas. The imagery totaled 40 GB in size and was managed using a mosaic dataset. The mosaic dataset is the primary image management model for ArcGIS to manage large volumes of imagery.

Step 2: Labelling and preparing training samples

Prior to training a deep learning model, training samples must be created to represent areas of interest – in this case, USAA was interested in damaged and undamaged buildings. The building footprint data provided by LA County was overlaid on the high resolution drone imagery in ArcGIS Pro, and several hundred homes were manually labelled as Damaged or Undamaged (a new field called “ClassValue” in the building footprint feature class was attributed with this information). These training features were used to export training samples using the Export Training Data for Deep Learning tool in ArcGIS Pro, with the metadata output format set to ‘Labeled Tiles’.
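As a sketch of what that export step looks like when scripted, here is a hedged example using the same geoprocessing tool from arcpy; the paths, tile sizes, and strides are illustrative assumptions, not the values used in the actual project:

```python
import arcpy
arcpy.CheckOutExtension("ImageAnalyst")

# Hypothetical inputs: the drone imagery mosaic and the labeled
# building footprints (with the "ClassValue" field described above)
in_raster = r"C:\WoolseyFire\drone_imagery.crf"
labels = r"C:\WoolseyFire\data.gdb\building_footprints"

# Export image chips with 'Labeled Tiles' metadata for classification
arcpy.ia.ExportTrainingDataForDeepLearning(
    in_raster=in_raster,
    out_folder=r"C:\WoolseyFire\chips",
    in_class_data=labels,
    image_chip_format="TIFF",
    tile_size_x=256, tile_size_y=256,
    stride_x=128, stride_y=128,
    metadata_format="Labeled_Tiles",
    class_value_field="ClassValue",
)
```

The output folder of chips is what gets handed to the training step that follows.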

Resultant image chips (Labeled Tiles used for training the Damage Classification model)

Step 3: Training the deep learning model

ArcGIS Notebooks was used for training purposes. ArcGIS Notebooks is pre-configured with the necessary deep learning libraries, so no extra setup was required. With a few lines of code, the training samples exported from ArcGIS Pro were augmented. Using the arcgis.learn module in the ArcGIS Python API, optimum training parameters for the damage assessment model were set, and the deep learning model was trained using a ResNet34 architecture to classify all buildings in the imagery as either damaged or undamaged.

The model converged at around 99% accuracy
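In code, the training step described above looks roughly like the following arcgis.learn sketch; the data path, batch size, and epoch count are illustrative assumptions:

```python
from arcgis.learn import prepare_data, FeatureClassifier

# Hypothetical path to the 'Labeled Tiles' chips exported from ArcGIS Pro;
# prepare_data also applies the default augmentations mentioned above
data = prepare_data(r"/arcgis/home/chips", batch_size=32)

# A feature classifier with a ResNet34 backbone, classifying each
# building as Damaged or Undamaged
model = FeatureClassifier(data, backbone="resnet34")

# Find a reasonable learning rate, then train
lr = model.lr_find()  # recent API versions return a suggested rate
model.fit(epochs=10, lr=lr)

model.show_results()              # spot-check predictions vs. ground truth
model.save("damage_classifier")   # persist the model for inferencing
```

`show_results` gives the quick qualitative comparison against ground truth labels described below, and `save` writes out the model used in the inferencing step.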

Once complete, the ground truth labels were compared to the model classification results to get a quick qualitative idea on how well the model performed.

Model Predictions

For complete details on the training process, see our post on Medium.

Finally, the trained model was saved and used for inferencing purposes.

Step 4: Running the inferencing tools

Inferencing was performed using the ArcGIS API for Python. By running inferencing inside ArcGIS Enterprise using the model.classify_features function in Notebooks, we can run inferencing at scale.
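A hedged sketch of that inferencing call follows; the exact parameters of classify_features vary by API version, so treat the argument names here as assumptions and check the arcgis.learn reference (the portal URL, item IDs, and paths are placeholders):

```python
from arcgis.gis import GIS
from arcgis.learn import FeatureClassifier

gis = GIS("https://example.org/portal")  # hypothetical Enterprise portal

# Load the model saved after training (path is illustrative)
model = FeatureClassifier.from_model(
    r"/arcgis/home/damage_classifier/damage_classifier.emd"
)

# Building footprints and drone imagery published to the portal
buildings = gis.content.get("<building-footprints-item-id>").layers[0]
imagery = gis.content.get("<drone-imagery-item-id>").layers[0]

# Classify every footprint as Damaged/Undamaged; parameter names
# here are illustrative assumptions
predictions = model.classify_features(buildings, imagery, "ClassValue")
```

The result is the classified feature service viewable in ArcGIS Pro, as described below.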

The result is a feature service that can be viewed in ArcGIS Pro. (Here’s a link to the web map).

Over nine thousand buildings were automatically classified using deep learning capabilities within ArcGIS!

The map below shows the damaged buildings marked in red, and the undamaged buildings in green. With 99% accuracy, the model approaches the performance of a trained adjuster – what used to take days or weeks can now be done in a matter of hours.

Inference results

Step 5: Deriving valuable insights

Business Analyst: Now that we had a better understanding of the impacted area, we wanted to understand which members were impacted by the fires. When deploying mobile response units to disaster areas, it’s important to know where the most at-risk populations are located, for example, the elderly or children. Using Infographics from ArcGIS Business Analyst, we extracted valuable characteristics and information about the impacted community and generated a report to help mobile units make decisions faster.

Get location intelligence with ArcGIS Business Analyst

Operations Dashboard: Using an operations dashboard containing enriched feature layers, we created easy, dynamic access to the status of any structure, the value of the damaged structures, the affected population, and much more.



Using the deep learning, imagery, and data enrichment capabilities of the ArcGIS platform, we can quickly distinguish damaged from undamaged buildings and identify the most at-risk populations, and organizations can use this information for rapid response and recovery activities.

 More Resources:

Deep Learning in ArcGIS Pro

Distributed Processing using Raster Analytics

Image Analysis Workflows

Details on the model training of the damage assessment 

ArcGIS Notebooks


Vinay Viswambharan

Product manager on the Imagery team at Esri, with a zeal for remote sensing and everything imagery.

Rohit Singh

Development Lead - ArcGIS API for Python. Applying deep learning to the Science of Where @Esri.

The new Getting to Know ArcGIS Image Analyst guide gives GIS professionals and imagery analysts hands-on experience with the functionality available with the ArcGIS Image Analyst extension.

It’s a complete training guide to help you get started with complex image processing workflows. It includes a checklist of tutorials, videos and lessons along with links to additional help topics.


Task Checklist for getting started with ArcGIS Image Analyst


This guide is useful to anyone interested in learning how to work with the powerful image processing and visualization capabilities available with ArcGIS Image Analyst. Complete the checklist provided in the guide and you’ll get hands-on experience with:


  • Setting up ArcGIS Image Analyst in ArcGIS Pro
  • Extracting features from imagery using machine learning image classification and deep learning methods
  • Processing imagery quickly using raster functions
  • Visualizing and creating data in a stereo map
  • Creating and measuring features in image space
  • Working with Full Motion Video


Download the guide and let us know what you think! Take the guide survey to provide us with direct feedback.


The ArcGIS Pro 2.3.2 software patch enables mosaic datasets created or modified by Pro 2.3 and 10.7 to be read and modified by earlier versions (ArcGIS Pro 2.1 and 10.5 or later).


If you created or modified a mosaic dataset using Pro 2.3 or 10.7, you can update it and make it compatible with earlier versions by following the steps below.


  1. Open Pro 2.3.2.
  2. In the Catalog pane, navigate to your mosaic dataset. Right-click and select Properties from the drop-down menu.
  3. Click Defaults which displays Image Properties. Scroll down to Maximum Number of Rasters Per Mosaic, and change the value to any number. Press <Tab> to update the field.
  4. Change the Maximum Number of Rasters Per Mosaic property back to the original value and press <Tab> to update the field again. 
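The same reset can be scripted: toggling the property and restoring it forces the mosaic dataset object to be rewritten at the patched version. A minimal sketch, assuming a hypothetical mosaic dataset path and that you know your dataset's current value:

```python
import arcpy

md = r"C:\data\imagery.gdb\aerial_md"  # hypothetical mosaic dataset path
original_value = 20                    # whatever your dataset currently uses

# Nudge the property to force a rewrite of the mosaic dataset object...
arcpy.management.SetMosaicDatasetProperties(md, max_num_per_mosaic=original_value + 1)

# ...then restore the original value
arcpy.management.SetMosaicDatasetProperties(md, max_num_per_mosaic=original_value)
```

Run this from a Python window in Pro 2.3.2 so the rewrite happens at the patched version.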

This resets the mosaic dataset object to the new Pro 2.3.2 version.


Update to ArcGIS Pro 2.3.2 by going to My Esri or by using the in-app software updater.

If you create or modify a mosaic dataset in Pro 2.3, it can only be read and modified by ArcGIS Pro 2.3 and ArcMap 10.7 and served with ArcGIS Image Server 10.7 or newer. If you intend to publish your mosaic dataset to an image server prior to 10.7, do not create or edit it using Pro 2.3.


Note that for ArcGIS Pro 2.3, significant changes were made to the internal structure of the mosaic dataset, so once modified using Pro 2.3, the updated mosaic dataset cannot be read by older versions.


In general, mosaic datasets created with older versions of ArcGIS can be read and handled with newer versions of ArcGIS. However, a mosaic dataset created with a newer version of ArcGIS may not be backwards compatible with older versions.


See the table below for mosaic dataset compatibility:


 Mosaic Dataset compatibility between versions


A mosaic dataset created with a newer version, but not using any features new to that version, can sometimes be read with an older version. However, this may cause incompatibility issues.




Do you have imagery from an aerial photography camera (whether a modern digital camera or scanned film) and the orientation data either by direct georeferencing or the results of aerial triangulation? If yes, you’ll want to work with a mosaic dataset, and load the imagery with the proper raster type.


The mosaic dataset provides the foundation for many different use cases, including:

  • On-the-fly orthorectification of images in a dynamic mosaic, for direct use in ArcGIS Pro or sharing through ArcGIS Image Server.
  • Production of custom basemaps from source imagery.
  • Managing and viewing aerial frame imagery in stereo.
  • Accessing images in their Image Coordinate System (ICS).

There are different raster types that support the photogrammetric model for frame imagery. If you have existing orientation data from ISAT or Match-AT, you can use the raster types with those names to directly load the data (see the Help here).


For a general frame camera, you’ll want to know how to use the Frame Camera raster type and we have recently updated some helpful resources:  

UI for automated script
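Loading frame imagery can also be scripted. Assuming you have saved the Frame Camera raster type configuration (frames table, cameras table) to a raster type file from the tool dialog, a sketch with hypothetical paths might look like:

```python
import arcpy

gdb = r"C:\data\aerial.gdb"
md = gdb + r"\frames_md"

# Create the mosaic dataset (spatial reference here is an example: UTM 11N)
arcpy.management.CreateMosaicDataset(gdb, "frames_md",
                                     arcpy.SpatialReference(26911))

# Add the aerial frames using a saved Frame Camera raster type file,
# which carries the frames/cameras table references
arcpy.management.AddRastersToMosaicDataset(
    md,
    raster_type=r"C:\data\FrameCamera.art",  # hypothetical saved raster type
    input_path=r"C:\data\frames",
)
```

With the frames loaded this way, the mosaic dataset supports the on-the-fly orthorectification, stereo, and ICS use cases listed above.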


Further information:

  • Note that if your imagery is oblique, the Frame Camera raster type supports multi-sensor oblique images. Refer to the help documentation for configuration advice.
  • If you want to extract a digital terrain model (DTM) from the imagery, or improve the accuracy of the aerial triangulation, see the Ortho Mapping capabilities of ArcGIS Pro (advanced license).
  • If you are seeking additional detail on the photogrammetric model used within the Frame Camera raster type, see this supplemental document.

Did you know there is a huge repository of powerful Python Raster Functions that you can use for raster analysis and visualization? On the Esri/raster-functions repository on GitHub, you can browse, download, and utilize customized raster functions for on-the-fly processing on your desktop or in the cloud.

Esri's raster functions GitHub repository

What are Python raster functions, you ask?

A raster function is a sneaky way to perform complex raster analysis and visualization without taking up more space on your disk or more time in your day, with on-the-fly processing. A single raster function performs an analysis on an input raster, then displays the result on your screen. No new dataset is created, and pixels get processed as you pan and zoom around the image. You can connect multiple raster functions in a raster function chain and you can turn it into a raster function template by setting parameters as variables.

A Python raster function is simply a custom raster function. A lot of raster functions come with ArcGIS out-of-the-box, but if you don’t find what you’re looking for or you want to create something specific to your needs, you can script your own with Python.
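To give a feel for what a Python raster function looks like, here is a minimal sketch following the template structure the repository uses: a hypothetical band-averaging "grayscale" function (real functions on the repo also handle masks, no-data values, and richer metadata):

```python
import numpy as np


class Grayscale:
    """A minimal custom Python raster function: averages the input
    bands into a single grayscale band. The method names below are
    the standard template ArcGIS looks for; treat this as a sketch,
    not a drop-in replacement for the repo's functions."""

    def getParameterInfo(self):
        # Declare the function's single input parameter
        return [{
            'name': 'raster',
            'dataType': 'raster',
            'required': True,
            'displayName': 'Input Raster',
            'description': 'Multiband raster to convert to grayscale.',
        }]

    def updateRasterInfo(self, **kwargs):
        # The output raster has exactly one band
        kwargs['output_info']['bandCount'] = 1
        return kwargs

    def updatePixels(self, tlc, shape, props, **pixelBlocks):
        # Average across the band axis for each requested pixel block
        bands = np.asarray(pixelBlocks['raster_pixels'], dtype='f4')
        pixelBlocks['output_pixels'] = bands.mean(axis=0).astype('f4')
        return pixelBlocks
```

Because `updatePixels` runs per pixel block as you pan and zoom, no new dataset is ever written to disk.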

There are a lot of Python raster functions already written and posted for everyone to use, and they’re easy to download and use in ArcGIS. And some of them are unbelievably cool.

For example: Topographic Correction function

The Topographic C Correction function, written by Gregory Brunner from the St. Louis Regional Services office, essentially removes the hillshade from orthophotos. As you can imagine, imagery over mountainous areas or regions with rugged terrain can be difficult to classify accurately because pixels may belong to the same land cover class but some fall into shadow due to varying slopes and aspects. With the topographic correction function, you can get a better estimate of pixel values that would otherwise be impacted by hillshade. The result is a sort of flattening of the image, and it involves some fairly complex math.

Hillshade removal effect

Why should you care?

Okay, so now you know there’s a repository of Python raster functions. What’s next?

  1. Explore the functions you may need.
    Some of the functions on the repository were written for specialized purposes and aren’t included with the ArcGIS installation, such as the Topographic C Correction function (above) or the Linear Spectral Unmixing function [contributed by Jacob Wasilkowski, also from the St. Louis Esri Regional office].
  2. Try writing your own Python raster function.
    A lot of what’s on the GitHub repository is already in the list of out-of-the-box raster functions, but you can open the Python scripts associated with each one, customize them, and save them as new Python raster functions. This can be a great learning tool for those new to the process.
  3. Watch the repo for more functions.
    There are currently over 40 functions listed, and we are continually adding more.
  4. Contribute!
    Have you written something that you can share with the broader community? Do you have ideas for cool raster functions? Add to the conversation by commenting below!


Get Started

To easily access all the Python Raster Functions in the GitHub repository, simply click the Clone or Download button on the repository code page, and choose to download the raster functions as a ZIP file.

Click download ZIP button to get the full repo

Extract the zip folder to your disk, then use this helpful Wiki to read about using the Python Raster Functions in ArcGIS Pro.


For an example tutorial on using the Python Raster Functions, check out the blog on the Aspect-Slope function.


Enjoy exploring!

One of the most important components of a supervised image classification is excellent training sites. Training an accurate classification model requires that your training samples represent distinct spectral responses recorded from the remote sensing platform – a training sample for vegetation should not include pixels with snow or pavement, and samples for water classification should not include pixels with bare earth. Using the spectral profiles chart, you can evaluate your training samples before you train your model.

If you use the Training Samples Manager, creating the chart takes one simple step. If you created your training samples separately, where each polygon or point is a different record in the feature class, it just takes a quick geoprocessing tool before creating the chart if you want to look at the average spectral profiles for each class on one graph.

The purpose of this blog is not to go through the entire image classification workflow from end-to-end, but simply to show you how to use spectral profiles to guide you in creating training samples. For example, the spectral profile example below tells you that the Water training sites are significantly distinct, but that Golf Course and Healthy Vegetation may be too similar to yield an accurate result.


Example of spectral profile

Of course, remotely sensed imagery with large-ish pixel sizes (e.g. Landsat with 30m resolution) is bound to have multiple land cover categories within a single pixel. Still, it’s important to create good training samples in regions where pixels are easily identifiable as a given land cover type, and these samples become even more important when working with lower resolution data or when trying to identify more land cover categories.

In this example, I used image classification to get an understanding of the amount of land used for agriculture in the Imperial Valley in Southern California, a region situated in the Colorado Desert with high temperatures and very little rainfall.

Imperial Valley in Imperial County, California


Scenario 1: With the Training Samples Manager

Using the Training Samples Manager in ArcGIS Pro to generate training samples allows you to create a feature class that’s already organized by class name and class ID according to a schema.

In this analysis, I’m using a schema made up of five land cover types: Barren, Planted/Cultivated, Shrubland, Developed, and Water. Using the drawing tools, I’ve created several training samples for each category. Each time I draw a new training sample, a new record is added to the list in the Training Samples Manager. If I tried to create a Spectral Profile Chart with that many training samples, I’d have to select every record for each land cover class. Instead, I’ll use the Collapse tool to combine all the training samples for a given class into a single record. Then I’ll click the Save button to save my training samples as a feature class.


Collapse training samples for each category

Scenario 2: Without the Training Samples Manager

If you have a feature class with training samples that you created outside of the Training Samples Manager, where each training site is a separate record in the feature class, you need to run the Dissolve geoprocessing tool before creating a chart if you want to see the average spectral profiles for all your training samples at once. Use the class name or class value as the Dissolve field to combine all records associated with a given land cover class into a single multi-part polygon.
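The Dissolve step can be scripted as well; this sketch assumes a hypothetical training-sample feature class with a "Classname" field identifying the land cover class of each record:

```python
import arcpy

# Hypothetical feature class with one record per training polygon
samples = r"C:\data\classify.gdb\training_samples"

# Combine all records sharing a class name into one multipart polygon
# each, so selecting one polygon selects every training site for that class
arcpy.management.Dissolve(
    samples,
    r"C:\data\classify.gdb\training_samples_dissolved",
    dissolve_field="Classname",
    multi_part="MULTI_PART",
)
```

The dissolved output is what you point the spectral profile chart at in the steps below.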

To view the spectral profile for one training sample at a time interactively (e.g. to view each individual training site for Developed), skip this step entirely and start working with your chart.

Use Dissolve to "collapse" records in the training samples feature class

Charting the Spectral Profiles

At this point, using your imagery and the training samples feature class, you can create your spectral profiles chart:

  1. Right-click on the image to be classified in the Contents pane
  2. Select Create Chart > Spectral Profile
  3. In the Chart Properties pane, choose Mean Line as the Plot Type.
  4. Use the Feature Selector tool to select one of the polygons. Remember that because we used Collapse or Dissolve, selecting one polygon means you are selecting all the training sites for the land cover category represented by that polygon.
  5. Symbolize the profile lines to match the color of the land cover type and change the label name so you can easily assess the chart.
    **  Pro Tip: To change the label of the profile, type the name in the Label field on the Chart Properties pane and hit TAB  **
  6. Try out different chart types to see the types of information you can glean from them – do you see outliers? Consistent trends? Similar profiles? Distinct categories?

Below is the spectral profile chart I created using the imagery and training samples for the Imperial Valley study. I used the “Medium” (grey) theme in the chart to make it easier to view the profiles.

Spectral profile of land cover training samples in Imperial Valley study

Assessment of Spectral Profiles

At first glance, I can tell that the Planted/Cultivated, Water, and Barren land cover classes have profiles that are distinct enough that I can expect good initial results for classification of these classes. However, the Developed and Shrubland profiles are a little too close for comfort: they have the same general shape and the average reflectance values are similar at each wavelength. From this, I can choose whether I want to re-create my training samples or simply combine the two categories into a single class. Theoretically, combining the Shrubland and Developed into one class shouldn’t impact my analysis because my main focus is an accurate estimate of Planted/Cultivated land cover.

Before making my decision, I’ll take a deeper look at the data. The chart below is the same data in a Boxes plot, and I can hover my mouse over the boxes to get the statistics for each land cover class at each wavelength band.

From the Boxes chart, I can see that the Developed and Shrubland land cover classes have similar average values and similar distribution. However, the Developed land cover type has much higher maximum reflectance values across all wavelengths, and Shrubland has lower minimum values. This makes sense – I would expect developed areas (buildings, roads, parking lots, etc.) to be brighter in general than shrubby areas.
Since the Boxes chart tells me that the minimum and maximum values vary so much between the classes, combining these two classes into a single class could potentially confuse my classification model and impact the overall accuracy. Instead, I’m going to re-create the training samples for the Developed class to capture those higher reflectance values.
The charts below include the spectral profiles for my modified training samples.
Now, in the visible and near infrared bands especially, you can see distinctly higher reflectance values for the Developed land cover training sample data compared to the Shrubland spectral response. With these results, I would be comfortable moving forward with my classification workflow by training my model with all my training samples.

Extra Credit

For bonus points, I used the Multispectral Landsat image service from the Living Atlas to quickly visualize NDVI in the Imperial Valley area. Then I used a spectral profile chart to compare NDVI averages in different areas of interest for vegetation health assessment. Use the steps below to try it yourself:
  1. In ArcGIS Pro, open the Map tab and select Add Data.
  2. From the menu on the left, expand the Portal option and select Living Atlas. Use the Search box to search for “Multispectral Landsat.”
  3. Select the Multispectral Landsat image service and click OK.
  4. Zoom to Imperial Valley or your area of interest.
  5. Make sure the Multispectral Landsat service is highlighted in the Contents pane. In the Image Service contextual tab set, select the Data tab.
  6. In the Processing group, click the Processing Templates drop-down.
  7. Scroll down to NDVI Colorized. Select this template to display the colormap for NDVI.
  8. Right-click on the Multispectral Landsat image service in Contents and select Create Chart > Spectral Profile.
  9. Use the drawing tools to select multiple small areas of interest to compare NDVI distribution throughout the region.
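The NDVI Colorized template computes the familiar normalized difference under the hood. As a quick pure-Python illustration of the math (band values here are made-up reflectances, not Landsat data):

```python
import numpy as np

# Hypothetical 2x2 red and near-infrared reflectance values
red = np.array([[0.1, 0.2], [0.3, 0.4]])
nir = np.array([[0.5, 0.6], [0.3, 0.8]])

# NDVI = (NIR - Red) / (NIR + Red), ranging from -1 to 1;
# healthy vegetation pushes values toward 1
ndvi = (nir - red) / (nir + red)
```

The processing template applies this per pixel on the fly and then maps the result through a colormap, so no intermediate raster is written.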


Want to know more?

Try the Image Classification Wizard tutorial

Learn more about the Training Samples Manager

Learn more about image classification

Learn more about charting tools

Given the growing number of people using commercial drones these days, a common question is: “What do I do with all this imagery?”


The simple answer is that it depends on what you’re trying to accomplish.


If you just want to share the imagery as-is, and aren’t worried about making sure it’s georeferenced to be an accurate depiction of the ground, Oriented Imagery is probably your answer. If you’re capturing video, Full Motion Video in the Image Analyst extension for ArcGIS Pro is your best bet. Ultimately, though, many users plan to turn the single frame images acquired by drones into authoritative mapping products—orthorectified mosaics, digital surface models (DSMs), digital terrain models (DTMs), 3D point clouds, or 3D textured meshes.


Esri has three possible solutions for producing authoritative mapping products from drone imagery, each targeted for different users— (1) Drone2Map for ArcGIS, (2) the ortho mapping capability of ArcGIS Pro Advanced, and (3) the Ortho Maker app included with ArcGIS Enterprise. Read on to get an overview of all three solutions, and to figure out which one is best for your application.


Drone2Map for ArcGIS

For individual GIS users, Drone2Map is an easy-to-use, standalone app that supports a complete drone-processing workflow.


Drone2Map includes guided templates for creating orthorectified mosaics and digital elevation models. It’s also the only ArcGIS product that creates 3D products from drone imagery, including RGB point clouds and 3D textured meshes. Once you’ve processed your imagery, it’s easy to share the final products—2D web maps and 3D web scenes can be easily published on ArcGIS Online with a single step. ArcGIS Desktop isn’t required to run Drone2Map, but products created with Drone2Map are Desktop-compatible. That’s important, because it gives you the option to use ArcGIS Pro as an image management solution, or to serve your imagery products as dynamic image services using ArcGIS Image Server.


Ortho mapping capability of ArcGIS Pro Advanced

For GIS professionals, the ortho mapping capability of ArcGIS Pro Advanced enables you to create orthomosaics and digital elevation models from drone images (as well as from modern aerial imagery, historical film, and satellite data) in the familiar ArcGIS Desktop environment.


There are added benefits to processing your drone imagery in ArcGIS Pro. For users with very large imagery collections, Pro’s image management capabilities are especially valuable. Managing drone imagery using mosaic datasets makes it easy to query images and metadata, mosaic your imagery, and build footprints. Image management and processing workflows in ArcGIS Pro can also be automated using Python or Model Builder. Finally, sharing your imagery is straightforward. While you can publish your products to ArcGIS Online, you can also use ArcGIS Pro in conjunction with ArcGIS Image Server to publish drone products as dynamic image services.  


Ortho Maker app in ArcGIS Enterprise 10.6.1+

For ArcGIS Enterprise users, the Ortho Maker app offers a solution for organizations with multiple users who want simple, web-based workflows to create orthomosaics and DEMs from drone imagery.


Ortho Maker provides an easy-to-use web interface for uploading drone imagery and managing the ortho mapping workflow, while behind the scenes it uses the distributed processing and storage capability of Enterprise and ArcGIS Image Server to quickly process even very large collections of drone imagery. (That also means it requires ArcGIS Image Server configured for raster analysis.) The ArcGIS API for Python can be used to automate the ortho mapping process. Sharing Ortho Maker products is virtually automatic—they become imagery layer items accessible in your Enterprise portal, easily shared with users throughout your organization.


What do typical users say?

Things typical users of each ArcGIS option for processing imagery might say

Next steps

Now that you have a better idea which solution makes sense for your application, it’s time to take one for a test drive. Drone2Map offers a free 15-day trial, plus a hands-on Learn lesson to get started. You can try ArcGIS Pro Advanced free for 21 days, and read more about getting started with ortho mapping for drone imagery.  For users with Enterprise 10.6.1+ and raster analysis enabled, Ortho Maker is included—find out how to get started.  Other Enterprise users should contact their administrator to see about getting access. If you still have questions, contact Esri for more product information.

In Part I of this blog series, we explained what an ortho mapping workspace is and how to create one for digital aerial imagery. At this point, the imagery has been organized and managed so that we can access all the necessary metadata, information, tools and functionality to work with our imagery, but we haven’t yet performed a bundle block adjustment.


Ortho Mapping blog series part 2


Block adjustment is the process of adjusting the parameters in the image support data to get an accurate transformation between the image and the ground. The process is based on the relationship between overlapping images, control points, the camera model, and topography – then computing a transformation for the group of images (a block). With aerial digital data, it consists of three key components:

  • Tie points – Common points that appear in overlapping images, tying the overlapping images to each other to minimize misalignment between the images. These are automatically identified by the software.
  • Ground control points – These are usually obtained with ground survey, and they provide references from features visible in the images to known ground coordinates.
  • Aerial triangulation – Computes an accurate camera model, ground position (X, Y, Z), and orientation (omega, phi, kappa) for each image, which are necessary to transform the images to match the control points and the elevation model.

When we created our workspace, we provided the Frames and Cameras tables, which contain the orientation and camera information needed to make up our camera model and to establish the relationship between the imagery and the ground. We also provided an elevation model which we obtained from the Terrain image service available through the Living Atlas of the World. Now we’re ready to move on to the next step in the ortho mapping process.

Performing a Block Adjustment for Digital Aerial Data


  1. In the ortho mapping workspace, open the Ortho Mapping tab and select Adjustment Options from the Adjust group. This is where we can define the parameters used in computing the block adjustment, which includes computing tie points. For more information on each parameter, check out the Adjustment Options help documentation.

Ortho Mapping Adjustment Options and GCP Import



  2. Next, we want to add Ground Control Points (GCPs) to our workspace to improve the overall georeferencing and accuracy of the adjustment. To do this, select the Manage GCPs tool in the Ortho Mapping tab and choose Import GCPs. We have a CSV table with X, Y and Z coordinates and accuracy to be used for this analysis.
    • If you have an existing table of GCPs, use this Import option and map the fields in the Import GCPs dialog for the X, Y, and Z coordinates, GCP label, and accuracy fields in your table. You may have photos of each GCP location for reference – if so, you can import the folder of photos for reference when you are measuring (or linking) the GCPs to the overlapping images.
    • You may also have secondary GCPs, or control points that were not obtained in a survey but from an existing orthoimage with known accuracy. You can import those here as well, or you can manually add them using the GCP Manager.
    • Once you have added GCPs to the workspace, use the GCP Manager to add tie points to the associated locations on each overlapping image. Select one of the GCPs in the GCP Manager table, then iterate through the overlapping images in the Image list below and use your cursor to place a tie point on the site that is represented by the GCP.


Add tie points for each GCP and change some to check points

A few notes:

Check Points: Be sure to change some of your GCPs to check points (right-click on the GCP in the GCP Manager and select “Change to Check Point”) so you can view the check point deviation in the Adjustment Report after running the adjustment. This essentially changes the point from a control point that facilitates the adjustment process to a control point that assesses the adjustment results. The icon in the GCP table will change from a circle to a triangle, and the check points appear as pink triangles in the workspace map.

Drone imagery: If you are performing a block adjustment with drone imagery, you must run the Adjust tool before adding GCPs. In this blog, we’re focusing on aerial digital data.


  3. Finally, we click the Adjust tool to compute the block adjustment. This will take some time – transforming a number of images so that they align with each other and the ground is complicated work – so get up, maybe do some stretches or get yourself a cup of coffee. The log window will let you know when the process is complete. When the adjustment is finished, you’ll see new options available in the ortho mapping tab that enable you to assess the results of the adjustment.


Assessing the Block Adjustment


  1. Run the Analyze Tie Points tool to generate QA/QC data in your ortho mapping workspace. The Overlap Polygons feature class contains control point coverage in areas where images overlap, and the Coverage Polygons feature class contains control point coverage for each image in the image collection.  Inspect these feature classes to identify areas that need additional control points to improve block adjustment results.
QA/QC outputs in the ortho mapping workspace


  2. Open the Adjustment Report to view the components and results of the adjustment. Here you will find information about the number of control points used in the adjustment, the average residual error, tie point sets, and the connectivity of overlapping imagery. In our case, the Mean Reprojection Error of our adjustment is 0.38 pixels.
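To put a reprojection error reported in pixels into ground terms, multiply it by your imagery's ground sample distance (GSD). A minimal sketch, where the 7.5 cm GSD is an assumed value for illustration:

```python
def reprojection_error_ground(error_px, gsd_m):
    """Convert a reprojection error in pixels to ground units,
    given the ground sample distance (GSD) of the imagery in meters."""
    return error_px * gsd_m

# e.g. a 0.38-pixel error over imagery with an assumed 7.5 cm GSD
err_m = reprojection_error_ground(0.38, 0.075)
print(round(err_m * 100, 2))  # 2.85 (cm)
```

So a sub-half-pixel error on centimeter-class imagery corresponds to a ground error of a few centimeters.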

Now what?

The block adjustment tools allow for an iterative computation, so that you can check on the quality of the adjustment, modify options, add or delete GCPs, or recompute tie points before re-running the adjustment. If you are unsatisfied with the error in the Adjustment Report, try adding GCPs in the Manage GCPs pane, or try modifying some of the Adjustment Options. You can also change some of your check points back into GCPs, and choose a few other GCPs to be your check points. Re-run the adjustment and see how this impacts the shift.

Once you are satisfied with the accuracy of your adjusted imagery, it’s time to make ortho products! Check out the final installment in our blog series to see how it’s done.

Any remote sensing image, whether it’s a drone image, aerial photograph, or data from a satellite sensor, will inherently be impacted by some form of geometric distortion. The shape of local terrain, the sensor angle and altitude, the motion of the sensor system, and the curvature of the Earth all make it difficult to represent three dimensional ground features accurately in a two dimensional map. Image orthorectification corrects for these types of distortion so you can have a measurable, map-accurate image.
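As a concrete example of terrain-induced distortion, the classic relief-displacement approximation for a vertical photo quantifies how far a raised feature shifts on the image. This is the textbook formula with illustrative numbers, not values from any specific dataset:

```python
def relief_displacement(radial_dist, feature_height, flying_height):
    """Classic relief-displacement approximation for a vertical photo:
    d = r * h / H, where r is the radial distance of the image point
    from the nadir, h the feature height above the datum, and H the
    flying height above the datum (r and d in image units, h and H
    in ground units)."""
    return radial_dist * feature_height / flying_height

# A 60 m tower imaged 80 mm from nadir at 1,500 m flying height
d = relief_displacement(80.0, 60.0, 1500.0)
print(d)  # 3.2 (mm of displacement on the image)
```

The displacement grows toward the image edges (larger r) and with taller features (larger h), which is exactly why orthorectification needs an elevation model.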


Distortion caused by camera tilt and terrain displacement


Everyone working with GIS data to make well-informed decisions needs up-to-date information about the natural, man-made, and cultural features on the ground: roads, land cover types, buildings, water bodies, and other features that fill the landscape. Much of the vector data that describes these features was actually created from orthorectified imagery, and can be combined with new imagery to update your landbase.


A landbase is a layer or combination of layers making up the backdrop of your GIS


Esri’s Ortho Mapping suite enables you to orthorectify your remote sensing imagery to make it map-accurate. It also makes it easy to create other products like orthomosaics (mosaicked images corrected for distortion) and digital elevation models (terrain or surface models) which can be used as basemaps, part of a landbase, or for further analysis in stereo models, 3D analysis and feature extraction.


The workflow to create ortho mapping products will be presented in a three-part blog series, each with a short video:

  • Creating a workspace
  • Performing a block adjustment
  • Creating ortho mapping products


Let's get started!


Creating an Ortho Mapping Workspace


The first step in any project is getting organized – and creating an Ortho Mapping Workspace in ArcGIS Pro makes this easy to do.

The Ortho Mapping Workspace is a sub-project in ArcGIS Pro; it’s the interface you work with when interacting with ortho mapping workflows. The workspace is defined by the type of imagery you are working with (drone, aerial or satellite). In turn, the workspace is integrated with the tools and wizards to properly guide you through each step in the workflow. When you create a new workspace, an Ortho Mapping folder appears in your project folder structure in Catalog, and a new table of contents list view allows you to List By Ortho Mapping Entities. Again, the types of feature classes and tables you see in the Contents pane depend on the type of imagery you are working with.

Similar to Maps or Scenes within a project, a workspace is an object stored in the folder structure of a project and it can be accessed by other projects. All the feature classes and tables needed to orthorectify your imagery are created and managed in the workspace.

5 Simple Steps

Step 1: Open the Imagery tab in your ArcGIS Pro project. This is where you can analyze and manage any raster data you want to work with in Pro. In the Ortho Mapping group, you’ll see the New Workspace menu that allows you to create a New Ortho Mapping Workspace, add an existing Ortho Mapping Workspace with a reference to that workspace, or import an Ortho Mapping Workspace by creating a copy of an existing workspace and storing the new copy in your project. Select New Workspace.

Step 2: The New Ortho Mapping Workspace wizard appears. Here you’ll give your workspace a name (required) which identifies your project in the Contents and Catalog panes. You can also provide a description (optional) and you’ll select the type of imagery you want to import. In our workflow, we’re using aerial imagery acquired by Vexcel Imaging covering an area over Hollywood, California, so we’ll select Aerial – Digital as the type. Click Next.

Step 3: The Image Collection page opens. Here you’ll enter specific information about the type of sensor used to collect your imagery. You can choose from MATCH-AT, ISAT, or Applanix, or you can select the Generic Frame Camera, which requires you to provide the exterior and interior orientation information with the Frames tables and Cameras tables, respectively. Entering the Frames and Cameras information will provide the information necessary to correct for sensor-related distortion.

The Frames table has a specific schema that is required in the ortho mapping workspace for aerial imagery. It contains the exterior orientation and other information specific for each image comprising your image collection. The Cameras table contains all the camera calibration information for computing the interior orientation, but you can add the camera information manually in the wizard or as a table. To edit the Camera parameters, you can hover over the Camera ID and click the Edit Properties button. You’ll also need to specify the Frame Spatial Reference, which should be provided with your data.

In this workflow, we used the exterior orientation information that was provided along with our source imagery to create the Frames table in the necessary schema. We then pointed to a table that has the information for one camera, with CameraID = 0 (see the screen shot below - there's a check mark next to the 0 under Cameras). 
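To illustrate, a Frames table can be assembled as a simple CSV. The field names below follow the Frames schema as we understand it – verify them against the help documentation for your ArcGIS Pro release – and the paths and orientation values are placeholders, not real data.

```python
import csv

# Minimal exterior-orientation records for two frames. Field names follow
# the Frames table schema as we understand it (verify against the help for
# your release); raster paths and values are placeholders.
frames = [
    {"Raster": r"C:\imagery\img_001.tif", "PerspectiveX": 378200.1,
     "PerspectiveY": 3770112.5, "PerspectiveZ": 1525.0,
     "Omega": 0.12, "Phi": -0.08, "Kappa": 89.97, "CameraID": 0},
    {"Raster": r"C:\imagery\img_002.tif", "PerspectiveX": 378420.6,
     "PerspectiveY": 3770110.9, "PerspectiveZ": 1526.2,
     "Omega": 0.10, "Phi": -0.05, "Kappa": 90.04, "CameraID": 0},
]

with open("frames.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(frames[0]))
    writer.writeheader()
    writer.writerows(frames)
```

Each row ties one image to its camera (via CameraID) and its exterior orientation (position plus omega/phi/kappa rotation angles).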


*We are updating this workflow in ArcGIS Pro 2.3 to make it more user-friendly!



Step 4: To correct for terrain displacement, you need to include an elevation source. The cool thing about working with the ArcGIS platform is that you can access the thousands of maps, apps, and data layers available in the ArcGIS Living Atlas, so if you don’t have your own elevation data, you can search for an elevation layer and use it in your project. Here’s what we did:

  1. In our ArcGIS Pro project, zoom to the area of interest in Hollywood.
  2. On the Map tab, click Add Data to add data to the map.
  3. Select the Living Atlas option under the Portal group and search for "Terrain." Add the Terrain imagery layer. At first, you might not be able to see much variation in the terrain. Click on the Appearance tab under the Image Service Layer group and select DRA (Dynamic Range Adjustment) to stretch the terrain imagery in the extent you are viewing.
  4. In the Contents pane, right-click on the Terrain imagery layer and select Data > Export Raster. 
  5. In the Export Raster settings, specify the output raster dataset and set the Clipping Geometry to the Current Display Extent. 
  6. Click Export.


Now we can add our new DEM to the workspace. To do this, open the Data Loader Options pane in the Image Collection page. Click the browse button to navigate to the DEM created above, or use your own DEM.

Step 5: Finally, we left all the other values as default and clicked Finish.





The Log window will tell you how the creation of the workspace is coming along; if there are any problems, an error message will be displayed. When it’s complete, you’ll see the new Ortho Mapping Entities in your Contents pane: various control points (Ground Control Points, Check Points, and Tie Points), the mosaic dataset that was created from your source data, and placeholders for the Data Products, Solution Data, and QA/QC Data that haven’t been created yet.


Make sure to zoom and pan around the map to check out your Image Collection. With the Image Collection selected in the Contents pane, you can open the Data tab from the Mosaic Layer context menu. Here you can change the Sort and Overlap options for your mosaic dataset. We recommend using the Closest to Center or Closest to Nadir options for viewing.


Now that you have all your ortho mapping components organized in your workspace, the next step is to block adjust your data to make sure it’s map-accurate. Stay tuned for the next part of this blog series, Ortho Mapping with Aerial Data Part II: Getting Adjusted, where we’ll show you how to perform a block adjustment to make sure your data is ready for product generation and stereo compilation!



We showed you how to set up an ortho mapping workspace for aerial imagery. For an example of how to set up an ortho mapping workspace for satellite data, check out this short video!


Many thanks to Jeff Liedtke for co-authoring this article!

ArcGIS Enterprise configured for Raster Analytics enables large and small organizations to distribute and scale raster processing, storage and sharing to meet requirements for unique projects. This flexibility and elasticity also allows you to pursue projects that were previously out of reach due to hardware, software, personnel, or cost constraints. An overview of Raster Analytics concepts and advantages is described in the article Imagery Superpowers – Raster analytics expands imagery use in GIS.

Raster Analytics Processing Workflow

To help you become familiar with the benefits of Raster Analytics, Esri is offering a new Learn Lesson for ArcGIS Enterprise users. The lesson guides you through the process of configuring your Enterprise system for Raster Analytics and shows you how to use raster processing tools and functions to assess the potential landslide risk associated with wildfire. The analysis is run on your distributed processing system, and the results are published to your Enterprise portal for ease of sharing across your organization. The lesson is a practical guide for implementing a Raster Analytics deployment, demonstrating how standard ArcGIS Pro tools and functionality can be used to run distributed processes behind your firewall and in the cloud, and how results can be shared with stakeholders across your enterprise. Check out this story map, which gives you a more detailed overview of what the lesson involves.

Drag and drop tools into the function editor to create raster function chains.

Ready to try it out? If you want to extend your capabilities with Raster Analytics for increased productivity, test out the lesson and see why users are excited about the opportunity to address demanding projects in a more effective and efficient manner.


Many thanks to Katy Nesbitt for co-authoring this article.

For FMV in ArcGIS (ArcGIS Pro 2.2 with the Image Analyst extension, or ArcMap 10.x with the FMV add-in) to display videos and link the footprint to the proper location on the map, the video must include georeferencing metadata multiplexed into the video stream.  The metadata must be in MISB (Motion Imagery Standards Board) format, originally designed for military systems.  Drone users do not need to study this specification: for non-MISB datasets, Esri has created a geoprocessing tool called the “Video Multiplexer” that processes a video file with a separate metadata text file to create a MISB-compatible video.  This is described more completely (e.g., the format for the metadata about camera location, orientation, field of view, etc.) in the FMV Manual.


For those with DJI drones, the challenge then becomes “where is the required metadata?”  DJI drones write a binary-formatted metadata file with the extension *.dat (or possibly *.srt, depending on the drone and firmware) for every flight.  There is a free utility called “DatCon” that will reportedly convert the DJI files to ASCII format.


Key points:

  • Esri has not tested and cannot endorse this free utility. If you choose to use it, as with any download from the internet, you should check it for viruses and other malware.
  • DJI has changed the format of the metadata in this file on multiple occasions, so depending on your drone and date of its firmware, you will find differences in the metadata content. Esri does not have a specification for this metadata at any version, so cannot advise you what to expect to be included in (or missing from) this file.
  • The DJI *.dat file was created for troubleshooting, not to give geospatial professionals a complete metadata record for the drone, gimbal, and camera. As a result, users will typically find temporal gaps in the metadata, and processing it through the FMV Multiplexer will likely generate inaccurate results unless you are willing to apply manual effort (requiring trial and error, and substantial time) to identify the temporal gaps and fill in your own estimated or interpolated values for the missing times and fields.
  • IMPORTANT: This blog was written in September 2018, and it is very possible that DJI will make firmware changes in the future that change the readability and completeness of their metadata.
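If you do attempt the manual gap-filling described above, linear interpolation between the nearest valid samples is the usual starting point. A minimal sketch in plain Python, with illustrative altitude values (real metadata would have many more fields):

```python
def interpolate_gaps(samples, times):
    """Linearly interpolate missing metadata values (None) at known
    timestamps -- the kind of manual gap-filling DJI metadata may need
    before it is usable by the FMV Multiplexer."""
    known = [(t, v) for t, v in zip(times, samples) if v is not None]
    filled = []
    for t, v in zip(times, samples):
        if v is not None:
            filled.append(v)
            continue
        # bracket t between the nearest known samples before and after
        before = max((kt, kv) for kt, kv in known if kt < t)
        after = min((kt, kv) for kt, kv in known if kt > t)
        frac = (t - before[0]) / (after[0] - before[0])
        filled.append(before[1] + frac * (after[1] - before[1]))
    return filled

# altitude samples with a gap at t=2
alts = interpolate_gaps([100.0, 110.0, None, 130.0], [0, 1, 2, 3])
print(alts)  # [100.0, 110.0, 120.0, 130.0]
```

Note that interpolation only estimates smoothly varying quantities (altitude, position); abrupt gimbal moves during a gap cannot be recovered this way, which is part of why results from patched metadata remain approximate.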


There is an alternative to this, but it is not an Esri solution.  CompassDrone, an Esri business partner and DJI authorized distributor, has built a flight planning and flight control application called CIRRUAS using the DJI API.  This application has access to the DJI metadata in flight, and (among other features) is explicitly designed to capture complete metadata as defined by Esri for FMV support.  If you are using the CIRRUAS app, a metadata file will be captured and exported from the drone, and this will feed directly into the FMV multiplexer. 


The CIRRUAS app is available from CompassDrone. For further discussion, please refer to the blog on this topic written by CompassDrone.


A few final notes:

  • Our testing of the CIRRUAS app has yielded good results, but Esri does not provide technical support for the app.
  • Note that the CIRRUAS app must be used to plan and fly the mission, and this will initiate the recording of complete metadata. It cannot be applied to video that was previously recorded, since the metadata records will not be complete.
  • It is not known if there are other alternatives which provide a solution for processing video from DJI drones for ArcGIS FMV.


Check back in this blog for updates as more capabilities are developed.


With the Image Analyst extension in ArcGIS Pro 2.1 (or later), non-orthorectified and suitably overlapping images with appropriate metadata can be viewed in stereo!  This stereoscopic viewing experience enables 3D feature extraction.


If your organization has a collection of images and you’d like to use the stereo viewing capability in ArcGIS Pro, where do you start?   The key questions are: 

  1. What type of sensor collected the data, and
  2. What orientation data do you have along with the images?


In order to display images as stereo pairs, ArcGIS must have detailed information about the location of the sensor (x,y,z) as well as its orientation – and this is unique information for every image.  Information about the sensor (typically called a camera model or sensor model) is also required. 

Graphic Showing Geometry of One Stereo Image Pair
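The geometry above is what makes height measurement possible: the textbook parallax relation for vertical photos ties a feature's differential parallax between the two images to its height. A hedged numeric sketch, with illustrative values that are not from any specific dataset:

```python
def height_from_parallax(flying_height, datum_parallax, diff_parallax):
    """Textbook stereo parallax relation for vertical photos:
    h = H * dp / (P + dp), where H is the flying height above the datum,
    P the absolute parallax of the datum, and dp the differential
    parallax of the feature (P and dp in consistent image units)."""
    return flying_height * diff_parallax / (datum_parallax + diff_parallax)

# H = 1200 m, datum parallax 90 mm, feature shows 2.9 mm extra parallax
h = height_from_parallax(1200.0, 90.0, 2.9)
print(round(h, 1))  # 37.5 (meters)
```

A few millimeters of parallax difference translates to tens of meters of height, which is why stereo measurement demands the precise sensor position and orientation described above.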


There are a few conceptually simple cases, although each has important details to follow within its own workflow and documentation.


  • If you have two overlapping satellite images, you can go directly to stereo viewing.
  • If you have a collection of satellite images, you can build a mosaic dataset and ingest the images using the specific raster type for that satellite, run the Build Stereo Model geoprocessing tool, then proceed to the stereo view.  The raster type for the satellite reads the required orientation data.
  • If your imagery came from a professional aerial camera system:
    • If you have an output project file from aerotriangulation (AT) software (e.g. Match-AT or ISAT), ArcGIS includes raster types which ingest the orientation data for you, so this is similar to the satellite case: build a mosaic dataset with the proper raster type, Build Stereo Model, and proceed to stereo viewing.
    • If you have a project file from AT software that is not currently supported, Python raster types are under development for additional sensors, e.g. for the Vexcel UltraCam. For more information, watch for announcements on GeoNet. Alternatively, if you have a table of camera and frame orientation values, see the next bullet.
    • If you have a table of data values representing the exterior orientation as well as a camera model (interior orientation), you will build a mosaic dataset and ingest the images using the “Frame camera” raster type. 
    • If you have scanned film but without the results of AT software, refer to the FrameCameraBestPractices. With ArcGIS Pro 2.1, some values may have to be estimated, and the positional accuracy may not be optimum.  ArcGIS Pro 2.2 (and later versions) support fiducial measurement.
  • If your imagery was captured using a drone, you will need to use photogrammetric software to generate the camera model and orientation data.   
    • If you process your drone imagery using Ortho Mapping in ArcGIS Pro Advanced, then after the Adjust step is completed, the Image Collection mosaic dataset will be ready for viewing in stereo (after running Build Stereo Model).
    • If you are using Drone2Map, please see this item on ArcGIS Online to download a geoprocessing tool which can ingest the images into a mosaic dataset.


For those interested in trying an example, a downloadable sample is available in this item on ArcGIS Online.

Raster analytics using ArcGIS Enterprise is a flexible raster processing, storage, and sharing system that employs distributed computing and storage technology. Use raster analytics to apply the rich set of raster processing tools and functions offered in ArcGIS, build your own custom functions and tools, or combine multiple tools and functions into raster processing chains to execute your custom algorithms on large collections of raster data. Source data and processed results are stored, published and shared across your enterprise accordingly.


This extensive capability can be further expanded by leveraging cloud computing capabilities and resources.  The net result: image processing and analysis jobs that used to take days or weeks can now be done in minutes or hours, and jobs that were impossibly large or too daunting are now within easy reach.


What can raster analytics do?

By leveraging ArcGIS Enterprise, raster analytics enables you to:

  • Quickly process massive imagery or raster datasets in a scalable environment
  • Execute advanced, customized raster analysis
  • Share results with individuals, departments, and organizations within or outside your enterprise


Raster analytics is ArcGIS Image Server configured for raster analysis in a processing and storage environment that maximizes processing speed and efficiency.  Built-in tools and functions cover preprocessing, orthorectification and mosaicking, remote sensing analysis, and an extensive range of math and trigonometry operators; your custom functions can extend the platform’s analytical capabilities even further.


Fully utilize your existing ArcGIS Image Server on-site, or exploit the elastic processing and storage capacity of cloud computing and storage platforms such as Amazon Web Services and Microsoft Azure to dynamically increase or reduce your capacity depending on the size and urgency of your projects.  The scalable environment of raster analytics empowers you to implement computationally intensive image processing that used to be out of reach or cost-prohibitive. This implementation saves you time, money, and resources.


Raster analytics is also designed to streamline and simplify collaboration and sharing. Users across your enterprise can contribute data, processing models, and expertise to your imagery project, and share results with individuals, departments, and organizations in your enterprise.


Finally, Raster analytics using ArcGIS Enterprise integrates your image processing and analysis with the world’s leading GIS platform, and allows users to seamlessly draw on the world’s largest collection of online digital maps and imagery.


How does raster analytics work?

ArcGIS Image Server configured for the role of raster analytics provides software and user interfaces to organize and manage your processing, storage, and sharing of raster and feature data, maps, and other geographic information on a variety of devices. This integrated system manages the distribution of processing and the storage of results (1) on-premises and behind the firewall for classified deployments, (2) in cloud processing and storage environments, or (3) in a combination of both.


The foundation of raster analytics is ArcGIS Enterprise, which includes an Enterprise GIS Portal, ArcGIS Data Store, Image Server configured for raster analytics, raster data store and ArcGIS Web Adaptor. ArcGIS Enterprise integrates the components of the raster analytics system to support scalable, real-world workflows.


Scale your powerful processing and storage capabilities by deploying ArcGIS Enterprise in the cloud via Microsoft Azure or Amazon Web Services (AWS). For example, you can automatically scale capacity up and down according to conditions you define, or automatically dispense application traffic across multiple instances for better performance. ArcGIS Enterprise makes deployment easier by providing Cloud Builder for Microsoft Azure or AWS CloudFormation with sample templates to configure and deploy your system in the cloud.


Develop, test and optimize your raster processing chains using Esri’s rich set of more than 200 functions and tools in the familiar ArcGIS Desktop or web map viewer. Once verified and optimized in the dynamic on-the-fly processing environment, submit your processing chain to ArcGIS Portal, which manages the distribution of processing, storage, and publication of results.


The ideal deployment of raster analytics comprises three server sites that perform the primary roles of portal hosting server, raster analysis server, and image hosting server. Two licenses are required for raster analytics: ArcGIS Enterprise and ArcGIS Image Server.

Raster Analytics System Diagram

The hosting server is your portal’s server for standard portal administration and operations, such as managing and distributing processing, storage, and the publication of results to raster analysis servers, image servers, and data stores.  It also hosts the ArcGIS Data Store for GIS data and allows users to publish data and maps to a wider audience as web services.


Raster analytics jobs are processed by image servers dedicated to raster analytics, consisting of one or more servers, each with multiple processing cores. The image processing and raster analytics tasks are distributed at the tile level or scene level depending on the tools and functions used. Raster analytics writes the processing results either to the ArcGIS Data Store on the hosting server for feature data products, or to the raster data store for imagery and raster data products. The raster data store can be implemented using distributed file share storage or cloud storage such as Amazon S3 or Microsoft Azure blob storage.


The image hosting server hosts all the image services generated by the raster analysis server. It includes the raster data store configured with the Image Server Manager, which manages distributed file share storage and cloud storage of image services using Amazon S3 or Microsoft Azure blob storage. The image hosting server stores and returns results requested by members of your enterprise.


System configuration apps assign the roles of the servers and data stores, and also set the permission structure for all the users across your enterprise. This facilitates optimal flexibility in configuring and implementing your raster analytics system to address specific projects. Multiple servers can be scaled up for raster analytics processing and storage as required.


See the tutorial to set up a base ArcGIS Enterprise deployment.


More Information

To learn more about raster analytics using ArcGIS Enterprise and ArcGIS Image Server, check out this video.

Explore these help topics to get started with raster analytics:

To see how raster analytics is being used, check out the Chesapeake Conservancy and Distributed Image Processing presentation, or attend the Plenary session at the 2017 Esri User Conference in San Diego to hear about Chesapeake Conservancy’s experience processing and sharing the entire Chesapeake watershed using raster analytics.


Please plan to attend a few presentations addressing raster analytics at the 2017 Esri User Conference:

Raster Analytics at Esri UC2017

The June 2017 update of ArcGIS Online includes some useful capabilities for displaying imagery served by your image services. These capabilities give you greater control for visualizing the information contained in your image services. When we talk about rendering, we’re not talking about making soap out of fat. Here at Esri, rendering is the process of displaying your data. How an image service is rendered depends on what type of data it contains and what you want to show.


Once you search for and add a layer, and your image is displayed in Map Viewer, click the More Options icon then Display to open the Image Display pane.

Image Display Options

You see a new category named Image Enhancement. This is where the real fun begins.

Image Enhancement pane

The Symbology Type options include Unique Values, Stretch and Classify. Unique Values and Classify renderers work with single-band image services, while the Stretch renderer works on both single and multiple band images.


Unique Values Renderer

Unique values symbology symbolizes each value in the raster layer individually and is supported on single-band layers with a Raster Attribute Table. The symbology can be based on one or more attribute fields in the dataset. The colors are read from the Raster Attribute Table; if they are not available, the renderer assigns a color to each value in your dataset. This symbology type is often used with single-band thematic data, such as land cover, because of its limited number of categories. It can also be used with continuous data if you choose a gradient color ramp.

Unique Values Renderer

  1. Use the Field drop-down to select the field you want to map. The field is displayed in the table.
  2. Click the Color Ramp drop-down and click on a color scheme. If your image service already has a color ramp, such as the NLCD service in this example, it is displayed by default.
  3. The colors in the Symbol column and Labels can be edited as required.
  4. Click Apply to display the rendering in the layer.
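The fallback behavior described above – assigning a color to each value when the Raster Attribute Table carries none – can be sketched as a simple cycle over a color ramp. The class codes and hex colors below are illustrative only:

```python
def assign_colors(values, ramp):
    """Mimic what a unique-values renderer does when the Raster
    Attribute Table carries no colors: cycle a color ramp over the
    sorted unique class values."""
    unique = sorted(set(values))
    return {v: ramp[i % len(ramp)] for i, v in enumerate(unique)}

# land-cover-style class codes mapped onto a 3-color ramp (illustrative)
ramp = ["#466b9f", "#dec5c5", "#68ab5f"]
colors = assign_colors([11, 21, 41, 11, 41], ramp)
print(colors)  # {11: '#466b9f', 21: '#dec5c5', 41: '#68ab5f'}
```

Cycling guarantees every class gets a color even when there are more classes than ramp entries, though for thematic data you would normally edit the symbols to meaningful colors afterward.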



Stretch Renderer

The stretch parameters improve the appearance of your image by adjusting the image histogram to control brightness and contrast. Either single- or multiple-band images can be stretched. For multiple-band images, the stretch is applied to the band combination previously chosen in the RGB Composite options. The stretch options enhance various ground features in your imagery to optimize information content.

1.   Click the Stretch Type drop-down arrow and choose the stretch type to use. The following contrast enhancements determine the range of values that are displayed.

  • None – No additional image enhancement will be performed
  • Minimum and Maximum – Displays the entire range of values in your image. Additional changes can be made by editing the values in the Min-Max grid (available only when Dynamic range adjustment is turned off.)
  • Standard Deviation – Displays values that fall within a specified number of standard deviations of the mean
  • Percent Clip – Set a range of values to display. Use the two text boxes to edit the top and bottom percentages.

2.   If the Stretch type is set to an option other than None, the following additional image enhancement options will be available.

  • Dynamic range adjustment – Performs one of the selected stretches, but limits the range of values to what is currently in the display window. This option is always turned on if the imagery layer does not have global statistics.
  • Gamma – Stretches the middle values in an image but keeps the extreme high and low values constant.

3.   For single-band layers, you can optionally choose a new color scheme from the Color Ramp drop-down menu after applying a stretch method on the layer.

4.   Click Apply to display the rendering in the layer.
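The Percent Clip and Gamma options above can be sketched in plain Python to show what they do to the histogram. This is an illustrative approximation of the idea, not the exact algorithm the service uses:

```python
import math

def percent_clip_stretch(pixels, low_pct, high_pct, gamma=1.0):
    """Sketch of a percent-clip stretch: drop the bottom low_pct and
    top high_pct of the histogram, rescale the rest to 0-255, then
    apply a gamma curve that lifts or suppresses the midtones."""
    ranked = sorted(pixels)
    n = len(ranked)
    lo = ranked[int(n * low_pct / 100.0)]
    hi = ranked[max(0, math.ceil(n * (100.0 - high_pct) / 100.0) - 1)]
    out = []
    for p in pixels:
        t = min(max((p - lo) / (hi - lo), 0.0), 1.0)  # clip and normalize
        out.append(round(255 * t ** (1.0 / gamma)))
    return out

# top and bottom 20% of a tiny 5-pixel "histogram" clipped away
stretched = percent_clip_stretch([5, 10, 50, 200, 250], 20, 20)
print(stretched)  # [0, 0, 54, 255, 255]
```

Clipping the histogram tails is what makes a hazy or dark image "pop": the few extreme pixels no longer consume most of the display range.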

Here’s a WorldView-2 natural color image of Charlotte, NC, using the default no stretch:

Multispectral Image, No Stretch

And here is the same imagery layer with the top 2% and bottom 20% of the histogram omitted:

Multispectral Imagery, Percent Stretch

Classify Renderer

Classify symbology is supported on single-band layers. It allows you to group pixels together into a specified number of classes. The following settings are available with Classify symbology.

  • Field – Represents the values of the data.
  • Method – Refers to how the break points are calculated:
    • Defined Interval – You specify an interval to divide the range of pixel values, and the number of classes is calculated automatically.
    • Equal Interval – The range of pixel values is divided into equally sized classes; you specify the number of classes.
    • Natural Breaks – Class breaks are determined statistically by finding adjacent feature pairs between which there is a relatively large difference in data value.
    • Quantile – Each class contains an equal number of pixels.
  • Classes – Sets the number of groups.
  • Color Ramp – Allows you to choose the color ramp for displaying the data.

Classify symbology works with single band layers that have either a Raster Attribute Table or Histogram values. If a histogram is absent, it is generated when you select the symbology type.
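Two of the classification methods above are easy to sketch in plain Python. This is an illustrative approximation of how the break points fall, not Esri's exact implementation:

```python
import math

def equal_interval_breaks(values, n_classes):
    """Equal Interval: split the full pixel range into n equally
    sized classes and return the upper break of each class."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_classes
    return [lo + width * (i + 1) for i in range(n_classes)]

def quantile_breaks(values, n_classes):
    """Quantile: choose breaks so each class holds roughly the same
    number of pixels."""
    ranked = sorted(values)
    n = len(ranked)
    return [ranked[math.ceil(n * (i + 1) / n_classes) - 1]
            for i in range(n_classes)]

pixels = list(range(1, 13))  # 12 pixel values, 1..12
print(equal_interval_breaks(pixels, 4))  # [3.75, 6.5, 9.25, 12.0]
print(quantile_breaks(pixels, 4))        # [3, 6, 9, 12]
```

On uniform data the two methods agree closely; on skewed data (say, mostly low reflectance with a few bright pixels) Equal Interval leaves most pixels in one class while Quantile spreads them evenly, which is why the method choice changes the map so much.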


Here’s the classified map of Charlotte, specifying 15 classes and using the Natural Breaks method for determining class breaks:

Class Map


These new Map Viewer image rendering capabilities are similar to what you are used to in ArcMap and ArcGIS Pro. As of this release, Scene Viewer also supports imagery layers; however, we are still working on bringing the new Map Viewer image rendering capabilities into Scene Viewer. Check out these new imagery capabilities in ArcGIS Online and see how they can enhance the stories behind your data.


Please leave us comments below for any future enhancements you’d like to see. And check back in a few months; we have a lot of other cool stuff planned for imagery in upcoming releases.