
The new Getting to Know ArcGIS Image Analyst guide gives GIS professionals and imagery analysts hands-on experience with the functionality available with the ArcGIS Image Analyst extension.

It’s a complete training guide to help you get started with complex image processing workflows. It includes a checklist of tutorials, videos and lessons along with links to additional help topics.


Task Checklist for getting started with ArcGIS Image Analyst


This guide is useful to anyone interested in learning how to work with the powerful image processing and visualization capabilities available with ArcGIS Image Analyst. Complete the checklist provided in the guide and you’ll get hands-on experience with:


  • Setting up ArcGIS Image Analyst in ArcGIS Pro
  • Extracting features from imagery using machine learning image classification and deep learning methods
  • Processing imagery quickly using raster functions
  • Visualizing and creating data in a stereo map
  • Creating and measuring features in image space
  • Working with Full Motion Video


Download the guide and let us know what you think! Take the guide survey to provide us with direct feedback.


The ArcGIS Pro 2.3.2 software patch enables mosaic datasets created or modified by Pro 2.3 or ArcMap 10.7 to be read and modified by earlier versions (ArcGIS Pro 2.1 or later and ArcMap 10.5 or later).


If you created or modified a mosaic dataset using Pro 2.3 or 10.7, you can update it and make it compatible with earlier versions by following the steps below.


  1. Open Pro 2.3.2.
  2. In the Catalog pane, navigate to your mosaic dataset. Right-click and select Properties from the drop-down menu.
  3. Click Defaults, which displays the Image Properties. Scroll down to Maximum Number of Rasters Per Mosaic and change the value to any number. Press <Tab> to update the field.
  4. Change the Maximum Number of Rasters Per Mosaic property back to its original value and press <Tab> to update the field again.


This resets the mosaic dataset object to the new Pro 2.3.2 version.


Update to ArcGIS Pro 2.3.2 by going to My Esri or by using the in-app software updater.

If you create or modify a mosaic dataset in Pro 2.3, it can only be read and modified by ArcGIS Pro 2.3 and ArcMap 10.7 and served with ArcGIS Image Server 10.7 or newer. If you intend to publish your mosaic dataset to an image server prior to 10.7, do not create or edit it using Pro 2.3.


Note that significant changes were made to the internal structure of the mosaic dataset in ArcGIS Pro 2.3, so once a mosaic dataset is modified using Pro 2.3, it cannot be read by older versions.


In general, mosaic datasets created with older versions of ArcGIS can be read and handled with newer versions of ArcGIS. However, a mosaic dataset created with a newer version of ArcGIS may not be backwards compatible with older versions.


See the table below for mosaic dataset compatibility:


 Mosaic Dataset compatibility between versions


Users have sometimes been able to read a mosaic dataset created with a newer version in an older version, provided the dataset does not use any features new to that version. However, relying on this may cause incompatibility issues.




Did you know there is a huge repository of powerful Python Raster Functions that you can use for raster analysis and visualization? On the Esri/raster-functions repository on GitHub, you can browse, download, and utilize customized raster functions for on-the-fly processing on your desktop or in the cloud.

Esri's raster functions GitHub repository

What are Python raster functions, you ask?

A raster function is a sneaky way to perform complex raster analysis and visualization without taking up more space on your disk or more time in your day, with on-the-fly processing. A single raster function performs an analysis on an input raster, then displays the result on your screen. No new dataset is created, and pixels get processed as you pan and zoom around the image. You can connect multiple raster functions in a raster function chain and you can turn it into a raster function template by setting parameters as variables.

A Python raster function is simply a custom raster function. A lot of raster functions come with ArcGIS out-of-the-box, but if you don’t find what you’re looking for or you want to create something specific to your needs, you can script your own with Python.
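As a sketch of the pattern, here is a minimal custom function that doubles incoming pixel values. The class and method names (`getParameterInfo`, `updatePixels`) follow the conventions used in the esri/raster-functions repository, but treat the exact parameter plumbing shown here as a simplified assumption rather than the full framework contract:

```python
import numpy as np

class ScaleBy2:
    """Minimal custom raster function: doubles incoming pixel values."""

    def __init__(self):
        self.name = "Scale By 2"
        self.description = "Multiplies every pixel of the input raster by 2."

    def getParameterInfo(self):
        # One required raster parameter.
        return [{
            'name': 'raster',
            'dataType': 'raster',
            'value': None,
            'required': True,
            'displayName': "Input Raster",
            'description': "The raster to scale."
        }]

    def updatePixels(self, tlc, shape, props, **pixelBlocks):
        # Pixels arrive as a NumPy array; process them and hand them back.
        pixels = np.asarray(pixelBlocks['raster_pixels'], dtype='f4')
        pixelBlocks['output_pixels'] = (pixels * 2).astype(props['pixelType'], copy=False)
        return pixelBlocks
```

Because the processing happens in `updatePixels`, only the pixels currently on screen are touched as you pan and zoom, which is what makes the on-the-fly behavior cheap.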

There are a lot of Python raster functions already written and posted for everyone to use, and they’re easy to download and use in ArcGIS. And some of them are unbelievably cool.

For example: Topographic Correction function

The Topographic C Correction function, written by Gregory Brunner from the St. Louis Regional Services office, essentially removes the hillshade from orthophotos. As you can imagine, imagery over mountainous areas or regions with rugged terrain can be difficult to classify accurately because pixels may belong to the same land cover class but some fall into shadow due to varying slopes and aspects. With the topographic correction function, you can get a better estimate of pixel values that would otherwise be impacted by hillshade. The result is a sort of flattening of the image, and it involves some fairly complex math.
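To give a feel for that math, the simple cosine correction below flattens terrain-induced illumination differences in one band. This is an illustrative sketch of the general idea, not the repository's implementation (the C correction it is named for adds an empirical adjustment term on top of this):

```python
import numpy as np

def cosine_correction(radiance, slope, aspect, sun_zenith, sun_azimuth):
    """Flatten terrain-induced illumination differences in an image band.

    All angles are in radians; slope and aspect are per-pixel arrays,
    sun_zenith and sun_azimuth are scene-wide scalars.
    """
    # Local illumination angle: cosine of the angle between the sun
    # direction and the surface normal at each pixel.
    cos_i = (np.cos(sun_zenith) * np.cos(slope)
             + np.sin(sun_zenith) * np.sin(slope) * np.cos(sun_azimuth - aspect))
    # Avoid blowing up pixels that face almost directly away from the sun.
    cos_i = np.clip(cos_i, 0.1, None)
    return radiance * np.cos(sun_zenith) / cos_i
```

A flat pixel (slope of zero) passes through unchanged, while a sun-facing slope is darkened toward the flat-terrain value, which produces the "flattening" effect described above.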

Hillshade removal effect

Why should you care?

Okay, so now you know there’s a repository of Python raster functions. What’s next?

  1. Explore the functions you may need.
    Some of the functions on the repository were written for specialized purposes and aren’t included with the ArcGIS installation, such as the Topographic C Correction function (above) or the Linear Spectral Unmixing function (contributed by Jacob Wasilkowski, also from the St. Louis Esri Regional office).
  2. Try writing your own Python raster function.
    A lot of what’s on the GitHub repository is already in the list of out-of-the-box raster functions, but you can open the Python scripts associated with each one, customize them, and save them as new Python raster functions. This can be a great learning tool for those new to the process.
  3. Watch the repo for more functions.
    There are currently over 40 functions listed, and we are continually adding more.
  4. Contribute!
    Have you written something that you can share with the broader community? Do you have ideas for cool raster functions? Add to the conversation by commenting below!


Get Started

To easily access all the Python Raster Functions in the GitHub repository, simply click the Clone or Download button on the repository code page, and choose to download the raster functions as a ZIP file.

Click download ZIP button to get the full repo

Extract the zip folder to your disk, then use this helpful Wiki to read about using the Python Raster Functions in ArcGIS Pro.


For an example tutorial on using the Python Raster Functions, check out the blog on the Aspect-Slope function.


Enjoy exploring!

One of the most important components of a supervised image classification is excellent training sites. Training an accurate classification model requires that your training samples represent distinct spectral responses recorded from the remote sensing platform: a training sample for vegetation should not include pixels with snow or pavement, and samples for water classification should not include pixels with bare earth. Using the spectral profiles chart, you can evaluate your training samples before you train your model.

If you use the Training Samples Manager, creating the chart takes one simple step. If you created your training samples separately, so that each polygon or point is a different record in the feature class, you just need to run a quick geoprocessing tool before creating the chart to see the average spectral profiles for each class on one graph.

The purpose of this blog is not to go through the entire image classification workflow from end-to-end, but simply to show you how to use spectral profiles to guide you in creating training samples. For example, the spectral profile example below tells you that the Water training sites are significantly distinct, but that Golf Course and Healthy Vegetation may be too similar to yield an accurate result.
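The idea behind the chart is easy to reproduce outside of Pro: average the band values of the pixels in each class's training sites and compare the resulting profiles. A minimal NumPy sketch, with invented pixel values standing in for real training samples:

```python
import numpy as np

# Rows are training-sample pixels, columns are spectral bands
# (values here are invented for illustration).
samples = {
    "Water":      np.array([[0.02, 0.03, 0.01], [0.03, 0.04, 0.02]]),
    "Vegetation": np.array([[0.04, 0.08, 0.45], [0.05, 0.09, 0.50]]),
    "Golf":       np.array([[0.05, 0.09, 0.48], [0.04, 0.08, 0.46]]),
}

# Mean spectral profile per class -- one average value per band.
profiles = {name: pix.mean(axis=0) for name, pix in samples.items()}

def profile_distance(a, b):
    """Euclidean distance between two mean profiles; a small value warns
    that the classifier may confuse the two classes."""
    return float(np.linalg.norm(profiles[a] - profiles[b]))
```

With these made-up numbers, Water sits far from Vegetation while Golf and Vegetation are nearly identical, which is exactly the conclusion the chart above suggests visually.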


Example of spectral profile

Of course, remotely sensed imagery with large-ish pixel sizes (e.g. Landsat with 30m resolution) is bound to have multiple land cover categories within a single pixel. Still, it’s important to create good training samples in regions where pixels are easily identifiable as a given land cover type, and these samples become even more important when working with lower resolution data or when trying to identify more land cover categories.

In this example, I used image classification to get an understanding of the amount of land used for agriculture in the Imperial Valley in Southern California, a region situated in the Colorado Desert with high temperatures and very little rainfall.

Imperial Valley in Imperial County, California


Scenario 1: With the Training Samples Manager

Using the Training Samples Manager in ArcGIS Pro to generate training samples allows you to create a feature class that’s already organized by class name and class ID according to a schema.

In this analysis, I’m using a schema made up of five land cover types: Barren, Planted/Cultivated, Shrubland, Developed, and Water. Using the drawing tools, I’ve created several training samples for each category. Each time I draw a new training sample, a new record is added to the list in the Training Samples Manager. If I tried to create a Spectral Profile Chart with that many training samples, I’d have to select every record for each land cover class. Instead, I’ll use the Collapse tool to combine all the training samples for a given class into a single record. Then I’ll click the Save button to save my training samples as a feature class.


Collapse training samples for each category

Scenario 2: Without the Training Samples Manager

If you have a feature class with training samples that you created outside of the Training Samples Manager, where each training site is a separate record in the feature class, you need to run the Dissolve geoprocessing tool before creating a chart if you want to see the average spectral profiles for all your training samples at once. Use the class name or class value as the Dissolve field to combine all records associated with a given land cover class into a single multi-part polygon.

To view the spectral profile for one training sample at a time interactively (e.g. to view each individual training site for Developed), skip this step entirely and start working with your chart.

Use Dissolve to "collapse" records in the training samples feature class

Charting the Spectral Profiles

At this point, using your imagery and the training samples feature class, you can create your spectral profiles chart:

  1. Right-click on the image to be classified in the Contents pane
  2. Select Create Chart > Spectral Profile
  3. In the Chart Properties pane, choose Mean Line as the Plot Type.
  4. Use the Feature Selector tool to select one of the polygons. Remember that because we used Collapse or Dissolve, selecting one polygon means you are selecting all the training sites for the land cover category represented by that polygon.
  5. Symbolize the profile lines to match the color of the land cover type and change the label name so you can easily assess the chart.
    Pro Tip: To change the label of the profile, type the name in the Label field on the Chart Properties pane and press Tab.
  6. Try out different chart types to see the types of information you can glean from them – do you see outliers? Consistent trends? Similar profiles? Distinct categories?

Below is the spectral profile chart I created using the imagery and training samples for the Imperial Valley study. I used the “Medium” (grey) theme in the chart to make it easier to view the profiles.

Spectral profile of land cover training samples in Imperial Valley study

Assessment of Spectral Profiles

At first glance, I can tell that the Planted/Cultivated, Water, and Barren land cover classes have profiles that are distinct enough that I can expect good initial results for classification of these classes. However, the Developed and Shrubland profiles are a little too close for comfort: they have the same general shape and the average reflectance values are similar at each wavelength. From this, I can choose whether I want to re-create my training samples or simply combine the two categories into a single class. Theoretically, combining the Shrubland and Developed into one class shouldn’t impact my analysis because my main focus is an accurate estimate of Planted/Cultivated land cover.

Before making my decision, I’ll take a deeper look at the data. The chart below is the same data in a Boxes plot, and I can hover my mouse over the boxes to get the statistics for each land cover class at each wavelength band.

From the Boxes chart, I can see that the Developed and Shrubland land cover classes have similar average values and similar distribution. However, the Developed land cover type has much higher maximum reflectance values across all wavelengths, and Shrubland has lower minimum values. This makes sense – I would expect developed areas (buildings, roads, parking lots, etc.) to be brighter in general than shrubby areas.

Since the Boxes chart tells me that the minimum and maximum values vary so much between the classes, combining these two classes into a single class could potentially confuse my classification model and impact the overall accuracy. Instead, I’m going to re-create the training samples for the Developed class to capture those higher reflectance values.

The charts below include the spectral profiles for my modified training samples.

Now, in the visible and near infrared bands especially, you can see distinctly higher reflectance values for the Developed land cover training sample data compared to the Shrubland spectral response. With these results, I would be comfortable moving forward with my classification workflow by training my model with all my training samples.

Extra Credit

For bonus points, I used the Multispectral Landsat image service from the Living Atlas to quickly visualize NDVI in the Imperial Valley area. Then I used a spectral profile chart to compare NDVI averages in different areas of interest for vegetation health assessment. Use the steps below to try it yourself:
  1. In ArcGIS Pro, open the Map tab and select Add Data.
  2. From the menu on the left, expand the Portal option and select Living Atlas. Use the Search box to search for “Multispectral Landsat.”
  3. Select the Multispectral Landsat image service and click OK.
  4. Zoom to Imperial Valley or your area of interest.
  5. Make sure the Multispectral Landsat service is highlighted in the Contents pane. In the Image Service contextual tab set, select the Data tab.
  6. In the Processing group, click the Processing Templates drop-down.
  7. Scroll down to NDVI Colorized. Select this template to display the colormap for NDVI.
  8. Right-click on the Multispectral Landsat image service in Contents and select Create Chart > Spectral Profile.
  9. Use the drawing tools to select multiple small areas of interest to compare NDVI distribution throughout the region.
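Under the hood, the NDVI Colorized template is computing the standard normalized difference before applying its colormap. A self-contained sketch of that calculation (band order is your responsibility; for Landsat 8, near-infrared is band 5 and red is band 4):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    Returns values in [-1, 1]; healthy vegetation is strongly positive.
    """
    nir = nir.astype('f8')
    red = red.astype('f8')
    # Guard against division by zero on masked or empty pixels.
    denom = nir + red
    out = np.zeros_like(denom)
    np.divide(nir - red, denom, out=out, where=denom != 0)
    return out
```

Comparing the mean NDVI inside each of your drawn areas of interest gives you the same vegetation-health comparison the spectral profile chart shows graphically.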


Want to know more?

Try the Image Classification Wizard tutorial

Learn more about the Training Samples Manager

Learn more about image classification

Learn more about charting tools

In Part I of this blog series, we explained what an ortho mapping workspace is and how to create one for digital aerial imagery. At this point, the imagery has been organized and managed so that we can access all the necessary metadata, information, tools and functionality to work with our imagery, but we haven’t yet performed a bundle block adjustment.


Ortho Mapping blog series part 2


Block adjustment is the process of adjusting the parameters in the image support data to get an accurate transformation between the image and the ground. The process is based on the relationship between overlapping images, control points, the camera model, and topography, and it computes a transformation for the group of images (a block). With aerial digital data, it relies on three key components:

  • Tie points – Common points that appear in overlapping images, tying the overlapping images to each other to minimize misalignment between the images. These are automatically identified by the software.
  • Ground control points – These are usually obtained with ground survey, and they provide references from features visible in the images to known ground coordinates.
  • Aerial triangulation – Computes an accurate camera model, ground position (X, Y, Z), and orientation (omega, phi, kappa) for each image, which are necessary to transform the images to match the control points and the elevation model.

When we created our workspace, we provided the Frames and Cameras tables, which contain the orientation and camera information needed to make up our camera model and to establish the relationship between the imagery and the ground. We also provided an elevation model which we obtained from the Terrain image service available through the Living Atlas of the World. Now we’re ready to move on to the next step in the ortho mapping process.

Performing a Block Adjustment for Digital Aerial Data


  1. In the ortho mapping workspace, open the Ortho Mapping tab and select Adjustment Options from the Adjust group. This is where we can define the parameters used in computing the block adjustment, which includes computing tie points. For more information on each parameter, check out the Adjustment Options help documentation.

Ortho Mapping Adjustment Options and GCP Import



  2. Next, we want to add Ground Control Points (GCPs) to our workspace to improve the overall georeferencing and accuracy of the adjustment. To do this, select the Manage GCPs tool in the Ortho Mapping tab and choose Import GCPs. We have a CSV table with X, Y and Z coordinates and accuracy to be used for this analysis.
    • If you have an existing table of GCPs, use this Import option and map the fields in the Import GCPs dialog for the X, Y, and Z coordinates, GCP label, and accuracy fields in your table. You may have photos of each GCP location for reference – if so, you can import the folder of photos for reference when you are measuring (or linking) the GCPs to the overlapping images.
    • You may also have secondary GCPs, or control points that were not obtained in a survey but from an existing orthoimage with known accuracy. You can import those here as well, or you can manually add them using the GCP Manager.
    • Once you have added GCPs to the workspace, use the GCP Manager to add tie points to the associated locations on each overlapping image. Select one of the GCPs in the GCP Manager table, then iterate through the overlapping images in the Image list below and use your cursor to place a tie point on the site that is represented by the GCP.


Add tie points for each GCP and change some to check points

A few notes:

Check Points: Be sure to change some of your GCPs to check points (right-click on the GCP in the GCP Manager and select “Change to Check Point”) so you can view the check point deviation in the Adjustment Report after running the adjustment. This essentially changes the point from a control point that facilitates the adjustment process to a control point that assesses the adjustment results. The icon in the GCP table will change from a circle to a triangle, and the check points appear as pink triangles in the workspace map.

Drone imagery: If you are performing a block adjustment with drone imagery, you must run the Adjust tool before adding GCPs. In this blog, we’re focusing on aerial digital data.


  3. Finally, we click the Adjust tool to compute the block adjustment. This will take some time – transforming a number of images so that they align with each other and the ground is complicated work – so get up, maybe do some stretches or get yourself a cup of coffee. The log window will let you know when the process is complete. When the adjustment is finished, you’ll see new options available in the Ortho Mapping tab that enable you to assess the results of the adjustment.


Assessing the Block Adjustment


  1. Run the Analyze Tie Points tool to generate QA/QC data in your ortho mapping workspace. The Overlap Polygons feature class contains control point coverage in areas where images overlap, and the Coverage Polygons feature class contains control point coverage for each image in the image collection. Inspect these feature classes to identify areas that need additional control points to improve block adjustment results.
QA/QC outputs in the ortho mapping workspace


  2. Open the Adjustment Report to view the components and results of the adjustment. Here you will find information about the number of control points used in the adjustment, the average residual error, tie point sets, and connectivity of overlapping imagery. In our case, the Mean Reprojection Error of our adjustment is 0.38 pixels.
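That reported error is, in essence, the average pixel distance between where each tie point was measured in an image and where the adjusted camera model re-projects it. A simplified sketch of the bookkeeping (not the adjustment solver itself):

```python
import numpy as np

def mean_reprojection_error(measured, reprojected):
    """Mean Euclidean distance, in pixels, between measured image points
    and the points predicted by the adjusted camera model.

    Both inputs are (N, 2) arrays of image (column, row) coordinates.
    """
    residuals = np.linalg.norm(np.asarray(measured, dtype='f8')
                               - np.asarray(reprojected, dtype='f8'), axis=1)
    return float(residuals.mean())
```

A mean residual well under one pixel, like the 0.38 pixels reported here, indicates a tight adjustment.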

Now what?

The block adjustment tools allow for an iterative computation, so that you can check on the quality of the adjustment, modify options, add or delete GCPs, or recompute tie points before re-running the adjustment. If you are unsatisfied with the error in the Adjustment Report, try adding GCPs in the Manage GCPs pane, or try modifying some of the Adjustment Options. You can also change some of your check points back into GCPs, and choose a few other GCPs to be your check points. Re-run the adjustment and see how this impacts the shift.

Once you are satisfied with the accuracy of your adjusted imagery, it’s time to make ortho products! Check out the final installment in our blog series to see how it’s done.

Any remote sensing image, whether it’s a drone image, aerial photograph, or data from a satellite sensor, will inherently be impacted by some form of geometric distortion. The shape of local terrain, the sensor angle and altitude, the motion of the sensor system, and the curvature of the Earth all make it difficult to represent three dimensional ground features accurately in a two dimensional map. Image orthorectification corrects for these types of distortion so you can have a measurable, map-accurate image.


Distortion caused by camera tilt and terrain displacement


Everyone working with GIS data to make well-informed decisions needs up-to-date information about the natural, man-made, and cultural features on the ground: roads, land cover types, buildings, water bodies, and other features that fill the landscape. Much of the vector data that describes these features was actually created from orthorectified imagery, and can be combined with new imagery to update your landbase.


A landbase is a layer or combination of layers making up the backdrop of your GIS


Esri’s Ortho Mapping suite enables you to orthorectify your remote sensing imagery to make it map-accurate. It also makes it easy to create other products like orthomosaics (mosaicked images corrected for distortion) and digital elevation models (terrain or surface models) which can be used as basemaps, part of a landbase, or for further analysis in stereo models, 3D analysis and feature extraction.


The workflow to create ortho mapping products will be presented in a three-part blog series, each with a short video:

  • Creating a workspace
  • Performing a block adjustment
  • Creating ortho mapping products


Let's get started!


Creating an Ortho Mapping Workspace


The first step in any project is getting organized – and creating an Ortho Mapping Workspace in ArcGIS Pro makes this easy to do.

The Ortho Mapping Workspace is a sub-project in ArcGIS Pro; it’s the interface you work with when interacting with ortho mapping workflows. The workspace is defined by the type of imagery you are working with (drone, aerial or satellite). In turn, the workspace is integrated with the tools and wizards to properly guide you through each step in the workflow. When you create a new workspace, an Ortho Mapping folder appears in your project folder structure in Catalog, and a new table of contents list view allows you to List By Ortho Mapping Entities. Again, the types of feature classes and tables you see in the Contents pane depend on the type of imagery you are working with.

Similar to Maps or Scenes within a project, a workspace is an object stored in the folder structure of a project and it can be accessed by other projects. All the feature classes and tables needed to orthorectify your imagery are created and managed in the workspace.

5 Simple Steps

Step 1: Open the Imagery tab in your ArcGIS Pro project. This is where you can analyze and manage any raster data you want to work with in Pro. In the Ortho Mapping group, you’ll see the New Workspace menu that allows you to create a New Ortho Mapping Workspace, add an existing Ortho Mapping Workspace with a reference to that workspace, or import an Ortho Mapping Workspace by creating a copy of an existing workspace and storing the new copy in your project. Select New Workspace.

Step 2: The New Ortho Mapping Workspace wizard appears. Here you’ll give your workspace a name (required) which identifies your project in the Contents and Catalog panes. You can also provide a description (optional) and you’ll select the type of imagery you want to import. In our workflow, we’re using aerial imagery acquired by Vexcel Imaging covering an area over Hollywood, California, so we’ll select Aerial – Digital as the type. Click Next.

Step 3: The Image Collection page opens. Here you’ll enter specific information about the type of sensor used to collect your imagery. You can choose from MATCH-AT, ISAT, or Applanix, or you can select the Generic Frame Camera, which requires you to provide the exterior and interior orientation information with the Frames tables and Cameras tables, respectively. Entering the Frames and Cameras information will provide the information necessary to correct for sensor-related distortion.

The Frames table has a specific schema that is required in the ortho mapping workspace for aerial imagery. It contains the exterior orientation and other information specific to each image comprising your image collection. The Cameras table contains all the camera calibration information for computing the interior orientation; you can add the camera information manually in the wizard or as a table. To edit the Camera parameters, hover over the Camera ID and click the Edit Properties button. You’ll also need to specify the Frame Spatial Reference, which should be provided with your data.

In this workflow, we used the exterior orientation information that was provided along with our source imagery to create the Frames table in the necessary schema. We then pointed to a table that has the information for one camera, with CameraID = 0 (see the screen shot below - there's a check mark next to the 0 under Cameras). 


*We are updating this experience in ArcGIS Pro 2.3 to make it more user-friendly!



Step 4: To correct for terrain displacement, you need to include an elevation source. The cool thing about working with the ArcGIS platform is that you can access the thousands of maps, apps and data layers available in the ArcGIS Living Atlas, so if you don’t have your own elevation data you can search for a layer and use it in your project. Here’s what we did:

  1. In our ArcGIS Pro project, zoom to the area of interest in Hollywood.
  2. On the Map tab, click Add Data.
  3. Select the Living Atlas option under the Portal group and search for "Terrain." Add the Terrain imagery layer. At first, you might not be able to see much variation in the terrain. Click on the Appearance tab under the Image Service Layer group and select DRA (Dynamic Range Adjustment) to stretch the terrain imagery in the extent you are viewing.
  4. In the Contents pane, right-click on the Terrain imagery layer and select Data > Export Raster. 
  5. In the Export Raster settings, specify the output raster dataset and set the Clipping Geometry to the Current Display Extent. 
  6. Click Export.


Now we can add our new DEM to the workspace. To do this, open the Data Loader Options pane in the Image Collection page. Click the browse button to navigate to the DEM created above, or use your own DEM.

Step 5: Finally, we left all the other values as default and clicked Finish.





The log window will tell you how the creation of the workspace is coming along; if there are any problems, an error message will be displayed. When it’s complete, you’ll see the new Ortho Mapping Entities in your Contents pane: various control points including Ground Control Points, Check Points and Tie Points; the mosaic dataset that was created using your source data; and placeholders for Data Products, Solution Data, and QA/QC Data that haven’t been created yet.


Make sure to zoom and pan around the map to check out your Image Collection. With the Image Collection selected in the Contents pane, you can open the Data tab from the Mosaic Layer context menu. Here you can change the Sort and Overlap options for your mosaic dataset. We recommend using the Closest to Center or Closest to Nadir options for viewing.


Now that you have all your ortho mapping components organized in your workspace, the next step is to block adjust your data to make sure it’s map-accurate. Stay tuned for the next part of this blog series, Ortho Mapping with Aerial Data Part II: Getting Adjusted, where we’ll show you how to perform a block adjustment to make sure your data is ready for product generation and stereo compilation!



In this article, we showed you how to set up an ortho mapping workspace for aerial imagery. For an example of how to set up an ortho mapping workspace for satellite data, check out this short video!


Many thanks to Jeff Liedtke for co-authoring this article!