
Imagery and Remote Sensing


Users of the Oriented Imagery Catalog Management Tools in ArcGIS Pro 2.5 may have encountered a crash when browsing for an Oriented Imagery Catalog (OIC) as input in any of the tools in the Oriented Imagery Catalog toolbox. 

 

This bug will be fixed in the next release of ArcGIS Pro, but there is a workaround in the meantime: don't click the Browse folder icon to navigate to your OIC. Instead, copy the path to the OIC file and paste it into the input field of the GP tool.

 

To do this in Windows:

  1. Open Windows File Explorer.
  2. Browse to the OIC file. (If you’ve created this in your project’s geodatabase, the OIC file will be located by default at C:\Users\[username]\Documents\ArcGIS\Projects\[Project Name]\[OIC name].)
  3. Select the OIC file, then click Copy Path. (You may have to remove any quotation marks around the file path.)

   Screenshot of Windows File Explorer

  4. In ArcGIS Pro, paste the path into the Input Oriented Imagery field of the GP tool.
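If you are scripting this workaround, the stray quotation marks from step 3 can be stripped programmatically. A minimal Python sketch (the helper name and the example path are hypothetical):

```python
# Hypothetical helper: strip the quotation marks that "Copy Path" adds
# around the copied file path before pasting it into the GP tool.
def clean_copied_path(raw: str) -> str:
    return raw.strip().strip('"')

# Example with a made-up OIC path:
print(clean_copied_path('"C:\\Users\\jdoe\\Documents\\ArcGIS\\Projects\\Demo\\MyCatalog.oic"'))
```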

Drone2Map version 2.1 is now available. Current users can open “About” in the main menu on the left side of the screen to verify their version, and download the new version if necessary. You can also download it from My Esri.

 

 

What’s new in Drone2Map for ArcGIS version 2.1? 

In this release we continue to improve the user experience in many areas of the workflow.

 

Camera Model Editor

  • Esri maintains an internal camera database which is updated along with Drone2Map several times per year. In addition to the internal camera database, Drone2Map also has a user camera database. With the Camera Model Editor, users are now able to edit existing cameras from the internal camera database and store the modified camera models in the user camera database.

  • An important use case for this capability is support for high-quality metric cameras, where photogrammetric lens parameters such as focal length, principal point, and distortion are stable and known. Because Drone2Map also supports consumer cameras, these parameters are adjusted during processing by default. For metric cameras, the Camera Model Editor allows users to input known, high-accuracy parameters and maintain those values throughout processing.

  • Additionally, once a project has been processed successfully and you are happy with the results, the .d2mx file from that project may be imported into the Camera Model Editor of a new project. The optimized camera parameters from the imported project are then stored in the user camera database and can be reused in future processing jobs, helping to standardize results and reduce processing times.

 

Control Updates

  • In this release there is an improved user experience for managing control using the Control Manager.  Users can view properties of each control point, filter based on the type of control, and launch the links editor, all with a few button clicks.
  • For some geographic features, such as water, automated algorithms can struggle to generate sufficient tie points and match them successfully. Users can now create and link manual tie points to images to successfully process imagery in geographic areas that previously caused problems.

  • Linking control to your images can be a time-consuming process. In Drone2Map 2.1, we have introduced assisted image links. This workflow requires initial processing to be run; after you enter one link, the software automatically finds your control markers in subsequent images and provides visual feedback on the accuracy of each link. Once satisfied with the positioning of the control relative to the images, simply click Auto Link and Drone2Map will link the verified control for you.

 

 

Share DEM as Elevation Layer

  • Drone2Map users are now able to publish their own custom surfaces on ArcGIS Online or ArcGIS Portal for either an ortho reference DTM or top surface DSM. These surfaces can be used in 3D web scenes to ensure accurate height values for point clouds and meshes generated by Drone2Map.

 

 

Add custom DEM into the Drone2Map project

  • Users may add their own elevation surface into the project (on top of the default World Terrain surface), to ensure that any 3D views incorporate the authoritative elevation surface.  This can be very useful in project areas that are captured on multiple dates (e.g. agriculture) and/or where an accurate input terrain is important (e.g. an airport, construction site, or a site with material stockpiles).

  • In addition, if ground control points are subsequently extracted from the map, the Z values are provided by the custom elevation surface. This is important to ensure date-to-date consistency for sites that are captured repeatedly and analyzed over time.

 

Elevation Profile and Spectral Profile for additional analytical capabilities

  • Users are now able to generate cross-sectional elevation profiles in any Drone2Map project that is processed to create output surfaces (DSM and/or DTM).

Imagery provided by GeoCue Group, Inc.

 

  • For users with multispectral cameras, Drone2Map also allows extraction of spectral profiles (defined by point samples, linear transects, or 2D areas of interest) to support detailed analysis of vegetation or other landcover surface types.
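As a rough illustration of the idea behind point-sample spectral profiles (pure Python, not Drone2Map's implementation), a profile is just the per-band pixel value at each sample location:

```python
# Sketch (not the Drone2Map implementation): extracting a spectral profile
# at point samples from a multiband raster held as a list of 2D bands.
def spectral_profile(bands, points):
    """Return, for each (row, col) point, the pixel value in every band."""
    return [[band[r][c] for band in bands] for (r, c) in points]

# Two made-up bands (e.g. red and near-infrared) for a 2x2 image:
red = [[10, 20], [30, 40]]
nir = [[50, 60], [70, 80]]
print(spectral_profile([red, nir], [(0, 0), (1, 1)]))  # [[10, 50], [40, 80]]
```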

 

 

Colorized Indices

  • Indices created from multispectral imagery products are now colorized by default.

 

New Inspection Template

  • An inspection template has been added for all users who wish to create projects focused on inspecting, annotating, and sharing raw drone images.

 

Browse performance improvements

  • Performance has been improved when browsing folders and files on disk.

Exif reader improvements

  • Reading and extracting Exif data from drone images is now faster, significantly reducing the time required to create a project.

Licensing Changes

  • Drone2Map for ArcGIS 2.1 is a “premium app” which is a for-fee add-on to ArcGIS Online or ArcGIS Enterprise.

 

Full release notes for Drone2Map 2.1 are available here.

The ArcGIS Image Analyst extension for ArcGIS Pro 2.5 now features expanded deep learning capabilities, enhanced support for multidimensional data, enhanced motion imagery capabilities, and more.

Learn about the new imagery and remote sensing features added in this release to improve your image visualization, exploitation, and analysis workflows.

Deep Learning

We’ve introduced several key deep learning features that offer a more comprehensive and user-friendly workflow:

  • The Train Deep Learning Model geoprocessing tool trains deep learning models natively in ArcGIS Pro. Once you’ve installed relevant deep learning libraries (PyTorch, Fast.ai and Torchvision), this enables seamless, end-to-end workflows.
  • The Classify Objects Using Deep Learning geoprocessing tool is an inferencing tool that assigns a class value to objects or features in an image. For instance, after a natural disaster, you can classify structures as damaged or undamaged.
  • The new Label Objects For Deep Learning pane provides an efficient experience for managing and labelling training data. The pane also provides the option to export your deep learning data.
  • A new user experience lets you interactively review deep learning results and edit classes as required.
New deep learning tools in ArcGIS Pro 2.5


Multidimensional Raster Management, Processing and Analysis

New tools and capabilities for multidimensional analysis allow you to extract and manage subsets of a multidimensional raster, calculate trends in your data, and perform predictive analysis.

New user experience

A new contextual tab in ArcGIS Pro makes it easier to work with multidimensional raster layers or multidimensional mosaic dataset layers in your map.

Intuitive user experience to work with multidimensional data


  • You can intuitively work with multiple variables and step through time and depth.
  • You have direct access to the new functions and tools that are used to manage, analyze and visualize multidimensional data.
  • You can chart multidimensional data using the temporal profile, which has been enhanced with spatial aggregation and charting trends.

New tools for management and analysis

The new multidimensional functions and geoprocessing tools are listed below.

New geoprocessing tools for management

We’ve added two new tools to help you extract data along specific variables, depths, time frames, and other dimensions:

  • Subset Multidimensional Raster
  • Make Multidimensional Raster Layer

New geoprocessing tools for analysis

  • Find Argument Statistics allows you to determine when or where a given statistic was reached in a multidimensional raster dataset. For instance, you can identify when maximum precipitation occurred over a specific time period.
  • Generate Trend Raster estimates the trend for each pixel along a dimension for one or more variables in a multidimensional raster. For example, you might use this to understand how sea surface temperature has changed over time.
  • Predict Using Trend Raster computes a forecasted multidimensional raster using the output trend raster from the Generate Trend Raster tool. This could help you predict the probability of a future El Niño event based on trends in historical sea surface temperature data.
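To make the per-pixel math concrete, here is a pure-Python sketch of what a linear trend along time and an "argument statistic" such as the time of the maximum look like for a single pixel's time series (illustrative only, not the ArcGIS implementation; the sample values are made up):

```python
# Illustrative sketch of what Generate Trend Raster and Find Argument
# Statistics compute per pixel: an ordinary-least-squares slope along the
# time dimension, and the time index at which the maximum value occurs.
def linear_trend(series):
    n = len(series)
    t_mean = (n - 1) / 2
    y_mean = sum(series) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(series))
    den = sum((t - t_mean) ** 2 for t in range(n))
    return num / den  # units per time step

def arg_max(series):
    return max(range(len(series)), key=series.__getitem__)

sst = [20.1, 20.4, 20.3, 20.9, 21.0]   # one pixel's (made-up) time series
print(round(linear_trend(sst), 3))      # 0.23
print(arg_max(sst))                     # 4
```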

Additionally, the following tools have improvements that support new analytical capabilities:

New raster functions for analysis

  • Generate Trend
  • Predict Using Trend
  • Find Argument Statistics
  • Linear Spectral Unmixing
  • Process Raster Collection

New Python raster objects

Developers can take advantage of new classes and functions added to the Python raster object that allow you to work with multidimensional rasters.

New classes include:

  • ia.RasterCollection – The RasterCollection object allows a group of rasters to be sorted and filtered easily and prepares a collection for additional processing and analysis.
  • ia.PixelBlock – The PixelBlock object defines a block of pixels within a raster to use for processing. It is used in conjunction with the PixelBlockCollection object to iterate through one or more large rasters for processing.
  • ia.PixelBlockCollection – The PixelBlockCollection object is an iterator of all PixelBlock objects in a raster or a list of rasters. It can be used to perform customized raster processing on a block-by-block basis, when otherwise the processed rasters would be too large to load into memory.
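The block-by-block idea behind PixelBlockCollection can be sketched in plain Python (conceptual only; this does not use the arcpy.ia API):

```python
# Conceptual sketch of block-by-block processing in the spirit of
# PixelBlockCollection: process a large raster in tiles so the whole
# image never has to be loaded into memory at once.
def iter_blocks(raster, block_size):
    rows, cols = len(raster), len(raster[0])
    for r0 in range(0, rows, block_size):
        for c0 in range(0, cols, block_size):
            yield [row[c0:c0 + block_size] for row in raster[r0:r0 + block_size]]

raster = [[r * 4 + c for c in range(4)] for r in range(4)]
# Sum each 2x2 block independently:
sums = [sum(v for row in block for v in row) for block in iter_blocks(raster, 2)]
print(sums)  # [10, 18, 42, 50]
```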

New functions include:

  • ia.Merge() – Creates a raster object by merging a list of rasters spatially or across dimensions.
  • ia.Render(inRaster, rendering_rule={…}) – Creates a rendered raster object by applying symbology to the referenced raster dataset. This function is useful when displaying data in a Jupyter notebook.
  • Raster functions for arcpy.ia – You can now use almost all of the raster functions to manage and analyze raster data using the arcpy API.
New tools to analyse multidimensional data


Motion Imagery

This release includes enhancements to our motion imagery support, so you can better manage and interactively use video with embedded geospatial metadata:

  • You can now enhance videos in the video player using contrast, brightness, saturation, and gamma adjustments. You can also invert the color to help identify objects in the video.
  • Video data in multiple video players can be synchronized for comparison and analysis.
  • You can now measure objects in the video player, including length, area, and height.
  • You can list and manage videos added to your project with the Video Feed Manager.
Motion imagery in ArcGIS Pro

Pixel Editor

The Pixel Editor provides a suite of tools to interactively manipulate pixel values of raster and imagery data. Use the toolset for redaction, cloud and noise removal, or to reclassify categorical data. You can edit an individual pixel or a group of pixels at once. Apply editing operations to pixels in elevation datasets and multispectral imagery. Key enhancements in this release include the following:

  • Apply a custom raster function template to regions within the image
  • Interpolate elevation surfaces using values from the edges of a selected region

Additional resources

Using your knowledge of geography, geospatial and remote sensing science, and using the image classification tools in ArcGIS, you have produced a pretty good classified raster for your project area. Now it’s time to clean up some of those pesky pixels that were misclassified – like that one pixel labelled “shrub” in the middle of your baseball diamond. The fun part is using the Pixel Editor to interactively edit your classified raster data to be useful and accurate. The resulting map can be used to drive operational applications such as land use inventory and management.

 

For operational management of land use units, a useful classified map may not necessarily be the most accurate in terms of identified features. For example, a small clearing in a forest, cars in a parking lot, or a shed in a backyard are not managed differently than the larger surrounding land use. The Pixel Editor merges and reclassifies groups of pixels, objects and regions quickly and easily into units that can be managed similarly, and result in presentable and easy-to-understand maps for your decision support and management.

 

What is the Pixel Editor?

The Pixel Editor is an interactive group of tools that enables editing of raster data and imagery, and it is included with the ArcGIS Image Analyst extension for ArcGIS Pro. It is a suite of image processing capability, driven by an effective user interface, that allows you to interactively manipulate pixel values. Try different operations with different parameter settings to achieve optimum editing results, then save, publish, and share them.

 

The Pixel Editor is contextual to the raster source type of the layer being edited, which means that suites of capability are turned on or off depending on the data type of the layer you are working with. For thematic data, you can reassign pixels, objects and regions to different classes, perform operations such as filtering, shrinking or expanding classes, masking, or even create and populate new classes. Edits can be saved, discarded, and reviewed in the Edits Log.

 

Pixel Editor in action

Because the Pixel Editor is contextual, you need to first load the layer you want to edit. Two datasets are loaded into ArcGIS Pro, the infrared source satellite image and the classified result. The source data is infrared satellite imagery where vegetation is depicted in shades of red depending on coverage and relative vigor. This layer has been classified using the Random Trees classifier in ArcGIS Pro. The class map needs editing to account for classification discrepancies and to support operational land use management.

 

Launch the Pixel Editor

To launch the Pixel Editor, select the classified raster layer in the Contents pane, go to the Imagery tab and click the Pixel Editor button from the Tools group.


The Pixel Editor tab will open. In this example, we’ll be editing a land use map, so the editor will present you with editing tools relevant for thematic data.

The Reclassify dropdown menu

The Region group provides tools for delineating and managing a region of interest. The Edit group provides tools to perform specific operations to reclassify pixels, objects or regions of interest. The Edit group also provides the Operations gallery, which only works on Regions.

 

Reclassify

Reclassify is a great tool to reassign a group of pixels to a different class. In the example below, you can see from the multispectral image that either end of the track infield is in poor condition with very little vegetation, which resulted in that portion of the field being incorrectly classified. We want to reclassify these areas as turf, which is colored bright green in the classified dataset.

 

Infrared image and associated classmap needing edits.

We used the multispectral image as the backdrop to more easily digitize the field, then simply reassigned the incorrect class within the region of interest to the Turf class.

Edited classmap

Majority Filter and Expand
Check out the parking lot south of the track field containing cars, which are undesirable in terms of classified land use. We removed the cars and made the entire parking lot Asphalt with a two-step process:

Parking lot before editing
(1) We digitized the parking lot and removed the cars with a Majority Filter operation with a filter size of 20 pixels – the size of the biggest cars in the lot.

(2) Then we used Expand to reclassify any remaining pixels within the lot to Asphalt.

Parking lot after Majority Filter and Expand operations
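Conceptually, the majority filter in step (1) replaces each pixel's class with the most common class in its neighborhood. A minimal pure-Python sketch of that idea (not the Pixel Editor implementation; the class codes are made up):

```python
# Rough sketch of a majority filter over a class grid: each pixel takes
# the most common class in its 3x3 neighborhood (clipped at the edges).
from collections import Counter

def majority_filter(grid):
    rows, cols = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            neighborhood = [grid[rr][cc]
                            for rr in range(max(0, r - 1), min(rows, r + 2))
                            for cc in range(max(0, c - 1), min(cols, c + 2))]
            out[r][c] = Counter(neighborhood).most_common(1)[0][0]
    return out

ASPHALT, CAR = "A", "C"
lot = [[ASPHALT, ASPHALT, ASPHALT],
       [ASPHALT, CAR,     ASPHALT],
       [ASPHALT, ASPHALT, ASPHALT]]
print(majority_filter(lot))  # the lone CAR pixel becomes ASPHALT
```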

Add a new class

Another great feature of the Pixel Editor is the ability to add a new class to your classified raster. Here, we added a Water class to account for water features that we missed in the first classification.

Add new class

New class WATER was added to the classmap

In the New Class drop-down menu, you can add a new class, provide its name, class codes, and define a color for the new class display.

After adding the new class to the class schema, we used the Reclass Object tool to reassign the incorrect Shadow class to the correct Water class. Simply click the object you want to reclassify and encompass it within the circle, and voilà! The object is reclassified to Water.

Reclass incorrect class "Shadow" to correct class "Water"
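Under the hood, reclassifying a clicked object amounts to a connected-component fill. A minimal flood-fill sketch of the idea (illustrative only, not the actual tool; the class names are made up):

```python
# Flood-fill sketch of reclassifying one connected object: starting from
# a clicked pixel, reassign every 4-connected pixel of the same class.
def reclass_object(grid, start, new_class):
    rows, cols = len(grid), len(grid[0])
    old = grid[start[0]][start[1]]
    if old == new_class:
        return grid
    stack = [start]
    while stack:
        r, c = stack.pop()
        if 0 <= r < rows and 0 <= c < cols and grid[r][c] == old:
            grid[r][c] = new_class
            stack += [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]
    return grid

classmap = [["shadow", "shadow", "turf"],
            ["shadow", "turf",   "turf"]]
print(reclass_object(classmap, (0, 0), "water"))
# [['water', 'water', 'turf'], ['water', 'turf', 'turf']]
```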

 

Feature to Region

Sometimes you may have an existing polygon layer with more accurate class polygon boundaries. These could be building footprints, roads, wetland polygons, water bodies and more. Using the Feature to Region option you can easily create a region of pixels to edit by clicking on the desired feature from your feature layers in the map. Then use the Reclass by Feature tool to assign the proper class.

Region from Feature Edit

The updated water body now matches the polygon feature from the feature class. The class was also changed from Shadow to its correct value, Water.

 

Summary

The Pixel Editor provides a fast, easy, interactive way to edit your classified rasters. You can edit groups of pixels and objects, and editing operations include reclassification using filtering, expanding and shrinking regions, or by simply selecting or digitizing the areas to reclassify. You can even add an entire new class. Try it out with your own data, and see how quickly you can transform a good classification data set into an effective management tool!

 

Acknowledgement

Thanks to the co-author, Eric Rice, for his contributions to this article.

Do you have blemishes in your image products, such as clouds and shadows that obscure interesting features, or DEMs that don’t represent bare earth? Or perhaps you want to obscure certain confidential features, or correct erroneous class information in your classmap. The Pixel Editor can help you improve your final image products.

 

After you have conducted your scientific remote sensing and image analysis, your results need to be presented to your customers, constituents and stakeholders. Your final products need to be correct and convey the right information for decision support and management. The pixel editor helps you achieve this last important aspect of your workflow – effective presentation of results.

 

Introducing the Pixel Editor

The Pixel Editor, in the Image Analyst extension, provides a suite of tools to interactively manipulate pixel values for raster and imagery data. It allows you to edit an individual pixel or groups of pixels. The types of operations that you can perform depends on the data source type of your raster dataset.

The Pixel Editor tools allow you to perform a wide range of editing tasks on your raster datasets.

Blog Series

We will present a series of blogs addressing the robust capabilities of the Pixel Editor. We will focus on real-world practical applications for improving your imagery products, and provide tips and best practices for getting the most out of your imagery using the Pixel Editor. Stay tuned!

 

Your comments, inputs and application examples of the Pixel Editor capability are very welcome and appreciated!

In the aftermath of a natural disaster, response and recovery efforts can be drastically slowed down by manual data collection. Traditionally, insurance assessors and government officials have to rely on human interpretation of imagery and site visits to assess damage and loss. But depending on the scope of a disaster, this necessary process could delay relief to disaster victims.

Article Snapshot: At this year’s Esri User Conference plenary session, the United Services Automobile Association (USAA) demonstrated the use of deep learning capabilities in ArcGIS to perform automated damage assessment of homes after the devastating Woolsey fire. This work was a collaborative prototype between Esri and USAA to show the art of the possible in doing this type of damage assessment using the ArcGIS platform.

The Woolsey Fire burned for 15 days, burning almost 97,000 acres, and damaging or destroying thousands of structures. Deep learning within ArcGIS was used to quickly identify damaged structures within the fire perimeter, fast tracking the time for impacted residents and businesses to have their adjuster process the insurance claims.

The process included capturing training samples, training the deep learning model, running inferencing tools and detecting damaged homes – all done within the ArcGIS platform. In this blog, we’ll walk through each step in the process.

Step 1: Managing the imagery

Before the fires were extinguished, DataWing flew drones in the fire perimeter and captured high resolution imagery of impacted areas. The imagery totaled 40 GB in size and was managed using a mosaic dataset. The mosaic dataset is the primary image management model for ArcGIS to manage large volumes of imagery.

Step 2: Labelling and preparing training samples

Prior to training a deep learning model, training samples must be created to represent areas of interest – in this case, USAA was interested in damaged and undamaged buildings. The building footprint data provided by LA County was overlaid on the high resolution drone imagery in ArcGIS Pro, and several hundred homes were manually labelled as Damaged or Undamaged (a new field called “ClassValue” in the building footprint feature class was attributed with this information). These training features were used to export training samples using the Export Training Data for Deep Learning tool in ArcGIS Pro, with the metadata output format set to ‘Labeled Tiles’.

Resultant image chips (Labeled Tiles used for training the Damage Classification model)

Step 3: Training the deep learning model

ArcGIS Notebooks was used for training purposes. ArcGIS Notebooks is pre-configured with the necessary deep learning libraries, so no extra setup was required. With a few lines of code, the training samples exported from ArcGIS Pro were augmented. Using the arcgis.learn module in the ArcGIS Python API, optimum training parameters for the damage assessment model were set, and the deep learning model was trained using a ResNet34 architecture to classify all buildings in the imagery as either damaged or undamaged.

               
The model converged around 99% accuracy

Once complete, the ground truth labels were compared to the model classification results to get a quick qualitative idea on how well the model performed.

Model Predictions

For complete details on the training process, see our post on Medium.

Finally, with the model.save() function, the model can be saved and used for inferencing purposes.

Step 4: Running the inferencing tools

Inferencing was performed using the ArcGIS API for Python. By running inferencing inside of ArcGIS Enterprise using the model.classify_features function in Notebooks, we can take the inferencing to scale.

The result is a feature service that can be viewed in ArcGIS Pro. (Here’s a link to the web map).

Over nine thousand buildings were automatically classified using deep learning capabilities within ArcGIS!

The map below shows the damaged buildings marked in red, and the undamaged buildings in green. With 99% accuracy, the model approaches the performance of a trained adjuster: what used to take days or weeks can now be done in a matter of hours.

Inference results

Step 5: Deriving valuable insights

Business Analyst: Now that we had a better understanding of the impacted area, we wanted to understand who were the members impacted by the fires. When deploying mobile response units to disaster areas, it’s important to know where the most at-risk populations are located, for example, the elderly or children. Using Infographics from ArcGIS Business Analyst, we extracted valuable characteristics and information about the impacted community and generated a report to help mobile units make decisions faster.

Get location intelligence with ArcGIS Business Analyst

Operations Dashboard: Using an operations dashboard containing enriched feature layers, we created easy, dynamic access to the status of any structure, the value of the damaged structures, the affected population, and much more.

            

Summary:

Using deep learning, imagery and data enrichment capabilities in the ArcGIS platform, we can quickly distinguish damaged from undamaged buildings, identify the most at-risk populations, and organizations can use this information for rapid response and recovery activities.

 More Resources:

Deep Learning in ArcGIS Pro

Distributed Processing using Raster Analytics

Image Analysis Workflows

Details on the model training of the damage assessment 

ArcGIS Notebooks

ABOUT THE AUTHORS

Vinay Viswambharan

Product manager on the Imagery team at Esri, with a zeal for remote sensing and everything imagery.

Rohit Singh

Development Lead - ArcGIS API for Python. Applying deep learning to the Science of Where @Esri. https://twitter.com/geonumist

The new Getting to Know ArcGIS Image Analyst guide gives GIS professionals and imagery analysts hands-on experience with the functionality available with the ArcGIS Image Analyst extension.

It’s a complete training guide to help you get started with complex image processing workflows. It includes a checklist of tutorials, videos and lessons along with links to additional help topics.

 

Task Checklist for getting started with ArcGIS Image Analyst

 

This guide is useful to anyone interested in learning how to work with the powerful image processing and visualization capabilities available with ArcGIS Image Analyst. Complete the checklist provided in the guide and you’ll get hands-on experience with:

 

  • Setting up ArcGIS Image Analyst in ArcGIS Pro
  • Extracting features from imagery using machine learning image classification and deep learning methods
  • Processing imagery quickly using raster functions
  • Visualizing and creating data in a stereo map
  • Creating and measuring features in image space
  • Working with Full Motion Video

 

Download the guide and let us know what you think! Take the guide survey to provide us with direct feedback.

ABOUT THE AUTHOR

The ArcGIS Pro 2.3.2 software patch enables mosaic datasets created or modified by Pro 2.3 and 10.7 to be read and modified by earlier versions (ArcGIS Pro 2.1 and 10.5 or later).

 

If you created or modified a mosaic dataset using Pro 2.3 or 10.7, you can update it and make it compatible with earlier versions by following the steps below.

 

  1. Open Pro 2.3.2.
  2. In the Catalog pane, navigate to your mosaic dataset. Right-click and select Properties from the drop-down menu.
  3. Click Defaults, which displays the image properties. Scroll down to Maximum Number of Rasters Per Mosaic and change the value to any number. Press <Tab> to update the field.
  4. Change the Maximum Number of Rasters Per Mosaic property back to the original value and press <Tab> to update the field again. 

 

This resets the mosaic dataset object to the new Pro 2.3.2 version.

 

Update to ArcGIS Pro 2.3.2 by going to My Esri or by using the in-app software updater.

If you create or modify a mosaic dataset in Pro 2.3, it can only be read and modified by ArcGIS Pro 2.3 and ArcMap 10.7 and served with ArcGIS Image Server 10.7 or newer. If you intend to publish your mosaic dataset to an image server prior to 10.7, do not create or edit it using Pro 2.3.

 

Note that for ArcGIS Pro 2.3, significant changes were made to the internal structure of the mosaic dataset so once modified using Pro 2.3, the updated mosaic dataset cannot be read on older versions.

 

In general, mosaic datasets created with older versions of ArcGIS can be read and handled with newer versions of ArcGIS. However, a mosaic dataset created with a newer version of ArcGIS may not be backwards compatible with older versions.

 

See the table below for mosaic dataset compatibility:

 

 Mosaic Dataset compatibility between versions

 

Users have sometimes been able to read a mosaic dataset created with a newer version in an older version, provided the dataset does not use any new features of that version. However, this may cause incompatibility issues.

 

Solution

The ArcGIS Pro 2.3.2 software patch enables mosaic datasets created or modified by Pro 2.3 and 10.7 to be read and modified by earlier versions (ArcGIS Pro 2.1 and 10.5 or later). Read more about it by clicking here.

Do you have imagery from an aerial photography camera (whether a modern digital camera or scanned film) and the orientation data either by direct georeferencing or the results of aerial triangulation? If yes, you’ll want to work with a mosaic dataset, and load the imagery with the proper raster type.

 

The mosaic dataset provides the foundation for many different use cases, including:

  • On-the-fly orthorectification of images in a dynamic mosaic, for direct use in ArcGIS Pro or sharing through ArcGIS Image Server.
  • Production of custom basemaps from source imagery.
  • Managing and viewing aerial frame imagery in stereo
  • Accessing images in their Image Coordinate System (ICS).  


There are different raster types that support the photogrammetric model for frame imagery. If you have existing orientation data from ISAT or Match-AT, you can use the raster types with those names to directly load the data (see Help here).

 

For a general frame camera, you’ll want to know how to use the Frame Camera raster type and we have recently updated some helpful resources:  

UI for automated script

 

Further information:

  • Note that if your imagery is oblique, the Frame Camera raster type supports multi-sensor oblique images. Refer to http://esriurl.com/FrameCameraBestPractices for configuration advice.
  • If you want to extract a digital terrain model (DTM) from the imagery, or improve the accuracy of the aerial triangulation, see the Ortho Mapping capabilities of ArcGIS Pro (advanced license). http://esriurl.com/OrthoMapping.
  • If you are seeking additional detail on the photogrammetric model used within the Frame Camera raster type, see this supplemental document http://esriurl.com/FrameCameraDetailDoc
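To make the photogrammetric model a little more concrete, here is a minimal numpy sketch of the standard collinearity projection that frame camera models are built on. The function name, rotation, and 153 mm focal length are illustrative assumptions for this post, not Esri's actual implementation:

```python
import numpy as np

def ground_to_image(ground_pt, camera_pos, rotation, focal_len_mm):
    """Project a ground point into frame camera image coordinates (mm)
    using the standard collinearity equations. Illustrative only."""
    # Vector from the camera's perspective center to the ground point,
    # rotated into the image coordinate system.
    d = rotation @ (np.asarray(ground_pt, float) - np.asarray(camera_pos, float))
    x = -focal_len_mm * d[0] / d[2]
    y = -focal_len_mm * d[1] / d[2]
    return x, y

# A nadir-looking camera (omega = phi = kappa = 0) flying 1000 m above
# the ground projects the point directly below it to the image center:
R = np.eye(3)
x, y = ground_to_image([0, 0, 0], [0, 0, 1000], R, 153.0)
```

The interior orientation (focal length, principal point, distortion) and exterior orientation (position and rotation) in this sketch are exactly the parameters the raster type reads from your camera and orientation files.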

Did you know there is a huge repository of powerful Python Raster Functions that you can use for raster analysis and visualization? On the Esri/raster-functions repository on GitHub, you can browse, download, and utilize customized raster functions for on-the-fly processing on your desktop or in the cloud.

Esri's raster functions GitHub repository

What are Python raster functions, you ask?

A raster function is a sneaky way to perform complex raster analysis and visualization without taking up more space on your disk or more time in your day, with on-the-fly processing. A single raster function performs an analysis on an input raster, then displays the result on your screen. No new dataset is created, and pixels get processed as you pan and zoom around the image. You can connect multiple raster functions in a raster function chain and you can turn it into a raster function template by setting parameters as variables.

A Python raster function is simply a custom raster function. A lot of raster functions come with ArcGIS out-of-the-box, but if you don’t find what you’re looking for or you want to create something specific to your needs, you can script your own with Python.
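To give a feel for the interface, here is a minimal, hypothetical Python raster function that rescales pixel values to the range 0–1. The method names (getParameterInfo, updateRasterInfo, updatePixels) follow the pattern used by the functions in the Esri/raster-functions repository; the class itself is an illustrative sketch, not one of the shipped functions:

```python
import numpy as np

class Scale01:
    """Hypothetical Python raster function: linearly rescale each
    requested pixel block to the range 0-1, on the fly."""

    def getParameterInfo(self):
        # Declare the tool's inputs: a single raster parameter.
        return [{
            'name': 'raster',
            'dataType': 'raster',
            'required': True,
            'displayName': 'Input Raster',
            'description': 'The raster to rescale.',
        }]

    def updateRasterInfo(self, **kwargs):
        # The output is a floating-point raster.
        kwargs['output_info']['pixelType'] = 'f4'
        return kwargs

    def updatePixels(self, tlc, shape, props, **pixelBlocks):
        # Called for each visible block as you pan and zoom --
        # no new dataset is ever written to disk.
        pixels = pixelBlocks['raster_pixels'].astype('f4')
        lo, hi = pixels.min(), pixels.max()
        pixelBlocks['output_pixels'] = (pixels - lo) / max(hi - lo, 1e-9)
        return pixelBlocks
```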

There are a lot of Python raster functions already written and posted for everyone to use, and they’re easy to download and use in ArcGIS. And some of them are unbelievably cool.

For example: Topographic Correction function

The Topographic C Correction function, written by Gregory Brunner from the St. Louis Regional Services office, essentially removes the hillshade from orthophotos. As you can imagine, imagery over mountainous areas or regions with rugged terrain can be difficult to classify accurately because pixels may belong to the same land cover class but some fall into shadow due to varying slopes and aspects. With the topographic correction function, you can get a better estimate of pixel values that would otherwise be impacted by hillshade. The result is a sort of flattening of the image, and it involves some fairly complex math.

Hillshade removal effect
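For the curious, the C-correction method at the heart of functions like this can be sketched in a few lines of numpy. This is a simplified illustration of the general formula, not the repository function itself, and the constant values are assumptions:

```python
import numpy as np

def c_correction(band, cos_i, sun_zenith_deg, c):
    """C-correction: L_corrected = L * (cos(sz) + c) / (cos(i) + c),
    where sz is the solar zenith angle, i is the local solar incidence
    angle (derived from slope and aspect), and c is a band-specific
    regression constant. Illustrative sketch only."""
    cos_sz = np.cos(np.radians(sun_zenith_deg))
    return band * (cos_sz + c) / (cos_i + c)
```

On flat terrain the incidence angle equals the solar zenith angle, so the correction factor is 1 and pixel values pass through unchanged; on slopes tilted away from the sun, cos(i) shrinks and the pixel values are brightened, which is the "flattening" effect described above.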

Why should you care?

Okay, so now you know there’s a repository of Python raster functions. What’s next?

  1. Explore the functions you may need.
    Some of the functions on the repository were written for specialized purposes and aren’t included with the ArcGIS installation, such as the Topographic C Correction function (above) or the Linear Spectral Unmixing function (contributed by Jacob Wasilkowski, also from the St. Louis Esri Regional office).
  2. Try writing your own Python raster function.
    A lot of what’s on the GitHub repository is already in the list of out-of-the-box raster functions, but you can open the Python scripts associated with each one, customize them, and save them as new Python raster functions. This can be a great learning tool for those new to the process.
  3. Watch the repo for more functions.
    There are currently over 40 functions listed, and we are continually adding more.
  4. Contribute!
    Have you written something that you can share with the broader community? Do you have ideas for cool raster functions? Add to the conversation by commenting below!

 

Get Started

To easily access all the Python Raster Functions in the GitHub repository, simply click the Clone or Download button on the repository code page, and choose to download the raster functions as a ZIP file.

Click download ZIP button to get the full repo

Extract the zip folder to your disk, then use this helpful Wiki to read about using the Python Raster Functions in ArcGIS Pro.

 

For an example tutorial on using the Python Raster Functions, check out the blog on the Aspect-Slope function.

 

Enjoy exploring!

One of the most important components in a supervised image classification is excellent training sites. Training an accurate classification model requires that your training samples represent distinct spectral responses recorded from the remote sensing platform – a training sample for vegetation should not include pixels with snow or pavement, and samples for water classification should not include pixels with bare earth. Using the spectral profiles chart, you can evaluate your training samples before you train your model.

If you use the Training Samples Manager, creating the chart takes one simple step. If you created your training samples separately, so that each polygon or point is a different record in the feature class, you just need to run a quick geoprocessing tool before creating the chart to see the average spectral profiles for each class on one graph.

The purpose of this blog is not to go through the entire image classification workflow from end-to-end, but simply to show you how to use spectral profiles to guide you in creating training samples. For example, the spectral profile example below tells you that the Water training sites are significantly distinct, but that Golf Course and Healthy Vegetation may be too similar to yield an accurate result.

 

Example of spectral profile
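The separability the chart shows visually can also be checked numerically. Here is a small numpy illustration with hypothetical class samples and reflectance values (not data from the actual study): classes whose mean profiles sit close together, like Golf Course and Healthy Vegetation above, are the ones likely to confuse a classifier.

```python
import numpy as np

# Hypothetical training-sample pixels (rows = pixels, columns = bands).
samples = {
    'Water':       np.array([[0.05, 0.03, 0.02, 0.01],
                             [0.06, 0.04, 0.02, 0.01]]),
    'Golf Course': np.array([[0.04, 0.08, 0.05, 0.40],
                             [0.05, 0.09, 0.06, 0.42]]),
    'Healthy Veg': np.array([[0.04, 0.07, 0.05, 0.38],
                             [0.05, 0.08, 0.06, 0.41]]),
}

# Average spectral profile per class -- what the Mean Line chart plots.
profiles = {name: px.mean(axis=0) for name, px in samples.items()}

def separation(a, b):
    """Euclidean distance between two mean profiles: small distances
    flag class pairs that may be too similar to classify reliably."""
    return float(np.linalg.norm(profiles[a] - profiles[b]))
```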

Of course, remotely sensed imagery with large-ish pixel sizes (e.g. Landsat with 30m resolution) is bound to have multiple land cover categories within a single pixel. Still, it’s important to create good training samples in regions where pixels are easily identifiable as a given land cover type, and these samples become even more important when working with lower resolution data or when trying to identify more land cover categories.

In this example, I used image classification to get an understanding of the amount of land used for agriculture in the Imperial Valley in Southern California, a region situated in the Colorado Desert with high temperatures and very little rainfall.

Imperial Valley in Imperial County, California

 

Scenario 1: With the Training Samples Manager

Using the Training Samples Manager in ArcGIS Pro to generate training samples allows you to create a feature class that’s already organized by class name and class ID according to a schema.

In this analysis, I’m using a schema made up of five land cover types: Barren, Planted/Cultivated, Shrubland, Developed, and Water. Using the drawing tools, I’ve created several training samples for each category. Each time I draw a new training sample, a new record is added to the list in the Training Samples Manager. If I tried to create a Spectral Profile Chart with that many training samples, I’d have to select every record for each land cover class. Instead, I’ll use the Collapse tool to combine all the training samples for a given class into a single record. Then I’ll click the Save button to save my training samples as a feature class.

 

Collapse training samples for each category

Scenario 2: Without the Training Samples Manager

If you have a feature class with training samples that you created outside of the Training Samples Manager, where each training site is a separate record in the feature class, you need to run the Dissolve geoprocessing tool before creating a chart if you want to see the average spectral profiles for all your training samples at once. Use the class name or class value as the Dissolve field to combine all records associated with a given land cover class into a single multi-part polygon.

To view the spectral profile for one training sample at a time interactively (e.g. to view each individual training site for Developed), skip this step entirely and start working with your chart.
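Conceptually, dissolving on the class field just groups every record that shares a class into one multi-part row. A plain-Python sketch of that grouping, using hypothetical records (the actual geoprocessing tool also merges the geometries, which this sketch skips):

```python
from collections import defaultdict

# Hypothetical training-sample records: (class_name, geometry_id).
records = [
    ('Developed', 'poly1'), ('Developed', 'poly2'),
    ('Water', 'poly3'), ('Water', 'poly4'),
    ('Shrubland', 'poly5'),
]

# Group every record by its class field into one multi-part "row",
# which is what lets a single selection pick up a whole class.
dissolved = defaultdict(list)
for class_name, geom in records:
    dissolved[class_name].append(geom)
```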

Use Dissolve to "collapse" records in the training samples feature class

Charting the Spectral Profiles

At this point, using your imagery and the training samples feature class, you can create your spectral profiles chart:

  1. Right-click on the image to be classified in the Contents pane
  2. Select Create Chart > Spectral Profile
  3. In the Chart Properties pane, choose Mean Line as the Plot Type.
  4. Use the Feature Selector tool to select one of the polygons. Remember that because we used Collapse or Dissolve, selecting one polygon means you are selecting all the training sites for the land cover category represented by that polygon.
  5. Symbolize the profile lines to match the color of the land cover type and change the label name so you can easily assess the chart.
    **  Pro Tip: To change the label of the profile, type the name in the Label field on the Chart Properties pane and hit TAB  **
  6. Try out different chart types to see the types of information you can glean from them – do you see outliers? Consistent trends? Similar profiles? Distinct categories?

Below is the spectral profile chart I created using the imagery and training samples for the Imperial Valley study. I used the “Medium” (grey) theme in the chart to make it easier to view the profiles.

Spectral profile of land cover training samples in Imperial Valley study

Assessment of Spectral Profiles

At first glance, I can tell that the Planted/Cultivated, Water, and Barren land cover classes have profiles that are distinct enough that I can expect good initial results for classification of these classes. However, the Developed and Shrubland profiles are a little too close for comfort: they have the same general shape and the average reflectance values are similar at each wavelength. From this, I can choose whether I want to re-create my training samples or simply combine the two categories into a single class. Theoretically, combining the Shrubland and Developed into one class shouldn’t impact my analysis because my main focus is an accurate estimate of Planted/Cultivated land cover.

Before making my decision, I’ll take a deeper look at the data. The chart below is the same data in a Boxes plot, and I can hover my mouse over the boxes to get the statistics for each land cover class at each wavelength band.

From the Boxes chart, I can see that the Developed and Shrubland land cover classes have similar average values and similar distribution. However, the Developed land cover type has much higher maximum reflectance values across all wavelengths, and Shrubland has lower minimum values. This makes sense – I would expect developed areas (buildings, roads, parking lots, etc.) to be brighter in general than shrubby areas.

Since the Boxes chart tells me that the minimum and maximum values vary so much between the classes, combining these two classes into a single class could potentially confuse my classification model and impact the overall accuracy. Instead, I’m going to re-create the training samples for the Developed class to capture those higher reflectance values.

The charts below include the spectral profiles for my modified training samples.

Now, in the visible and near infrared bands especially, you can see distinctly higher reflectance values for the Developed land cover training sample data compared to the Shrubland spectral response. With these results, I would be comfortable moving forward with my classification workflow by training my model with all my training samples.

Extra Credit

For bonus points, I used the Multispectral Landsat image service from the Living Atlas to quickly visualize NDVI in the Imperial Valley area. Then I used a spectral profile chart to compare NDVI averages in different areas of interest for vegetation health assessment. Use the steps below to try it yourself:
  1. In ArcGIS Pro,  open the Map tab and select Add Data.
  2. From the menu on the left, expand the Portal option and select Living Atlas. Use the Search box to search for “Multispectral Landsat.”
  3. Select the Multispectral Landsat image service and click OK.
  4. Zoom to Imperial Valley or your area of interest.
  5. Make sure the Multispectral Landsat service is highlighted in the Contents pane. In the Image Service contextual tab set, select the Data tab.
  6. In the Processing group, click the Processing Templates drop-down.
  7. Scroll down to NDVI Colorized. Select this template to display the colormap for NDVI.
  8. Right-click on the Multispectral Landsat image service in Contents and select Create Chart > Spectral Profile.
  9. Use the drawing tools to select multiple small areas of interest to compare NDVI distribution throughout the region.
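Under the hood, the NDVI Colorized template is built on the standard NDVI formula, which you can sketch in a few lines of numpy (the areas of interest and reflectance values below are hypothetical):

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red). Values near +1 indicate dense,
    healthy vegetation; values near 0 or below indicate bare soil,
    water, or built surfaces."""
    nir = np.asarray(nir, dtype='f4')
    red = np.asarray(red, dtype='f4')
    return (nir - red) / np.maximum(nir + red, 1e-9)

# Compare mean NDVI across two hypothetical areas of interest:
field = ndvi([0.50, 0.55], [0.08, 0.07])   # irrigated cropland
desert = ndvi([0.30, 0.28], [0.25, 0.24])  # surrounding desert
```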

 

Want to know more?

Try the Image Classification Wizard tutorial

Learn more about the Training Samples Manager

Learn more about image classification

Learn more about charting tools

Given the growing number of people using commercial drones these days, a common question is: “What do I do with all this imagery?”

 

The simple answer is that it depends on what you’re trying to accomplish.

 

If you just want to share the imagery as-is, and aren’t worried about making sure it’s georeferenced to be an accurate depiction of the ground, Oriented Imagery is probably your answer. If you’re capturing video, Full Motion Video in the Image Analyst extension for ArcGIS Pro is your best bet. Ultimately, though, many users plan to turn the single frame images acquired by drones into authoritative mapping products—orthorectified mosaics, digital surface models (DSMs), digital terrain models (DTMs), 3D point clouds, or 3D textured meshes.

 

Esri has three possible solutions for producing authoritative mapping products from drone imagery, each targeted for different users— (1) Drone2Map for ArcGIS, (2) the ortho mapping capability of ArcGIS Pro Advanced, and (3) the Ortho Maker app included with ArcGIS Enterprise. Read on to get an overview of all three solutions, and to figure out which one is best for your application.

 

Drone2Map for ArcGIS

For individual GIS users, Drone2Map is an easy-to-use, standalone app that supports a complete drone-processing workflow.

 

Drone2Map includes guided templates for creating orthorectified mosaics and digital elevation models. It’s also the only ArcGIS product that creates 3D products from drone imagery, including RGB point clouds and 3D textured meshes. Once you’ve processed your imagery, it’s easy to share the final products—2D web maps and 3D web scenes can be easily published on ArcGIS Online with a single step. ArcGIS Desktop isn’t required to run Drone2Map, but products created with Drone2Map are Desktop-compatible. That’s important, because it gives you the option to use ArcGIS Pro as an image management solution, or to serve your imagery products as dynamic image services using ArcGIS Image Server.

 

Ortho mapping capability of ArcGIS Pro Advanced

For GIS professionals, the ortho mapping capability of ArcGIS Pro Advanced enables you to create orthomosaics and digital elevation models from drone images (as well as from modern aerial imagery, historical film, and satellite data) in the familiar ArcGIS Desktop environment.

 

There are added benefits to processing your drone imagery in ArcGIS Pro. For users with very large imagery collections, Pro’s image management capabilities are especially valuable. Managing drone imagery using mosaic datasets makes it easy to query images and metadata, mosaic your imagery, and build footprints. Image management and processing workflows in ArcGIS Pro can also be automated using Python or Model Builder. Finally, sharing your imagery is straightforward. While you can publish your products to ArcGIS Online, you can also use ArcGIS Pro in conjunction with ArcGIS Image Server to publish drone products as dynamic image services.  

 

Ortho Maker app in ArcGIS Enterprise 10.6.1+

For ArcGIS Enterprise users, the Ortho Maker app offers a solution for organizations with multiple users who want simple, web-based workflows to create orthomosaics and DEMs from drone imagery.

 

Ortho Maker provides an easy-to-use web interface for uploading drone imagery and managing the ortho mapping workflow, while behind the scenes it uses the distributed processing and storage capability of Enterprise and ArcGIS Image Server to quickly process even very large collections of drone imagery. (That also means it requires ArcGIS Image Server configured for raster analysis.) The ArcGIS API for Python can be used to automate the ortho mapping process. Sharing Ortho Maker products is virtually automatic—they become imagery layer items accessible in your Enterprise portal, easily shared with users throughout your organization.

 

What do typical users say?

things typical users of each ArcGIS option for processing imagery might say

Next steps

Now that you have a better idea which solution makes sense for your application, it’s time to take one for a test drive. Drone2Map offers a free 15-day trial, plus a hands-on Learn lesson to get started. You can try ArcGIS Pro Advanced free for 21 days, and read more about getting started with ortho mapping for drone imagery.  For users with Enterprise 10.6.1+ and raster analysis enabled, Ortho Maker is included—find out how to get started.  Other Enterprise users should contact their administrator to see about getting access. If you still have questions, contact Esri for more product information.

Esri has released a free app for iOS that interfaces with ArcGIS Online, allowing Esri users to view GIS content from ArcGIS Online to assist with drone flight planning. (The app was originally developed by Esri business partner 3DR.)

 

Flight Plan on top of prior Drone2Map orthomosaic

 

The Site Scan - Limited Edition app (formerly called "Site Scan - Esri Edition") provides mission planning and flight control for a number of leading drones to optimize drone collections for use in Drone2Map or Ortho Mapping in ArcGIS Pro.  This release is compatible with the DJI Phantom 4 Pro, DJI M200, DJI M210, DJI Inspire 2, DJI Mavic Pro, or Yuneec H520-G, as well as the 3DR Solo.

 

The Site Scan - Limited Edition app allows users to take advantage of substantial amounts of publicly accessible data, as well as custom data layers from the user’s ArcGIS Online account, as base and reference data for mission planning.

 

 

Site Scan - Limited Edition is free to everyone with an ArcGIS Online account. The app is available on iTunes at http://esriurl.com/SSEE and will be available soon via ArcGIS Apps.   

 

Try it out! 

 

Note that, as a free app, support for Site Scan - Limited Edition is based on Geonet:  http://esriurl.com/SiteScanGeonet

For the full cloud-based Site Scan service, see http://esriurl.com/DroneCollections.

Drone2Map for ArcGIS version 1.3.2 has been released today and is available for immediate download by all current and future users.  For users connecting to ArcGIS Online, download and install the new version directly from the software, or download from http://www.esri.com/drone2map.  All users, including those typically working in offline mode, are encouraged to download this new version.  

Esri Headquarters Building Q

This release of Drone2Map for ArcGIS is primarily focused on the following enhancements and bug fixes:

 

Enhancements

  • Volumetric calculations can now be shared to ArcGIS Online or ArcGIS Enterprise through the share as feature layer tool.

Publish volumetric measurements in your ArcGIS Online account

  • Decimal values in the contour interval field are now supported in all available language systems.

 

Fixes

  • Orthomosaic creation is now more reliable. Additional camera models have been added to our camera database, greatly improving camera matching so that your orthomosaic completes successfully.
  • Vector basemap users will no longer experience project creation issues with the vector basemaps in their default basemap gallery.
  • With improved camera support, the creation of NDVI (normalized difference vegetation index) layers will no longer be an issue (for appropriate multispectral cameras).
  • You can now see spectral band metadata when viewing output data products in ArcGIS Pro.
  • Users with the Danish (Dansk) regional language pack will now have correct image altitudes for more than one layer.

 

Try the new version, and please let us know what you think!  Drone2Map@esri.com 

 

Cody Benkelman

Drone2Map for ArcGIS Product Manager