Imagery and Remote Sensing Blog

GeoffTaylor
Esri Contributor

Data scientists and GIS users can now take advantage of LiDAR processing in NumPy and Pandas, two Python libraries used for processing and making sense of big data.
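As a hedged illustration only (the approach in the full post may differ), here is one common way to pull LAS points into NumPy arrays and a Pandas DataFrame using the laspy reader; the file path and the per-class summary are placeholders.

    # Sketch: loading LiDAR points into NumPy/Pandas with laspy (2.x assumed).
    import laspy
    import numpy as np
    import pandas as pd

    las = laspy.read(r"C:\data\tile_001.las")        # hypothetical LAS tile
    xyz = np.vstack((las.x, las.y, las.z)).T         # N x 3 array of point coordinates

    df = pd.DataFrame(xyz, columns=["x", "y", "z"])
    df["classification"] = np.asarray(las.classification)

    # Example: mean elevation per LAS class code (ground, vegetation, etc.)
    print(df.groupby("classification")["z"].mean())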

VinayViswambharan
Esri Contributor

With the firehose of imagery that’s streaming down daily from a variety of sensors, the need for using AI to automate feature extraction is only increasing. To make sure your organization is prepared, Esri is taking AI to the next level. We are very excited to announce the release of ready-to-use geospatial AI models on the ArcGIS Living Atlas.

Article Overview: Esri is bringing ready-to-use deep learning models to our user community through ArcGIS Online.

To kick it off, we’ve added three models: building footprint extraction and land cover classification from satellite imagery, and tree point classification for point cloud datasets.

With the existing capabilities in ArcGIS, you’ve been able to train over a dozen deep learning models on geospatial datasets and derive information products using the ArcGIS API for Python or ArcGIS Pro, and scale up processing using ArcGIS Image Server.

Building footprints automatically extracted using the new deep learning model

These newly released models are a game changer! They have been pre-trained by Esri on huge volumes of data and can be readily used (no training required!) to automate the tedious task of digitizing and extracting geographical features from satellite imagery and point cloud datasets. They bring the power of AI and deep learning to the Esri user community. What’s more, these deep learning models are accessible for anyone with an ArcGIS Online subscription at no additional cost.

 

Using the models

Using these models is simple. You can use geoprocessing tools (such as the Detect Objects Using Deep Learning tool) in ArcGIS Pro with the imagery models.  Point the tool to the imagery and the downloaded model, and that’s about it – deep learning has never been this easy! A GPU, though not necessary, can help speed things up. With ArcGIS Enterprise, you can scale up the inferencing using Image Server.
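For those who prefer scripting, here is a minimal sketch of the same step using arcpy; the paths and model package name are placeholders, and the keyword arguments follow the documented tool signature.

    import arcpy

    arcpy.CheckOutExtension("ImageAnalyst")
    arcpy.env.processorType = "GPU"   # optional; a GPU speeds up inferencing

    # Run the building footprint model with Detect Objects Using Deep Learning
    arcpy.ia.DetectObjectsUsingDeepLearning(
        in_raster=r"C:\data\high_res_imagery.tif",
        out_detected_objects=r"C:\data\results.gdb\building_footprints",
        in_model_definition=r"C:\models\BuildingFootprintExtraction.dlpk",
    )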

Using the building footprint extraction model in ArcGIS Pro

Coming soon, you’ll be able to consume the model directly in ArcGIS Online Imagery and run it against your own uploaded imagery—all without an ArcGIS Enterprise deployment. The 3D Basemaps solution is also being enhanced to use the tree point classification model and create realistic 3D tree models from raw point clouds.

 

How can you benefit from these deep learning models?

It probably goes without saying that manually extracting features from imagery—like digitizing footprints or generating land cover maps—is time-consuming. Deep learning automates the process and significantly minimizes the manual interaction needed to create these products. However, training your own deep learning model can be complicated – it needs a lot of data, extensive computing resources, and knowledge of how deep learning works.

 

Sample building footprints extracted - Woodland, CA

With ready-to-use models, you no longer have to invest time and energy into manually extracting features or training your own deep learning model. These models have been trained on data from a variety of geographies and work well across them. As new imagery comes in, you can readily extract features at the click of a button, and produce layers of GIS datasets for mapping, visualization and analysis.

Sample building footprints extracted - Palm Islands, Dubai

 

Get to know the first three models we released

Three deep learning models are now available in ArcGIS Online, and more will follow. These models are available as deep learning packages (DLPKs) that can be used with ArcGIS Pro, Image Server and the ArcGIS API for Python.
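As a hedged sketch of how you might locate and download one of these packages with the ArcGIS API for Python (the search query and title are illustrative and may need adjusting):

    from arcgis.gis import GIS

    gis = GIS()  # anonymous connection to ArcGIS Online

    # Search for ready-to-use deep learning packages (DLPKs)
    items = gis.content.search(
        'title:"Building Footprint Extraction" type:"Deep Learning Package"',
        outside_org=True, max_items=5)

    for item in items:
        print(item.title, item.id)

    # Download the first match for use in ArcGIS Pro or Image Server
    if items:
        items[0].download(save_path=r"C:\models")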

1. Building Footprint Extraction model is used to extract building footprints from high resolution satellite imagery. While it’s designed for the contiguous United States, it performs fairly well in other parts of the globe.

The model performs fairly well in other parts of the globe. Results from Ulricehamn, Sweden.

Here’s a story map presenting some of the results. Building footprint layers are useful for creating basemaps and in analysis workflows for urban planning and development, insurance, taxation, change detection, and infrastructure planning.

2. Land Cover Classification model is used to create a land cover product using Landsat 8 imagery. The classified land cover will have the same classes as the National Land Cover Database. The resulting land cover maps are useful for urban planning, resource management, change detection and agriculture.

Classified land cover map using Landsat 8 imagery

This generic model has been trained on the National Land Cover Database (NLCD) 2016 with the same Landsat 8 scenes that were used to produce the database. Land cover classification is a complex exercise that is hard to capture using traditional means. Deep learning models have a high capacity to learn these complex semantics and give superior results.

3. Tree Point Classification model can be used to classify points representing trees in point cloud datasets.

3D scene created by employing the tree point classification model.

Classifying tree points is useful for creating high-quality 3D basemaps and for urban planning and forestry workflows.

 

Next steps

Try out the deep learning models in ArcGIS Living Atlas for yourself. Read more detailed instructions for using the deep learning models in ArcGIS. Have questions? Let us know on GeoNet how they are working for you, and which other feature extraction tasks you’d like AI to do for you!

VinayViswambharan
Esri Contributor

The ArcGIS Image Analyst extension for ArcGIS Pro 2.5 now features expanded deep learning capabilities, enhanced support for multidimensional data, enhanced motion imagery capabilities, and more.

Learn about the new imagery and remote sensing features added in this release to improve your image visualization, exploitation, and analysis workflows.

Deep Learning

We’ve introduced several key deep learning features that offer a more comprehensive and user-friendly workflow:

  • The Train Deep Learning Model geoprocessing tool trains deep learning models natively in ArcGIS Pro. Once you’ve installed the relevant deep learning libraries (PyTorch, Fast.ai and Torchvision), this enables seamless, end-to-end workflows (a scripted sketch follows the screenshot below).
  • The Classify Objects Using Deep Learning geoprocessing tool is an inferencing tool that assigns a class value to objects or features in an image. For instance, after a natural disaster, you can classify structures as damaged or undamaged.
  • The new Label Objects For Deep Learning pane provides an efficient experience for managing and labelling training data. The pane also provides the option to export your deep learning data.
  • A new user experience lets you interactively review deep learning results and edit classes as required.
New deep learning tools in ArcGIS Pro 2.5
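Here is a hedged sketch of how the train-and-classify workflow from the list above might look in arcpy; paths, epoch count and field names are placeholders, and parameter names follow the documented tools but may vary slightly by release.

    import arcpy

    arcpy.CheckOutExtension("ImageAnalyst")

    # Train a feature classifier on image chips exported with
    # Export Training Data For Deep Learning
    arcpy.ia.TrainDeepLearningModel(
        in_folder=r"C:\data\training_chips",
        out_model=r"C:\models\structure_damage",
        max_epochs=20,
        model_type="FEATURE_CLASSIFIER",
    )

    # Classify existing features (e.g. damaged vs. undamaged structures)
    arcpy.ia.ClassifyObjectsUsingDeepLearning(
        in_raster=r"C:\data\post_event_imagery.tif",
        out_feature_class=r"C:\data\results.gdb\classified_structures",
        in_model_definition=r"C:\models\structure_damage\structure_damage.dlpk",
        in_features=r"C:\data\results.gdb\building_footprints",
    )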

Multidimensional Raster Management, Processing and Analysis

New tools and capabilities for multidimensional analysis allow you to extract and manage subsets of a multidimensional raster, calculate trends in your data, and perform predictive analysis.

New user experience

A new contextual tab in ArcGIS Pro makes it easier to work with multidimensional raster layers or multidimensional mosaic dataset layers in your map.

Intuitive user experience to work with multidimensional data

  • You can intuitively work with multiple variables and step through time and depth.
  • You have direct access to the new functions and tools that are used to manage, analyze and visualize multidimensional data.
  • You can chart multidimensional data using the temporal profile, which has been enhanced with spatial aggregation and charting trends.

New tools for management and analysis

The new multidimensional functions and geoprocessing tools are listed below.

New geoprocessing tools for management

We’ve added two new tools to help you extract data along specific variables, depths, time frames, and other dimensions:

  • Subset Multidimensional Raster
  • Make Multidimensional Raster Layer

New geoprocessing tools for analysis

  • Find Argument Statistics allows you to determine when or where a given statistic was reached in a multidimensional raster dataset. For instance, you can identify when maximum precipitation occurred over a specific time period.
  • Generate Trend Raster estimates the trend for each pixel along a dimension for one or more variables in a multidimensional raster. For example, you might use this to understand how sea surface temperature has changed over time.
  • Predict Using Trend Raster computes a forecasted multidimensional raster using the output trend raster from the Generate Trend Raster tool. This could help you predict the probability of a future El Niño event based on trends in historical sea surface temperature data (see the sketch after this list).
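A minimal sketch of the trend workflow above, assuming the arcpy.ia form of the tool that returns a raster object you then save; the paths and the dimension name are placeholders.

    import arcpy

    arcpy.CheckOutExtension("ImageAnalyst")

    # Fit a per-pixel trend along the time dimension of a multidimensional
    # sea surface temperature raster (for example, a CRF built from netCDF data).
    sst_trend = arcpy.ia.GenerateTrendRaster(r"C:\data\sst.crf", "StdTime")
    sst_trend.save(r"C:\data\sst_trend.crf")

    # The saved trend raster can then be passed to Predict Using Trend Raster
    # to forecast future sea surface temperature values.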

Additionally, the following tools have improvements that support new analytical capabilities:

New raster functions for analysis

  • Generate Trend
  • Predict Using Trend
  • Find Argument Statistics
  • Linear Spectral Unmixing
  • Process Raster Collection

New Python raster objects

Developers can take advantage of new classes and functions added to the Python raster object that allow them to work with multidimensional rasters.

New classes include:

  • ia.RasterCollection – The RasterCollection object allows a group of rasters to be sorted and filtered easily and prepares a collection for additional processing and analysis.
  • ia.PixelBlock – The PixelBlock object defines a block of pixels within a raster to use for processing. It is used in conjunction with the PixelBlockCollection object to iterate through one or more large rasters for processing.
  • ia.PixelBlockCollection – The PixelBlockCollection object is an iterator of all PixelBlock objects in a raster or a list of rasters. It can be used to perform customized raster processing on a block-by-block basis, when otherwise the processed rasters would be too large to load into memory.

New functions include:

  • ia.Merge() – Creates a raster object by merging a list of rasters spatially or across dimensions.
  • ia.Render(inRaster, rendering_rule={…}) – Creates a rendered raster object by applying symbology to the referenced raster dataset. This function is useful when displaying data in a Jupyter notebook.
  • Raster functions for arcpy.ia – You can now use almost all of the raster functions to manage and analyze raster data using the arcpy API (a short sketch follows the screenshot below).
New tools to analyze multidimensional data
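As a short, hedged sketch of the new raster objects in action (file paths are placeholders, and the overlap method shown is just one of the documented options):

    import arcpy
    from arcpy.ia import RasterCollection, Merge

    arcpy.CheckOutExtension("ImageAnalyst")

    rasters = [arcpy.Raster(r"C:\data\ndvi_2018.tif"),
               arcpy.Raster(r"C:\data\ndvi_2019.tif"),
               arcpy.Raster(r"C:\data\ndvi_2020.tif")]

    # A RasterCollection groups rasters for sorting, filtering and reduction
    rc = RasterCollection(rasters)
    max_ndvi = rc.max()                  # per-pixel maximum across the collection
    max_ndvi.save(r"C:\data\ndvi_max.tif")

    # Merge a list of rasters, resolving any overlap by taking the mean
    merged = Merge(rasters, "MEAN")
    merged.save(r"C:\data\ndvi_mean.tif")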

Motion Imagery

This release includes enhancements to our motion imagery support, so you can better manage and interactively use video with embedded geospatial metadata:

  • You can now enhance videos in the video player using contrast, brightness, saturation, and gamma adjustments. You can also invert the color to help identify objects in the video.
  • Video data in multiple video players can be synchronized for comparison and analysis.
  • You can now measure objects in the video player, including length, area, and height.
  • You can list and manage videos added to your project with the Video Feed Manager.
Motion imagery in ArcGIS Pro

Pixel Editor

The Pixel Editor provides a suite of tools to interactively manipulate pixel values of raster and imagery data. Use the toolset for redaction, cloud and noise removal, or to reclassify categorical data. You can edit an individual pixel or a group of pixels at once. Apply editing operations to pixels in elevation datasets and multispectral imagery. Key enhancements in this release include the following:

  • Apply a custom raster function template to regions within the image
  • Interpolate elevation surfaces using values from the edges of a selected region

Additional resources

JeffLiedtke
Occasional Contributor II

Do you have blemishes in your image products, such as clouds and shadows that obscure interesting features, or DEMs that don’t represent bare earth? Or perhaps you want to obscure certain confidential features, or correct erroneous class information in your class map. The Pixel Editor can help you improve your final image products.

 

After you have conducted your scientific remote sensing and image analysis, your results need to be presented to your customers, constituents and stakeholders. Your final products need to be correct and convey the right information for decision support and management. The Pixel Editor helps you achieve this last important aspect of your workflow – effective presentation of results.

 

Introducing the Pixel Editor

The Pixel Editor, in the Image Analyst extension, provides a suite of tools to interactively manipulate pixel values for raster and imagery data. It allows you to edit an individual pixel or groups of pixels. The types of operations that you can perform depend on the data source type of your raster dataset.

The Pixel Editor tools allow you to perform a wide range of editing tasks on your raster datasets.

Blog Series

We will present a series of blogs addressing the robust capabilities of the Pixel Editor. We will focus on real-world practical applications for improving your imagery products, and provide tips and best practices for getting the most out of your imagery using the Pixel Editor. Stay tuned for this interesting and worthwhile news.

 

Your comments, inputs and application examples of the Pixel Editor capability are very welcome and appreciated!

VinayViswambharan
Esri Contributor

In the aftermath of a natural disaster, response and recovery efforts can be drastically slowed down by manual data collection. Traditionally, insurance assessors and government officials have to rely on human interpretation of imagery and site visits to assess damage and loss. But depending on the scope of a disaster, this necessary process could delay relief to disaster victims.

Article Snapshot: At this year’s Esri User Conference plenary session, the United Services Automobile Association (USAA) demonstrated the use of deep learning capabilities in ArcGIS to perform automated damage assessment of homes after the devastating Woolsey fire. This work was a collaborative prototype between Esri and USAA to show the art of the possible in doing this type of damage assessment using the ArcGIS platform.

The Woolsey Fire burned for 15 days, scorching almost 97,000 acres and damaging or destroying thousands of structures. Deep learning within ArcGIS was used to quickly identify damaged structures within the fire perimeter, fast-tracking the insurance claims process for impacted residents and businesses.

The process included capturing training samples, training the deep learning model, running inferencing tools and detecting damaged homes – all done within the ArcGIS platform. In this blog, we’ll walk through each step in the process.

Step 1: Managing the imagery

Before the fires were extinguished, DataWing flew drones in the fire perimeter and captured high resolution imagery of impacted areas. The imagery totaled 40 GB in size and was managed using a mosaic dataset. The mosaic dataset is the primary image management model for ArcGIS to manage large volumes of imagery.
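A hedged sketch of that image management step in arcpy; the geodatabase path, spatial reference and raster type are assumptions.

    import arcpy

    gdb = r"C:\woolsey\imagery.gdb"
    sr = arcpy.SpatialReference(3857)   # Web Mercator, for illustration only

    # Create the mosaic dataset and load the drone imagery into it
    arcpy.management.CreateMosaicDataset(gdb, "woolsey_drone", sr)
    arcpy.management.AddRastersToMosaicDataset(
        gdb + r"\woolsey_drone",        # the mosaic dataset
        "Raster Dataset",               # raster type for the source files
        r"C:\woolsey\drone_imagery",    # folder of drone image files
    )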

Step 2: Labelling and preparing training samples

Prior to training a deep learning model, training samples must be created to represent areas of interest – in this case, USAA was interested in damaged and undamaged buildings. The building footprint data provided by LA County was overlaid on the high resolution drone imagery in ArcGIS Pro, and several hundred homes were manually labelled as Damaged or Undamaged (a new field called “ClassValue” in the building footprint feature class was attributed with this information). These training features were used to export training samples using the Export Training Data for Deep Learning tool in ArcGIS Pro, with the metadata output format set to ‘Labeled Tiles’.

Resultant image chips (Labeled Tiles used for training the Damage Classification model)
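A hedged sketch of the export step in arcpy; tile size, paths and chip format are placeholders, and the metadata format matches the ‘Labeled Tiles’ setting mentioned above.

    import arcpy

    arcpy.CheckOutExtension("ImageAnalyst")

    arcpy.ia.ExportTrainingDataForDeepLearning(
        in_raster=r"C:\woolsey\imagery.gdb\woolsey_drone",
        out_folder=r"C:\woolsey\training_chips",
        in_class_data=r"C:\woolsey\data.gdb\building_footprints",
        image_chip_format="TIFF",
        tile_size_x=256,
        tile_size_y=256,
        metadata_format="Labeled_Tiles",
        class_value_field="ClassValue",
    )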

Step 3: Training the deep learning model

ArcGIS Notebooks was used for training. ArcGIS Notebooks is pre-configured with the necessary deep learning libraries, so no extra setup was required. With a few lines of code, the training samples exported from ArcGIS Pro were augmented. Using the arcgis.learn module in the ArcGIS API for Python, optimum training parameters for the damage assessment model were set, and the deep learning model was trained using a ResNet34 architecture to classify all buildings in the imagery as either damaged or undamaged.
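A hedged sketch of that training step with arcgis.learn; the chip folder, batch size, epoch count and backbone argument form are assumptions, while the ResNet34 architecture matches what the post reports.

    from arcgis.learn import prepare_data, FeatureClassifier

    # Point at the folder of Labeled Tiles exported from ArcGIS Pro
    data = prepare_data(r"/arcgis/home/training_chips", batch_size=16)

    # Train a feature classifier with a ResNet34 backbone
    model = FeatureClassifier(data, backbone="resnet34")
    model.fit(10)                     # a handful of epochs for illustration

    model.show_results()              # compare predictions with ground truth
    model.save("woolsey_damage_classifier")  # package the model for inferencing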

               
The model converged around 99% accuracy

Once complete, the ground truth labels were compared to the model classification results to get a quick qualitative idea on how well the model performed.

Model Predictions

For complete details on the training process, see our post on Medium.

Finally, with the model.save() function, the model can be saved and used for inferencing purposes.

Step 4: Running the inferencing tools

Inferencing was performed using the ArcGIS API for Python. By running inferencing inside of ArcGIS Enterprise using the model.classify_features function in Notebooks, we can take the inferencing to scale.

The result is a feature service that can be viewed in ArcGIS Pro. (Here’s a link to the web map).

Over nine thousand buildings were automatically classified using deep learning capabilities within ArcGIS!

The map below shows the damaged buildings marked in red, and the undamaged buildings in green. With 99% accuracy, the model approaches the performance of a trained adjuster – what used to take days or weeks can now be done in a matter of hours.

Inference results

Step 5: Deriving valuable insights

Business Analyst: Now that we had a better understanding of the impacted area, we wanted to understand which members were impacted by the fires. When deploying mobile response units to disaster areas, it’s important to know where the most at-risk populations, such as the elderly or children, are located. Using Infographics from ArcGIS Business Analyst, we extracted valuable characteristics and information about the impacted community and generated a report to help mobile units make decisions faster.

Get location intelligence with ArcGIS Business Analyst

Operations Dashboard: Using a dashboard containing the enriched feature layers, we created easy, dynamic access to the status of any structure, the value of the damaged structures, the affected population and much more.

            

Summary:

Using the deep learning, imagery and data enrichment capabilities in the ArcGIS platform, we can quickly distinguish damaged from undamaged buildings and identify the most at-risk populations, and organizations can use this information for rapid response and recovery activities.

 More Resources:

Deep Learning in ArcGIS Pro

Distributed Processing using Raster Analytics

Image Analysis Workflows

Details on the model training of the damage assessment 

ArcGIS Notebooks

ABOUT THE AUTHORS

Vinay Viswambharan

Product manager on the Imagery team at Esri, with a zeal for remote sensing and everything imagery.

Rohit Singh

Development Lead - ArcGIS API for Python. Applying deep learning to the Science of Where @Esri. https://twitter.com/geonumist

JuliaLenhardt
Esri Contributor

The new Getting to Know ArcGIS Image Analyst guide gives GIS professionals and imagery analysts hands-on experience with the functionality available with the ArcGIS Image Analyst extension.

It’s a complete training guide to help you get started with complex image processing workflows. It includes a checklist of tutorials, videos and lessons along with links to additional help topics.

Task Checklist for getting started with ArcGIS Image Analyst

This guide is useful to anyone interested in learning how to work with the powerful image processing and visualization capabilities available with ArcGIS Image Analyst. Complete the checklist provided in the guide and you’ll get hands-on experience with:

  • Setting up ArcGIS Image Analyst in ArcGIS Pro
  • Extracting features from imagery using machine learning image classification and deep learning methods
  • Processing imagery quickly using raster functions
  • Visualizing and creating data in a stereo map
  • Creating and measuring features in image space
  • Working with Full Motion Video

Download the guide and let us know what you think! Take the guide survey to provide us with direct feedback.

ABOUT THE AUTHOR
