Imagery and Remote Sensing Blog

ArthurCrawford
Esri Contributor

This blog is a work in progress and will grow over time:

Over the last few years, I have been publishing colorized lidar point clouds as supplements to the 3D cities, like the St. Louis area, that I have created using the 3D Basemap solution and extracting buildings from lidar.   Sean Morrish wrote a wonderful blog on the mechanics of publishing lidar.   Lately, I have looked into using colorized lidar point clouds as an effective and relatively cheap way to create 3D basemaps.   The approach has some advantages: most states, counties, and cities already have lidar available, most have high resolution imagery, and all have the NAIP imagery needed to create these scenes.

Manitowoc County lidar scenes:

Some counties, cities, and states are doing this already. Manitowoc County, WI, has published a 3D lidar point cloud and is also creating scenes with the data.  Manitowoc County also did a great story map showing how their lidar is used as a colorized lidar point cloud, and I highly recommend taking a look at it.

ManitowocStoryMap

The StoryMap includes a video showing how to capture vertical, horizontal, and direct distances on lidar ground surfaces at any location countywide, and how to obtain detailed measurements of small lidar surface features, such as the depth of a ditch relative to the road.

It also shows how to measure point cloud distances relative to the lidar ground surface, using house roofs as an example to determine building heights.

Here's one of Manitowoc's building scene layers, where they had the creative idea of using the same colorized lidar to give the buildings fake sides: the lidar classified as buildings is shown several times over, each copy at a slightly lower elevation by changing the offset in the symbology.   Further down in the blog, I show how to do this.

Hats off to Bruce Riesterer of Manitowoc County, who put this all together before retiring, including coming up with the idea of using the building points multiple times with different colors to show the sides of buildings. He now works in private industry; see his new work at RiestererB_AyresAGO.


State of Connecticut 3D Viewer:

I helped the State of Connecticut CLEAR colorize their lidar using NAIP imagery; it became the first statewide lidar point cloud published to ArcGIS Online.   It turned out to be about 650 GB of lidar broken into two scene layer packages.   The time spent on it was mainly processing and loading time.   CLEAR sent me the newest NAIP imagery they had, and with all the data on one computer, I just let the colorization run.   With that layer and other layers CLEAR had online, a web scene was created.   A feature class was added with links to their LAZ files, the imagery, the DEM, and other layers.   This allows users to preview the lidar in a 3D viewer, and even make line or area measurements in 3D, before deciding whether to download.

Connecticut 3D Viewer

Here are several views using different symbology and filters on the published lidar point cloud for Connecticut.

Colorized with NAIP:

ConnecticutColor

Class code modulated with intensity:  Modulating with intensity makes features stand out, such as roads, sidewalks, details in rooftops, and trees.

Elevation modulated with intensity:

Color filtered to show buildings:

Here are some examples of display settings from their blog:

Connecticut Examples of lidar point cloud display

Chicago 3D:

Recently, I was asked by the Urban team to help with Chicago and created basic 3D building footprints from the Chicago Data Portal building footprints.   I used the 8ppm Geiger lidar to create the DSM, nDSM, and DTM for the 3D Basemap solution.   I then colorized the lidar twice: once using NAIP imagery and again using the high resolution leaf-off 2018 imagery.   I then extracted the points classified as vegetation from the NAIP version and used them to replace the vegetation in the high resolution leaf-off scene layer, showing the trees as green while keeping the rooftops from the high resolution imagery.

Leaf Off Colorized Lidar

Above is the high resolution leaf-off imagery used to colorize the lidar in the scene.  Below is the same area with the lidar colorized with NAIP for the vegetation (some building sides were classified as vegetation by the vendor in the delivery of the 8ppm Geiger lidar to the USGS).  You can see how the trees are much more identifiable using the lidar colorized with NAIP.

This could be used as a 3D basemap.  The 3D buildings do not have segmented roofs (divided by height), but the lidar shows the detail of the buildings.    Below is the John Hancock Building highlighted in blue with a basic multipatch polygon in a scene layer with transparency applied.

Chicago Skyline showing 3D basic buildings

Here's a view of Navy Pier with both the NAIP-colorized trees and the high resolution colorized lidar turned on at the same time.

Navy Pier

Picking the Imagery to use:

Using the Chicago lidar, I also built a scene to compare the different types of imagery used to colorize the lidar.   It's a guide, using the video below, comparing high resolution leaf-on imagery vs. high resolution leaf-off imagery vs. 1 m NAIP leaf-on imagery.  You can see below how leaf-on imagery is great for showing trees.   Your imagery does not have to match the year of the lidar exactly, but the closer the better.   Making the result visually appealing is important in 3D cartography with colorized point clouds.

Chicago Lidar Colorized Comparison (High Res Leaf On vs. High Res Leaf Off vs. NAIP Leaf On).  

Leaf On High Resolution:

LeafOnHighRes

Leaf On NAIP:

Leaf Off High Resolution:

Leaf Off High Resolution

Sometimes colorful fall imagery, captured before the leaves drop, can be great for scenes, but it all depends on what you want to show.

A way to add trees with limbs, using points as leaves:

Here's a test I did using the Dead Tree symbology to represent the trunk and branches, while the leaves come from the colorized lidar.  My Trees from Lidar tool or the 3D Basemap solution tree tools can create the trees from lidar, and the resulting points can then be symbolized with the Dead Tree symbol.  This image was done in ArcGIS Pro, not a web scene.

Trees using point cloud and Dead Tree Symbology to represent trunks and branches

City of Klaipėda 3D viewer (Lithuania):

The City of Klaipėda in Lithuania has used colorized point clouds of trees with cylinders to represent the trunks.  The mesh here is very detailed.  Because the view is zoomed in so far, the points appear fairly large.

Here's another view zoomed out:

And another further zoomed out:

Some of the scenes above use NAIP imagery instead of higher resolution imagery. Why, when higher resolution is available?  High resolution imagery is often somewhat oblique, whereas NAIP is usually collected from a higher altitude.   With the higher altitude you get less building lean in the imagery, and often the point spacing of the lidar does not support the higher resolution imagery anyway.   In these cases NAIP is often preferred, but make point clouds from a couple of LAS files with each imagery source to see what the differences are.

Here's one over Fairfax County, VA, using their published colorized lidar (from their 3D LiDAR Viewer) and using intensity to show the roofs.  With intensity, you can see the roof details better than with the NAIP imagery.

Scene Viewer 

With large buildings, symbolizing the building sides and roofs with intensity really shows off the detail.  The sides of the buildings are just representative; they do not show what the actual sides look like.

Scene Viewer 

Here it is below using just the NAIP-colorized points, without the intensity-symbolized building sides and roofs.

The missing sides of the buildings make it hard to see where a building actually is, and much of the roof detail is lost.   Colorizing with intensity reveals more detail because intensity is a measure, collected for every point, of the return strength of the laser pulse that generated the point. It is based, in part, on the reflectivity of the object struck by the laser pulse. The imagery often does not match the lidar exactly, due to slight misalignment in collection and the slight oblique angles left over from the imagery orthorectification.

Colorizing Redlands 2018 lidar:

Here's an example of colorizing the City of Redlands with the 2018 lidar, which I worked on to support one of the groups here at Esri.  I highly recommend taking just one LAS file to begin with and running it through the process all the way to publishing, to make sure there are no issues.   I have in the past sat through hours of downloading and colorization, just to learn that a projection was wrong or something else was not correct (bad imagery, too dark or too light, etc.).

1. Downloaded the 2018 lidar from the USGS.

   a. Downloaded the metadata file with footprints, unzipped it, and added the tile shapefile to ArcGIS Pro.

   b. I got the full path and name of the laz files from the USGS site.  Here's an example of the path for a file to download:

      ftp://rockyftp.cr.usgs.gov/vdelivery/Datasets/Staged/Elevation/LPC/Projects/USGS_LPC_CA_SoCal_Wildfires_B1_2018_LAS_2019/laz/USGS_LPC_CA_SoCal_Wildfires_B1_2018_w1879n1438_LAS_2019.laz

   c. Added a path field to the tiles and calculated the full path for each tile by inserting the tile name into the base path copied in b. This is the formula I used in Calculate Field:

"ftp://rockyftp.cr.usgs.gov/vdelivery/Datasets/Staged/Elevation/LPC/Projects/USGS_LPC_CA_SoCal_Wildfires_B1_2018_LAS_2019/laz/USGS_LPC_CA_SoCal_Wildfires_B1_2018_" & !Name! & "_LAS_2019.laz"

     d. Then exported the table to a text file and used a free FTP download tool to mass download all the laz files for the tiles that intersected the City of Redlands limits feature class.
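As a rough sketch, the same mass download can be scripted in Python instead of using an FTP client. This assumes the list exported in step d contains one laz URL per line; all paths are hypothetical placeholders:

    # Sketch: bulk-download the laz files listed in the exported text file.
    import os
    import urllib.request

    url_list = r"C:\data\redlands\laz_urls.txt"   # exported in step d
    out_dir = r"C:\data\redlands\laz"
    os.makedirs(out_dir, exist_ok=True)

    with open(url_list) as f:
        urls = [line.strip() for line in f if line.strip()]

    for url in urls:
        target = os.path.join(out_dir, url.rsplit("/", 1)[-1])
        if os.path.exists(target):
            continue  # skip tiles that are already downloaded
        print("downloading", url)
        urllib.request.urlretrieve(url, target)  # urlretrieve also handles ftp:// URLs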

2. Used Convert LAS to convert the data from LAZ format to LAS; LAS format is required to colorize the lidar.  I got an error at first because the projection was not supported; turning off the rearrange points option let the conversion from LAZ to LAS complete.
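A minimal arcpy sketch of that conversion, with hypothetical folder paths; the las_options value that skips the rearrange step is an assumption, so verify it against the Convert LAS documentation for your version of Pro:

    # Sketch: LAZ to LAS with Convert LAS, with point rearrangement turned off.
    import arcpy

    arcpy.conversion.ConvertLas(
        r"C:\data\redlands\laz",   # folder of downloaded .laz files
        r"C:\data\redlands\las",   # output folder for .las files
        las_options=[])            # assumption: empty list = no rearrange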

3. Looked at the projection in the reports in the downloaded metadata and found it was not supported because it used meters instead of feet. I modified a projection file and then copied it to each LAS file using a simple ModelBuilder tool.

4. Evaluated the lidar by creating and populating a LAS dataset: ground, bridge decks, water, overlap, etc. were classified.  Buildings and vegetation were not.

5. Used Classify LAS Building with the aggressive option to classify the buildings.  Classifying the buildings allows you to filter them in the future scene.   It's also good for extracting building footprints, another operation I commonly do.

6. Used Classify LAS by Height to classify the vegetation: (3) Low Vegetation set to 2 m, (4) Medium Vegetation set to 5 m, (5) High Vegetation set to 50 m.  This classifies non-ground points from 0 to 2 m as Low Vegetation, >2 m to 5 m as Medium, and >5 m to 50 m as High Vegetation.   This is done so you can turn off the vegetation in a scene.
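For reference, steps 5 and 6 look roughly like this in arcpy (3D Analyst). The LAS dataset path is hypothetical, and the minimum height/area values for building classification are placeholders to check against your data and Pro version:

    # Sketch: classify buildings (aggressive) and vegetation by height.
    import arcpy
    arcpy.CheckOutExtension("3D")

    lasd = r"C:\data\redlands\redlands.lasd"  # hypothetical LAS dataset

    # Step 5: buildings (class 6), using the aggressive detection method.
    arcpy.ddd.ClassifyLasBuilding(lasd, "2 Meters", "4 SquareMeters",
                                  method="AGGRESSIVE")

    # Step 6: vegetation by height above ground:
    # class 3 up to 2 m, class 4 up to 5 m, class 5 up to 50 m.
    arcpy.ddd.ClassifyLasByHeight(lasd, "GROUND", [[3, 2], [4, 5], [5, 50]])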

7. Used the ArcGIS Pro NAIP imagery service with the template set to None, and then the Split Raster tool to download the area I needed based on the Redlands city limits.

8. Created a mosaic dataset of the NAIP imagery.  Applied a function to resample from 0.6 m to 0.2 m, and a statistics function with a 2-cell circle neighborhood.   This takes the coarse 0.6 m (or 1 m) NAIP imagery and smooths it for better colorization.

9. Colorized the lidar with the Colorize LAS tool.  Set it to colorize RGB and infrared with bands 1, 2, 3, 4 and an output folder.  The default appends "_colorized" to each file name; I usually turn this off and simply name the output folder colorlas.   Set the output colorized LAS dataset to be one directory above with the same name.
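Step 9 can also be run from Python. A hedged sketch with hypothetical paths; the band-mapping string is my recollection of the Colorize LAS syntax, so verify it against the tool documentation:

    # Sketch: colorize the LAS files with the smoothed NAIP mosaic from step 8.
    import arcpy
    arcpy.CheckOutExtension("3D")

    arcpy.ddd.ColorizeLas(
        r"C:\data\redlands\redlands.lasd",      # input LAS dataset
        r"C:\data\redlands\naip_smoothed.crf",  # resampled, smoothed NAIP
        "RED Band_1; GREEN Band_2; BLUE Band_3; NEAR_INFRARED Band_4",
        target_folder=r"C:\data\redlands\colorlas",
        out_las_dataset=r"C:\data\redlands\colorlas.lasd")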

10. Added the colorized LAS dataset to Pro and reviewed it.   Confirmed it covered the area, the lidar was colorized properly, and it had a good look to it.   Set the symbology to RGB.

11. Ran the Create Point Cloud Scene Layer Package tool on the output lidar to create an .slpk file, then added it to ArcGIS Pro to make sure it was working correctly, changing the defaults where needed.

12. Used the Share Package tool, or my ArcGIS Online account directly, to upload the .slpk.
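Steps 11 and 12 as a Python sketch (output paths hypothetical):

    # Sketch: build the point cloud scene layer package from the colorized lasd.
    import arcpy

    slpk = r"C:\data\redlands\redlands_lidar.slpk"
    arcpy.management.CreatePointCloudSceneLayerPackage(
        r"C:\data\redlands\colorlas.lasd", slpk)

    # Upload with the Share Package geoprocessing tool, or add the .slpk
    # manually as an item in ArcGIS Online and publish it from there.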

13. Published the scene and viewed it online.

14. Added the lidar scene layer multiple times, using symbology and properties to show it the way you would like.  You can see an example in the Chicago 3D scene.   If you open it and save it as your own (while logged in), you can access the layers in the scene to see how they are set up with symbology and properties.

Kentucky lidar test:

Below is some work I helped the State of Kentucky with.   The result is similar in some ways to the work of the neo-Impressionists Georges Seurat and Paul Signac, who pioneered the painting technique dubbed Pointillism.

Kentucky lidar colorized:

Georges Seurat's Seascape at Port-en-Bessin, Normandy, below:

Adding Tree Trunks:

Image showing buildings and tree trunks with beautiful fall imagery applied to lidar

Adding tree trunks in the background makes point cloud scenes look a little more real and helps viewers recognize that an object is a tree.   I recently worked on a park area in Kentucky, supporting the Division of Geographic Information (DGI), to create tree trunks like the ones in the City of Klaipėda scene in Lithuania, and then to add building sides as a test.  I did not want the tree trunks to overwhelm the scene, just to be a background item that makes the scenery more realistic.  In the image and linked scene, the red arrows point to the tree trunks.   First, I created a range-of-elevation-values raster with a 5 ft sampling value using LAS Point Statistics As Raster.    Then I used raster functions to create a raster showing the high points, as shown below: (1) Statistics: 5x5 mean.   (2) Minus: statistics minus range, Max Of, Intersection Of.   (3) Remap: minimum -0.1 to 0.1, output 0, change missing values to NoData.   (4) Plus: range plus the remap output, Max Of, Intersection Of.   (5) Remap: 0 to 26 set to NoData.    I then used Raster to Points to create a point layer from the raster, added a Height field calculated from gridcode, and added a TreeTrunkRadius field calculated as (!Height!/60) + 0.5, which gave me a width to use later for the 3D polygons.  I placed the points into the 3D Layers category, set the type to base height, and applied the Height field in US feet, with an Expression Builder expression of $feature.Height * 0.66 because I wanted the tree trunks to go up only two thirds of the height of the trees.   I used 3D Layer to Feature Class to create a line feature class, then used it as input to Buffer 3D with the Field option for distance and TreeTrunkRadius as the input field.  I then used Create 3D Object Scene Layer Package to output TreeTrunks.slpk, added the item to my ArcGIS Online account, and published it.  Once online, I played with the colors to get the trunks brown.   This process could also have used the Trees from Lidar tool, but the NAIP imagery was from fall and did not have the NDVI reflectivity that process needs.
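Here's a rough Spatial Analyst translation of those raster steps. The thresholds come from the description above; the paths and everything else are a hypothetical sketch, not the exact raster function chain used:

    # Sketch: find local high points (tree tops) from an elevation-range raster.
    import arcpy
    from arcpy.sa import Raster, FocalStatistics, NbrRectangle, Con, SetNull
    arcpy.CheckOutExtension("Spatial")

    # Range of elevation values on a 5 ft sampling grid.
    arcpy.management.LasPointStatisticsAsRaster(
        r"C:\data\ky\park.lasd", r"C:\data\ky\zrange.tif",
        "Z_RANGE", "CELLSIZE", 5)
    zrange = Raster(r"C:\data\ky\zrange.tif")

    mean5 = FocalStatistics(zrange, NbrRectangle(5, 5, "CELL"), "MEAN")  # (1)
    diff = mean5 - zrange                                                # (2)
    flat = Con((diff >= -0.1) & (diff <= 0.1), 0)  # (3) near zero -> 0, else NoData
    highs = flat + zrange                          # (4) keeps the range at the peaks
    highs = SetNull(highs <= 26, highs)            # (5) drop heights of 0-26 ft

    # Points from the remaining cells; the Height and TreeTrunkRadius fields
    # are then calculated from gridcode as described above.
    arcpy.conversion.RasterToPoint(highs, r"C:\data\ky\treetops.shp", "VALUE")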

Adding Building sides:

To fill in the building sides, I classified the buildings using Classify LAS Building, then used Extract LAS with a filter on the LAS dataset point cloud to extract only the points with the building class code.    I published this layer and added it nine times to a group in the scene.   The first three building layers I offset by 0.5, 0, and -0.5 meters and left colorized with the imagery; this makes the roof look slightly more solid, as there were gaps.   I then took the remaining six layers and set them to intensity color with offsets of -1 m, -2 m, -3 m, -4 m, -5 m, and -6 m; these created the walls of the buildings.  I changed the range of colors on the -3 m layer to put a slightly darker line across the walls.  I grouped all the layers together so they would act as one layer.  You can also use the building filter and colorize by class code, then set the class code color to whatever color you wish for the building sides.   If you set that symbology to modulate with intensity, this too changes the appearance, often giving shadowed building sides or what looks like windows at times.
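The classification and extraction part of that workflow can be scripted. A minimal sketch, assuming the standard class code 6 for buildings and hypothetical paths:

    # Sketch: filter the LAS dataset to building points and extract them.
    import arcpy
    arcpy.CheckOutExtension("3D")

    lasd = r"C:\data\ky\cumberland.lasd"
    arcpy.ddd.ClassifyLasBuilding(lasd, "2 Meters", "4 SquareMeters")

    arcpy.management.MakeLasDatasetLayer(lasd, "buildings_only",
                                         class_code=[6])  # 6 = building
    arcpy.ddd.ExtractLas("buildings_only", r"C:\data\ky\buildings_las",
                         out_las_dataset=r"C:\data\ky\buildings.lasd")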

Standard look before adding building sides and thickness to the roof:

No walls

With the illusion of walls added and roof thickened (Cumberland, KY):

Here's another view showing the building sides:

The building sides are an illusion: the colors do not match the real sides of the buildings, and differences in the lidar intensity do not show the true differences in the sides, like the perceived windows above.   Like most illusions or representations, it tricks the eye into thinking it's seeing the sides of the buildings.   Sometimes, also like most illusions, it does not work from all angles, as with this building that had a low number of points:

Overall, I think the representative sides help you identify what is a building and what is not.  The colors of the building sides will not match reality, nor do they show true windows or doors.   Doing so is usually a very costly and time-consuming process after generating 3D models of the buildings.   You could still generate the 3D buildings with the Local Government 3D Basemaps solution for ArcGIS Pro, make them semi-transparent, and use them for selection and analysis without any coloring.  You could also apply rule packages (RPKs) to the buildings to represent the 3D models more accurately, but this would again be a representation without very detailed data.

How to get lidar easily for a large area:  Here's a video showing how to download high volumes of lidar tiles from the USGS for a project area, including how to get a list of the laz files to download.   Once downloaded, use Convert LAS to take the files from LAZ format to LAS.   LAS format is needed to run classification, colorization, and manual editing of the point cloud.

Covering the basics of Colorized Lidar Point Clouds, creating scene layer packages and visualizing:

Here's a video where I go through the process of colorizing lidar and what I look for, with timestamps below to jump to certain items (26 minutes long):

Link: https://esri-imagery.s3.us-east-1.amazonaws.com/events/UC2020/videos/ColorizingLidar.mp4

0:30: Talking about the lidar input and some of the classification of lidar.

1:05: Adding imagery services and reviewing the imagery alignment to use for the colorization of lidar.

1:49: Adding the NAIP imagery service.

2:20: High resolution imagery services.

3:35: Talking about colorizing trees with leaf-off vs. leaf-on imagery.

4:55: Comparing imagery; sidewalks; coloring the trees with leaf-off imagery.

6:00: Using Split Raster to download imagery from a service.

8:25: Running Mosaic to New Raster on the downloaded split images.

10:50: Checking the downloaded mosaic.

11:40: Colorize LAS tool.

13:52: Talking about the lidar: how big it is, how long it takes to colorize, download times from services using Split Raster, and the size of the scene layer package relative to the uncolorized lidar.

15:40: Adding the colorized lidar and seeing it in 3D.

16:40: Upping the display limit to allow more points to be seen in the 3D viewer.

17:00: Adding imagery and reviewing the colorized lidar.

17:56: Increasing the point size to better see the roofs, trees, and sides of buildings in the colorized lidar.  Talking about how the color does not match on the sides of buildings.

19:00: Looking at trees and shadows.

19:25: Looking at roofs and down into the streets.

20:10: Running Create Point Cloud Scene Layer Package.

21:30: Adding the scene layer package to Pro and seeing the increased speed from its tiling and formatting; reviewing it this way is faster.

22:30: Looking at it with different symbology settings: elevation, class, intensity, return (Geiger lidar does not have returns).

24:35: Looking at a single tree.

24:50: Share Package tool.

26:35: Showing the web scene.

There is also this video, done a couple of years ago, that covers the topic in a different way.

DEMs:

Loading a higher resolution DTM (DEM) as the ground is often needed.   In this link you can turn off the DTM1m_tif layer, which is the lidar ground DTM, vs. the World Terrain service to see the difference.   In the US, NED is usually good enough that your colorized LAS will sit very close to the ground, but sometimes your colorized buildings will sink into the ground, or the lidar points near the ground will fall under it or float on top of it.   You can take your LAS files and, in ArcGIS Pro, see whether the ground points fall below or above the terrain.  If the difference is too large, you need to publish your own DTM.   The 3D Basemap solution (currently called the Local Government 3D Basemaps solution) can guide you through this process, and might in the future guide you through the colorization and publication of lidar.  Here's a blog that goes through how to do it, along with the help.   The Local Government 3D Basemaps solution also has tasks that can walk you through publishing an elevation layer.

Important tools for building colorized lidar packages:

Edit LAS file classification codes - Every lidar point can have a classification code assigned to it that defines the type of object that reflected the laser pulse. Lidar points can be classified into a number of categories, including bare earth or ground, top of canopy, and water. The different classes are defined using numeric integer codes in the LAS files.

Convert LAS - Converts LAS files between different compression methods, file versions, and point record formats.

Extract LAS - Filters, clips, and reprojects the collection of lidar data referenced by a LAS dataset.

Colorize LAS - Applies colors and near-infrared values from orthographic imagery to LAS points.

Classify LAS Building - Classifies building rooftops and sides in LAS data.

Classify Ground - Classifies ground points in aerial lidar data.

Classify LAS by Height - Reclassifies lidar points based on their height from the ground surface. Primarily used for classifying vegetation.

Classify LAS Noise - Classifies LAS points with anomalous spatial characteristics as noise.

Classify LAS Overlap - Classifies LAS points from overlapping scans of aerial lidar surveys.

Change LAS Class Codes - Reassigns the classification codes and flags of LAS files.

Create Point Cloud Scene Layer Package - Creates a point cloud scene layer package (.slpk file) from LAS, zLAS, LAZ, or LAS dataset input.

Share Package - Shares a package by uploading to ArcGIS Online or ArcGIS Enterprise.

Share a web elevation layer

Here's some colorized lidar to look at:

Helsinki point cloud  (Finland) Scene Viewer 

Barnegat Bay (New Jersey, US) Scene Viewer

City of Denver Point Cloud (Colorado, US)  Scene Viewer 

City of Redlands (California, US) Scene Viewer   Has a comparison of high resolution vs. NAIP colorized lidar.

Kentucky Lidar Test (Cumberland, Kentucky, US) Scene Viewer   Fall imagery applied to trees, building sides added, tree trunks added.

Lewisville, TX, US (near Dallas) Scene Viewer 

I'll be adding more to this blog in the future, stay tuned.

Arthur Crawford - Living Atlas/Content Product Engineer

VinayViswambharan
Esri Contributor

The ArcGIS Image Analyst extension for ArcGIS Pro 2.5 now features expanded deep learning capabilities, enhanced support for multidimensional data, enhanced motion imagery capabilities, and more.

Learn about new imagery and remote sensing-related features added in this release to improve your image visualization, exploitation, and analysis workflows.

Deep Learning

We’ve introduced several key deep learning features that offer a more comprehensive and user-friendly workflow:

  • The Train Deep Learning Model geoprocessing tool trains deep learning models natively in ArcGIS Pro. Once you’ve installed relevant deep learning libraries (PyTorch, Fast.ai and Torchvision), this enables seamless, end-to-end workflows.
  • The Classify Objects Using Deep Learning geoprocessing tool is an inferencing tool that assigns a class value to objects or features in an image. For instance, after a natural disaster, you can classify structures as damaged or undamaged.
  • The new Label Objects For Deep Learning pane provides an efficient experience for managing and labelling training data. The pane also provides the option to export your deep learning data.
  • A new user experience lets you interactively review deep learning results and edit classes as required.
New deep learning tools in ArcGIS Pro 2.5
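As a hedged sketch of driving the two geoprocessing tools above from Python; the paths and parameter values are hypothetical, so check the tool signatures for your Pro version:

    # Sketch: train a model from exported chips, then classify features with it.
    import arcpy
    arcpy.CheckOutExtension("ImageAnalyst")

    arcpy.ia.TrainDeepLearningModel(
        r"C:\dl\chips",    # folder from Export Training Data For Deep Learning
        r"C:\dl\model",    # output folder for the trained model
        max_epochs=20,
        model_type="FEATURE_CLASSIFIER")

    arcpy.ia.ClassifyObjectsUsingDeepLearning(
        r"C:\dl\imagery.tif",              # imagery to run inference on
        r"C:\dl\results.gdb\classified",   # output feature class
        r"C:\dl\model\model.emd",          # model definition from training
        in_features=r"C:\dl\data.gdb\footprints")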

Multidimensional Raster Management, Processing and Analysis

New tools and capabilities for multidimensional analysis allow you to extract and manage subsets of a multidimensional raster, calculate trends in your data, and perform predictive analysis.

New user experience

A new contextual tab in ArcGIS Pro makes it easier to work with multidimensional raster layers or multidimensional mosaic dataset layers in your map.

Intuitive user experience to work with multidimensional data

  • You can intuitively work with multiple variables and step through time and depth.
  • You have direct access to the new functions and tools that are used to manage, analyze and visualize multidimensional data.
  • You can chart multidimensional data using the temporal profile, which has been enhanced with spatial aggregation and charting trends.

New tools for management and analysis

The new multidimensional functions and geoprocessing tools are listed below.

New geoprocessing tools for management

We’ve added two new tools to help you extract data along specific variables, depths, time frames, and other dimensions:

  • Subset Multidimensional Raster
  • Make Multidimensional Raster Layer

New geoprocessing tools for analysis

  • Find Argument Statistics allows you to determine when or where a given statistic was reached in a multidimensional raster dataset. For instance, you can identify when maximum precipitation occurred over a specific time period.
  • Generate Trend Raster estimates the trend for each pixel along a dimension for one or more variables in a multidimensional raster. For example, you might use this to understand how sea surface temperature has changed over time.
  • Predict Using Trend Raster computes a forecasted multidimensional raster using the output trend raster from the Generate Trend Raster tool. This could help you predict the probability of a future El Nino event based on trends in historical sea surface temperature data.

Additionally, the following tools have improvements that support new analytical capabilities:

New raster functions for analysis

  • Generate Trend
  • Predict Using Trend
  • Find Argument Statistics
  • Linear Spectral Unmixing
  • Process Raster Collection

New Python raster objects

Developers can take advantage of new classes and functions added to the Python raster object that allow you to work with multidimensional rasters.

New classes include:

  • ia.RasterCollection – The RasterCollection object allows a group of rasters to be sorted and filtered easily and prepares a collection for additional processing and analysis.
  • ia.PixelBlock – The PixelBlock object defines a block of pixels within a raster to use for processing. It is used in conjunction with the PixelBlockCollection object to iterate through one or more large rasters for processing.
  • ia.PixelBlockCollection – The PixelBlockCollection object is an iterator of all PixelBlock objects in a raster or a list of rasters. It can be used to perform customized raster processing on a block-by-block basis, when otherwise the processed rasters would be too large to load into memory.

New functions include:

  • ia.Merge() – Creates a raster object by merging a list of rasters spatially or across dimensions.
  • ia.Render(inRaster, rendering_rule={…}) – Creates a rendered raster object by applying symbology to the referenced raster dataset. This function is useful when displaying data in a Jupyter notebook.
  • Raster functions for arcpy.ia – You can now use almost all of the raster functions to manage and analyze raster data using the arcpy API.
New tools to analyze multidimensional data
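A small sketch of how these objects compose; the file paths are hypothetical, and the Merge overlap method and Render rendering rule are assumptions to verify against the arcpy.ia reference:

    # Sketch: collect rasters, merge them, and render the result in a notebook.
    import arcpy
    from arcpy.ia import RasterCollection, Merge, Render
    arcpy.CheckOutExtension("ImageAnalyst")

    paths = [r"C:\data\sst_2017.crf", r"C:\data\sst_2018.crf"]
    rc = RasterCollection(paths)           # sortable, filterable collection
    merged = Merge(paths, "MEAN")          # raster object; processed on the fly
    rendered = Render(merged, rendering_rule={"rasterFunction": "Stretch"})
    merged.save(r"C:\data\sst_mean.crf")   # save() materializes an output dataset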

Motion Imagery

This release includes enhancements to our motion imagery support, so you can better manage and interactively use video with embedded geospatial metadata:

  • You can now enhance videos in the video player using contrast, brightness, saturation, and gamma adjustments. You can also invert the color to help identify objects in the video.
  • Video data in multiple video players can be synchronized for comparison and analysis.
  • You can now measure objects in the video player, including length, area, and height.
  • You can list and manage videos added to your project with the Video Feed Manager.
Motion imagery in ArcGIS Pro

Pixel Editor

The Pixel Editor provides a suite of tools to interactively manipulate pixel values of raster and imagery data. Use the toolset for redaction, cloud and noise removal, or to reclassify categorical data. You can edit an individual pixel or a group of pixels at once. Apply editing operations to pixels in elevation datasets and multispectral imagery. Key enhancements in this release include the following:

  • Apply a custom raster function template to regions within the image
  • Interpolate elevation surfaces using values from the edges of a selected region

Additional resources

JeffLiedtke
Esri Contributor

Using your knowledge of geography, geospatial and remote sensing science, and using the image classification tools in ArcGIS, you have produced a pretty good classified raster for your project area. Now it’s time to clean up some of those pesky pixels that were misclassified – like that one pixel labelled “shrub” in the middle of your baseball diamond. The fun part is using the Pixel Editor to interactively edit your classified raster data to be useful and accurate. The resulting map can be used to drive operational applications such as land use inventory and management.

For operational management of land use units, a useful classified map may not necessarily be the most accurate in terms of identified features. For example, a small clearing in a forest, cars in a parking lot, or a shed in a backyard are not managed differently than the larger surrounding land use. The Pixel Editor merges and reclassifies groups of pixels, objects and regions quickly and easily into units that can be managed similarly, and result in presentable and easy-to-understand maps for your decision support and management.

What is the Pixel Editor?

The Pixel Editor is an interactive group of tools that enables editing of raster data and imagery, and it is included with the ArcGIS Image Analyst extension for ArcGIS Pro. It is a suite of image processing capabilities, driven by an effective user interface, that allows you to interactively manipulate pixel values. Try different operations with different parameter settings to achieve optimum editing results, then save, publish, and share them.

The Pixel Editor is contextual to the raster source type of the layer being edited, which means that suites of capability are turned on or off depending on the data type of the layer you are working with. For thematic data, you can reassign pixels, objects and regions to different classes, perform operations such as filtering, shrinking or expanding classes, masking, or even create and populate new classes. Edits can be saved, discarded, and reviewed in the Edits Log.

Pixel Editor in action

Because the Pixel Editor is contextual, you need to first load the layer you want to edit. Two datasets are loaded into ArcGIS Pro: the infrared source satellite image and the classified result. The source data is infrared satellite imagery, where vegetation is depicted in shades of red depending on coverage and relative vigor. This layer was classified using the Random Trees classifier in ArcGIS Pro. The class map needs editing to account for classification discrepancies and to support operational land use management.

Launch the Pixel Editor

To launch the Pixel Editor, select the classified raster layer in the Contents pane, go to the Imagery tab and click the Pixel Editor button from the Tools group.


The Pixel Editor tab will open. In this example, we’ll be editing a land use map, so the editor will present you with editing tools relevant for thematic data.

The Reclassify dropdown menu

The Region group provides tools for delineating and managing a region of interest. The Edit group provides tools to perform specific operations to reclassify pixels, objects or regions of interest. The Edit group also provides the Operations gallery, which only works on Regions.

Reclassify

Reclassify is a great tool to reassign a group of pixels to a different class. In the example below, you can see from the multispectral image that either end of the track infield is in poor condition with very little vegetation, which resulted in that portion of the field being incorrectly classified. We want to reclassify these areas as turf, which is colored bright green in the classified dataset.

Infrared image and associated classmap needing edits.

We used the multispectral image as the backdrop to more easily digitize the field, then simply reassigned the incorrect class within the region of interest to the Turf class.

Edited classmap

Majority Filter and Expand
Check out the parking lots south of the track field containing cars, which are undesirable in terms of classified land use. We removed the cars and made the entire parking lot Asphalt with a two-step process:

Parking lot before editing
(1) We digitized the parking lot and removed the cars with a Majority Filter operation with a filter size of 20 pixels – the size of the biggest cars in the lot.

(2) Then we used Expand to reclassify any remaining pixels within the lot to Asphalt.

Parking lot after Majority Filter and Expand operations

Add a new class

Another great feature of the Pixel Editor is the ability to add a new class to your classified raster. Here, we added a Water class to account for water features that we missed in the first classification.

Add new class

New class WATER was added to the classmap

In the New Class drop-down menu, you can add a new class, provide its name and class code, and define a color for displaying the new class.

After adding the new class to the class schema, we used the Reclass Object tool to reassign the incorrect Shadow class to the correct Water class. Simply click the object you want to reclassify and encompass it within the circle, and voila! The object is reclassified to Water.

Reclass incorrect class "Shadow" to correct class "Water"

Feature to Region

Sometimes you may have an existing polygon layer with more accurate class polygon boundaries. These could be building footprints, roads, wetland polygons, water bodies and more. Using the Feature to Region option you can easily create a region of pixels to edit by clicking on the desired feature from your feature layers in the map. Then use the Reclass by Feature tool to assign the proper class.

Region from Feature Edit

We see the updated water body now matches the polygon feature from the feature class. The class was also changed from Shadow to its correct value, Water.

Summary

The Pixel Editor provides a fast, easy, interactive way to edit your classified rasters. You can edit groups of pixels and objects, and editing operations include reclassification by filtering, expanding and shrinking regions, or simply selecting or digitizing the areas to reclassify. You can even add an entirely new class. Try it out with your own data, and see how quickly you can transform a good classification data set into an effective management tool!

Acknowledgement

Thanks to the co-author, Eric Rice, for his contributions to this article.

JeffLiedtke
Esri Contributor

Do you have blemishes in your image products, such as clouds and shadows that obscure interesting features, or DEMs that don’t represent bare earth? Or perhaps you want to obscure certain confidential features, or correct erroneous class information in your classmap. The Pixel Editor can help you improve your final image products.

 

After you have conducted your scientific remote sensing and image analysis, your results need to be presented to your customers, constituents, and stakeholders. Your final products need to be correct and convey the right information for decision support and management. The Pixel Editor helps you achieve this last important aspect of your workflow: effective presentation of results.

 

Introducing the Pixel Editor

The Pixel Editor, in the Image Analyst extension, provides a suite of tools to interactively manipulate pixel values for raster and imagery data. It allows you to edit an individual pixel or groups of pixels. The types of operations that you can perform depend on the data source type of your raster dataset.

The Pixel Editor tools let you perform a range of editing tasks on your raster datasets.

Blog Series

We will present a series of blogs addressing the robust capabilities of the Pixel Editor. We will focus on real-world practical applications for improving your imagery products, and provide tips and best practices for getting the most out of your imagery using the Pixel Editor. Stay tuned for this interesting and worthwhile news.

 

Your comments, inputs and application examples of the Pixel Editor capability are very welcome and appreciated!

VinayViswambharan
Esri Contributor

In the aftermath of a natural disaster, response and recovery efforts can be drastically slowed down by manual data collection. Traditionally, insurance assessors and government officials have to rely on human interpretation of imagery and site visits to assess damage and loss. But depending on the scope of a disaster, this necessary process could delay relief to disaster victims.

Article Snapshot: At this year’s Esri User Conference plenary session, the United Services Automobile Association (USAA) demonstrated the use of deep learning capabilities in ArcGIS to perform automated damage assessment of homes after the devastating Woolsey fire. This work was a collaborative prototype between Esri and USAA to show the art of the possible in doing this type of damage assessment using the ArcGIS platform.

The Woolsey Fire burned for 15 days, consuming almost 97,000 acres and damaging or destroying thousands of structures. Deep learning within ArcGIS was used to quickly identify damaged structures within the fire perimeter, fast-tracking the time for impacted residents and businesses to have their adjuster process the insurance claims.

The process included capturing training samples, training the deep learning model, running inferencing tools and detecting damaged homes – all done within the ArcGIS platform. In this blog, we’ll walk through each step in the process.

Step 1: Managing the imagery

Before the fires were extinguished, DataWing flew drones in the fire perimeter and captured high resolution imagery of impacted areas. The imagery totaled 40 GB in size and was managed using a mosaic dataset. The mosaic dataset is the primary image management model for ArcGIS to manage large volumes of imagery.

Step 2: Labelling and preparing training samples

Prior to training a deep learning model, training samples must be created to represent areas of interest; in this case, USAA was interested in damaged and undamaged buildings. The building footprint data provided by LA County was overlaid on the high resolution drone imagery in ArcGIS Pro, and several hundred homes were manually labelled as Damaged or Undamaged (a new field called "ClassValue" in the building footprint feature class was attributed with this information). These training features were used to export training samples using the Export Training Data for Deep Learning tool in ArcGIS Pro, with the metadata output format set to 'Labeled Tiles'.
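A sketch of that export as a geoprocessing call; the paths are hypothetical, while the ClassValue field and Labeled Tiles metadata format come from the description above:

    # Sketch: export labeled image chips for training.
    import arcpy
    arcpy.CheckOutExtension("ImageAnalyst")

    arcpy.ia.ExportTrainingDataForDeepLearning(
        r"C:\woolsey\drone_mosaic.crf",      # drone imagery mosaic
        r"C:\woolsey\chips",                 # output folder of image chips
        r"C:\woolsey\data.gdb\footprints",   # labeled building footprints
        "TIFF",
        tile_size_x=256, tile_size_y=256,
        metadata_format="Labeled_Tiles",
        class_value_field="ClassValue")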

Resultant image chips (Labeled Tiles used for training the damage classification model)

Step 3: Training the deep learning model

ArcGIS Notebooks was used for training purposes. ArcGIS Notebooks is pre-configured with the necessary deep learning libraries, so no extra setup was required. With a few lines of code, the training samples exported from ArcGIS Pro were augmented. Using the arcgis.learn module in the ArcGIS Python API, optimum training parameters for the damage assessment model were set, and the deep learning model was trained using a ResNet34 architecture to classify all buildings in the imagery as either damaged or undamaged.
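In the notebook, the training code follows the usual arcgis.learn pattern. A minimal sketch with hypothetical paths and epoch count (the blog used a ResNet34 backbone):

    # Sketch: train a damaged/undamaged feature classifier in a notebook.
    from arcgis.learn import prepare_data, FeatureClassifier

    data = prepare_data(r"/arcgis/home/chips", batch_size=64)
    model = FeatureClassifier(data, backbone="resnet34")
    model.fit(10)                       # epochs; watch the validation accuracy
    model.save("woolsey_damage_model")  # saved model drives classify_features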

               
The model converged at around 99% accuracy.

Once complete, the ground truth labels were compared to the model classification results to get a quick qualitative idea on how well the model performed.

Model Predictions

For complete details on the training process, see our post on Medium.

Finally, with the model.save() function, the model can be saved and used for inferencing purposes.

Step 4: Running the inferencing tools

Inferencing was performed using the ArcGIS API for Python. By running inferencing inside ArcGIS Enterprise with the model.classify_features function in Notebooks, we can take the inferencing to scale.

The result is a feature service that can be viewed in ArcGIS Pro. (Here’s a link to the web map).

Over nine thousand buildings were automatically classified using deep learning capabilities within ArcGIS!

The map below shows the damaged buildings marked in red, and the undamaged buildings in green. With 99% accuracy, the model approaches the performance of a trained adjuster: what used to take days or weeks can now be done in a matter of hours.

Inference results

Step 5: Deriving valuable insights

Business Analyst: Now that we had a better understanding of the impacted area, we wanted to understand which members were impacted by the fires. When deploying mobile response units to disaster areas, it's important to know where the most at-risk populations are located, for example, the elderly or children. Using Infographics from ArcGIS Business Analyst, we extracted valuable characteristics and information about the impacted community and generated a report to help mobile units make decisions faster.

Get location intelligence with ArcGIS Business Analyst

Operations Dashboard: Using an operations dashboard containing the enriched feature layers, we created easy, dynamic access to the status of any structure, the value of the damaged structures, the affected population, and much more.

            

Summary:

Using deep learning, imagery and data enrichment capabilities in the ArcGIS platform, we can quickly distinguish damaged from undamaged buildings, identify the most at-risk populations, and organizations can use this information for rapid response and recovery activities.

 More Resources:

Deep Learning in ArcGIS Pro

Distributed Processing using Raster Analytics

Image Analysis Workflows

Details on the model training for the damage assessment

ArcGIS Notebooks

ABOUT THE AUTHORS

Vinay Viswambharan

Product manager on the Imagery team at Esri, with a zeal for remote sensing and everything imagery.

Rohit Singh

Development Lead - ArcGIS API for Python. Applying deep learning to the Science of Where @Esri. https://twitter.com/geonumist

by Anonymous User
Not applicable

The new Getting to Know ArcGIS Image Analyst guide gives GIS professionals and imagery analysts hands-on experience with the functionality available with the ArcGIS Image Analyst extension.

It’s a complete training guide to help you get started with complex image processing workflows. It includes a checklist of tutorials, videos and lessons along with links to additional help topics.

Task Checklist for getting started with ArcGIS Image Analyst

This guide is useful to anyone interested in learning how to work with the powerful image processing and visualization capabilities available with ArcGIS Image Analyst. Complete the checklist provided in the guide and you'll get hands-on experience with:

  • Setting up ArcGIS Image Analyst in ArcGIS Pro
  • Extracting features from imagery using machine learning image classification and deep learning methods
  • Processing imagery quickly using raster functions
  • Visualizing and creating data in a stereo map
  • Creating and measuring features in image space
  • Working with Full Motion Video

Download the guide and let us know what you think! Take the guide survey to provide us with direct feedback.

CodyBenkelman
Esri Regular Contributor

Do you have imagery from an aerial photography camera (whether a modern digital camera or scanned film), along with orientation data from direct georeferencing or the results of aerial triangulation? If yes, you'll want to work with a mosaic dataset and load the imagery with the proper raster type.

The mosaic dataset provides the foundation for many different use cases, including:

  • On-the-fly orthorectification of images in a dynamic mosaic, for direct use in ArcGIS Pro or sharing through ArcGIS Image Server.
  • Production of custom basemaps from source imagery.
  • Managing and viewing aerial frame imagery in stereo
  • Accessing images in their Image Coordinate System (ICS).  


There are different raster types that support the photogrammetric model for frame imagery.  If you have existing orientation data from ISAT or Match-AT, you can use the raster types with those names to directly load the data (see the Help here).
For a general frame camera, you'll want to know how to use the Frame Camera raster type, and we have recently updated some helpful resources:

UI for automated script
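As a hedged outline of the scripted flow: all paths are hypothetical, the spatial reference is an assumption, and the exact auxiliary inputs for the Frame Camera raster type are covered in the best-practices document linked below:

    # Sketch: create a mosaic dataset and load frames with the Frame Camera type.
    import arcpy

    arcpy.management.CreateMosaicDataset(
        r"C:\data\aerial.gdb", "frames",
        coordinate_system=arcpy.SpatialReference(26915))  # assumed UTM zone

    arcpy.management.AddRastersToMosaicDataset(
        r"C:\data\aerial.gdb\frames",
        "Frame Camera",                # raster type reading the orientation data
        r"C:\data\frames_table.csv")   # frames table (with a cameras table)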

Further information:

  • Note that if your imagery is oblique, the Frame Camera raster type supports multi-sensor oblique images. Refer to http://esriurl.com/FrameCameraBestPractices for configuration advice.
  • If you want to extract a digital terrain model (DTM) from the imagery, or improve the accuracy of the aerial triangulation, see the Ortho Mapping capabilities of ArcGIS Pro (advanced license). http://esriurl.com/OrthoMapping.
  • If you are seeking additional detail on the photogrammetric model used within the Frame Camera raster type, see this supplemental document http://esriurl.com/FrameCameraDetailDoc

by Anonymous User
Not applicable

Did you know there is a huge repository of powerful Python Raster Functions that you can use for raster analysis and visualization? On the Esri/raster-functions repository on GitHub, you can browse, download, and utilize customized raster functions for on-the-fly processing on your desktop or in the cloud.

Esri's raster functions GitHub repository

What are Python raster functions, you ask?

A raster function is a sneaky way to perform complex raster analysis and visualization without taking up more space on your disk or more time in your day, with on-the-fly processing. A single raster function performs an analysis on an input raster, then displays the result on your screen. No new dataset is created, and pixels get processed as you pan and zoom around the image. You can connect multiple raster functions in a raster function chain and you can turn it into a raster function template by setting parameters as variables.
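For instance, with the Image Analyst arcpy module you can apply a function and nothing is written to disk until you ask for it. A small sketch, assuming a hypothetical multiband image with NIR as band 5 and red as band 4:

    # Sketch: on-the-fly NDVI; pixels are processed as the layer draws.
    import arcpy
    arcpy.CheckOutExtension("ImageAnalyst")

    ras = arcpy.Raster(r"C:\data\scene.tif")
    ndvi = arcpy.ia.NDVI(ras, 5, 4)   # raster object only; no dataset created
    ndvi.save(r"C:\data\ndvi.tif")    # saving materializes the result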

A Python raster function is simply a custom raster function. A lot of raster functions come with ArcGIS out-of-the-box, but if you don’t find what you’re looking for or you want to create something specific to your needs, you can script your own with Python.

There are a lot of Python raster functions already written and posted for everyone to use, and they’re easy to download and use in ArcGIS. And some of them are unbelievably cool.

For example: Topographic Correction function

The Topographic C Correction function, written by Gregory Brunner from the St. Louis Regional Services office, essentially removes the hillshade from orthophotos. As you can imagine, imagery over mountainous areas or regions with rugged terrain can be difficult to classify accurately because pixels may belong to the same land cover class but some fall into shadow due to varying slopes and aspects. With the topographic correction function, you can get a better estimate of pixel values that would otherwise be impacted by hillshade. The result is a sort of flattening of the image, and it involves some fairly complex math.

Hillshade removal effect

Why should you care?

Okay, so now you know there’s a repository of Python raster functions. What’s next?

  1. Explore the functions you may need.
    Some of the functions on the repository were written for specialized purposes and aren't included with the ArcGIS installation, such as the Topographic C Correction function (above) or the Linear Spectral Unmixing function (contributed by Jacob Wasilkowski, also from the St. Louis Esri Regional office).
  2. Try writing your own Python raster function.
    A lot of what’s on the GitHub repository is already in the list of out-of-the-box raster functions, but you can open the Python scripts associated with each one, customize them, and save them as new Python raster functions. This can be a great learning tool for those new to the process.
  3. Watch the repo for more functions.
    There are currently over 40 functions listed, and we are continually adding more.
  4. Contribute!
    Have you written something that you can share with the broader community? Do you have ideas for cool raster functions? Add to the conversation by commenting below!

 

Get Started

To easily access all the Python Raster Functions in the GitHub repository, simply click the Clone or Download button on the repository code page, and choose to download the raster functions as a ZIP file.

Click download ZIP button to get the full repo

Extract the zip folder to your disk, then use this helpful Wiki to read about using the Python Raster Functions in ArcGIS Pro.

For an example tutorial on using the Python Raster Functions, check out the blog on the Aspect-Slope function.

 

Enjoy exploring!

by Anonymous User
Not applicable

In Part I of this blog series, we explained what an ortho mapping workspace is and how to create one for digital aerial imagery. At this point, the imagery has been organized and managed so that we can access all the necessary metadata, information, tools and functionality to work with our imagery, but we haven’t yet performed a bundle block adjustment.

 

Ortho Mapping blog series part 2

 

Block adjustment is the process of adjusting the parameters in the image support data to get an accurate transformation between the image and the ground. The process is based on the relationship between overlapping images, control points, the camera model, and topography – then computing a transformation for the group of images (a block). With aerial digital data, it consists of three key components:

  • Tie points – Common points that appear in overlapping images, tying the overlapping images to each other to minimize misalignment between the images. These are automatically identified by the software.
  • Ground control points – These are usually obtained with ground survey, and they provide references from features visible in the images to known ground coordinates.
  • Aerial triangulation – Computes an accurate camera model, ground position (X, Y, Z), and orientation (omega, phi, kappa) for each image, which are necessary to transform the images to match the control points and the elevation model.

When we created our workspace, we provided the Frames and Cameras tables, which contain the orientation and camera information needed to make up our camera model and to establish the relationship between the imagery and the ground. We also provided an elevation model which we obtained from the Terrain image service available through the Living Atlas of the World. Now we’re ready to move on to the next step in the ortho mapping process.

Performing a Block Adjustment for Digital Aerial Data

 

  1. In the ortho mapping workspace, open the Ortho Mapping tab and select Adjustment Options from the Adjust group. This is where we can define the parameters used in computing the block adjustment, which includes computing tie points. For more information on each parameter, check out the Adjustment Options help documentation.

Ortho Mapping Adjustment Options and GCP Import

 

 

  2. Next, we want to add Ground Control Points (GCPs) to our workspace to improve the overall georeferencing and accuracy of the adjustment. To do this, select the Manage GCPs tool in the Ortho Mapping tab and choose Import GCPs. We have a CSV table with X, Y and Z coordinates and accuracy to be used for this analysis.
    • If you have an existing table of GCPs, use this Import option and map the fields in the Import GCPs dialog for the X, Y, and Z coordinates, GCP label, and accuracy fields in your table. You may have photos of each GCP location for reference; if so, you can import the folder of photos for reference when you are measuring (or linking) the GCPs to the overlapping images.
    • You may also have secondary GCPs, or control points that were not obtained in a survey but from an existing orthoimage with known accuracy. You can import those here as well, or you can manually add them using the GCP Manager.
    • Once you have added GCPs to the workspace, use the GCP Manager to add tie points to the associated locations on each overlapping image. Select one of the GCPs in the GCP Manager table, then iterate through the overlapping images in the Image list below and use your cursor to place a tie point on the site that is represented by the GCP.

 

Add tie points for each GCP and change some to check points

A few notes:

Check Points: Be sure to change some of your GCPs to check points (right-click the GCP in the GCP Manager and select "Change to Check Point") so you can view the check point deviation in the Adjustment Report after running the adjustment. This essentially changes the point from a control point that facilitates the adjustment process to a control point that assesses the adjustment results. The icon in the GCP table will change from a circle to a triangle, and the check points appear as pink triangles in the workspace map.

Drone imagery: If you are performing a block adjustment with drone imagery, you must run the Adjust tool before adding GCPs. In this blog, we’re focusing on aerial digital data.

 

  3. Finally, we click the Adjust tool to compute the block adjustment. This will take some time; transforming a number of images so that they align with each other and the ground is complicated work, so get up, maybe do some stretches or get yourself a cup of coffee. The log window will let you know when the process is complete. When the adjustment is finished, you'll see new options available in the ortho mapping tab that enable you to assess the results of the adjustment.

 

Assessing the Block Adjustment

 

  1. Run the Analyze Tie Points tool to generate QA/QC data in your ortho mapping workspace. The Overlap Polygons feature class contains control point coverage in areas where images overlap, and the Coverage Polygons feature class contains control point coverage for each image in the image collection.  Inspect these feature classes to identify areas that need additional control points to improve block adjustment results.
QA/QC outputs in the ortho mapping workspace

 

  2. Open the Adjustment Report to view the components and results of the adjustment. Here you will find information about the number of control points used in the adjustment, the average residual error, tie point sets, and the connectivity of overlapping imagery. In our case, the Mean Reprojection Error of our adjustment is 0.38 pixels.

Now what?

The block adjustment tools allow for an iterative computation, so that you can check on the quality of the adjustment, modify options, add or delete GCPs, or recompute tie points before re-running the adjustment. If you are unsatisfied with the error in the Adjustment Report, try adding GCPs in the Manage GCPs pane, or try modifying some of the Adjustment Options. You can also change some of your check points back into GCPs, and choose a few other GCPs to be your check points. Re-run the adjustment and see how this impacts the shift.

Once you are satisfied with the accuracy of your adjusted imagery, it’s time to make ortho products! Check out the final installment in our blog series to see how it’s done.
