Imagery and Remote Sensing Blog


With the firehose of imagery that’s streaming down daily from a variety of sensors, the need for using AI to automate feature extraction is only increasing. To make sure your organization is prepared, Esri is taking AI to the next level. We are very excited to announce the release of ready-to-use geospatial AI models on the ArcGIS Living Atlas.

Article Overview: Esri is bringing ready-to-use deep learning models to our user community through ArcGIS Online.

To kick it off, we've added three models: building footprint extraction and land cover classification from satellite imagery, and tree point classification for point cloud datasets.

With the existing capabilities in ArcGIS, you’ve been able to train over a dozen deep learning models on geospatial datasets and derive information products using the ArcGIS API for Python or ArcGIS Pro, and scale up processing using ArcGIS Image Server.

Building footprints automatically extracted using the new deep learning model

These newly released models are a game changer! They have been pre-trained by Esri on huge volumes of data and can be readily used (no training required!) to automate the tedious task of digitizing and extracting geographical features from satellite imagery and point cloud datasets. They bring the power of AI and deep learning to the Esri user community. What’s more, these deep learning models are accessible for anyone with an ArcGIS Online subscription at no additional cost.

 

Using the models

Using these models is simple. You can use geoprocessing tools (such as the Detect Objects Using Deep Learning tool) in ArcGIS Pro with the imagery models.  Point the tool to the imagery and the downloaded model, and that’s about it – deep learning has never been this easy! A GPU, though not necessary, can help speed things up. With ArcGIS Enterprise, you can scale up the inferencing using Image Server.

Using the building footprint extraction model in ArcGIS Pro

Coming soon, you’ll be able to consume the model directly in ArcGIS Online Imagery and run it against your own uploaded imagery—all without an ArcGIS Enterprise deployment. The 3D Basemaps solution is also being enhanced to use the tree point classification model and create realistic 3D tree models from raw point clouds.

 

How can you benefit from these deep learning models?

It probably goes without saying that manually extracting features from imagery—like digitizing footprints or generating land cover maps—is time-consuming. Deep learning automates the process and significantly minimizes the manual interaction needed to create these products. However, training your own deep learning model can be complicated – it needs a lot of data, extensive computing resources, and knowledge of how deep learning works.

 

Sample building footprints extracted - Woodland, CA

With ready-to-use models, you no longer have to invest time and energy into manually extracting features or training your own deep learning model. These models have been trained on data from a variety of geographies and work well across them. As new imagery comes in, you can readily extract features at the click of a button, and produce layers of GIS datasets for mapping, visualization and analysis.

Sample building footprints extracted - Palm Islands, Dubai

 

Get to know the first three models we released

Three deep learning models are now available in ArcGIS Online (watch for more models in the future!). These models are available as deep learning packages (DLPKs) that can be used with ArcGIS Pro, ArcGIS Image Server, and the ArcGIS API for Python.

1. The Building Footprint Extraction model extracts building footprints from high resolution satellite imagery. While it's designed for the contiguous United States, it performs fairly well in other parts of the globe.

The model performs fairly well in other parts of the globe. Results from Ulricehamn, Sweden.

Here’s a story map presenting some of the results. Building footprint layers are useful for creating basemaps and in analysis workflows for urban planning and development, insurance, taxation, change detection, and infrastructure planning.

2. The Landcover Classification model creates a land cover product from Landsat 8 imagery. The classified land cover uses the same classes as the National Land Cover Database. The resulting land cover maps are useful for urban planning, resource management, change detection, and agriculture.

Classified landcover map using Landsat 8 imagery

This generic model has been trained on the National Land Cover Database (NLCD) 2016 with the same Landsat 8 scenes that were used to produce the database. Land cover classification is a complex exercise that is hard to capture using traditional means. Deep learning models have a high capacity to learn these complex semantics and give superior results.
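
For a pixel-based product like this, the counterpart capability is Classify Pixels Using Deep Learning. Here's a hedged arcpy sketch – the paths are placeholders, and in arcpy.ia the function returns a raster object that you then save:

    import arcpy
    arcpy.CheckOutExtension("ImageAnalyst")

    # Classify a Landsat 8 scene with the downloaded land cover model.
    landcover = arcpy.ia.ClassifyPixelsUsingDeepLearning(
        "C:/data/landsat8_scene.tif",              # input imagery (placeholder)
        "C:/models/LandCoverClassification.dlpk",  # downloaded model (placeholder)
    )
    landcover.save("C:/data/results.gdb/landcover")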

3. The Tree Point Classification model classifies points representing trees in point cloud datasets.

Interactive 3D scene created using the tree point classification model.

Classifying tree points is useful for creating high quality 3D basemaps and for urban planning and forestry workflows.

 

Next steps

Try out the deep learning models in ArcGIS Living Atlas for yourself. Read more detailed instructions for using the deep learning models in ArcGIS. Have questions? Let us know on GeoNet how they are working for you, and which other feature extraction tasks you’d like AI to do for you!


Drone2Map users, it's time for an update!  

Version 2.2 was released last week, and you should be notified in the app that a new version is available. If you're unsure, view “About” in the main menu on the left side of the screen to verify your version, and download the new version if necessary. You can also download it from My Esri.

What’s new in ArcGIS Drone2Map version 2.2? 

We continue to improve Drone2Map with enhancements to the core technology as well as usability improvements that streamline your drone processing workflows. For a brief discussion of new features, see the blog What’s new in ArcGIS Drone2Map (July 2020); you can find a more detailed listing here, with full release notes available here.


This blog is just getting started and will grow with time.

Over the last few years, I have been publishing lidar colorized point clouds as supplements to the 3D cities, like the St. Louis area, that I have created using the 3D Basemap solution and extracting buildings from lidar. Sean Morrish wrote a wonderful blog on the mechanics of publishing lidar. Lately, I have looked into using colorized lidar point clouds as an effective and relatively cheap way to create 3D basemaps. The approach has some advantages: most states, counties, and cities already have lidar available, most have high resolution imagery, and all have the NAIP imagery needed to create these scenes.

Manitowoc County lidar scenes:

Some counties, cities, and states are doing this already. Manitowoc County, WI, has a 3D LiDAR Point Cloud and is also creating scenes with the data. Manitowoc County also did a great story map showing how their lidar is used as a colorized lidar point cloud; I highly recommend taking a look at it.

ManitowocStoryMap

The StoryMap includes a video showing how to capture vertical, horizontal, and direct distances on LiDAR ground surfaces at any location countywide, and how to obtain detailed measurements of small LiDAR surface features, such as the depth of ditches relative to the road.

It also shows how to measure LiDAR point cloud distances relative to the LiDAR ground surface, using house roofs as an example to determine building heights.

Here's one of Manitowoc's building scene layers, where they had the creative idea of using the same colorized lidar to give the illusion of building sides: the lidar classified as buildings is shown several times over, each copy at a slightly lower elevation by changing the offset in the symbology. Further down in this blog, I show how to do this.

Hats off to Bruce Riesterer of Manitowoc County, who put this all together before retiring, including coming up with the idea of using the building points multiple times with different colors to show the sides of buildings. He now works in private industry; see his new work at RiestererB_AyresAGO.


State of Connecticut 3D Viewer:

I helped the State of Connecticut CLEAR colorize their lidar using NAIP imagery; it was the first statewide lidar point cloud published to ArcGIS Online. It turned out to be about 650 GB of lidar broken into two scene layer packages. The time spent was mainly processing and loading time. CLEAR sent me the newest NAIP imagery they had, and with all the data on one computer, I just let the colorization run. With that layer and other layers CLEAR had online, a web scene was created. A feature class was added with links to their laz files, the imagery, the DEM, and other layers. This allows users to preview the lidar in a 3D viewer before downloading. Users can even make line or area measurements in 3D, so most users can evaluate the data before downloading it.

Connecticut 3D Viewer

Here's several views using different symbology and filters on the published lidar point cloud for Connecticut.

Colorized with NAIP:

ConnecticutColor

Class Code modulated with intensity: modulating with intensity makes features such as roads, sidewalks, rooftop details, and trees stand out.

Elevation modulated with intensity:

Color filtered to show buildings:

Here's some examples of how to display from their blog:

Connecticut Examples of lidar point cloud display

Chicago 3D:

Recently, I was asked by the Urban team to help with Chicago and created basic 3D building footprints from the Chicago Data Portal building footprints. I used the 8ppm Geiger lidar to create the DSM, nDSM, and DTM for the 3D Basemap solution. I then colorized the lidar twice, once using NAIP imagery and again using the high resolution leaf-off 2018 imagery. I then extracted the points classified as vegetation from the NAIP version and used them to replace the vegetation in the high resolution leaf-off scene layer, so the trees show as green while the rooftops keep the high resolution colors.

Leaf Off Colorized Lidar

Above is the high resolution leaf-off imagery used to colorize the lidar in the scene. Below is the same area with the vegetation colorized with NAIP (some building sides were classified as vegetation by the vendor in the 8ppm Geiger lidar delivery to the USGS). You can see how the trees are much more identifiable using the lidar colorized with NAIP.

This could be used as a 3D basemap. The 3D buildings do not have segmented roofs (divided by height), but the lidar shows the detail of the buildings. Below is the John Hancock Building, identified in blue with a basic multipatch polygon in a scene layer with transparency applied.

Chicago Skyline showing 3D basic buildings

Here's a view of the Navy Pier with both the NAIP colorized trees and High Resolution colorized lidar on at the same time.

Navy Pier

Picking the Imagery to use:

Using the Chicago lidar, I also built a scene to compare the different types of imagery used to colorize the lidar. It's a guide, using the video below, comparing high resolution leaf-on imagery vs. high resolution leaf-off imagery vs. 1m NAIP leaf-on imagery. You can see below how leaf-on imagery is great for showing trees. Your imagery does not have to match the year of the lidar exactly, but the closer the better. Making the result visually appealing is important in 3D cartography with colorized point clouds.

Chicago Lidar Colorized Comparison (High Res Leaf On vs. High Res Leaf Off vs. NAIP Leaf On).  

Leaf On High Resolution:

LeafOnHighRes

Leaf On NAIP:

Leaf Off High Resolution:

Leaf Off High Resolution

Sometimes colorful fall imagery, captured before the leaves drop, can be great for scenes, but it all depends on what you want to show.

A way to add trees with limbs, using points as leaves:

Here's a test I did using Dead Tree symbology to represent the trunks and branches, while the leaves come from the colorized lidar. My Trees from Lidar tool or the 3D Basemap solution tree tools can create the trees from lidar, and the resulting points can then be symbolized with the Dead Tree symbol. This image was made in ArcGIS Pro, not a web scene.

Trees using point cloud and Dead Tree Symbology to represent trunks and branches

City of Klaipėda 3D viewer (Lithuania):

The City of Klaipėda in Lithuania has used colorized point clouds of trees with cylinders to represent the trunks. The mesh here is very detailed. Because the view is zoomed in so far, the points appear fairly large.

Here's another view zoomed out:

And another further zoomed out:

Some of the scenes above use NAIP imagery instead of higher resolution imagery. Why, when higher resolution is available? Often, high resolution imagery is somewhat oblique, whereas NAIP is usually collected from a higher altitude. With the higher altitude you get less building lean in the imagery, and often the point spacing of the lidar does not support the higher resolution imagery anyway. In these cases NAIP is often preferred, but make point clouds with a couple of las files from each source to see what the differences are.

Here's one over Fairfax County, VA, using their published colorized lidar (from their 3D LiDAR Viewer) and using intensity to show the roofs. With intensity, you can see the roof details better than with the NAIP imagery.

Scene Viewer 

With large buildings, symbolizing the building sides and roofs with intensity really shows off the detail. The sides of the buildings are just representative, not showing what the actual sides look like.

Scene Viewer 

Below is the same area using just the NAIP colorized points, without the intensity-symbolized building sides and roofs.

The missing sides of the buildings make it hard to see where a building actually is, and much of the roof detail is lost. Colorizing with intensity brings out more detail because intensity is a measure, collected for every point, of the return strength of the laser pulse that generated the point; it is based, in part, on the reflectivity of the object struck by the laser pulse. The imagery often does not align exactly with the lidar because the collections may be slightly offset and the orthorectified imagery retains slight oblique angles.

Colorizing Redlands 2018 lidar:

Here's an example of colorizing the City of Redlands with the 2018 lidar, which I worked on to support one of the groups here at Esri. I highly recommend starting with just one las file and running through the process all the way to publishing to make sure there are no issues. In the past I have run through hours of download and colorization, only to learn a projection was wrong or something else was not correct (bad imagery, too dark or too light, etc.).

1. Downloaded the 2018 lidar from the USGS.

   a. Downloaded the metadata file with footprints, unzipped it, and added the Tile shapefile to ArcGIS Pro.

   b. I got the full path and name of the laz files from the USGS site. Here's an example of the path for a file to download:

      ftp://rockyftp.cr.usgs.gov/vdelivery/Datasets/Staged/Elevation/LPC/Projects/USGS_LPC_CA_SoCal_Wildfires_B1_2018_LAS_2019/laz/USGS_LPC_CA_SoCal_Wildfires_B1_2018_w1879n1438_LAS_2019.laz

   c. Added a path field to the Tiles and calculated the path, substituting each tile's name into the original path copied in step b. This is the formula I used in Calculate Field (a Python 3 version appears after step 1d):

"ftp://rockyftp.cr.usgs.gov/vdelivery/Datasets/Staged/Elevation/LPC/Projects/USGS_LPC_CA_SoCal_Wildfires_B1_2018_LAS_2019/laz/USGS_LPC_CA_SoCal_Wildfires_B1_2018_" & !Name! & "_LAS_2019.laz"

     d. Then exported the paths to a text file and used a free FTP download utility to mass-download all the laz files for the tiles that intersected the City of Redlands limits feature class.
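
For reference, here is the same step 1c calculation written for the Python 3 parser in Calculate Field (the formula above uses VB-style '&' concatenation; !Name! is the tile name field):

    "ftp://rockyftp.cr.usgs.gov/vdelivery/Datasets/Staged/Elevation/LPC/Projects/USGS_LPC_CA_SoCal_Wildfires_B1_2018_LAS_2019/laz/USGS_LPC_CA_SoCal_Wildfires_B1_2018_" + !Name! + "_LAS_2019.laz"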

2. Used Convert LAS to convert the data from LAZ format to LAS; LAS is required to colorize the lidar. I initially got an error because the projection was not supported. Turning off the rearrange points option let it convert from laz to las.

3. Looked at the projection in the downloaded metadata reports and found it was not supported because it used meters instead of feet. I modified a projection and then used a simple ModelBuilder tool to copy a projection file to each LAS file.

4. Evaluated the lidar by creating and populating a LAS dataset: ground, bridge decks, water, overlap, etc. were classified; buildings and vegetation were not.

5. Used Classify LAS Building with the aggressive option to classify the buildings. Classifying the buildings lets you filter them in the eventual scene. It's also useful for extracting building footprints, another operation I commonly do.

6. Used Classify LAS by Height to classify the vegetation: (3) Low Vegetation set to 2m, (4) Medium Vegetation set to 5m, (5) High Vegetation set to 50m. This classifies non-ground points from 0 to 2m as Low Vegetation, >2m to 5m as Medium, and >5m to 50m as High Vegetation. This is done so you can turn the vegetation off in a scene.

7. Used the NAIP imagery service in ArcGIS Pro with the Template set to None, and then the Split Raster tool to download the area I needed based on the Redlands city limits.

8. Created a mosaic dataset of the NAIP imagery. Applied a function to resample from 0.6m to 0.2m and a statistics function with a 2-pixel circle. This takes the coarse 0.6m (or 1m) NAIP imagery and smooths it for better colorization.

9. Colorized the lidar with the Colorize LAS tool. Set it to colorize RGB and infrared with bands 1, 2, 3, 4, and set an output folder. The default appends "_colorized" to each file name; I usually skip this and simply name the output folder colorlas. Set the output colorized lasd to be one directory above, with the same name. (A scripted version of steps 9, 11, and 12 appears after this list.)

10. Added the lasd to Pro and reviewed the las files. Found it covered the area, the lidar was colorized properly, and it had a good look to it. Set the symbology to RGB.

11. Ran the Create Point Cloud Scene Layer Package tool on the output lidar to create an slpk file, then added it to ArcGIS Pro to make sure it was working correctly. Changed the defaults as needed.

12. Uploaded the slpk through my ArcGIS Online account (or use the Share Package tool).

13. Published the scene layer and viewed it online.

14. Added the lidar scene layer multiple times and used symbology and properties to display it the way I wanted. You can see an example in the Chicago 3D scene. If you open it and save it as your own (while logged in), you can access the layers in the scene to see how they are set up with symbology and properties.
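
To give a feel for how steps 9, 11, and 12 look when scripted, here is a hedged arcpy sketch. The paths are placeholders, the band mapping string is an assumption, and most optional parameters are omitted; check each tool's documentation for the full syntax.

    import arcpy
    arcpy.CheckOutExtension("3D")  # Colorize LAS needs a 3D Analyst license

    # Step 9: colorize the LAS files with the resampled NAIP mosaic (RGB + NIR).
    arcpy.ddd.ColorizeLas(
        "C:/redlands/redlands.lasd",          # input LAS dataset
        "C:/redlands/naip.gdb/naip_mosaic",   # imagery used for colorization
        "RED Band_1; GREEN Band_2; BLUE Band_3; INFRARED Band_4",  # assumed mapping
        "C:/redlands/colorlas",               # output folder for colorized las
    )

    # Step 11: package the colorized point cloud as a scene layer package.
    arcpy.management.CreatePointCloudSceneLayerPackage(
        "C:/redlands/colorlas/colorlas.lasd",
        "C:/redlands/redlands_pointcloud.slpk",
    )

    # Step 12: upload the package to ArcGIS Online (sign in to your portal first).
    arcpy.management.SharePackage("C:/redlands/redlands_pointcloud.slpk")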

Kentucky lidar test:

Below is some work I helped the State of Kentucky with. The result is similar in some ways to the work of the neo-Impressionists Georges Seurat and Paul Signac, who pioneered the painting technique dubbed Pointillism.

Kentucky lidar colorized:

Georges Seurat's Seascape at Port-en-Bessin, Normandy, below:

Adding Tree Trunks:

Image showing buildings and tree trunks with beautiful fall imagery applied to lidar

Adding tree trunks in the background makes point cloud scenes look a little more real and helps viewers recognize that a feature is a tree. I recently worked on a park area in Kentucky, supporting the Division of Geographic Information (DGI), to try to create tree trunks like the ones in the City of Klaipėda scene in Lithuania, and then to add sides of buildings as a test. I did not want the tree trunks to overwhelm the scene, just to be a background item that makes the scenery more realistic. In the image and linked scene, the red arrows point to the tree trunks.

I used raster functions applied to a range-of-elevation-values raster created with LAS Point Statistics As Raster, using a 5ft sampling value. Then I used raster functions to create a raster showing the high points, as follows: (1) Statistics: 5x5 mean. (2) Minus: Statistics – Range, Max Of, Intersection Of. (3) Remap: Minimum -0.1 to 0.1, output 0, change missing values to NoData. (4) Plus: Range plus Max Of, Intersection Of. (5) Remap: 0 to 26 set to NoData.

I then used Raster to Point to create a point layer from the raster. I added a Height field calculated from gridcode, and a TreeTrunkRadius field calculated as (!Height!/60) + 0.5, which gave me a width to use later for the 3D polygons. I placed the points into the 3D Layers category, set the type to Base height, and applied the Height field in US feet. I used an expression for the height in the Expression Builder, $feature.Height * 0.66, because I wanted the tree trunks to go up only 2/3 of the height of the trees. I used 3D Layer to Feature Class to create a line feature class, then used it as input to Buffer 3D with the Field option for distance and TreeTrunkRadius as the input field. I then used Create 3D Object Scene Layer Package to output TreeTrunks.slpk, used Add Item in my ArcGIS Online account, and published it. Once online, I played with the colors to get the trunks brown. This process could also have used the Trees from Lidar tool, but the NAIP imagery was from fall and did not have the NDVI reflectivity that process needs.
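
For reference, here are the two field calculations above written as Calculate Field (Python 3) expressions; Height comes from the gridcode of the Raster to Point output, and TreeTrunkRadius later feeds the Buffer 3D distance field:

    Height = !gridcode!
    TreeTrunkRadius = (!Height! / 60) + 0.5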

Adding Building sides:

To fill in the building sides, I classified the buildings using Classify LAS Building, then used Extract LAS with a filter on the lidar dataset to extract only the points with the building class code. I published this layer and added it nine times to a group in the scene. The first three building layers I offset by +0.5, 0, and -0.5 meters and left colorized with the imagery; this makes the roof look slightly more solid, as there were gaps. I then took the remaining six layers and set them to intensity color at offsets of -1m, -2m, -3m, -4m, -5m, and -6m. This created the walls of the buildings. I changed the color range for the -3m layer to draw a slightly darker line across the walls, and I grouped all the layers together so they behave as one layer. You can also filter for buildings and colorize with Class Code, then set the class code color to whatever color you wish for the building sides. Setting it to modulate with intensity also changes the appearance, often giving shadowed building sides or what looks like windows at times.

Standard look before adding building sides and thickness to the roof:

No walls

With the illusion of walls added and roof thickened (Cumberland, KY):

Here's another view showing the building sides:

The building sides are an illusion: the colors do not match the real sides of the buildings, and the differences in the lidar intensity do not show the true differences in the sides, like the perceived windows above. Like most illusions or representations, it tricks the eye into seeing the sides of buildings. And like most illusions, it sometimes does not work from all angles, as with this building that had a low number of points:

Overall, I think the representative sides help you identify what is a building and what is not. The colors of the building sides will not match reality, nor will they show true windows or doors; doing that is usually a very costly and time-consuming process that comes after generating 3D models of the buildings. You could still generate 3D buildings with the Local Government 3D Basemap solution for ArcGIS Pro, make them semi-transparent, and use them for selection and analysis without any coloring. You could also apply rule packages (rpks) to the buildings to represent the 3D models more accurately, but this again would be a representation without very detailed data.

How to get lidar easily for a large area: Here's a video on how to download high volumes of lidar tiles from the USGS for a project area, including how to get a list of the laz files to download. Once downloaded, use Convert LAS to convert the files from LAZ format to LAS. LAS format is needed to run classification, colorization, and manual editing of the point cloud.

Covering the basics of Colorized Lidar Point Clouds, creating scene layer packages and visualizing:

Here's a video (26 minutes long) where I go through the process of colorizing lidar and what I look for, with timestamps below to jump to specific items:

Link: https://esri-imagery.s3.us-east-1.amazonaws.com/events/UC2020/videos/ColorizingLidar.mp4

0:30: Talking about lidar input and some lidar classification.

1:05: Adding imagery services and reviewing the imagery alignment to use for colorizing the lidar.

1:49: Getting the NAIP imagery service.

2:20: High resolution imagery services.

3:35: Talking about colorizing trees with leaf-off vs. leaf-on imagery.

4:55: Comparing imagery; coloring of trees and sidewalks with leaf-off imagery.

6:00: Using Split Raster to download imagery from a service.

8:25: Using Mosaic to New Raster on the downloaded split images.

10:50: Checking the downloaded mosaic.

11:40: The Colorize LAS tool.

13:52: Talking about the lidar: how big it is, how long it takes to colorize, download times from services using Split Raster, and the size of the scene layer package relative to the uncolorized lidar.

15:40: Adding the colorized lidar and seeing it in 3D.

16:40: Upping the display limit to allow more points to be seen in the 3D viewer.

17:00: Adding imagery and reviewing the colorized lidar.

17:56: Increasing the point size to better see the roofs, trees, and sides of buildings in the colorized lidar; talking about how color does not match on the sides of buildings.

19:00: Looking at trees and shadows.

19:25: Looking at roofs and down into the streets.

20:10: Running Create Point Cloud Scene Layer Package.

21:30: Adding the scene layer package to Pro and seeing the increased speed from tiling and formatting.

22:30: Looking at it with different symbology settings: elevation, class, intensity, return (Geiger lidar does not have returns).

24:35: Looking at a single tree.

24:50: The Share Package tool.

26:35: Showing the web scene.

There is also this video, done a couple of years ago, that covers the topic in a different way.

DEMs:

Loading a higher resolution DTM (DEM) as the ground is often needed. In this link you can turn off the DTM1m_tif layer, which is the lidar ground DTM, versus the World Terrain service to see the difference. In the US, NED is usually good enough that your colorized las will be very close, but sometimes your colorized buildings will go into the ground, and the lidar points near the ground will sit under it or float on top of it. You can take your las files and, in ArcGIS Pro, see whether the ground points fall below or above the terrain. If the difference is too much, you need to publish your own DTM. The 3D Basemap solution (currently called the Local Government 3D Basemap solution) can guide you through this process and might in the future guide you through the colorization and publication of lidar. Here's a blog that goes through how to do it, along with the help. The Local Government 3D Basemaps solution has tasks that can also walk you through publishing an elevation layer.

Important tools for building colorized lidar packages:

Edit LAS file classification codes - Every lidar point can have a classification code assigned to it that defines the type of object that has reflected the laser pulse. Lidar points can be classified into a number of categories, including bare earth or ground, top of canopy, and water. The different classes are defined using numeric integer codes in the LAS files.

Convert LAS - Converts LAS files between different compression methods, file versions, and point record formats.

Extract LAS - Filters, clips, and reprojects the collection of lidar data referenced by a LAS dataset.

Colorize LAS - Applies colors and near-infrared values from orthographic imagery to LAS points.

Classify LAS Building - Classifies building rooftops and sides in LAS data.

Classify Ground - Classifies ground points in aerial lidar data.

Classify LAS by Height - Reclassifies lidar points based on their height from the ground surface. Primarily used for classifying vegetation.

Classify LAS Noise - Classifies LAS points with anomalous spatial characteristics as noise.

Classify LAS Overlap - Classifies LAS points from overlapping scans of aerial lidar surveys.

Change LAS Class Codes - Reassigns the classification codes and flags of LAS files.

Create Point Cloud Scene Layer Package - Creates a point cloud scene layer package (.slpk file) from LAS, zLAS, LAZ, or LAS dataset input.

Share Package - Shares a package by uploading to ArcGIS Online or ArcGIS Enterprise.

Share a web elevation layer

Here's some colorized lidar to look at:

Helsinki point cloud  (Finland) Scene Viewer 

Barnegat Bay (New Jersey, US) Scene Viewer

City of Denver Point Cloud (Colorado, US)  Scene Viewer 

City of Redlands (California, US) Scene Viewer   Has a comparison of high resolution vs. NAIP colorized lidar.

Kentucky Lidar Test (Cumberland, Kentucky, US) Scene Viewer   Fall imagery applied to trees, building sides added, tree trunks added.

Lewisville, TX, US (near Dallas) Scene Viewer 

I'll be adding more to this blog in the future, stay tuned.

Arthur Crawford - Living Atlas/Content Product Engineer


Users of the Oriented Imagery Catalog Management Tools in ArcGIS Pro 2.5 may have encountered a crash when browsing for an Oriented Imagery Catalog (OIC) as input in any of the tools in the Oriented Imagery Catalog toolbox. 

 

This bug will be fixed in the next release of ArcGIS Pro, but there is a workaround in the meantime. To avoid the crash, don't click the Browse folder icon to navigate to your OIC. Instead of browsing to the file, you should copy the path to the OIC file and paste it into the input field of the GP tool.

To do this in Windows:

  1. Open Windows File Explorer.
  2. Browse to the OIC file. (If you’ve created this in your project’s geodatabase, the OIC file will be located by default at C:\Users\[username]\Documents\ArcGIS\Projects\[Project Name]\[OIC name].)
  3. Select the OIC file, then click Copy Path. (You may have to remove any quotation marks around the file path.)

   Screenshot of Windows File Explorer

  4. In ArcGIS Pro, paste the path into the Input Oriented Imagery field of the GP tool.


Drone2Map version 2.1 is now available. Current users can view “About” in the main menu on the left side of the screen to verify their version and download the new version if necessary. You can also download it from My Esri.

 

What’s new in Drone2Map for ArcGIS version 2.1? 

In this release we continue to improve the user experience in many areas of the workflow.

 

Camera Model Editor

  • Esri maintains an internal camera database which is updated along with Drone2Map several times per year. In addition to the internal camera database, Drone2Map also has a user camera database. With the Camera Model Editor, users are now able to edit existing cameras from the internal camera database and store the modified camera models in the user camera database.

  • An important use case supported by this capability is to provide support for high quality metric cameras, where the photogrammetric lens parameters such as focal length, principal point and distortion are stable and known. Since Drone2Map supports consumer cameras, these parameters may (by default) be adjusted during processing. For metric cameras, the Camera Model Editor allows users to input known, high accuracy parameters when applicable and maintain those values throughout processing.

  • Additionally, when a project has been processed successfully and you are happy with the results, the .d2mx file from that project may be imported into the Camera Model Editor of a new project. The optimized camera parameters from the imported project are then stored in the user camera database, allowing those parameters to be used in future processing jobs. This helps standardize results and reduce processing times.

 

Control Updates

  • In this release there is an improved user experience for managing control using the Control Manager.  Users can view properties of each control point, filter based on the type of control, and launch the links editor, all with a few button clicks.
  • For some geographic features, such as water, it can be difficult to generate sufficient tie points and successfully match those tie points using automated algorithms. Now users can create and link manual tie points to images to successfully process imagery in geographic areas that previously caused problems.

  • Linking control to your images can be a time-consuming process. At Drone2Map 2.1, we have introduced assisted image links. This workflow requires initial processing to be run, and after you enter one link, the software is able to automatically find your control markers in subsequent images and provide visual feedback as to the accuracy of that link. Once satisfied with the positioning of the control to the images, simply click Auto Link and Drone2Map will link the verified control for you.

 

Share DEM as Elevation Layer

  • Drone2Map users are now able to publish their own custom surfaces on ArcGIS Online or ArcGIS Portal for either an ortho reference DTM or top surface DSM. These surfaces can be used in 3D web scenes to ensure accurate height values for point clouds and meshes generated by Drone2Map.

 

Add custom DEM into the Drone2Map project

  • Users may add their own elevation surface into the project (on top of the default World Terrain surface), to ensure that any 3D views incorporate the authoritative elevation surface.  This can be very useful in project areas that are captured on multiple dates (e.g. agriculture) and/or where an accurate input terrain is important (e.g. an airport, construction site, or a site with material stockpiles).

  • In addition, if ground control points are subsequently extracted from the map, the Z values are provided by the custom elevation surface. This is important to ensure date-to-date consistency for sites that are captured repeatedly and analyzed over time.

 

Elevation Profile and Spectral Profile for additional analytical capabilities

  • Users are now able to generate cross-sectional elevation profiles in any Drone2Map projects that are processed to create output surfaces (DSM and/or DTM).

                                Imagery provided by GeoCue Group, Inc.                                                                

 

  • For users with multispectral cameras, Drone2Map also allows extraction of spectral profiles (defined by point samples, linear transects, or 2D areas of interest) to support detailed analysis of vegetation or other landcover surface types.

 

 

Colorized Indices

  • Indices created from multispectral imagery products are now colorized by default.

 

New Inspection Template

  • The inspection template has been added for all users who wish to create projects focused on inspecting, annotating, and sharing raw drone images.

Browse performance improvements

  • Performance has been improved when browsing folders and files on disk.

Exif reader improvements

  • The performance of reading and extracting Exif data from drone images has been improved, significantly reducing the amount of time required to create a project.

Licensing Changes

  • Drone2Map for ArcGIS 2.1 is a “premium app” which is a for-fee add-on to ArcGIS Online or ArcGIS Enterprise.

 

Full release notes for Drone2Map 2.1 are available here


The ArcGIS Image Analyst extension for ArcGIS Pro 2.5 now features expanded deep learning capabilities, enhanced support for multidimensional data, enhanced motion imagery capabilities, and more.

Learn about the new imagery and remote sensing features added in this release to improve your image visualization, exploitation, and analysis workflows.

Deep Learning

We’ve introduced several key deep learning features that offer a more comprehensive and user-friendly workflow:

  • The Train Deep Learning Model geoprocessing tool trains deep learning models natively in ArcGIS Pro. Once you’ve installed the relevant deep learning libraries (PyTorch, Fast.ai and Torchvision), this enables seamless, end-to-end workflows (a short scripted sketch follows this list).
  • The Classify Objects Using Deep Learning geoprocessing tool is an inferencing tool that assigns a class value to objects or features in an image. For instance, after a natural disaster, you can classify structures as damaged or undamaged.
  • The new Label Objects For Deep Learning pane provides an efficient experience for managing and labelling training data. The pane also provides the option to export your deep learning data.
  • A new user experience lets you interactively review deep learning results and edit classes as required.
New deep learning tools in ArcGIS Pro 2.5
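
As promised above, here is a hedged sketch of the new training tool as a scripted call. The folders, epoch count, and model type are illustrative assumptions, and most optional parameters are omitted:

    import arcpy
    arcpy.CheckOutExtension("ImageAnalyst")

    # Train a model from image chips previously exported with the
    # Export Training Data For Deep Learning tool.
    arcpy.ia.TrainDeepLearningModel(
        in_folder="C:/data/training_chips",  # exported chips (placeholder)
        out_folder="C:/models/my_model",     # output model folder (placeholder)
        max_epochs=20,
        model_type="SSD",                    # e.g. a single-shot object detector
    )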

Multidimensional Raster Management, Processing and Analysis

New tools and capabilities for multidimensional analysis allow you to extract and manage subsets of a multidimensional raster, calculate trends in your data, and perform predictive analysis.

New user experience

A new contextual tab in ArcGIS Pro makes it easier to work with multidimensional raster layers or multidimensional mosaic dataset layers in your map.

Intuitive user experience to work with multidimensional data

  • You can intuitively work with multiple variables and step through time and depth.
  • You have direct access to the new functions and tools that are used to manage, analyze and visualize multidimensional data.
  • You can chart multidimensional data using the temporal profile, which has been enhanced with spatial aggregation and charting trends.

New tools for management and analysis

The new multidimensional functions and geoprocessing tools are listed below.

New geoprocessing tools for management

We’ve added two new tools to help you extract data along specific variables, depths, time frames, and other dimensions:

  • Subset Multidimensional Raster
  • Make Multidimensional Raster Layer

New geoprocessing tools for analysis

  • Find Argument Statistics allows you to determine when or where a given statistic was reached in a multidimensional raster dataset. For instance, you can identify when maximum precipitation occurred over a specific time period.
  • Generate Trend Raster estimates the trend for each pixel along a dimension for one or more variables in a multidimensional raster. For example, you might use this to understand how sea surface temperature has changed over time.
  • Predict Using Trend Raster computes a forecasted multidimensional raster using the output trend raster from the Generate Trend Raster tool. This could help you predict the probability of a future El Nino event based on trends in historical sea surface temperature data.

Additionally, the following tools have improvements that support new analytical capabilities:

New raster functions for analysis

  • Generate Trend
  • Predict Using Trend
  • Find Argument Statistics
  • Linear Spectral Unmixing
  • Process Raster Collection

New Python raster objects

Developers can take advantage of new classes and functions added to the Python raster object that allow you to work with multidimensional rasters.

New classes include:

  • ia.RasterCollection – The RasterCollection object allows a group of rasters to be sorted and filtered easily and prepares a collection for additional processing and analysis.
  • ia.PixelBlock – The PixelBlock object defines a block of pixels within a raster to use for processing. It is used in conjunction with the PixelBlockCollection object to iterate through one or more large rasters for processing.
  • ia.PixelBlockCollection – The PixelBlockCollection object is an iterator of all PixelBlock objects in a raster or a list of rasters. It can be used to perform customized raster processing on a block-by-block basis, when otherwise the processed rasters would be too large to load into memory.

New functions include (a short example follows the list):

  • ia.Merge() – Creates a raster object by merging a list of rasters spatially or across dimensions.
  • ia.Render(inRaster, rendering_rule={…}) – Creates a rendered raster object by applying symbology to the referenced raster dataset. This function is useful when displaying data in a Jupyter notebook.
  • Raster functions for arcpy.ia – You can now use almost all of the raster functions to manage and analyze raster data using the arcpy API.
New tools to analyze multidimensional data
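
Here is the example promised above, a hedged sketch that groups rasters into a RasterCollection and merges the same list with ia.Merge(). The file paths are placeholders; see the arcpy.ia documentation for the full argument lists:

    import arcpy
    from arcpy.ia import RasterCollection, Merge

    arcpy.CheckOutExtension("ImageAnalyst")

    # Group three yearly rasters; a RasterCollection can then be
    # sorted and filtered before further processing.
    rasters = [arcpy.Raster("C:/data/sst_2017.crf"),
               arcpy.Raster("C:/data/sst_2018.crf"),
               arcpy.Raster("C:/data/sst_2019.crf")]
    collection = RasterCollection(rasters)

    # Merge the list of rasters into a single raster object and save it.
    merged = Merge(rasters)
    merged.save("C:/data/sst_merged.crf")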

Motion Imagery

This release includes enhancements to our motion imagery support, so you can better manage and interactively use video with embedded geospatial metadata:

  • You can now enhance videos in the video player using contrast, brightness, saturation, and gamma adjustments. You can also invert the color to help identify objects in the video.
  • Video data in multiple video players can be synchronized for comparison and analysis.
  • You can now measure objects in the video player, including length, area, and height.
  • You can list and manage videos added to your project with the Video Feed Manager.
Motion imagery in ArcGIS Pro

Pixel Editor

The Pixel Editor provides a suite of tools to interactively manipulate pixel values of raster and imagery data. Use the toolset for redaction, cloud and noise removal, or to reclassify categorical data. You can edit an individual pixel or a group of pixels at once. Apply editing operations to pixels in elevation datasets and multispectral imagery. Key enhancements in this release include the following:

  • Apply a custom raster function template to regions within the image
  • Interpolate elevation surfaces using values from the edges of a selected region

Additional resources


Using your knowledge of geography, geospatial and remote sensing science, and using the image classification tools in ArcGIS, you have produced a pretty good classified raster for your project area. Now it’s time to clean up some of those pesky pixels that were misclassified – like that one pixel labelled “shrub” in the middle of your baseball diamond. The fun part is using the Pixel Editor to interactively edit your classified raster data to be useful and accurate. The resulting map can be used to drive operational applications such as land use inventory and management.

For operational management of land use units, a useful classified map may not necessarily be the most accurate in terms of identified features. For example, a small clearing in a forest, cars in a parking lot, or a shed in a backyard are not managed differently than the larger surrounding land use. The Pixel Editor merges and reclassifies groups of pixels, objects and regions quickly and easily into units that can be managed similarly, and result in presentable and easy-to-understand maps for your decision support and management.

What is the Pixel Editor?

The Pixel Editor is an interactive group of tools that enables editing of raster data and imagery, and it is included with the ArcGIS Image Analyst extension for ArcGIS Pro. It is a suite of image processing capability, driven by an effective user interface, that allows you to interactively manipulate pixel values. Try different operations using different parameter settings to achieve optimum editing results, then save, publish and share them.

The Pixel Editor is contextual to the raster source type of the layer being edited, which means that suites of capability are turned on or off depending on the data type of the layer you are working with. For thematic data, you can reassign pixels, objects and regions to different classes, perform operations such as filtering, shrinking or expanding classes, masking, or even create and populate new classes. Edits can be saved, discarded, and reviewed in the Edits Log.

Pixel Editor in action

Because the Pixel Editor is contextual, you need to first load the layer you want to edit. Two datasets are loaded into ArcGIS Pro: the infrared source satellite image and the classified result. The source data is infrared satellite imagery, where vegetation is depicted in shades of red depending on coverage and relative vigor. This layer has been classified using the Random Trees classifier in ArcGIS Pro. The class map needs editing to account for classification discrepancies and to support operational land use management.

Launch the Pixel Editor

To launch the Pixel Editor, select the classified raster layer in the Contents pane, go to the Imagery tab and click the Pixel Editor button from the Tools group.


The Pixel Editor tab will open. In this example, we’ll be editing a land use map, so the editor will present you with editing tools relevant for thematic data.

The Reclassify dropdown menu

The Region group provides tools for delineating and managing a region of interest. The Edit group provides tools to perform specific operations to reclassify pixels, objects or regions of interest. The Edit group also provides the Operations gallery, which only works on Regions.

Reclassify

Reclassify is a great tool to reassign a group of pixels to a different class. In the example below, you can see from the multispectral image that either end of the track infield is in poor condition with very little vegetation, which resulted in that portion of the field being incorrectly classified. We want to reclassify these areas as turf, which is colored bright green in the classified dataset.

Infrared image and associated classmap needing edits.

We used the multispectral image as the backdrop to more easily digitize the field, then simply reassigned the incorrect class within the region of interest to the Turf class.

Edited classmap

Majority Filter and Expand
Check out the parking lots south of the track field containing cars, which are undesirable in terms of classified land use. We removed the cars and made the entire parking lot Asphalt with a two-step process:

Parking lot before editing
(1) We digitized the parking lot and removed the cars with a Majority Filter operation with a filter size of 20 pixels – the size of the biggest cars in the lot.

(2) Then we used Expand to reclassify any remaining pixels within the lot to Asphalt.

Parking lot after Majority Filter and Expand operations

Add a new class

Another great feature of the Pixel Editor is the ability to add a new class to your classified raster. Here, we added a Water class to account for water features that we missed in the first classification.

Add new class

New class WATER was added to the classmap

In the New Class drop-down menu, you can add a new class, provide its name, class codes, and define a color for the new class display.

After adding the new class to the class schema, we used the Reclass Object tool to reassign the incorrect Shadow class to the correct Water class. Simply click the object you want to reclassify and encompass it within the circle - and voila! – the object is reclassified to Water.

Reclass incorrect class "Shadow" to correct class "Water"

Feature to Region

Sometimes you may have an existing polygon layer with more accurate class polygon boundaries. These could be building footprints, roads, wetland polygons, water bodies and more. Using the Feature to Region option you can easily create a region of pixels to edit by clicking on the desired feature from your feature layers in the map. Then use the Reclass by Feature tool to assign the proper class.

Region from Feature Edit

We see the updated water body now matches the polygon feature from the feature class. The class was also changed from Shadow to its correct value, Water.

Summary

The Pixel Editor provides a fast, easy, interactive way to edit your classified rasters. You can edit groups of pixels and objects, and editing operations include reclassification using filtering, expanding and shrinking regions, or by simply selecting or digitizing the areas to reclassify. You can even add an entire new class. Try it out with your own data, and see how quickly you can transform a good classification data set into an effective management tool!

Acknowledgement

Thanks to the co-author, Eric Rice, for his contributions to this article.


Do you have blemishes in your image products, such as clouds and shadows that obscure interesting features, or DEMs that don’t represent bare earth? Or perhaps you want to obscure certain confidential features, or correct erroneous class information in your classmap. The Pixel Editor can help you improve your final image products.

 

After you have conducted your scientific remote sensing and image analysis, your results need to be presented to your customers, constituents and stakeholders. Your final products need to be correct and convey the right information for decision support and management. The Pixel Editor helps you achieve this last important aspect of your workflow – effective presentation of results.

 

Introducing the Pixel Editor

The Pixel Editor, in the Image Analyst extension, provides a suite of tools to interactively manipulate pixel values for raster and imagery data. It allows you to edit an individual pixel or groups of pixels. The types of operations that you can perform depends on the data source type of your raster dataset.

The Pixel Editor tools allow you to perform editing tasks on your raster datasets such as redaction, cloud and noise removal, and reclassifying categorical data.

Blog Series

We will present a series of blogs addressing the robust capabilities of the Pixel Editor. We will focus on real-world practical applications for improving your imagery products, and provide tips and best practices for getting the most out of your imagery using the Pixel Editor. Stay tuned for this interesting and worthwhile news.

 

Your comments, inputs and application examples of the Pixel Editor capability are very welcome and appreciated!


In the aftermath of a natural disaster, response and recovery efforts can be drastically slowed down by manual data collection. Traditionally, insurance assessors and government officials have to rely on human interpretation of imagery and site visits to assess damage and loss. But depending on the scope of a disaster, this necessary process could delay relief to disaster victims.

Article Snapshot: At this year’s Esri User Conference plenary session, the United Services Automobile Association (USAA) demonstrated the use of deep learning capabilities in ArcGIS to perform automated damage assessment of homes after the devastating Woolsey fire. This work was a collaborative prototype between Esri and USAA to show the art of the possible in doing this type of damage assessment using the ArcGIS platform.

The Woolsey Fire burned for 15 days, consuming almost 97,000 acres and damaging or destroying thousands of structures. Deep learning within ArcGIS was used to quickly identify damaged structures within the fire perimeter, fast-tracking insurance claim processing for impacted residents and businesses.

The process included capturing training samples, training the deep learning model, running inferencing tools and detecting damaged homes – all done within the ArcGIS platform. In this blog, we’ll walk through each step in the process.

Step 1: Managing the imagery

Before the fires were extinguished, DataWing flew drones in the fire perimeter and captured high resolution imagery of impacted areas. The imagery totaled 40 GB in size and was managed using a mosaic dataset. The mosaic dataset is the primary image management model for ArcGIS to manage large volumes of imagery.

Step 2: Labelling and preparing training samples

Prior to training a deep learning model, training samples must be created to represent areas of interest – in this case, USAA was interested in damaged and undamaged buildings. The building footprint data provided by LA County was overlaid on the high resolution drone imagery in ArcGIS Pro, and several hundred homes were manually labelled as Damaged or Undamaged (a new field called “ClassValue” in the building footprint feature class was attributed with this information). These training features were used to export training samples using the Export Training Data for Deep Learning tool in ArcGIS Pro, with the metadata output format set to ‘Labeled Tiles’.

Resultant image chips (Labeled Tiles used for training the Damage Classification model)
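
As a hedged sketch (not the team's exact parameters), the chip export in Step 2 looks roughly like this when scripted; the paths and chip format are placeholders, and the field name comes from the description above:

    import arcpy
    arcpy.CheckOutExtension("ImageAnalyst")

    arcpy.ia.ExportTrainingDataForDeepLearning(
        in_raster="C:/woolsey/drone_mosaic.crf",         # drone imagery mosaic
        out_folder="C:/woolsey/damage_chips",            # output chip folder
        in_class_data="C:/woolsey/data.gdb/footprints",  # labeled footprints
        image_chip_format="TIFF",
        metadata_format="Labeled_Tiles",
        class_value_field="ClassValue",
    )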

Step 3: Training the deep learning model

ArcGIS Notebooks was used for training purposes. ArcGIS Notebooks is pre-configured with the necessary deep learning libraries, so no extra setup was required. With a few lines of code, the training samples exported from ArcGIS Pro were augmented. Using the arcgis.learn module in the ArcGIS Python API, optimum training parameters for the damage assessment model were set, and the deep learning model was trained using a ResNet34 architecture to classify all buildings in the imagery as either damaged or undamaged.

The model converged around 99% accuracy
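
For readers who want to try something similar, below is a minimal arcgis.learn sketch of this training step. The chip path, batch size, epochs, and learning rate are illustrative, not the values used for this project:

    from arcgis.learn import prepare_data, FeatureClassifier

    # Labeled Tiles exported in Step 2 (path is a placeholder).
    data = prepare_data("/arcgis/home/damage_chips", batch_size=16)

    # ResNet34-backed classifier, as described above.
    model = FeatureClassifier(data, backbone="resnet34")
    model.fit(epochs=10, lr=0.001)

    # Save the model as a deep learning package for the inferencing step.
    model.save("woolsey_damage_model")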

Once complete, the ground truth labels were compared to the model classification results to get a quick qualitative idea on how well the model performed.

Model Predictions

For complete details on the training process, see our post on Medium.

Finally, with the model.save() function, the model can be saved and used for inferencing purposes.

Step 4: Running the inferencing tools

Inferencing was performed using the ArcGIS API for Python. By running inferencing inside of ArcGIS Enterprise using the model.classify_features function in Notebooks, we can take the inferencing to scale.

The result is a feature service that can be viewed in ArcGIS Pro. (Here’s a link to the web map).

Over nine thousand buildings were automatically classified using deep learning capabilities within ArcGIS!

The map below shows the damaged buildings marked in red, and the undamaged buildings in green. With 99% accuracy, the model approaches the performance of a trained adjuster – what used to take days or weeks can now be done in a matter of hours.

Inference results

Step 5: Deriving valuable insights

Business Analyst: Now that we had a better understanding of the impacted area, we wanted to understand which members were impacted by the fires. When deploying mobile response units to disaster areas, it’s important to know where the most at-risk populations are located, for example, the elderly or children. Using Infographics from ArcGIS Business Analyst, we extracted valuable characteristics and information about the impacted community and generated a report to help mobile units make decisions faster.

Get location intelligence with ArcGIS Business Analyst

Operations Dashboard: Using an operations dashboard containing enriched feature layers, we created easy, dynamic access to the status of any structure, the value of the damaged structures, the affected population, and much more.

            

Summary:

Using deep learning, imagery and data enrichment capabilities in the ArcGIS platform, we can quickly distinguish damaged from undamaged buildings, identify the most at-risk populations, and organizations can use this information for rapid response and recovery activities.

 More Resources:

Deep Learning in ArcGIS Pro

Distributed Processing using Raster Analytics

Image Analysis Workflows

Details on the model training of the damage assessment 

ArcGIS Notebooks

ABOUT THE AUTHORS

Vinay Viswambharan

Product manager on the Imagery team at Esri, with a zeal for remote sensing and everything imagery.

Rohit Singh

Development Lead - ArcGIS API for Python. Applying deep learning to the Science of Where @Esri. https://twitter.com/geonumist
