Help using the land_cover_classification_using_unet Jupyter notebook sample

07-24-2019 01:45 PM
RichardFairhurst
MVP Honored Contributor

I downloaded a set of sample Jupyter notebooks from Esri at https://developers.arcgis.com/python/sample-notebooks/.  One of the notebooks is called land_cover_classification_using_unet, which is supposed to showcase an end-to-end land cover classification workflow using the ArcGIS API for Python. The workflow consists of three major steps: (1) extract training data, (2) train a deep learning image segmentation model, and (3) deploy the model for inference and create maps.

I am having trouble running the notebook, and so far have only gotten the first two cells to work, which just create a connection to ArcGIS Online.  The third and fourth lines of code are supposed to access a labeled image used to train the model, but I get an IndexError no matter what index value I use, which basically means the search did not find the image.

label_layer = gis.content.search("Kent_county_full_label_land_cover")[1] # the index might change
label_layer
---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-29-a4ac34d0306c> in <module>
----> 1 label_layer = gis.content.search("Kent_county_full_label_land_cover")[1] # the index might change
      2 label_layer

IndexError: list index out of range
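
For reference, here is a slightly more defensive version of that cell that I have been using to see what the search actually returns; the print loop, the empty-list check, and the outside_org flag are my own additions, not the notebook's original code:

from arcgis.gis import GIS

gis = GIS()  # anonymous connection to ArcGIS Online, as in the notebook

# Widen the search beyond my own organization (my assumption, not the notebook's).
results = gis.content.search("Kent_county_full_label_land_cover", outside_org=True)

# List whatever comes back so I can see which index, if any, is the right one.
for i, item in enumerate(results):
    print(i, item.title, item.type)

if results:
    label_layer = results[0]
else:
    print("Search returned no items - this is where the IndexError comes from.")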

I downloaded the original classified image for Kent County in Delaware from the Chesapeake Conservancy land cover project.  It looks the same, although I am not completely sure it matches the extent or classifications of the training image the notebook was supposed to use.

How do I change the code to use the image I downloaded and saved on my computer rather than the image from ArcGIS Online?

I will probably be asking more questions as I progress through the code, since it seems likely I will hit other problems.  I am hoping to first complete the notebook example covering the Delaware region and afterward adapt it to process the NAIP imagery for my jurisdiction.

39 Replies
RichardFairhurst
MVP Honored Contributor

I just found a link to the GitHub code for a deep learning project that extracts building footprints from satellite imagery.  An overview description of the project is here.  I have not tried it or really explored the code yet, but at first glance this seems much closer to the kind of code I need.  The code was designed for Linux, so I expect it will need some tweaking to run on Windows.

DanPatterson_Retired
MVP Emeritus

I will look at this sometime soon, but I have other distractions to deal with.

RichardFairhurst
MVP Honored Contributor

I have been trying to run code from the last link I posted, but the code was developed by Microsoft and promotes the use of Azure.  I am having problems setting up the Azure workspace, and I don't really want to risk getting charged for storing data that is purely experimental at first.  I am frustrated that most of my problems center on licensing and access requirements of proprietary software, or on the setup of online services that may require payment, and that I can't even begin running the code that actually processes any imagery or does any deep learning.

I found yet another GitHub project for extracting building footprints, described in the video here.  This code was developed by a graduate student.  He limited himself to 3-band imagery, which is the kind of imagery I have.  I am not sure how easy this code is to run, but it appears to include a command line interface, which is nice.  Hopefully it avoids software that isn't open source and online resources that have to be paid for, but I haven't verified that yet.

RichardFairhurst
MVP Honored Contributor

What really got me interested in deep learning was the Microsoft Building Footprints, and my goal is essentially to recreate their process so that I can apply it to more recent aerial photos.  They applied semantic segmentation like the project in my previous post, but they achieved much better results.  Apparently it is crucial to incorporate the RefineNet upsampling layers described in this paper to achieve resolutions that make it possible to extract individual building footprints where buildings cluster near each other.  The RefineNet paper included a link to the MATLAB code the authors used, and their GitHub page included a link to a PyTorch implementation of RefineNet.
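
Just to check my own understanding of the upsampling idea, here is a minimal PyTorch sketch of fusing a coarse, semantically strong feature map with a finer one by upsampling and adding; this is only my schematic reading of the general approach, not the actual RefineNet block from the paper:

import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleFusion(nn.Module):
    """Fuse a coarse feature map with a finer one (a simplified stand-in for
    the idea behind RefineNet-style upsampling, not the paper's exact block)."""
    def __init__(self, coarse_channels, fine_channels, out_channels):
        super().__init__()
        self.coarse_conv = nn.Conv2d(coarse_channels, out_channels, 3, padding=1)
        self.fine_conv = nn.Conv2d(fine_channels, out_channels, 3, padding=1)

    def forward(self, coarse, fine):
        # Upsample the low-resolution map to the finer map's spatial size,
        # then combine the two so fine detail is recovered in the output.
        coarse = self.coarse_conv(coarse)
        coarse = F.interpolate(coarse, size=fine.shape[-2:], mode="bilinear",
                               align_corners=False)
        return coarse + self.fine_conv(fine)

# Quick shape check with dummy feature maps.
fusion = SimpleFusion(coarse_channels=256, fine_channels=64, out_channels=128)
out = fusion(torch.randn(1, 256, 16, 16), torch.randn(1, 64, 64, 64))
print(out.shape)  # torch.Size([1, 128, 64, 64])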

Sadly I have not found any code published by Microsoft related to the footprints they created, so I am stuck trying to build up enough of a knowledge base to move beyond just a conceptual understanding of what they did toward my own practical applications.

RichardFairhurst
MVP Honored Contributor

I tried running the code from the Light-Weight RefineNet (in PyTorch) GitHub project.  I was able to run the notebooks without a problem using the pretrained models.  However, when I tried to run the model training script I was unable to complete the first epoch, because it used up all of my GPU memory.  I am using an NVIDIA Quadro P4000 with 8 GB of VRAM.  I thought the batch size was defaulting to 1, but it was actually set to 6 or higher.  It appears I am able to run the training after setting the batch size to 5 or less.
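
In case it helps anyone with a similar card, the snippet below shows where the batch size enters a typical PyTorch training loop (the DataLoader) and why lowering it reduces GPU memory per step; the dataset here is a stand-in, not the repo's actual training script, and the setting's name in that script may differ:

import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in dataset: 100 fake RGB chips with per-pixel labels (not real data).
images = torch.randn(100, 3, 224, 224)
labels = torch.randint(0, 2, (100, 224, 224))
dataset = TensorDataset(images, labels)

# Each training step processes one batch, so activations for batch_size chips
# must fit in GPU memory at once; 5 fit on my 8 GB card, 6 did not.
loader = DataLoader(dataset, batch_size=5, shuffle=True)

for batch_images, batch_labels in loader:
    print(batch_images.shape)  # torch.Size([5, 3, 224, 224])
    break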

SandeepKumar11
New Contributor

Hey Richard, did you see this post: https://medium.com/geoai/building-footprint-extraction-and-damage-classification-8a5458759332 ?  We did this using ArcGIS Pro and the Python API; it uses U-Net under the hood.

RichardFairhurst
MVP Honored Contributor

I had not seen that.  Is there any code I could see that is related to the image?  What was involved in training the model?

SandeepKumar11
New Contributor

I am writing a sample notebook for that, but we mainly used the arcgis.learn module in the Python API to train a U-Net model. The training data was exported using the "Export Training Data For Deep Learning" tool in ArcGIS Pro.
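
Until the sample notebook is published, the rough shape of the training code looks like the sketch below; the path, batch size, epoch count, and learning rate here are placeholders rather than the values we actually used:

from arcgis.learn import prepare_data, UnetClassifier

# Folder produced by the "Export Training Data For Deep Learning" tool
# (placeholder path - substitute your own export location).
data = prepare_data(r"C:\data\building_chips", batch_size=8)

# U-Net based pixel classifier from arcgis.learn.
model = UnetClassifier(data)

# Train for a few epochs (placeholder values, not the ones used for the
# published results).
model.fit(10, lr=0.001)

# Save the trained model so it can be used for inference later.
model.save("building_footprint_unet")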

RichardFairhurst
MVP Honored Contributor

The notebook this post was originally based on was set up to use Image Services and Image Server.  Will your code work with data stored locally rather than online or through Image Server?  Will it work with just an Image Analyst license in ArcGIS Pro?

RichardFairhurst
MVP Honored Contributor

Sandeep:

I would very much like to see the U-Net model your team used to extract building footprints.  Did your model include a RefineNet subroutine to enhance the quality of the classified image output, or did you just rely on the Regularize Building Footprint tool to clean up lower resolution raster output?
