
Help using the land_cover_classification_using_unet Jupyter notebook sample

07-24-2019 01:45 PM
RichardFairhurst
MVP Honored Contributor

I downloaded a set of sample Jupyter notebooks from esri at https://developers.arcgis.com/python/sample-notebooks/.  One of the notebooks is called land_cover_classification_using_unet, which is supposed to showcase an end-to-end land cover classification workflow using ArcGIS API for Python. The workflow consists of three major steps: (1) extract training data, (2) train a deep learning image segmentation model, (3) deploy the model for inference and create maps. 

I am having trouble running the notebook, and so far have only gotten the first two steps to work, which just create a connection to ArcGIS Online.  The third and fourth lines of code are supposed to access a labeled image to train the model, but I get an error that the index value is out of range no matter what index value I use, which basically means the image was not found. 

label_layer = gis.content.search("Kent_county_full_label_land_cover")[1] # the index might change
label_layer
---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-29-a4ac34d0306c> in <module>
----> 1 label_layer = gis.content.search("Kent_county_full_label_land_cover")[1] # the index might change
      2 label_layer

IndexError: list index out of range
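A small guard around the search result would at least make this failure explicit instead of surfacing as a bare IndexError (a sketch only; `first_item` is a hypothetical helper of my own, not part of the ArcGIS API):

```python
def first_item(results, query):
    """Return the first search hit, or fail with a clear message.

    Guards against gis.content.search() returning an empty list,
    which would otherwise surface as an unhelpful IndexError.
    """
    if not results:
        raise LookupError(
            f"No items found for query {query!r}; "
            "check the item title and your access to the owning org.")
    return results[0]


# Intended usage (assumes an authenticated `gis` object):
# label_layer = first_item(
#     gis.content.search("Kent_county_full_label_land_cover"),
#     "Kent_county_full_label_land_cover")
```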

I downloaded the original classified image for Kent County in Delaware from the Chesapeake Conservancy land cover project.  It looks the same, although I am not completely sure it matches the extent or classifications of the training image the notebook was supposed to use.

How do I change the code to use the image I downloaded and saved on my computer rather than the image from ArcGIS Online?

I will probably be asking more questions as I progress through the code, since it seems likely I will hit other problems.  I am hoping to first complete the notebook example covering the Delaware region and afterward adapt it to process the NAIP imagery for my jurisdiction.

39 Replies
DanPatterson_Retired
MVP Emeritus

did you check the link on GitHub?

arcgis-python-api/land_cover_classification_using_unet.ipynb at master · Esri/arcgis-python-api · Gi... 

did you enter the folder containing the dataset and launch Jupyter from there? it seems that is the suggestion on the page you linked to, Richard

RichardFairhurst
MVP Honored Contributor

The Github link you referenced only has the Jupyter Notebook as far as I can see, not any image file.  I did change directories to the location of the notebook I downloaded before launching it.  There is no image in that directory.  There was a data subdirectory, but the only image is called percipitation.tif without any classification labeling.  I searched all downloaded subdirectories for a file name containing the word Kent and found nothing.

Anyway, I have downloaded the Kent classified image from the Chesapeake Conservancy land cover project and it looks like the image shown in the notebook.  However, the way the code is written, it is looking at ArcGIS Online, not the local directory where the notebook is located.  I would think there would be a way to create a layer from a file on my local hard drive; I just am not having success finding it in the ArcGIS Pro Python documentation.

RichardFairhurst
MVP Honored Contributor

I contacted esri tech support and they determined that the image is published, but it is being published under an organization operated by a division of esri rather than under the normal main esri organization, and my organization's security doesn't recognize that division and is blocking me from accessing the data.  So I will have to talk to my AGOL administrator to see if he either can grant me rights to see data published by that esri division or can move the data as a service under my organization.

It appears that the notebook is expecting an image service and the tech person did not think it can be served from my local machine without a substantial rewrite of the code, but she said she would talk to some of the Python specialists to see if they can suggest any options.

Anyway, I will post back if I make any progress.  I also will see if any of the other deep learning notebooks has a different way of setting up the training data that might work for me.

DanPatterson_Retired
MVP Emeritus

pretty bad if you can't work with locally stored data.  keep us posted Richard

RichardFairhurst
MVP Honored Contributor

After doing some searching it looks like ArcPy handles local data and arcgis.GIS handles webmap data and the two don't really mix.  So while I figured out that I can load a local layer into a Jupyter notebook by using arcpy.MakeRasterLayer_management, I can't display it in a webmap.  It looks like Spatially Enabled Dataframes can handle both local and online data together, but it seems to be more of an environment for manipulating feature class data in tabular format than it is for visualizing rasters on a map.  Anyway, it does look like the code would have to undergo a major rewrite to work with a mixture of local and online data, if it is even possible.

I also tried to use an anonymous login to ArcGIS Online, but that only caused more errors.  The security issue seems to be that the data is housed in an online location that is requiring https security and my organization's setup is not compliant with the ssl certification protocols required to access the data.  It is a little frustrating that they didn't just post this data like all the other esri services that I can access through Portal.

My best bet may be to publish the image data I downloaded from the Conservancy as an Image Service through my organization, although I am not completely sure even that will work.

RichardFairhurst
MVP Honored Contributor

Well, it looks like even if I could access the data, I personally can't really run any of the deep learning code.  Virtually all of the tools involved in model training require full access to Image Server.  While I believe my organization has Image Server, I personally don't have rights to access it.  I find some of the documentation in the notebook and elsewhere misleading when it says that code like this can be run in ArcGIS Pro, since it looks like none of it is really run directly by Pro.

Anyway, I feared that esri was blowing smoke at the conference saying that deep learning is coming to ArcGIS, when in reality it looks like it is only available to an extremely small percentage of their users, and only a fraction of those will ever have any interest in tackling the learning curve.  I estimate that 99% of the people who attended their deep learning presentations at the UC wasted their time, since they will never have the rights to use deep learning the way esri is currently deploying it.

My frustration is that while the potential of deep learning is obvious to me, I didn't get enough information at the UC and can't get enough information from esri web articles to even intelligently talk to my organization about all of the licenses and server setups we really would need to even attempt to use it.  I feel like there is no way for me to know if what we already have is enough or if we can justify the cost of upgrading our enterprise agreement and systems to make deep learning even possible.

DanPatterson_Retired
MVP Emeritus

Richard

There is a large community out there using some of the packages without having Arc-anything installed.

Seemingly, each has its own packages and approach.  It isn't the only way

For example

GitHub - keras-team/keras: Deep Learning for humans 

GitHub - tensorflow/tensorflow: An Open Source Machine Learning Framework for Everyone 

GitHub - microsoft/CNTK: Microsoft Cognitive Toolkit (CNTK), an open source deep-learning toolkit 

dependencies are python, usually numpy... the above are only a small sampling.

However, if you are locked down without admin rights on your machine, it might be worthwhile getting yourself a separate testing machine.

Stack Overflow - Where Developers Learn, Share, & Build Careers 

is a good source for a bit of trawling after the basic introductory materials.

The big question is what do you want to use "IT" for?  Like any tool, new and cool isn't necessarily a requirement to jump on the bandwagon.

Remember ... factor analysis, analysis of variance... even hot spot analysis.

I like this quote

What have been the major paradigm shifts in data science? - Quora 

RichardFairhurst
MVP Honored Contributor

Dan:

Thanks for the links and comments.  The answer to "what do I want to use 'IT' for?" initially is orthophoto image analysis.  The value of AI was proven to me by the building footprints Microsoft released.  The inclusion of those 800K shapes overnight in my source data has allowed me to quickly analyze and dramatically improve the positional accuracy of my address points, parcels, land use tracking cases, general plan, zoning, cities, etc. single-handed, which was never possible before.  But the Microsoft data is already getting out of date, and I am certain that AI is the only practical way to make the maintenance of this layer possible for my jurisdiction and the 29 cities we encompass.

I want to be part of the ability to create and maintain data in near real-time, with ever increasing resolution that is comprehensible and integrated across multiple objects in a variety of formats at scales both large and small for my jurisdiction.  Currently my jurisdiction is most lacking in its ability to extract useful information from orthophoto imagery beyond making it a map background.

RichardFairhurst
MVP Honored Contributor

Dan:

I found this video on image segmentation using UNet to detect cell nuclei in images, which shows the basic principles of a modeling approach that I think could be adapted to extract building footprints from aerials.  I was able to get the code to work after pip installing a few site packages (opencv-python, tensorflow and tensorflow-gpu) and some NVidia developer software for GPU acceleration (CUDA 10.0 and CUDNN 7.6.2.24).  

The division of the training/test data into tiles or chips was not done by the video example code, so I still have to deal with developing my own routines for preparing training data from much larger rasters, since the arcgis.learn.export_training_data() method esri has created requires access rights I don't have.  My starting data is similar to the esri sample, since I have a raster covering a much bigger extent than my area of interest, and a polygon layer of building footprints within my area of interest that optionally could be converted to a classified raster. 

export = learn.export_training_data(input_raster=naip_input_layer,
                                    output_location=samplefolder,
                                    input_class_data=label_layer.url, 
                                    chip_format="PNG", 
                                    tile_size={"x":400,"y":400}, 
                                    stride_size={"x":0,"y":0}, 
                                    metadata_format="Classified_Tiles",                                        
                                    context={"startIndex": 0, "exportAllTiles": False, "cellSize": 2},
                                    gis = gis)

The export_training_data method parameters suggest that this tool is very similar to the Split Raster tool.  I don't have experience using the Split Raster tool either, but it looks like the main difference in the parameters seems to be the metadata_format that outputs Classified Tiles.  I wish I could see a sample of the output of the export_training_data method that I could compare to the Split Raster output so that I could determine what, if any, additional processing is done beyond what the Split Raster tool does.
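As a rough sketch of what I think the chipping step amounts to, here is a plain NumPy version that splits one large raster into fixed-size tiles (non-overlapping when the stride equals the tile size, mirroring the tile_size/stride_size parameters above; `make_chips` is my own hypothetical helper, not the esri tool, and it ignores the PNG export and metadata steps):

```python
import numpy as np

def make_chips(raster, tile=400, stride=400):
    """Split a 2-D array into tile x tile chips, row-major order.

    Partial chips at the right/bottom edges are dropped, which is one
    common convention; a real tool might instead pad them.
    """
    rows, cols = raster.shape
    chips = []
    for r in range(0, rows - tile + 1, stride):
        for c in range(0, cols - tile + 1, stride):
            chips.append(raster[r:r + tile, c:c + tile])
    return chips
```

The same function would be run once on the imagery and once on the classified label raster, so that chip i of the imagery lines up with chip i of the labels.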

The video example seems to handle training and testing of the model fairly well; however, it does not deal with creating a final model output from a new raster, and it seems best suited to processing separate photos that do not have to be reassembled into a single image at the end, so I would also have to figure out how to accomplish that.  The esri example seems to have enclosed the final classification process in a black box method.  I assume that method tiles the new raster and combines the tiles at the end to create a final classified raster covering the original raster extent.
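If I end up tiling the input myself, reassembling equally sized classified chips back into one raster could look something like this (again a hypothetical sketch in plain NumPy, assuming row-major chip order and no overlap):

```python
import numpy as np

def stitch_chips(chips, grid_rows, grid_cols):
    """Reassemble equally sized chips (row-major order) into one array.

    Inverse of a non-overlapping tiling: each row of tiles is joined
    side by side, then the rows are stacked top to bottom.
    """
    rows = [np.hstack(chips[r * grid_cols:(r + 1) * grid_cols])
            for r in range(grid_rows)]
    return np.vstack(rows)
```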

Anyway, I would appreciate your thoughts on the assumptions I am making and any suggestions you may have that might help me create code or apply other techniques so that I could design a process of my own that might work for my needs.
