
Classify Pixels Deep Learning Package - Sentinel 2

03-07-2021 12:11 PM
TiagoCarvalho1979

Hi all,

I've been trying to get the corine_landcover.dlpk deep learning package, available in the Esri Living Atlas, to work. I'm getting unexpected results from the deep learning process, with no warnings or errors reported by ArcGIS Pro. I've tested two approaches:

- Scenario 1: a mosaic dataset of Sentinel-2 raster type, with the All Bands processing template, SRS WGS 1984 UTM Zone 29N.

- Scenario 2: a raster dataset exported from the Sentinel-2 true colour view (R: Band 4; G: Band 3; B: Band 2), SRS Web Mercator.

In Scenario 1, my first approach, I experimented with several batch_size values (4, 8 and 16) and tried both processing modes (Process as Mosaicked Image and processing the rasters individually). The batch_size parameter did not affect the results; the processor type (GPU or CPU) did. Neither behaviour makes sense to me.

All processing runs used the visible map area (current display extent) as the processing extent for the inferencing.
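For reference, this is roughly how the same run looks when scripted from the Python window instead of the tool dialog I actually used (a sketch only - dataset names and paths are placeholders, and the arguments string may need adjusting for your Pro version):

# Sketch: same inference as Scenario 1, with processor type, batch size
# and processing mode made explicit. Names, paths and the padding value
# are placeholders.
import arcpy
from arcpy.ia import ClassifyPixelsUsingDeepLearning

arcpy.CheckOutExtension("ImageAnalyst")

arcpy.env.processorType = "GPU"   # switch to "CPU" to reproduce the CPU run
arcpy.env.gpuId = "0"             # first (and only) GPU
# In the tool dialog the processing extent was the visible map area; in a
# script you would set arcpy.env.extent to an equivalent arcpy.Extent.

result = ClassifyPixelsUsingDeepLearning(
    in_raster="sentinel2_mosaic",                  # Scenario 1 mosaic dataset (placeholder)
    in_model_definition=r"C:\dlpk\corine_landcover.dlpk",
    arguments="padding 64;batch_size 4",           # 4, 8 and 16 gave identical results
    processing_mode="PROCESS_AS_MOSAICKED_IMAGE")
result.save(r"C:\temp\corine_classified.tif")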

Starting point:

DL_interest_area.PNG

Scenario 1 - Mosaicked Image / Batch size 4 / GPU:

DL_Errors.PNG

Scenario 1 - Mosaicked Image / Batch size 4 / CPU:

DL_Errors_CPU.PNG

For Scenario 2 I got the same results with both GPU and CPU.

For the GPU case I don't have a clue what the issue is. I have ArcGIS Pro Advanced 2.7.1, the Image Analyst extension and the Deep Learning Framework (with all the requirements, including Visual Studio). My GPU is an RTX 3070 with 8 GB, which I consider powerful enough to do the math. No errors or warnings are reported during the inference processing.

For the CPU the results are clearly not adequate. The test area is about 145.6716 km². Most of the classification comes out as Inland Waters, which makes no sense, as you can see in the Sentinel-2 scene above. I know the model is a U-Net and should be run on a GPU, but I nevertheless wondered whether the SRS (WGS 1984 UTM Zone 29N) could be the issue.

I tested Scenario 2 again; the image below is the GPU processing result:

DL_Errors_scenario2_gpu.PNG

 CPU mode was about the same.

I understand that the PyTorch version used by ArcGIS Pro supports only CUDA 10.2 and therefore not the newer GPUs that require CUDA 11.x. I've seen similar issues reported in other threads with different graphics cards (Turing and Pascal), and this type of problem keeps turning up in pixel classification using deep learning.
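A quick way to check whether the PyTorch build installed with the Deep Learning Framework can actually drive the card is to query it from the cloned Pro Python environment (a diagnostic sketch; the printed values will obviously differ per machine):

# Diagnostic: an RTX 3070 reports compute capability (8, 6) (Ampere), which
# the CUDA 10.2 builds of PyTorch 1.4 have no native kernels for - in the
# reports I have seen this leads to wrong results rather than an error.
import torch

print("torch", torch.__version__)
print("built against CUDA", torch.version.cuda)   # expected to be 10.x here, per the above
print("cuDNN", torch.backends.cudnn.version())
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
    print("compute capability:", torch.cuda.get_device_capability(0))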

I don't understand the CPU results either. I have an i5-10600K CPU with 32 GB of RAM.

Any comments on this?

14 Replies
TimG

I got strange results too, but found the following helped (a scripted sketch of the export follows the list):

- Change the processing template of the original Sentinel view to None (as mentioned)

- Right-click on the Sentinel view layer -> Data -> Export Raster

- Set the Clipping Geometry to "Current Display Extent" (as a test)

- Create the TIFF with these settings: Pixel Type 32 bit unsigned, Output Format TIFF
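If you'd rather script that export than use the pane, something like this should be roughly equivalent (paths and the layer name are placeholders, and grabbing the display extent this way assumes an active map view):

# Sketch of the export: clip to the current display extent and write a
# 32 bit unsigned TIFF. Paths and layer name are placeholders.
import arcpy

aprx = arcpy.mp.ArcGISProject("CURRENT")
arcpy.env.extent = aprx.activeView.camera.getExtent()   # "Current Display Extent"

arcpy.management.CopyRaster(
    in_raster="Sentinel view",            # layer with processing template set to None
    out_rasterdataset=r"C:\temp\s2_export.tif",
    pixel_type="32_BIT_UNSIGNED")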

The nvidia-smi.exe output

TimGrenside_0-1615260739552.png

TiagoCarvalho1979

Hi Tim,

I will try your approach, but I can already report that a colleague of mine with an RTX 3000 (laptop, Turing architecture) ran inferencing on the data successfully with my workflow, without converting it.

As I can see from your screenshot you have an RTX 2080, so you also have a Turing GPU. PyTorch 1.4.0 supports only CUDA 10.2, and CUDA 10.2 does not support the new Ampere GPUs, so even though I have CUDA 11 installed, the cuDNN/cudatoolkit actually in use is 7.x/10.x.

TiagoCarvalho1979_0-1615377368075.png

TiagoCarvalho1979_1-1615377415054.pngTiagoCarvalho1979_2-1615377488303.png

I will try your workflow and report back.

TiagoCarvalho1979

Hi @TimG 

I created the new raster (exported as 32 bit unsigned). Same result. Thanks for the help.

 

TimG

Hmmm, this is mine (image below). The left side is 32 bit unsigned, the right side 8 bit unsigned. Other thoughts:

- Is your 'Map' set to WGS 1984 Web Mercator?

- Is the cell size set to 10 in the Classify Pixels Using Deep Learning tool?

TimGrenside_0-1615416774547.png
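If it helps, both of those can be pinned down explicitly in the Python window before running the tool, so nothing is inherited from the map (a sketch; swap the WKID if you keep WGS 1984 UTM Zone 29N instead):

# Fix the cell size and output SRS explicitly before running the tool.
import arcpy

arcpy.env.cellSize = 10                                          # Sentinel-2 10 m bands
arcpy.env.outputCoordinateSystem = arcpy.SpatialReference(3857)  # WGS 1984 Web Mercator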

 

TiagoCarvalho1979

Hi @TimG 

TiagoCarvalho1979_0-1615492015712.png

The CUDA issue still happens with 16 bit and 32 bit unsigned; 8 bit is even worse. I really don't have a clue why this is happening.

Any comments?

Thanks
