Hi all,
I've been trying to put the deep learning package corine_landcover.dlpk, available in the Esri Living Atlas, to work. I'm getting unexpected results from the deep learning process, with no warnings or errors reported by ArcGIS Pro. I've tested two approaches:
- Scenario 1: a mosaic dataset, Sentinel-2 type, with the All Bands processing template, using SRS WGS 1984 UTM Zone 29N.
- Scenario 2: a raster dataset, Sentinel-2 True Colour, exported as R: Band 4, G: Band 3, B: Band 2, SRS Web Mercator.
In Scenario 1, my first approach, I experimented with several batch_size values (4, 8 and 16), selecting Process as Mosaicked Image so that all rasters are processed as a single image. The batch_size parameter did not affect the results, but the processor type (GPU or CPU) did. Neither behaviour makes sense to me.
All runs used the visible map area as the processing extent for the inferencing; a rough arcpy equivalent is sketched below.
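For reference, this is roughly what I am running, expressed as a hedged arcpy sketch (the paths, extent coordinates and model arguments below are placeholders, not my actual inputs):

```python
import arcpy

arcpy.CheckOutExtension("ImageAnalyst")

# Placeholder paths -- substitute the actual mosaic dataset and the Living Atlas .dlpk
in_raster = r"C:\data\sentinel2.gdb\s2_mosaic"
model = r"C:\models\corine_landcover.dlpk"

# Restrict processing to the visible map area by setting the extent environment
# (coordinates below are placeholders for the current display extent)
arcpy.env.extent = arcpy.Extent(480000, 4540000, 495000, 4555000)

# Processor type and GPU id are environment settings, not tool parameters
arcpy.env.processorType = "GPU"   # or "CPU"
arcpy.env.gpuId = "0"

# Model arguments (padding, batch_size, ...) depend on the .dlpk; these values are examples
out = arcpy.ia.ClassifyPixelsUsingDeepLearning(
    in_raster,
    model,
    "padding 64;batch_size 4",
    "PROCESS_AS_MOSAICKED_IMAGE",
)
out.save(r"C:\data\results.gdb\corine_gpu_b4")
```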
Starting point:
Scenario 1 - Mosaicked Image / batch size 4 / GPU:
Scenario 1 - Mosaicked Image / batch size 4 / CPU:
For Scenario 2 the results were the same on GPU and CPU.
For the GPU case I don't have a clue what the issue is. I have ArcGIS Pro Advanced 2.7.1, the Image Analyst extension, and the Deep Learning Frameworks installed (with all the requirements, including Visual Studio). My GPU is an RTX 3070 with 8 GB, which I consider powerful enough to do the math. No errors or warnings are reported during the inference processing.
For the CPU the results are clearly not adequate. The test area is about 145.6716 km². Most of the classification is Inland Waters, which makes no sense, as you can see in the Sentinel-2 scene above. I know the model is a U-Net and should be run on a GPU, but I nevertheless wondered whether the SRS (WGS 1984 UTM Zone 29N) could be the issue.
I tested again with Scenario 2; the image below is the GPU processing result.
CPU mode was about the same.
I understand that the PyTorch version used by ArcGIS Pro supports only CUDA 10.2 and not GPUs that require CUDA 11.x. I've seen similar issues in other threads, with different graphics cards (Turing and Pascal), and this type of issue shows up in pixel classification using deep learning.
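In case it helps anyone reproduce this, here is a small diagnostic I would run in the Python window of the cloned deep learning environment (plain PyTorch calls only, nothing ArcGIS-specific) to see what the environment actually reports:

```python
import torch

print("PyTorch:", torch.__version__)             # the Pro 2.7 deep learning env ships 1.4.0
print("Built for CUDA:", torch.version.cuda)     # expected 10.2
print("CUDA available:", torch.cuda.is_available())
print("cuDNN:", torch.backends.cudnn.version())

if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
    # Compute capability: Turing cards report (7, 5), Ampere (RTX 30xx) reports (8, 6).
    # A PyTorch build compiled for CUDA 10.2 has no kernels for capability (8, 6),
    # which could explain silently wrong output rather than an explicit error.
    print("Compute capability:", torch.cuda.get_device_capability(0))
```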
The CPU results I don't understand at all. I have an i5-10600K CPU with 32 GB of RAM.
Any comments on this?
I got strange results but found the following helped:
- Change the processing template of the original Sentinel view to None (as mentioned)
- Right-click on the Sentinel view layer -> Data -> Export Raster
- Set the Clipping Geometry to "Current Display Extent" (as a test)
- Create the TIFF with these settings: Pixel Type set to 32 Bit Unsigned and Output Format set to TIFF (a rough arcpy equivalent is sketched below)
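A rough arcpy equivalent of the export step, in case someone wants to script it (layer name, extent and output path are placeholders; the pixel_type keyword of Copy Raster handles the 32 bit unsigned part):

```python
import arcpy

# Placeholders for the Sentinel view layer and the output TIFF
sentinel_layer = "Sentinel-2 View"
out_tif = r"C:\data\s2_export_32u.tif"

# Stand-in for "Current Display Extent": set the extent environment explicitly
arcpy.env.extent = arcpy.Extent(480000, 4540000, 495000, 4555000)

# Export to TIFF with the pixel type forced to 32 bit unsigned
arcpy.management.CopyRaster(
    sentinel_layer,
    out_tif,
    pixel_type="32_BIT_UNSIGNED",
    format="TIFF",
)
```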
The nvidia-smi.exe output:
Hi Tim,
I will try your approach, but I can already give the feedback that a colleague of mine with an RTX 3000 (laptop, Turing architecture) has successfully run inferencing on the data using my workflow, without converting it.
As I see from your screenshot you have an RTX 2080, so you also have a Turing GPU. Since PyTorch 1.4.0 supports only CUDA 10.2, and CUDA 10.2 does not support the new Ampere GPUs, the cuDNN/CUDA toolkit actually in use will be 7.x/10.x even though CUDA 11 is installed on the system.
I will try your workflow and give feedback.
Hmmm, this is mine (image below): the left side is 32 bit unsigned, the right side 8 bit unsigned. Other thoughts:
- Is your 'Map' set to WGS 1984 Web Mercator?
- Is the cell size set to 10 on Classify Pixels Using Deep Learning? (Both can be set via the environment settings sketched below.)
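In arcpy terms, roughly (a sketch of the two environment settings I mean, not specific to any one tool):

```python
import arcpy

# WGS 1984 Web Mercator (Auxiliary Sphere), WKID 3857
arcpy.env.outputCoordinateSystem = arcpy.SpatialReference(3857)

# Cell size of 10 (map units) for Classify Pixels Using Deep Learning
arcpy.env.cellSize = 10
```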
Hi @TimG
The CUDA issue still happens with 16 bit and 32 bit unsigned; 8 bit is even worse. I really don't have a clue why this issue happens.
Any comments?
Thanks