
ArcGIS deep learning

05-22-2024 05:58 AM
MariiaDemchenko
New Contributor

Hello everyone,
I have trained a deep learning model in ArcGIS Pro 3.2.2 to classify poles and trunks.
I set the parameters (screenshots attached below) to match the block size and block point limit, but for some reason I can't get an accurate classification. The classification works well on the power lines, but I have problems with the distribution poles and transmission towers.
Does anyone know what the problem might be and can suggest a solution?
Thank you in advance.

(Screenshots of the tool parameters attached: signal-2024-05-22-155454_002.png, signal-2024-05-22-155720_002.png, signal-2024-05-22-155739_002.png, signal-2024-05-22-152149_002.png, signal-2024-05-22-152313_002.png, signal-2024-05-22-152334_002.png)

4 Replies
David_McRitchie
Esri Contributor

Hey Mariia,

Deep learning can have a lot going on in the background, especially if you have trained your own model. It would be helpful if you could share more details on the training: perhaps a video walking through each step, the parameters used, and the training data.

From looking at this, it could be a case of needing more training samples. I would also highlight that the Living Atlas has a similar deep learning model that might help with your workflow, or that you could retrain with your own data.

Hope that helps!

David

Esri UK - Technical Support Analyst
MariiaDemchenko
New Contributor

Hi David,

We have already tested that model, but it does not work well for transmission towers.
We listed all the parameters in our original post.
A video would not show anything new, because you would see the same thing.
Could you perhaps help with a Zoom session?

Thank you!

Mariia

David_McRitchie
Esri Contributor

Hey Mariia,

I won't be able to hop on a Zoom session, but with regard to the pre-trained Living Atlas model, it might be worth retraining it using your own training data.

Can you confirm what the training data is like? Usually, when classifications aren't coming out well, it is a sign that more training data is required. Deep learning can, frustratingly, need incredibly large training datasets, which is why reusing existing models helps significantly cut down the time needed to build a good model.
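For reference, retraining a pre-trained point cloud model with your own data can also be scripted through the arcgis.learn Python API. This is only a rough, untested sketch: the paths, class names, and hyperparameters are placeholders, and the exact options should be checked against the arcgis.learn documentation for your version.

```python
# Rough sketch (untested): fine-tune a pre-trained point cloud classification
# model on your own exported training data using arcgis.learn.
from arcgis.learn import prepare_data, PointCNN

# Path to the .pctd folder exported with Prepare Point Cloud Training Data
# (placeholder path - replace with your own)
training_data = r"C:\data\powerline_training.pctd"

# Load the exported blocks; batch_size may need lowering on smaller GPUs
data = prepare_data(training_data, dataset_type="PointCloud", batch_size=2)

# Start from the downloaded pre-trained model instead of training from scratch
# (placeholder path to the pre-trained model's .dlpk/.emd file)
model = PointCNN.from_model(r"C:\models\PowerlineClassification.dlpk", data)

# Fine-tune for more than the default number of epochs
model.fit(epochs=25, lr=model.lr_find())

# Save the retrained model so it can be used by the
# Classify Point Cloud Using Trained Model geoprocessing tool
model.save("powerline_retrained")
```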

 

Hope that helps,

David

Esri UK - Technical Support Analyst
Edward_Wong
Emerging Contributor

Hi Mariia,
Changing the Block Size often means you also need to update the Block Point Limit (there is more information on this in the documentation); how you export your .pctd will directly affect how inferencing behaves.
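As an illustration, both values are set together when the training data is exported. A minimal sketch using the Prepare Point Cloud Training Data geoprocessing tool from arcpy follows; the paths and numbers are placeholders, and the parameter names should be double-checked against the tool's documentation for your Pro version.

```python
# Sketch (untested): export point cloud training data with a block size and
# block point limit that are matched to each other and to the target features.
import arcpy

arcpy.ddd.PreparePointCloudTrainingData(
    in_point_cloud=r"C:\data\survey.lasd",           # placeholder LAS dataset
    block_size="25 Meters",                          # large enough to span a whole tower
    out_training_data=r"C:\data\powerline_training.pctd",
    training_boundary=r"C:\data\training_area.shp",  # placeholder boundary polygons
    validation_boundary=r"C:\data\validation_area.shp",
    block_point_limit=16384                          # raise this when the block size grows
)
```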

Moving on to model training: different architectures have different advantages and trade-offs, e.g. some require more training data while others are resource-intensive at inferencing. You can find out more from the published papers, or just give them a go. Also note that your maximum epochs for training are set to 10, unless that is by design. A small sketch of swapping the architecture and training for longer is included below.
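Here is a short, untested sketch of what trying a different architecture and a higher epoch count might look like in arcgis.learn; class availability (e.g. RandLANet) depends on your arcgis.learn version, and the paths and values are placeholders.

```python
# Sketch (untested): the same prepared data can be reused to try different
# point cloud architectures; only the class you instantiate changes.
from arcgis.learn import prepare_data, RandLANet

data = prepare_data(r"C:\data\powerline_training.pctd",
                    dataset_type="PointCloud", batch_size=2)

# RandLANet is generally lighter on memory for large blocks than PointCNN
# (availability depends on your arcgis.learn version).
model = RandLANet(data)

# 10 epochs is often too few for a model trained from scratch; train longer
# and let early stopping halt training when the validation loss stops improving.
model.fit(epochs=50, early_stopping=True)
```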

Minimum Points Per Block could be another setting worth modifying. As far as I can tell, most of the settings used here are the defaults, which won't work well when the model is trained with a different block size. Have a look at the documentation, specifically the code sample; hopefully it gives an overview of how the points within a block are crucial for both the training and inferencing stages. A quick sanity check for these values is sketched below.
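As a back-of-the-envelope check of whether the Block Point Limit and Minimum Points Per Block make sense for a given Block Size, you can estimate how many points a block will actually contain from the average point density of your LAS data. The numbers below are made-up placeholders.

```python
# Sketch: estimate how many points fall into one square block so the
# Block Point Limit / Minimum Points Per Block can be set realistically.
def expected_points_per_block(avg_density_pts_per_m2: float, block_size_m: float) -> float:
    """Rough estimate: average density multiplied by the block's footprint area."""
    return avg_density_pts_per_m2 * block_size_m ** 2

# Example with placeholder values: 30 pts/m2 density and a 25 m block
estimate = expected_points_per_block(30.0, 25.0)
print(f"~{estimate:,.0f} points per block")  # ~18,750 -> a limit of 8,192 would thin heavily
```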

exw@eagle.co.nz

Edward 
Eagle Technology (Esri NZ Distributor) - GeoAI Lead