Hello everyone,
I am currently working on the ArcGIS deep learning model titled "Solar Photovoltaic Park Classification - Global", and I am encountering an issue that I can't seem to resolve.
My images are in RGB format with 3 bands, but the model appears to have been trained on multispectral images with 12 bands. This results in size mismatch errors when I try to load the model weights.
Here is the exact error message I am receiving:
size mismatch for layers.0.0.weight: copying a param with shape torch.Size([64, 12, 7, 7]) from checkpoint, the shape in current model is torch.Size([64, 3, 7, 7]).
Could anyone assist me in adjusting the model to work correctly with RGB images with 3 bands? Any suggestions or experiences with this type of issue would be greatly appreciated.
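For context, one workaround I am considering is slicing the first convolution's input channels in the checkpoint down to three bands before loading. This is only a sketch: the layer name comes from the error message above, but the band positions I pick ([3, 2, 1], assuming Sentinel-2 B4/B3/B2 = red/green/blue in the 12-band stack) are an assumption, and I am not sure this preserves the model's accuracy.

```python
import torch

# Stand-in for the pretrained first-conv weight found in the checkpoint;
# in practice this would come from torch.load(checkpoint_path).
state_dict = {"layers.0.0.weight": torch.randn(64, 12, 7, 7)}

# ASSUMPTION: positions of the red, green, and blue bands in the
# 12-band Sentinel-2 stack (B4, B3, B2). Verify against the model's docs.
RGB_BANDS = [3, 2, 1]

# Keep only the three assumed RGB input channels of the first conv,
# turning its weight from (64, 12, 7, 7) into (64, 3, 7, 7).
w = state_dict["layers.0.0.weight"]
state_dict["layers.0.0.weight"] = w[:, RGB_BANDS, :, :]

print(tuple(state_dict["layers.0.0.weight"].shape))
```

The reduced state dict could then be loaded into a 3-band model with `model.load_state_dict(state_dict, strict=False)` — but I would welcome confirmation that this is a sane approach for this particular model.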
Thank you very much for your help!
Best regards,
Hi,
Can you confirm whether you used the Sentinel-2 L2A imagery (BOA Reflectance) as shown in the image below? If you used your own composite, try again with BOA Reflectance — the model should work.
Hello @S6
Thank you for your reply. My aim is to use the Esri model on orthophoto images in 3-band RGB, not on Sentinel-2 imagery.
I don't know whether this is possible.