Changing Chip_Size in the Train Deep Learning Model tool

11-16-2021 11:57 PM
by Zaki (New Contributor II)

Hi, I am trying to train a deep learning model so that it can extract building footprints. I made the training data with a tile size of 640 and a stride of 320, so I think it is natural for the Chip_Size to be 640 when using the Train Deep Learning Model tool. Does anyone here know what that parameter actually does?
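For reference, the export was done with the Export Training Data For Deep Learning tool, roughly like this (the paths, layers, and metadata format here are placeholders/assumptions, not my exact settings):

import arcpy

# Hypothetical inputs; tile size 640 and stride 320 match the export described above.
arcpy.ia.ExportTrainingDataForDeepLearning(
    in_raster="imagery.tif",               # source imagery (placeholder)
    out_folder=r"C:\data\building_chips",  # folder for exported chips (placeholder)
    in_class_data="building_footprints",   # labeled footprint layer (placeholder)
    image_chip_format="TIFF",
    tile_size_x=640, tile_size_y=640,      # the 640 px tile size
    stride_x=320, stride_y=320,            # the 320 px stride (50% overlap)
    metadata_format="RCNN_Masks",          # assumption: mask metadata for footprints
)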

2 Replies
by TimG (New Contributor III)

Hi

Have you seen the documentation on this for prepare_data? It says:

chip_size - Optional integer, default 224. Size of the image to train the model. Images are cropped to the specified chip_size. If image size is less than chip_size, the image size is used as chip_size. Not supported for SuperResolution, SiamMask, Pix2Pix and CycleGAN.
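So since your exported tiles are 640 px, you could match that when loading the data. A minimal sketch with arcgis.learn (the chip folder path, batch size, and model choice are placeholders/assumptions):

from arcgis.learn import prepare_data, MaskRCNN

# Placeholder path to the folder of exported 640 px image chips.
data = prepare_data(r"C:\data\building_chips",
                    chip_size=640,  # match the 640 px tile size from export
                    batch_size=4)   # assumption: small batches for large chips

model = MaskRCNN(data)  # assumption: MaskRCNN for building footprints
model.fit(10)           # train for 10 epochs

If chip_size were left at the default 224, each 640 px tile would be cropped down to 224 px for training, per the documentation above.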

by Kwakuopokuware401 (New Contributor)

Hi, I hope this message finds you well. I am reaching out to seek your guidance and expertise on a technical challenge I am currently facing with plot-level segmentation using MaskRCNN in ArcGIS Pro.

I have been working on a project that involves training a model for plot-level segmentation. I successfully prepared my training data with the Export Training Data For Deep Learning tool and exported it with the appropriate metadata. However, upon testing my trained model, the results are displayed in small chips, similar to the training labels, rather than the desired plot-level segmentation.

My primary objective is to obtain segmentation results that align with the plots I trained the model on. I would greatly appreciate any assistance or clarification you could provide to help me overcome this hurdle and achieve the desired output. A sketch of my testing step follows below.
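In case it helps, this is roughly how I am testing the model (the paths are placeholders, and I am assuming show_results is the step where the chip-sized output appears):

from arcgis.learn import prepare_data, MaskRCNN

# Placeholder path; the same exported chips used for training.
data = prepare_data(r"C:\data\plot_chips", batch_size=8)

# Placeholder path to the saved model definition.
model = MaskRCNN.from_model(r"C:\models\plot_maskrcnn\plot_maskrcnn.emd", data)

# Previews predictions on chip-sized validation images,
# which is where the small-chip results show up.
model.show_results(rows=4)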
