Hi, I am trying to train a deep learning model so that it can extract building footprints. I made the training data with a tile size of 640 and a stride of 320. I therefore think it is natural that the chip_size is 640 when using the Train Deep Learning Model tool. Does anyone here know what that parameter does?
Hi
Have you seen the documentation on this for prepare_data? It says:
chip_size - Optional integer, default 224. Size of the image to train the model. Images are cropped to the specified chip_size. If image size is less than chip_size, the image size is used as chip_size. Not supported for SuperResolution, SiamMask, Pix2Pix and CycleGAN.
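In other words, chip_size is the side length (in pixels) of the crops the model actually trains on, and since your tiles were exported at 640, setting chip_size=640 means each tile is used whole rather than being cropped down to the 224 default. The fallback rule in the docs can be sketched as a tiny helper (the function name here is just for illustration, not part of the arcgis.learn API):

```python
def effective_chip_size(image_size, chip_size=224):
    """Mirror the documented rule: images are cropped to chip_size,
    but if the image is smaller than chip_size, the image's own
    size is used instead."""
    return min(image_size, chip_size)

# Tiles exported at 640 px are used whole when chip_size=640.
print(effective_chip_size(640, chip_size=640))  # 640
# Leaving the 224 default on 640-px tiles crops each tile to 224.
print(effective_chip_size(640))  # 224
# An image smaller than chip_size falls back to its own size.
print(effective_chip_size(200, chip_size=640))  # 200
```

So yes, matching chip_size to your export tile size of 640 is a reasonable choice if your GPU memory allows it; a smaller chip_size mainly trades context per sample for a lower memory footprint.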