Hello,
I'm using U-net for land use and land cover classification in ArcGIS Pro with the following workflow:
1. Create a training dataset from an image scene and export annotated image chips using the "Export Training Data for Deep Learning" tool.
2. Train a CNN (U-net) model using the "Train Deep Learning Model" tool with the dataset from Step 1.
I want to know whether it is possible to create and use multiple training datasets from different image scenes to train the CNN model. Currently, I can produce only one training data folder (containing "images", "labels", esri_accumulated_stats.json, esri_model_definition.emd, etc.) from a single scene, and "Train Deep Learning Model" accepts only one such folder. My current trained model is not very accurate because training data from a single scene is insufficient. How can I diversify my training data by producing multiple training data folders and using them all at once in "Train Deep Learning Model"? Please advise. Thank you.
Hey Bonnie,
It has been a while since I last did deep learning, so someone may have a better idea, but have you considered retraining?
You could generate several entirely separate batches of training image chips, then use just one of them to build your initial deep learning model, which should give you a .dlpk file.
You could then go back into the training tool, feed in another set of training image chips along with the .dlpk from your first run as the pre-trained model, and repeat this for each additional image chip folder.
The parameter to use for this is documented here.
Hope that helps,
David
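If you prefer to script this, the retraining loop David describes can be sketched in Python. This is a sketch under assumptions: the `arcpy.ia.TrainDeepLearningModel` call is based on its documented signature (it accepts a `pretrained_model` argument), but the folder layout and names here are hypothetical, and the helper `retraining_plan` is my own illustration, not part of ArcGIS.

```python
# Sketch of the iterative retraining workflow: train on the first
# chip folder from scratch, then resume from the previous run's
# .dlpk for each additional chip folder.

def retraining_plan(chip_folders, out_dir):
    """Return a list of (chip_folder, pretrained_model) steps.

    The first step starts from scratch (pretrained_model is None);
    each later step resumes from the .dlpk assumed to be produced
    by the previous run. Paths here are illustrative only.
    """
    plan = []
    previous_dlpk = None
    for i, folder in enumerate(chip_folders):
        plan.append((folder, previous_dlpk))
        # Assumed output location of run i's model package.
        previous_dlpk = f"{out_dir}/run_{i}/run_{i}.dlpk"
    return plan

# Hypothetical driver loop (requires ArcGIS Pro with the Image
# Analyst extension; folder paths are placeholders):
#
# for i, (folder, pretrained) in enumerate(
#         retraining_plan(["C:/chips/scene1", "C:/chips/scene2"],
#                         "C:/models")):
#     arcpy.ia.TrainDeepLearningModel(
#         in_folder=folder,
#         out_folder=f"C:/models/run_{i}",
#         model_type="UNET",
#         pretrained_model=pretrained)  # None on the first pass
```

Each pass fine-tunes the weights from the previous pass, so the final model has effectively seen chips from every scene.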