The names of the arguments are populated by the tool from reading the Python module. Where?

02-07-2023 02:21 PM
KingboroughCouncil
New Contributor II

Hi all

When I try to figure out which parameters I can put in Classify Pixels Using Deep Learning it says "The names of the arguments are populated by the tool from reading the Python module."

Where is this? How do I read the Python module? I can't find any documentation on which parameters can be used for the classification.

Thanks

1 Solution

Accepted Solutions
PavanYadav
Esri Contributor

Hello @KingboroughCouncil 

I am going to share some information on this below. I will work with the team and get it documented as well. 

The information from the Model Definition parameter is used to populate this parameter. The available arguments vary depending on the model architecture. The following model arguments are supported for models trained within ArcGIS; pretrained ArcGIS models and custom deep learning models may support additional arguments.
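Since the question was how to "read the Python module": the model definition (.emd) file the tool reads is plain JSON, so you can open it in a text editor or inspect it with a short script. A minimal sketch (ArcGISLearnVersion and IsEdgeDetection are the keys discussed below; ModelType is an assumption about typical .emd contents):

```python
import json

def emd_summary(emd_path):
    # A .emd model definition is plain JSON, so the standard library
    # can read it. ArcGISLearnVersion and IsEdgeDetection are the keys
    # referenced in the argument descriptions below; ModelType is an
    # assumed key shown for illustration.
    with open(emd_path) as f:
        emd = json.load(f)
    return {
        "ArcGISLearnVersion": emd.get("ArcGISLearnVersion"),
        "IsEdgeDetection": "IsEdgeDetection" in emd,
        "ModelType": emd.get("ModelType"),
    }
```

Looking at these fields tells you which of the version- and edge-detection-gated arguments below apply to your model.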

 

  • batch_size—The number of image tiles processed in each step of the model inference. The value depends on the memory of your graphics card. The argument is available for all model architectures.
  • padding—The number of pixels at the border of image tiles from which predictions are blended for adjacent tiles. Increase the value to smooth the output and reduce artifacts. The maximum padding value is half the tile size. The argument is available for all model architectures.
  • tile_size—The width and height of the image tiles into which the imagery is split for prediction. This argument is only available for the CycleGAN architecture.
  • predict_background—If set to True, the background class is also classified. Available for the UNET, PSPNET, DeepLab, and MMSegmentation architectures.
  • test_time_augmentation—Performs test time augmentation while predicting. If set to True, predictions of flipped and rotated variants of the input image are merged into the final output. Available for the UNET, PSPNET, DeepLab, HED Edge Detector, BDCN Edge Detector, ConnectNet, MMSegmentation, and Multi-Task Road Extractor architectures.
  • merge_policy—The policy for merging augmented predictions. Available options are mean, max, and min. This is only applicable when test time augmentation is used. Available for the MultiTaskRoadExtractor and ConnectNet architectures. If IsEdgeDetection is present in the model's .emd file, the BDCNEdgeDetector, HEDEdgeDetector, and MMSegmentation architectures are also available.
  • threshold—Predictions with a confidence score higher than this threshold are included in the result. The allowed values range from 0 to 1.0. If ArcGISLearnVersion is 1.8.4 or higher in the model's .emd file, the MultiTaskRoadExtractor and ConnectNet architectures are available. If ArcGISLearnVersion is 1.8.4 or higher and IsEdgeDetection is present in the model's .emd file, the BDCNEdgeDetector, HEDEdgeDetector, and MMSegmentation architectures are also available.
  • return_probability_raster—If set to True, the output is a probability raster rather than a classified raster. Available options are True and False. If ArcGISLearnVersion is 1.8.4 or higher in the model's .emd file, the MultiTaskRoadExtractor and ConnectNet architectures are available. If ArcGISLearnVersion is 1.8.4 or higher and IsEdgeDetection is present in the model's .emd file, the BDCNEdgeDetector, HEDEdgeDetector, and MMSegmentation architectures are also available.
  • direction—The direction in which the image is translated from one domain to another. Available options are AtoB and BtoA. This argument is only available for the CycleGAN architecture. For more information about this argument, see How CycleGAN works.
  • thinning—Thins or skeletonizes the predicted edges. Available options are True and False. If IsEdgeDetection is present in the model's .emd file, the BDCNEdgeDetector, HEDEdgeDetector, and MMSegmentation architectures are available.
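When running the tool from Python, these arguments are supplied to the arguments parameter as name/value pairs. A small sketch of building that string (the semicolon-delimited "name value" form is my assumption here; verify it against the tool's syntax help for your Pro version):

```python
def format_model_arguments(args):
    # Build a semicolon-delimited "name value" string from a dict of
    # model arguments. The exact string form the tool expects is an
    # assumption; check the tool's syntax help for your version.
    return ";".join(f"{name} {value}" for name, value in args.items())

arguments = format_model_arguments(
    {"batch_size": 4, "padding": 64, "test_time_augmentation": True}
)
# arguments == "batch_size 4;padding 64;test_time_augmentation True"
# Then, inside ArcGIS Pro's Python environment (not runnable outside Pro):
# arcpy.ia.ClassifyPixelsUsingDeepLearning("in.tif", "model.emd", arguments)
```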


Thanks

Pavan 
Product Engineer 


3 Replies
DanPatterson
MVP Esteemed Contributor

Are you using this?

Classify Pixels Using Deep Learning (Image Analyst)—ArcGIS Pro | Documentation

and is everything ready to go?


... sort of retired...
KingboroughCouncil
New Contributor II

That's the one, though I want to see which arguments are available so I know what fine-tuning options there are.
