First-time poster, and I wasn't exactly sure where to take this question. I went through the image classification wizard for both support vector machine (SVM) and random forest, pixel-based in both cases, using the same training dataset. I'm classifying healthy versus diseased pixels on trees. My input layer is an 11-band raster stack, with three bands assigned to RGB purely for display. The wizard never asks which bands I'd like to use for random forest or SVM. I ran the classification twice, once with RGB displayed and once with NDVI, GNDVI, and far-red, and got identical results despite changing which bands were mapped to the layer's RGB channels. Does anyone know which bands the wizard is actually using for the random forest and SVM classifications?
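For context on what I think is going on: my understanding is that a pixel-based classifier uses every band in the raster stack as a feature, and the RGB band assignment only affects how the layer is drawn on screen, not what the model sees. Here's a generic numpy/scikit-learn sketch of that idea (this is my own illustration with made-up data, not how ArcGIS Pro is implemented internally):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical 11-band stack, 50x50 pixels: shape (bands, rows, cols)
stack = rng.random((11, 50, 50))

# Pixel-based classification flattens the stack so each pixel becomes
# one feature vector containing ALL bands -- which bands you map to
# RGB for display never enters this step.
pixels = stack.reshape(11, -1).T  # shape (n_pixels, n_bands) = (2500, 11)

# Fake training labels for 200 sample pixels (0 = healthy, 1 = diseased);
# here I pretend band 8 happens to separate the classes.
idx = rng.choice(pixels.shape[0], 200, replace=False)
labels = (pixels[idx, 7] > 0.5).astype(int)

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(pixels[idx], labels)  # all 11 bands are used as features

print(pixels.shape)        # (2500, 11) -- every band is a feature
print(clf.n_features_in_)  # 11
```

If that assumption is right, it would explain why swapping the display bands changed nothing: the classifier was seeing all 11 bands both times.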
Also, under "Image Classification" I used the Accuracy Assessment tool to validate my SVM and RF layers. My training dataset was just under 10,000 pixels. I used 3,000 pixels for the accuracy assessment and got 100% accuracy for both the random forest and SVM layers. That concerned me, so I bumped it up to 10,000 pixels and still got 100% accuracy, meaning every one of my training pixels was correctly classified in both layers. Does ArcGIS Pro actually re-classify those training pixels, or does it just report them as the user labeled them in the training dataset? I have a hard time believing I created a training dataset strong enough to give 100% accuracy in both the random forest and SVM classifications.
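My suspicion is that if the assessment pixels are drawn from the same pixels the model was trained on, 100% accuracy is expected rather than impressive, since a random forest in particular can memorize its training data. Here's a quick scikit-learn sketch of that effect on synthetic, deliberately noisy data (again, my own illustration, not ArcGIS Pro's internals):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic "pixels": 11 bands whose first band only weakly predicts the class
X = rng.random((5000, 11))
y = (X[:, 0] + rng.normal(0, 0.5, 5000) > 0.5).astype(int)

# Leaky assessment: score on the same pixels used for training
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)
train_acc = clf.score(X, y)

# Honest assessment: hold pixels out BEFORE training, score only on those
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf.fit(X_tr, y_tr)
test_acc = clf.score(X_te, y_te)

print(train_acc)  # near 1.0: the forest has memorized its training pixels
print(test_acc)   # noticeably lower: closer to real-world accuracy
```

So my real question is whether the Accuracy Assessment points need to come from an independent, held-out set (e.g., separately digitized validation polygons) rather than from the training samples themselves.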
Get back to me with your thoughts and any other questions you have for me.