Hello everyone!
I encountered a problem with the tool "Compute Accuracy For Object Detection" that does not make any sense to me. Maybe one of you has an idea about this:
I created a MaskRCNN model and used it to detect objects in a single image (for testing purposes). I want to see how well this model performs, which is why I use ground truth data from 50 images to compare my detections against (the ground truth data covers the single image I used). I can clearly see that some of my detections (pink) overlap with the ground truth data from that specific image (highlighted in cyan):
However, when I use "Compute Accuracy For Object Detection", it only reports false positives and false negatives:
If anyone has any idea, it would be of great help! Thanks 🙂
This issue has been resolved by making sure both the Ground Truth Features and the Detected Features contain the same class values. For example, if your Ground Truth Features have the class value "car" and the Detected Features have the class value "cars", the tool will treat them as two different classes.
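To illustrate the failure mode (this is a hypothetical sketch, not Esri's actual implementation): class-aware matching only counts a detection as a true positive when the class value matches the ground truth class exactly, in addition to sufficient overlap, so "car" vs. "cars" yields no true positives even for perfectly overlapping geometry:

```python
# Hypothetical sketch of class-aware matching (NOT the tool's real code).
# Each detection is a (class_value, iou_with_best_ground_truth) pair
# for simplicity; ground_truth_classes is a list of class value strings.

def match_detections(detections, ground_truth_classes, min_iou=0.5):
    """Count true/false positives under exact class-string matching."""
    true_positives = 0
    false_positives = 0
    for det_class, iou in detections:
        # Overlap alone is not enough: the class strings must be identical.
        if iou >= min_iou and det_class in ground_truth_classes:
            true_positives += 1
        else:
            false_positives += 1
    return true_positives, false_positives

# "cars" != "car": even a box with IoU 0.9 is counted as a false positive.
print(match_detections([("cars", 0.9)], ["car"]))  # (0, 1)
print(match_detections([("car", 0.9)], ["car"]))   # (1, 0)
```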
Hi, can you also share a screenshot of the "Compute Accuracy For Object Detection" tool parameters you used?
Hey,
So after I tried many different things, the original data from my post got lost. However, I recreated the problem with a different image, and the same issue occurs. Here is what I did:
1) Detect Objects using Deep Learning:
2) Define Projection for all layers to get everything into the same coordinate system. (Otherwise Compute Accuracy For Object Detection does not work; I found this solution somewhere on the internet.)
3) Compute Accuracy using Deep Learning:
After this, I again get no positive detections, even though I can clearly see some on the map. Here is some additional information:
Ground Truth data:
Detected Objects:
Hi,
Can you check whether both feature layers have the same spatial reference or projection?
Yes, they have the same spatial reference:
Detected Features:
Ground Truth:
Hi,
Can you also check whether both feature layers (Detected testing and GTD_testing) have the same spatial reference or projection?
Yes, it's the same (see above).
@rachbauer In your screen capture I see some overlap, which is why I feel it should work. Just as a test, please try the values 0.1, 0.2, and 0.3 and see if any of them returns anything meaningful; then you can look at your data and decide on a value. I often use 0.1 in cases where I feel some overlap is sufficient.
Hey,
I just tried values ranging from 0.001 to 0.4, but it did not help. I was thinking that maybe there is something wrong with the fields. Am I right to assume that I can just create new fields (with any name) for the detected and ground truth data and fill them with the same value, e.g. 2 (for both)?
@rachbauer In your screen capture, I noticed you did not set the 'Detected Class Value Field' and the 'Ground Truth Class Value Field' parameters; can you please make sure the right fields are selected? Here is a screen capture of one of my working tests:
If this does not work, would you like to share your test data (both detections and ground truths), just a few records? If so, please email pyadav AT esri DOT com.
Best!
Pavan