Annotation viewpoint wrong in many of our images

In which Wildbook did the issue occur?
IOT

What operating system were you using? (eg. MacOS 10.15.3)
Windows 10

What web browser were you using? (eg. Chrome 79)
Chrome 96

What is your role on the site? (admin, researcher, etc)
Admin

What happened?
I have just noticed that many of the viewpoints on annotations of Cuora bourreti are wrong or inconsistent.

What did you expect to happen?

What are some steps we could take to reproduce the issue?
Do we need to retry the upload?

If this is a bulk import report, send the spreadsheet to services@wildme.org with the email subject line matching your bug report

Hi @JCarney

Can you please provide links to a few example encounters? We’ll take a look.

Thanks,
Jason

Hi Jason,

These are just some random ones I picked and they all seem to have issues:

https://iot.wildbook.org/encounters/encounter.jsp?number=51c04ea1-47c9-4e16-b414-2c5d44f46901
https://iot.wildbook.org/encounters/encounter.jsp?number=b1b6b1bc-d9e2-4048-b031-98b8a4a2556d
https://iot.wildbook.org/encounters/encounter.jsp?number=849ce28f-10c0-4dfe-b79d-a7841089be62

Left and Right seem to work fine, but Up, Down, Front, and Back seem to get confused a lot.

Thanks,

Jack

Yes, the labeler has trouble disambiguating the non-left/right viewpoints.

The confusion matrix for the labeler (for all species) shows an overall 84% accuracy at predicting species and viewpoint together. When we restrict the evaluation to species only, the model is 91% accurate (most of the remaining error comes from misclassifying heads and bodies of the same species). Looking at the Asian sea turtle (ST-A) classification performance (green box), we can see how accurately the system predicts viewpoints for just that species. Note that the dark blue grid squares in the red boxes are inter-species misclassifications with Asian sea turtles, which are very rare.

Looking closer at the confusion matrix for just Asian sea turtles, we can see that left and right perform by far the best (dark red, accuracy in the high 90s). Note that the predicted value is on the x-axis and the ground-truth label is on the y-axis. The model has the most trouble with the Up viewpoint, consistently mislabeling it as Front (and vice versa). The Down and Back viewpoints share a similar trade-off in errors. That said, each viewpoint by itself is still fairly accurate in aggregate, scoring at least 60-70%.
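To make the reading of the matrix concrete, here is a minimal sketch of how per-viewpoint accuracy falls out of a confusion matrix laid out as described above (rows = ground truth, columns = predictions). The counts below are made up for illustration and are not the real evaluation data:

```python
# Per-viewpoint accuracy from a confusion matrix.
# Rows are ground-truth labels, columns are predictions.
# Counts are hypothetical, chosen only to mimic the pattern
# described above (Up/Front and Down/Back trade errors).

viewpoints = ["left", "right", "up", "down", "front", "back"]

# confusion[i][j] = number of annotations with true viewpoint i
# that the labeler predicted as viewpoint j
confusion = [
    [97,  1,  0,  0,  1,  1],   # left
    [ 1, 96,  1,  0,  1,  1],   # right
    [ 0,  1, 65,  4, 28,  2],   # up   (often predicted as front)
    [ 1,  0,  3, 68,  3, 25],   # down (often predicted as back)
    [ 1,  1, 24,  2, 70,  2],   # front
    [ 0,  1,  2, 22,  3, 72],   # back
]

# Per-class accuracy = diagonal count / row total
for i, vp in enumerate(viewpoints):
    accuracy = confusion[i][i] / sum(confusion[i])
    print(f"{vp:>5}: {accuracy:.0%}")
```

With numbers like these, left and right sit in the high 90s while the other four viewpoints land in the 60-70% range, matching the pattern in the real matrix.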

The most straightforward solution to this problem is to gather more training data for these additional viewpoints. A related challenge is that the annotated ground-truth viewpoints are often too rare to justify adding them as unique classes. The following mapping is used to merge secondary viewpoints into the most likely primary bucket (e.g., merging “frontleft” into “front”):

left       -> left
frontleft  -> front
front      -> front
frontright -> front
right      -> right
backright  -> back
back       -> back
backleft   -> back
up         -> up
upleft     -> left
upfront    -> front
upright    -> right
upback     -> back
down       -> down
downleft   -> left
downfront  -> front
downright  -> right
downback   -> back

Other than additional data, changing the mapping for the Asian sea turtle species may help to separate the Up-Front and Down-Back errors by not merging the “upfront” ground-truth label into “front” and the “downback” label into “back”. The challenge with allowing these additional categories is, again, having sufficient training data.
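The species-specific change suggested above could be sketched as a per-species override on top of the default merge table. This is a hypothetical illustration (the species key and function names are made up, and the default table is abridged), not the actual pipeline code:

```python
# Default merge mapping (abridged): secondary viewpoints collapse
# into primary buckets for training, as in the table above.
DEFAULT_MERGE = {
    "upfront":  "front",
    "downback": "back",
    # ... remaining viewpoints as listed in the table above ...
}

# Hypothetical per-species override: keep "upfront" and "downback"
# as their own classes for Asian sea turtles, so the model can
# separate the Up-Front and Down-Back confusions.
SPECIES_OVERRIDES = {
    "turtle_sea_asian": {          # made-up species key
        "upfront":  "upfront",
        "downback": "downback",
    },
}

def merge_viewpoint(viewpoint: str, species: str) -> str:
    """Map an annotated viewpoint to its training class for a species."""
    override = SPECIES_OVERRIDES.get(species, {})
    if viewpoint in override:
        return override[viewpoint]
    # Fall back to the default bucket; unknown labels pass through.
    return DEFAULT_MERGE.get(viewpoint, viewpoint)

print(merge_viewpoint("upfront", "turtle_sea_asian"))  # upfront
print(merge_viewpoint("upfront", "other_species"))     # front
```

The override approach keeps the existing buckets for every other species while letting one species opt in to the finer-grained classes, provided enough training examples exist for them.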


Hi @JCarney

Here are a few resources that can help you with the workaround to a bad annotation from detection, which is to manually delete and redraw the annotation:

Deleting an annotation

Manually drawing a new annotation

Thanks,
Jason