Any chance you can look at this for us please?
Hi @PaulK and @ACWadmin1
I’m not sure that all Right and Left viewpoints are being matched against all variations of Front and Up. If they were, that would provide a much higher number of possible match candidates in the database than we’re getting now.
I did a check in the database, and here is the distribution of lion+head viewpoints:
VIEWPOINT  | count
-----------+-------
back       |    18
down       |     1
front      | 11543
frontleft  |    69
frontright |    59
left       |   160
right      |   141
up         |   632
upfront    |    48
upleft     |     9
upright    |     9
One thing to be aware of is that when considering which annotations to match against by viewpoint, Wildbook creates a matrix of surrounding viewpoints and includes them in the query. Example:
A “left” viewpoint will be compared to other “left”s as well as the surrounding viewpoints leftfront, frontleft, upleft, downleft, leftback, and backleft. But not “front”: for most species, a solid left flank shot isn’t going to translate into a good front comparison with the existing algorithms.
Given those distributions and the matrix behavior, I believe the number of left photos to match against is low, and the left shots are indeed not being compared to fronts. I would guess that both HotSpotter and PIE would struggle to compare a left to a front, or an up to a front.
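To make the matrix behavior concrete, here is a minimal sketch of the query expansion for a “left” viewpoint. The class and method names are hypothetical and this is not the actual Wildbook Java code, just an illustration of the neighbor lookup described above:

```java
import java.util.*;

public class ViewpointNeighbors {
    // Hypothetical sketch: map a primary viewpoint to the set of
    // viewpoints included in its match query, per the matrix behavior
    // described above. Not the real Wildbook implementation.
    static List<String> matchableViewpoints(String viewpoint) {
        Map<String, List<String>> matrix = new HashMap<>();
        matrix.put("left", Arrays.asList("left", "leftfront", "frontleft",
                "upleft", "downleft", "leftback", "backleft"));
        // ...entries for the other primary viewpoints follow the same pattern
        return matrix.getOrDefault(viewpoint, Collections.singletonList(viewpoint));
    }

    public static void main(String[] args) {
        List<String> candidates = matchableViewpoints("left");
        System.out.println(candidates);
        // "front" is deliberately absent from the candidate set
        System.out.println(candidates.contains("front")); // false
    }
}
```

The key point of the sketch is that “front” never appears in the neighbor list for “left”, which is why front annotations are excluded from left-viewpoint match candidates.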
I hope that helps explain the numbers you’re seeing.
Thanks,
Jason
Hi @ACWadmin1 and @PaulK
Issue #2:
I’ve seen quite a high percentage of mis-labelled left and right viewpoints on lion+head annotations - lefts that should be rights and rights that should be lefts.
Given the distributions of lion+head viewpoints above, and assuming the training data for the detector model roughly matches those distributions, I am not surprised that left vs. right are common confusers. Relative to the other lion+head viewpoints, those are relatively small volumes of annotations (160 lefts, 141 rights) to train on. Wildbook doesn’t change the viewpoint prediction from the detector in any way, so I believe these are legitimate detector mispredictions on viewpoint.
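For scale, a quick bit of arithmetic on the distribution table above shows just how small the side-view slice is (the counts are from the table; the class name is just for illustration):

```java
public class ViewpointShare {
    public static void main(String[] args) {
        // Counts copied from the lion+head viewpoint distribution above
        int left = 160, right = 141;
        int total = 18 + 1 + 11543 + 69 + 59 + left + right + 632 + 48 + 9 + 9;
        double sideShare = 100.0 * (left + right) / total;
        System.out.printf("total=%d  left+right=%d (%.1f%%)%n",
                total, left + right, sideShare);
        // prints: total=12689  left+right=301 (2.4%)
    }
}
```

Roughly 2.4% of the annotations are straight lefts or rights, so the detector has very few examples of those classes to learn from.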
The workaround here short-term is to remove the annotation and manually recreate it with the correct viewpoint. Long-term, more perpendicular lefts and rights should be used in a future detector retraining.
As far as ID goes, I suspect HotSpotter will have some ability to match left and right lion heads, but I have no idea whether left and right viewpoints were included in PIE training. Matchability of those may be much lower than of purely front lion face shots.
Thanks,
Jason
Hi @jason, sorry for the delayed reply on this. I understand the logic you explained above; however, I’m wondering whether enabling left and right viewpoints (and variations of them) to be compared to Front viewpoints is possible, hopefully via a configuration change? With lion heads, I feel that most Front viewpoints include a sufficient proportion of the left or right side to be matchable to those other viewpoints.
Alternatively, I’d want to change all lefts and rights in this dataset to front; there are so few straight lefts and rights that they’re not worth worrying about.
Thoughts?
thanks
Maureen
Hi @ACWadmin1
The comparison logic lives at the Java code level and is shared across species, so changing it for lions would not work for other species. I would like to avoid changing it, especially since we don’t know whether “left” lion heads would even match to a “front” viewpoint photo.
We could change “left” or “right” lion+head viewpoints to “leftfront” and “rightfront” viewpoints, to allow them to be compared to “front”, but even then I suspect very low matchability given the angular difference. It’s an option, but once we turn it on, we change the expectation with users to “should match” when I suspect they will not. So I would not recommend it.
I will check with Drew on Tuesday whether we even considered simple “left” and “right” viewpoints in training. I suspect they were excluded.
Thanks,
Jason