What happened?
I ran matching for a random sighting and found several instances, even within a single set of match results, where the viewpoint of the proposed match is the opposite of the viewpoint in the target image.
This was caused by a detection prediction of “down” (rather than the expected “left”) for the viewpoint, which made it match against what looks like a rogues’ gallery of other “down” and down-related (mis)predictions.
It may be that “down” is a viewpoint class with insufficient data. If we see a lot of this, then we may eventually need to retrain the model with additional “down” examples or rebalance the dataset to do better at this viewpoint versus others.
With the misprediction of “down”, the system then actually behaved as designed and filtered on related viewpoints (whether they were actual “down” viewpoints or not).
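The filtering behavior described above can be sketched roughly as follows. Everything here is an assumption for illustration (the function name, the data shape, and the particular viewpoint adjacency sets are hypothetical, not the actual system's implementation): given the predicted viewpoint, only candidates with the same or a related viewpoint survive into the match set.

```python
# Hypothetical sketch of viewpoint-based candidate filtering.
# The adjacency sets below are illustrative guesses, not the real system's.
VIEWPOINT_NEIGHBORS = {
    "left": {"left", "frontleft", "backleft"},
    "right": {"right", "frontright", "backright"},
    "down": {"down", "downleft", "downright"},
    # ...remaining viewpoint classes omitted for brevity
}

def filter_candidates(predicted_viewpoint, candidates):
    """Keep only candidates whose viewpoint is compatible with the prediction."""
    allowed = VIEWPOINT_NEIGHBORS.get(predicted_viewpoint, {predicted_viewpoint})
    return [c for c in candidates if c["viewpoint"] in allowed]

candidates = [
    {"id": 1, "viewpoint": "left"},
    {"id": 2, "viewpoint": "down"},
    {"id": 3, "viewpoint": "downright"},
]

# A mispredicted "down" viewpoint pulls in only down-related candidates,
# so the correct "left" candidate never even enters the match set.
print([c["id"] for c in filter_candidates("down", candidates)])  # → [2, 3]
```

This is why a single viewpoint misprediction cascades: the filter is working correctly, but it is working on a wrong input.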
The workaround is to delete the annotation and redraw it manually, setting the correct viewpoint. Let’s monitor for more of these.
Was this match kicked off by bulk import? I’m trying to understand the pathway, because it produced far too many annotations (matched against 8493 candidates). When I click Start Another Match and make sure not to filter by location, I get a much smaller set of candidates (filtered by viewpoint) and a clean slate of no potential matches.
The difference in the number of candidates between what you ran and what he ran (which will have been using the “My data” filter criteria) is strange. I’m not sure whether the timestamp on the match page run by the researcher reflects his time zone or mine; I assume his? If so, this would have been run around the time that @MarkF was applying some updates, so maybe that had an effect?
Perhaps. Please let me know if you can find a way to reproduce this. It is an unusual result, but I can’t find a pathway to recreate it and dig further.