In which Wildbook did the issue occur?
What operating system were you using? (eg. MacOS 10.15.3)
What web browser were you using? (eg. Chrome 79)
What is your role on the site? (admin, researcher, etc)
One of the volunteers reported that the system was matching to the same photo a lot of the time.
What did you expect to happen?
It seems the algorithm is not being very specific, or is even stuck on one or two of the photos as a first choice.
What are some steps we could take to reproduce the issue?
I’m not sure; the volunteer user atrofimtchouk has been encountering this.
If this is a bulk import report, send the spreadsheet to email@example.com with the email subject line matching your bug report
Across all of your examples, these are 0-score results, meaning there is no evidence of matching. When we ask PIE to take an annotation and match it against another set of annotations (e.g., all of them for snow leopards, or just a subset for a location), it returns a ranked list of potential matches with scores sorted in descending order. However, it has no way of ordering potential matches with a 0 score (no corresponding features), so we essentially get a random, unsorted list of things that didn’t match at 0 score. How and whether we should display 0-score results from PIE is worth discussing internally (e.g., HotSpotter does filter out 0-score matches, I believe), but for this use case 0 scores should simply be ignored.
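For illustration, the suggested workaround could be sketched like this. Note the field names (`name`, `score`) and list shape are hypothetical stand-ins, not PIE's actual response format:

```python
def filter_zero_scores(matches):
    """Drop 0-score matches from a ranked result list.

    Among 0-score results the ordering is effectively random, so they
    carry no ranking signal and should not be shown to users.
    """
    return [m for m in matches if m["score"] > 0]


# Hypothetical ranked output, sorted by descending score
ranked = [
    {"name": "SL-017", "score": 0.83},
    {"name": "SL-042", "score": 0.41},
    {"name": "SL-003", "score": 0.0},
    {"name": "SL-128", "score": 0.0},
]

print(filter_zero_scores(ranked))
```

This mirrors what HotSpotter reportedly does already: only positive-score candidates remain in the displayed list.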
I did file an internal ticket to have PIE filter out zero score results (like HotSpotter does).
SAGE-497 is the ticket.
While I can’t promise when we’ll get to it, it’s a good idea not to confuse users with this. For now, the workaround is to ignore 0 scores.