In which Wildbook did the issue occur? Flukebook
What operating system were you using? Windows 10
What web browser were you using? Chrome Version 97.0.4692.99 (Official Build) (64-bit)
What is your role on the site? researcher
I was doing a batch upload, which seemed to work, but when I tried viewing the matches, they didn't seem to have worked: instead of matching against the entire North Atlantic Right Whale catalog, my images were only being compared to the other images I had uploaded that day.
I will send an email with some screenshots and a bulk upload spreadsheet.
What did you expect to happen?
What are some steps we could take to reproduce the issue?
If this is a bulk import report, send the spreadsheet to firstname.lastname@example.org with the email subject line matching your bug report
"…instead of matching to the entire North Atlantic Right Whale catalog…"
Thanks for reaching out. I looked at the screenshots in your separate email. The deepsense model is a fixed classifier used for right whale heads. It predicts an ID based on the individual whales it was trained on and cannot adjust its response based on location, but it looks like the results you saw were filtered to the location criteria after deepsense ran. I recommend not filtering matches when working with deepsense, since the filtering doesn't actually affect the algorithm's output. From bulk import, that looks like:
And from the Encounter page “start match” that looks like:
It looks like what you received was a set of 0-score matches, which means deepsense had no prediction of ID. I double-checked that deepsense is running properly by matching with no location filters:
When deepsense produces a complete list of 0 scores, we have no way to rank any result, so the answer to "Who is this?" is a random list of annotations that all have 0 scores. From an interpretation standpoint: disregard anything with a 0 score.
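The interpretation rule above can be sketched as a simple filter. This is a hypothetical illustration, not Wildbook's actual code; the result structure and field names ("name", "score") are assumptions.

```python
# Hypothetical sketch: a deepsense result with score 0 means the model
# made no ID prediction, so those entries carry no information.
def usable_matches(results):
    """Keep only results deepsense actually ranked (score > 0), best first."""
    return sorted(
        (r for r in results if r["score"] > 0),
        key=lambda r: r["score"],
        reverse=True,
    )

matches = [
    {"name": "EG-1234", "score": 0.0},   # no prediction: ignore
    {"name": "EG-5678", "score": 0.87},
    {"name": "EG-9012", "score": 0.42},
]
print(usable_matches(matches))
```

If every entry has a 0 score, the filter returns an empty list, which matches the guidance above: treat the displayed ordering as meaningless in that case.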
I expected to see one photo of the whale I had uploaded, but I saw two images.
I think this was the Encounter you reported.
This is actually the system working as designed. For the one photo, machine learning found two annotations: 1) right_whale (the body, not matchable with deepsense) and 2) right_whale+head (the head, matchable with deepsense). The desire to have both body and head detected is part of potential future work to also try to make bodies (especially bellies) matchable. Each annotation is displayed separately over its photo in bold green; other annotations are shown in a soft, thin yellow.
You may also notice occasions where more than one head or body is detected (false positives), and these duplicates can be safely deleted.
I would be happy to jump on a Zoom call if you want to discuss your use cases more and go through the interface and deepsense interpretation.
One more quick thought: deepsense is an ID predictor. When it tries to answer the question "Who is this?", it only returns the whale's name and a score. We then display a random annotation of that whale for comparison, since no one particular image was the match, so it may not always be the best annotation. It is worth clicking through to the whale's profile to explore further and look at other photos.
I also figured out why you were only seeing filtered matches to your own data. We display the name "Atlantic Ocean (North)", but in the database the actual stored value is "AtlanticOcean". Thus you were entering the correct name, yet it only matched your own data.
I am fixing the old data to properly match "Atlantic Ocean (North)".
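That cleanup amounts to mapping the legacy stored value onto the current display name. A minimal sketch, assuming a simple dict-style encounter record; the field name "locationID" and the mapping are illustrative, not the actual Wildbook schema.

```python
# Hypothetical sketch: rewrite the legacy location value so old
# encounters carry the same location string as newly imported ones.
LEGACY_TO_CURRENT = {"AtlanticOcean": "Atlantic Ocean (North)"}

def normalize_location(encounter):
    """Replace a legacy locationID with its current name, if one exists."""
    loc = encounter.get("locationID")
    encounter["locationID"] = LEGACY_TO_CURRENT.get(loc, loc)
    return encounter

old = {"id": 1, "locationID": "AtlanticOcean"}
print(normalize_location(old))
```

Once old and new records share one value, a location filter like "Atlantic Ocean (North)" matches the whole catalog rather than only that day's upload.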
Thanks for getting back to me and sorry for the delayed response here; I was away in the field all of last week. What you are describing makes sense to me. I’ll make sure not to filter the location in the future. It’s good to know that it was a bug that was making the algorithm match back to our own whales. I’ll try batch uploading some more and different whales to see if it can match them.