This is under review and we’ll post info as we get it. Thanks, and help incoming!
Tanya
Hi there, any update on this issue? Thanks!
Hi Maureen,
Unfortunately, we can’t confirm that this is a bug rather than just how the system works. We’ve put a lot of eyes on this issue, and the consensus is that HotSpotter works best matching these dogs with very clear, side-on photos, which is also, understandably, the criteria used for “ID” pictures. HotSpotter works by finding similar regions (each region is very small) between two pictures and (to drastically simplify things) basically counting how many of these regions are shared by a candidate match. I suspect it is simply able to find significantly more of these regions of interest in the sharp, in-focus ID pictures, which naturally contain a lot of information. That’s why you see many more results on clear photos than on blurrier ones. By comparison, the unassigned photos at the end of your post are more difficult to match than the ID and ID-style pictures in the earlier examples.
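To make that "counting shared regions" intuition concrete, here is a toy sketch in Python. This is not the real HotSpotter pipeline (which uses SIFT-style descriptors and spatial verification); all the names, descriptors, and the threshold below are hypothetical, invented purely to illustrate why a sharp photo with many detectable regions can outscore a blurry one with few.

```python
def distance(a, b):
    """Euclidean distance between two toy descriptor vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def match_score(query_regions, candidate_regions, threshold=0.25):
    """Count query regions that have a close counterpart in the candidate.

    A region 'matches' if some candidate descriptor is within `threshold`.
    The final score is simply how many query regions found a match.
    """
    shared = 0
    for q in query_regions:
        if any(distance(q, c) < threshold for c in candidate_regions):
            shared += 1
    return shared

# A sharp, in-focus photo yields many local regions; a blurry one yields few.
# The sharp photo therefore has far more chances to accumulate score.
sharp = [(0.1, 0.2), (0.4, 0.4), (0.8, 0.1), (0.3, 0.9)]
blurry = [(0.15, 0.25)]
gallery = [(0.12, 0.22), (0.41, 0.38), (0.79, 0.12)]

print(match_score(sharp, gallery))   # sharp photo matches several regions
print(match_score(blurry, gallery))  # blurry photo can match at most one
```

The point of the sketch: the score ceiling is set by how many regions the query photo contributes, which is why clear ID-style pictures dominate the results.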
The fact that there are un-ID’ed “ID-style” pictures in the results suggests to me that there is not a bug in the filtering logic that selects the match-against set, which would be the other potential culprit for what you are describing.
I’m curious whether you would see the same lower matching accuracy you associate with un-ID’ed photos on photos that are ID-quality but still unassigned. Input photo quality can make a big difference in matchability.
Thanks for the detailed reporting; having a lot of links really helps.
-Drew
Regarding your second comment, it sounds like the ID system is working as intended with viewpoints, but there are some mislabeled viewpoints and some that are simply not labeled. This could simply be due to the viewpoint labeler making an incorrect call, as happens sometimes. If you believe there are systematic errors (or missing labels) in any particular data set, let us know and we can re-run it through detection.
And I’m afraid your final comment also has an unsatisfying answer. HotSpotter is not necessarily symmetrical: if A matches to B, it is not a given that B will match to A, and this may be confounded by changing the annotation between the two matches (though I recognize the highlighted relevant region is still inside the annotation). This would be especially true if the matching sets are not the same. I would not worry about this as a bug unless you see it systematically; there is sometimes edge-case weirdness with these things.
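One source of that asymmetry is easy to show with a toy example. The scores and animal names below are hypothetical, not real HotSpotter output: each query ranks candidates from its own match-against set, so if B’s search set doesn’t contain A, then B can never surface A, no matter how strongly A matched B in the other direction.

```python
def top_match(query, match_against, scores):
    """Return the best-scoring candidate for `query` from its own set.

    `scores` maps (query, candidate) pairs to a similarity score;
    only pairs that are actually compared need an entry.
    """
    return max(match_against, key=lambda c: scores[(query, c)])

# Hypothetical pairwise scores. Note the two directions use different sets:
scores = {
    ("A", "B"): 9, ("A", "C"): 4,   # A's match-against set is {B, C}
    ("B", "C"): 8, ("B", "D"): 7,   # B's match-against set is {C, D}: no A!
}

print(top_match("A", ["B", "C"], scores))  # A's best match is B...
print(top_match("B", ["C", "D"], scores))  # ...but B is never compared to A
```

Even with identical sets, one-directional scoring need not be symmetric; differing sets just makes the effect more visible.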
Hi @Drew, that’s a lot to take in!
And while I understand your explanation of why the ID’d images appear to match each other even though they’re different individuals and not actually a match, it’s disappointing, because it seems to render the ID images less useful than one would expect, almost not useful at all. Our plan was to run matching on all of the ID’d individuals first, before running through any unassigned images, because we thought that would pick up a big swath of animals from the unassigned batches, but it doesn’t look like it will work that way. So we’ll probably change that approach.
But why aren’t we seeing this problem with cheetahs? They have much less differentiation between them than wild dogs (plain black spot patterns, no marbling of tri-coloured patterns) and are therefore much more likely to match to each other, especially with ID’d pics, yet we don’t see this issue when running ID’d cheetah individuals. I would expect it to be much worse with cheetahs than with wild dogs, but it’s the opposite.
My biggest concern: it worries me a lot that unassigned images that are not as clear as the ID pics may never get matched via the system to a correct, already-ID’d individual, possibly leading a user to assign a new ID without realizing the animal already exists in the system. What are your thoughts on that?
We’ll keep an eye on the viewpoints issue; my concern there is that, unlike keywords, it’s not fixable by the user, and it impacts matching. Is there a way to make it fixable? That is, if the viewpoint is materially wrong, is there something we can do to correct it? Or do I need to make that a feature request?
I understand and accept that A matching B doesn’t always mean B will match A, but combined with my concern under 1) above, this additional randomness compounds the problem.
Would really appreciate your thoughts on my concern above at the bottom of 1).
Thanks!
Maureen
Hi @ACWadmin1,
Regarding cheetahs, it’s a bit unintuitive, but HotSpotter works best on clear, distinct patterns of local contrast and is worse at looking at larger blocks of color. It was first developed for zebras, which, as far as the algorithm is concerned, have patterns more similar to cheetahs’ than to wild dogs’. Even though cheetahs are quite flexible and move as much as any cat, I expect they more often have clear sections of shared contrast-pattern between photos than wild dogs do. The density of spots is, for matching purposes, a density of information.
I will see this week whether we can lower the sensitivity for HotSpotter on wild dogs so that you see more results. It’s currently unknown whether that will truly increase accuracy or whether the additional results will generally be incorrect.
A good way to update your planned protocol, if it’s doable, might be to prioritize the clearest photos first, since quality seems to impact matching so strongly.
Thanks, and I’ll be in touch about the HotSpotter sensitivity.
-Drew
Hi @Drew - honestly, wild dogs sure like to make it tough for us! I share your concern about changing HotSpotter’s sensitivity for wild dogs: we don’t know whether it will help, hurt, or have no impact on the match results.
Unfortunately, it’s not really practical to make the shift you’ve recommended, assuming I’m understanding it correctly. I think you’re saying we should select the clearest photos from the unassigned imageset to run matching against first. We have thousands of unassigned encounters, and there’s no easy way to review them en masse to cherry-pick the best. It also doesn’t leave us with a systematic path through the data, so we couldn’t keep track of which ones we’ve run matching against and which we haven’t. Our first-choice approach of running matching against all ID’d individuals first was intended to find all matches of all known dogs, so that when we find a match between two unassigned encounters, we can be more confident it’s a new individual that needs a new ID. For now, we’re continuing with this approach.
Thanks, and we really appreciate the insights and efforts.
Best,
Maureen
Hi Maureen,
Indeed these dogs make it difficult! We’re glad for the challenge, but it is a challenge!
I have decreased the sensitivity of HotSpotter, so you should see more results; let us know if that helps, and we’re definitely interested in whether you perceive additional accuracy from the extra results.
One option for finding the best-quality unmatched photos would be, on an Encounter search, to click “matching photos/videos” to get a gallery of images you can compare side-by-side. This would come after going through and running the ID’d individuals first, which I think is a great idea.
Thanks,
Drew
Hi, @ACWadmin1 !
This was a super informative thread to read through.
I’m hopeful that some of this difficulty has been resolved over time. Is there any additional resolution required here?
Thanks,
Mark