If identification is still running for an encounter, will the options on the photo show ‘no matchable detection’ rather than ‘match results’? If so, is there ever a case where matching fails and it continues to say ‘no matchable detection’, and if so, is there a way to distinguish the two situations?
For one encounter that was bulk imported with IDs already provided, I am noticing that the top proposed match is the same individual. My assumption is that I do not need to accept this match, since the software should already know it is the same individual given that the IDs provided in the upload were the same. Is this correct? Or is further action required before I can use the photos I bulk uploaded (with IDs provided) as a catalog to match new photos against?
For that same encounter I linked to above, which is a left-side image, the image shown for the third-best match from the PIE algorithm is a right-side image. Is this expected behaviour of the software? It makes the match challenging to assess visually; I suppose the computer may be able to mirror the image to compare fin shapes, but it’s not as easy for a human verifier.
Something else I just noticed, which may be related to my second question: I just did a second bulk import (it is currently being sent to annotation), for which the IDs were also included in the metadata. It included some additional photos of animals from the first import. I am noticing that when I go to My Data > View My Individuals, the second bulk import has created a second individual with the same ID, rather than merging the new photos into the existing individual. Is this the intended behaviour or a bug? How can I merge these duplicate individuals?
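(If it helps to scope the problem first, here is a minimal sketch for enumerating the duplicates, assuming you can get your individual list into a CSV with an individualID column; that column name, and the export itself, are assumptions, so adjust to whatever you actually have.)

```python
import csv
from collections import Counter

# Hypothetical CSV export of My Data > View My Individuals.
# The "individualID" column name is an assumption; rename it to
# match your actual export.
with open("my_individuals.csv", newline="") as f:
    ids = [row["individualID"] for row in csv.DictReader(f)]

# Any ID appearing more than once is a duplicate individual record
# that will need a manual merge in the UI.
duplicates = {i: n for i, n in Counter(ids).items() if n > 1}
for individual_id, count in sorted(duplicates.items()):
    print(f"{individual_id}: {count} records")
```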
If I recall correctly, “match results” is greyed out while identification is still underway. Generally, you can start another match if the results aren’t loading correctly and you know there are no big jobs in the queue.
In today’s case, we’re still in the midst of addressing disk space issues in Flukebook, so you may want to hold off on re-running matches for now.
You will always need to confirm matches. While the algorithm does a good job of identifying matches, we rely on users to give the final confirmation on accepting a match to ensure accuracy.
Yes, this is intended behavior. I’ve seen this in Internet of Turtles when it tries to match the scales on one side of the face with a turtle facing the opposite direction. The algorithm is essentially saying, “I think these images are similar, can you tell me if I’m right?” and then we are free to dismiss anything that’s not an obvious match. You can ignore matches that show the opposite side of your animal whenever you’re not sure there’s a clear match between them.
To clarify: even in the case where the IDs are already provided for both images being matched? I just uploaded 2,000 photos of ~75 animals (many photos for some animals, but with the ID provided in the metadata for all) and am confused about whether I need to go in for each of those photos and say ‘yes, I gave these photos the same ID because they are the same animal’, or whether Flukebook does that automatically.
If I am understanding correctly, I may submit this as a feature request, then. Since Flukebook has already detected left vs. right, it would be ideal if it could show an image of the matching side, especially since, when matching by individual scores instead of image scores, the only image shown for that individual could be of the opposite side.
Ah, thanks for clarifying that. In that case, you can view those match results to see if any new matches come up for that individual. Otherwise, you don’t need to do anything extra.
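Concretely: rows in the import spreadsheet that share an individual ID are recorded as the same animal at import time, so no per-photo confirmation is needed within the upload. The mapping looks roughly like this, where the column names are from my recollection of the Bulk Import doc (verify them against the Fields available section) and the filenames and IDs are made up:

```
Encounter.mediaAsset0,Encounter.individualID
J16_left_001.jpg,J16
J16_left_002.jpg,J16
J17_left_001.jpg,J17
```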
I don’t see why not. I did find an older feature request related to viewpoint issues in matches. It’s helpful for us to get this feedback so that we can plan improvements accordingly.
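In the meantime, if you script around exported match results at all, filtering candidates to the query’s viewpoint is simple to do on your side. A minimal sketch; the match-result structure here (dicts with individual, viewpoint, and score) is entirely hypothetical, not Flukebook’s actual API:

```python
# Hypothetical match results; adapt to whatever export you have.
query_viewpoint = "left"
matches = [
    {"individual": "J16", "viewpoint": "left", "score": 0.91},
    {"individual": "J22", "viewpoint": "right", "score": 0.88},
    {"individual": "J07", "viewpoint": "left", "score": 0.85},
]

# Keep only candidates photographed from the same side as the query,
# then rank them by score.
same_side = sorted(
    (m for m in matches if m["viewpoint"] == query_viewpoint),
    key=lambda m: m["score"],
    reverse=True,
)
for m in same_side:
    print(m["individual"], m["score"])
```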
Or, if this is more helpful, these are the two Flukebook pages for the same animal (J16), though there are >60 individuals with the same issue.
Thanks for sending that! According to the Fields available section of the Bulk Import doc, Encounter.submitter0.fullName doesn’t save unless submitter0.emailAddress is also reported.
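In other words, a fullName column only takes effect when paired with the email column, something like this (the values are illustrative, and the exact email column header should be checked against the doc):

```
Encounter.submitter0.emailAddress,Encounter.submitter0.fullName
jdoe@example.org,Jane Doe
```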
I’m not seeing anything odd in your spreadsheet, so I’m going to report this as a bug to our devs. I don’t have an ETA for a fix at this time, but when we have one, I’ll share it here.
Hi Anastasia, the next step I was hoping to take is a formal quantification of how well the matching works for our population, and that wouldn’t be a proper evaluation if there are duplicates of individuals. So, unfortunately, I will be holding off on further processing until this is resolved. Alternatively, if you think it will be some time, I could manually resolve the matches, though as you mention that may not be ideal if those duplicates are needed for troubleshooting.
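(For context, the evaluation I have in mind is simple top-k accuracy over queries with known IDs. A minimal sketch follows; the (true ID, ranked candidate IDs) pairs are something I would assemble by hand from the match pages, not anything Flukebook exports directly, and duplicate individuals would corrupt exactly this calculation, since the true ID can appear under two records.)

```python
from typing import Sequence

def top_k_accuracy(results: Sequence[tuple[str, Sequence[str]]], k: int) -> float:
    """Fraction of queries whose true ID appears in the top-k candidates.

    `results` holds (true_id, ranked_candidate_ids) pairs -- a structure
    assembled by hand from match results, not a Flukebook export.
    """
    hits = sum(1 for true_id, ranked in results if true_id in ranked[:k])
    return hits / len(results)

# Illustrative data only, not real match output.
results = [
    ("J16", ["J16", "J22", "J07"]),
    ("J22", ["J07", "J22", "J16"]),
    ("J07", ["J16", "J22", "J07"]),
]
print(top_k_accuracy(results, 1))  # 0.33...
print(top_k_accuracy(results, 3))  # 1.0
```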