I am currently researching the Internet of Turtles (IoT) and its ability to match images. In January 2024 I uploaded around 200 observations, some of which were the same turtle photographed in different years, to test whether IoT would correctly match individuals. I haven't received any match notifications via email, so I began looking into it. It appears that proposed matches have to be verified, but I don't believe I can do this myself, and if I can, I can't work out how, because my IoT page doesn't match the instructions I've found online. For example, when I go to Encounters, then matching images/videos, and look at the images surrounded by green dashed boxes, the annotations simply show the encounter and the viewpoint, and when I click the menu to the right of the images I am only given 3 options: "remove annotation", "visual matcher" and "add annotation", none of which show potential matches.
I can't work out how to view potential matches found by the photo ID systems, or whether I need to verify my submissions myself.
Is this something I have to wait for a researcher to do before I receive my results, and is there any way to speed up the process?
Hope someone can help! Thank you.
Hi @nesteban, welcome!
When you have an IoT account, you’re the only one who can review and confirm your matches unless you set up an edit Collaboration with another user. You’re the researcher in this case.
The exception is for submissions from the public/citizen scientists. When they submit encounters, any logged-in user with the researcher role assigned can review and confirm matches on those submissions (those encounters will show the username "public").
Can you post the links to some of the Encounters you’re having trouble with so I can see if there’s an issue with missing annotations or identification workflow issues?
Hi Anastasia
Thanks so much for responding. We have uploaded about 200 turtles and haven’t been able to see how to confirm matches as there aren’t any options on a lot of them.
When I look at the matched photos, sometimes there are only 3 options (remove annotation, add annotation, visual matcher) and sometimes 2 more (remove this image, and a greyed-out cannot start match), instead of the 6 options suggested in the instructions. Is this because all the annotations are incorrect and we need to add annotations? When we tried to remove an annotation, the whole image disappeared from the record. What is the right way to remove an annotation?
Sometimes there are 2 photos that look almost the same (e.g. Internet of Turtles), and one says "match results" while the other says "cannot start match".
We can now see how to match using photos with correctly bounded annotations. A lot of the encounters where we uploaded multiple photos for one encounter seem to have been split into separate records; perhaps that's why there are 409 encounters?
Can you possibly let us know what the best way forward is to merge encounters?
Could you let me know how to remove duplicate encounters?
Thanks very much for your help. Nicole
Hi @nesteban
It looks like part of the issue is related to WBIA (our image analysis server) issues in IoT that we’re currently troubleshooting. I may not have a resolution until tomorrow.
In the encounter example you posted above, there are 2 annotations: one for the head and one for the body even though the photo is of the turtle’s head and its body isn’t visible. That’s not what’s supposed to happen. The annotation that can’t start the match is the one for the body. Because the body isn’t actually selected, it’s having trouble initiating a match.
In addition to some of the match results not returning, I’ve spot-checked some of your uploads and noticed that the annotation boxes aren’t entirely selecting the turtles (Internet of Turtles). We’ll likely need to re-run these through detection and identification again once WBIA is working correctly.
For the rest, I can at least address those questions today.
When you remove an annotation from an encounter, the image itself will also be removed from that encounter if the same image exists on other encounters.
When there are multiple annotations of the same iaClass on a single image (such as two head annotations or two body annotations), Wildbook assumes that means there are multiple animals in the image and creates a new cloned encounter record for the additional annotation.
You can tell which is the original vs the clone because in the Audit trail section of the page, you should see a note that says something like "Clone of Encounter ID ###". I tried to grab an example screenshot, but apparently IoT isn't currently cloning encounters the way it's supposed to.
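If it helps to picture the cloning behavior, here's a rough sketch of the rule in Python. This is purely illustrative, not Wildbook's actual code; the `Encounter`/`Annotation` names and the `split_duplicate_classes` helper are hypothetical stand-ins for what happens when one image carries two annotations of the same iaClass:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Annotation:
    ia_class: str  # e.g. "head" or "body"

@dataclass
class Encounter:
    id: str
    annotations: List[Annotation] = field(default_factory=list)
    audit_trail: List[str] = field(default_factory=list)

def split_duplicate_classes(enc: Encounter) -> List[Encounter]:
    """Sketch of the rule described above: if an encounter has two
    annotations of the same iaClass, assume they are two different
    animals -- keep the first on the original encounter and move each
    extra annotation onto a cloned encounter with an audit-trail note."""
    seen = set()
    keep, extras = [], []
    for ann in enc.annotations:
        (keep if ann.ia_class not in seen else extras).append(ann)
        seen.add(ann.ia_class)
    enc.annotations = keep
    clones = [
        Encounter(
            id=f"{enc.id}-clone{i}",  # hypothetical ID scheme
            annotations=[ann],
            audit_trail=[f"Clone of Encounter ID {enc.id}"],
        )
        for i, ann in enumerate(extras, start=1)
    ]
    return [enc] + clones
```

So two "head" annotations on one image would yield the original encounter plus one clone, each holding a single head annotation, with the clone carrying the "Clone of Encounter ID" note in its audit trail.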
In short, IoT is misbehaving in a huge way right now and we hope to have you unblocked soon. I’m sorry about your rocky start on Wildbook. I’ll follow up again as soon as I have an update.
Quick update on WBIA: Matching in IoT should be ready within 24 hours. WBIA will automatically work on failed ID jobs once it’s ready.
Thanks very much for the update. Will we receive advice when it’s all completed or shall we assume it’s finished by Monday morning?
WBIA was restarted and it’s been slowly but surely re-running the previously failed ID jobs. I checked on the example Encounter link you shared earlier and verified it’s in the queue for re-identification.
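For anyone curious what "re-running previously failed ID jobs" looks like conceptually, here's a minimal sketch of an automatic retry queue. This is not WBIA's actual implementation; `run_job` and the attempt limit are assumptions for illustration:

```python
from collections import deque

def rerun_failed_jobs(jobs, run_job, max_attempts=3):
    """Re-queue failed ID jobs and retry each until it succeeds or
    exhausts its attempts. `run_job` is a hypothetical callable that
    returns True when an identification job completes successfully."""
    queue = deque((job, 0) for job in jobs)
    completed, abandoned = [], []
    while queue:
        job, attempts = queue.popleft()
        if run_job(job):
            completed.append(job)
        elif attempts + 1 < max_attempts:
            queue.append((job, attempts + 1))  # send to the back of the queue
        else:
            abandoned.append(job)
    return completed, abandoned
```

The point is just that each failed job waits its turn and is retried automatically, which is why the queue drains "slowly but surely" rather than all at once.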
On a positive note, within the next few days, we expect to have MiewID available in IoT for multiple species. It runs more efficiently than Hotspotter and has more accurate match results. We’ll share more details once it’s implemented.