Can you describe what the issue is you’re experiencing?
Can you provide steps on how to reproduce what you’re experiencing?
I am working through a bulk import right now. I started the matching from the project page so the images would only be matched against images in the project. I noticed that some images don’t complete the matching process. When I looked into it, automatic detection had failed on those images. After manually adding the annotations, I started the match again.
Unfortunately, they only seem to match against a very limited number of images instead of against the whole project dataset. There is a total of 507 images.
Generally, I am confused about what each image is matched against. Even the ones where automatic detection worked only match against 474 images.
Those images were also directly matched with both MiewID and HotSpotter, but when I start the match for the ones with manual annotations, it only uses HotSpotter. What determines which images they get matched against and which algorithm(s) are used?
I know that I can start a match manually from an encounter and set the algorithms there, but then I can only match against images at a certain location or “my data”, not against a specific project.
I also stumbled across some issues when doing the manual annotations. Besides the bug that annotation frames jump after creating them, I have the issue that if I want to redraw those frames, I seem to have only two options: add another frame, which duplicates the encounter but loses the original encounter name (it becomes “Encounter unassigned”), or delete the existing annotation, which results in the encounter losing its image. The only way I could figure out how to redraw is to delete the image and upload it again.
Can you post the link to the bulk import you’re referring to? It’ll be easier for me to answer your questions once I have that.
Match candidates are determined based on the Location ID, taxonomy, and viewpoint (the side of the animal the photo shows).
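To make that concrete, here’s a minimal sketch in Python of how candidate filtering works conceptually. The field names and the viewpoint rule are illustrative assumptions on my part, not Wildbook’s actual code:

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    encounter_id: str
    location_id: str   # e.g. "FSWBD-TD"
    taxonomy: str      # species assigned to the encounter
    viewpoint: str     # e.g. "up", "left", "frontleft"

def compatible_viewpoint(a: str, b: str) -> bool:
    # Simplified stand-in: exact match, or one label contains the
    # other (so "left" also pairs with "frontleft", "downleft", ...).
    return a == b or a in b or b in a

def match_candidates(query: Annotation, catalog: list[Annotation]) -> list[Annotation]:
    # Keep only annotations that share the query's Location ID and
    # taxonomy and have a compatible viewpoint.
    return [
        a for a in catalog
        if a.location_id == query.location_id
        and a.taxonomy == query.taxonomy
        and compatible_viewpoint(a.viewpoint, query.viewpoint)
    ]
```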
From the match page, use the Project drop-down menu at the top of the page to select your project. This will display match candidates within that project.
If the same image exists on another encounter, removing the annotation won’t delete the image. I agree this isn’t intuitive or user-friendly, but it’s expected behavior.
The good news is that the latest Wildbook 10.6.0 release now lets users edit and reposition annotations. ARW is currently on Wildbook version 10.2.0 but is on track to receive an update in the next few months as we work to bring all Wildbooks up to date.
Oh, I see. On one hand this makes sense, but unfortunately it seems like automatic detection labels all of them with viewpoint: up. Therefore, my attempt to label the manual annotations correctly resulted in those images not getting matched against, since the viewpoint is not the same/similar. That’s rather inconvenient. Is there any way around this?
This import hasn’t been sent to identification yet. After you send it to detection, you still need to manually send it to identification once detection is complete. When you submit encounters manually through the report an encounter page, detection and identification run automatically. This is not the case for bulk imports and explains why you don’t see matches yet on your encounters.
The location ID used on these encounters (FSWBD-TD) isn’t one that exists in ARW. As a result, you’re only going to see match candidates that you’ve uploaded with this ID. The list of available location IDs is in the location ID drop-down menu of the Report an encounter page. You can submit a Feature Request here on the forum whenever you want a new location added. I should point out that we generally avoid abbreviated location names like the one you’ve used because we want location names to be understandable and usable by other researchers on Wildbook. You may want to delete this import, correct the Location ID in the spreadsheet (either by using an existing one, or by requesting a new location and confirming it’s been added), and then retry your import.
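If you edit the spreadsheet, a few lines of Python can rewrite the Location ID column in a CSV export of it before you retry. This is just a loose sketch: the file names are made up, and I’m assuming the standard Encounter.locationID column header from the bulk import template:

```python
import csv

# Hypothetical file names; Encounter.locationID is assumed to be the
# column holding the Location ID in the bulk import spreadsheet.
with open("import.csv", newline="") as src, \
     open("import_fixed.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        if row["Encounter.locationID"] == "FSWBD-TD":
            row["Encounter.locationID"] = "Your approved location name"
        writer.writerow(row)
```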
You may also want to review our Bulk Import Cheat Sheet to avoid the most common issues people run into when setting up their spreadsheets.
Regarding your question about why only one algorithm was run: identification submitted from the bulk import page will only run MiewID because of the large amount of data Hotspotter uses when it runs. Hotspotter is still available when you manually start a new match from the encounter page, but you have to check the box first in order for it to run:
Additionally, notice how the screenshot says None selected next to Location ID? When nothing is selected there, Wildbook looks for match candidates against the global database, which means your match will take a long time to come back while it processes (especially if Hotspotter is selected).
I spot-checked a few of your encounters and “up” does look to be the correct viewpoint label. In ARW, “up” represents the photographer looking down at the salamander. Here’s an older forum post where the issue came up previously: Viewpoint for manual annotations - #6 by Anastasia
That’s quite unfortunate because, as you mentioned in that post, the documentation is still unclear/contradictory on that. It describes viewpoints as:
“such as the animal’s left, the animal’s right, looking down at the animal, looking up at its belly, etc.”
I also looked at the import again, and all images with automatic detection got the “up” label, even ones that are clearly photographed from the front or sides.
I’ve created a ticket to fix this in the docs; it appears the update was missed when this last came up. To my knowledge, this is the only exception to the viewpoint rules based on how the algorithm was trained for this species. Thanks for bringing it up.
Feel free to manually redraw the annotations on these to correct the viewpoint. The data we received for training adult salamanders were for the “up” viewpoint (photographed from above). For juvenile salamanders, it was trained on left and right viewpoints only. Angled viewpoints from your example links (such as front right) were not in the training data, so they’re not as likely to be detected accurately.
Maybe it would be helpful to have a small image or two on the “manual labeling” page to show what the viewpoints mean, or in the documentation as well.
For example, like this:
Or labeled images from the different viewpoints would be even clearer.
Thank you for the reply. Regarding the documentation: as you mentioned, the algorithm matches against images of a similar viewpoint. Since markings can often be seen from multiple angles, I was wondering which viewpoints are taken into account besides the ones that exactly match. From a quick test, it seems to at least take neighboring viewpoints into account (left also uses downleft). Unfortunately, I couldn’t find anything about that in the documentation.
I think it’s a reasonable callout, but it might be better suited to a training video. With hundreds of aquatic and terrestrial species supported, having a video demonstration for some of the less intuitive outliers like snails or flukes might make more sense. I’ll need more time to think on it.
You’re correct: left viewpoints also get included with other left-related viewpoints like back left, front left, etc. I’ve created a ticket to update the viewpoint question on our Matching FAQ page with this clarification.
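As a rough way to picture it (an illustrative sketch, not Wildbook’s actual implementation), each viewpoint label maps to itself plus its neighbors:

```python
# Illustrative only: each viewpoint is matched against itself plus its
# neighboring viewpoints. The exact sets Wildbook uses may differ.
NEIGHBORING_VIEWPOINTS = {
    "left":  {"left", "frontleft", "backleft", "upleft", "downleft"},
    "right": {"right", "frontright", "backright", "upright", "downright"},
    "up":    {"up", "upleft", "upright", "upfront", "upback"},
}

def viewpoints_to_search(viewpoint: str) -> set[str]:
    # Fall back to an exact match for labels not in the map.
    return NEIGHBORING_VIEWPOINTS.get(viewpoint, {viewpoint})

print(viewpoints_to_search("left"))
# -> a set containing 'left', 'frontleft', 'backleft', 'upleft', 'downleft'
```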