Hey @ACWadmin1,
We started digging into this one last night and realized that this is also a bug. There are odd cases where viewpoints are not being filtered correctly because there are multiple animals in the picture with different viewpoints, but individuals are still being matched between the images. We're working on at least getting a solid understanding of what is happening, because this should absolutely not be occurring.
We're tracking this work under WB-866.
We've found the issue here. We do always filter by viewpoints, but there was an exception case
here that allowed some undesired ones through. @parham is working on a fix that will be deployed
soon, and we will deliver an update when it is live.
The images that were affected had each been sent to the system in the past as well, whether through a test encounter submission, the training session you mentioned, or something else. New imports containing images the system has not received before will not see this issue.
The fix for this issue has been deployed to the image analysis software, so it should not happen in the future.
If there are existing match results that include annotations with viewpoints that do not make sense,
running another identification job against the same matching set should show only relevant viewpoints
if they were detected or manually annotated correctly.
Hi @colin, I've run into this issue again after running a new ID match (“start another match”). In the link below, matches #6 & 7 both show the opposite viewpoint. Am I doing something wrong?
What you are seeing here is different. The annotation you are matching has a set viewpoint of “front”.
When we gather the annotations for a matching job, we allow “adjacent” viewpoints, due to the physical flexibility of some species and the resulting subjective nature of some viewpoint labels.
For matching a left-side image, we allow left, up-left, down-left, front-left and back-left.
For a front-facing image, this allows front, front-left, front-right, up-front and down-front.
What you are seeing with this matching result is not a right matching a left, but rather a mix of
front, front-left and front-right annotations matching against a front.
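The adjacency rule described above can be sketched in a few lines of Python. This is only an illustration of the idea, not the actual Wildbook/WBIA code; the names `ADJACENT`, `allowed_viewpoints` and `filter_matching_set` are hypothetical.

```python
# Illustrative sketch of viewpoint adjacency (hypothetical names, not the
# real Wildbook implementation). Each primary viewpoint expands to itself
# plus its composite neighbors, matching the examples given above.

ADJACENT = {
    "left":  {"left", "up-left", "down-left", "front-left", "back-left"},
    "front": {"front", "front-left", "front-right", "up-front", "down-front"},
    # ... other primary viewpoints would follow the same pattern
}

def allowed_viewpoints(query_viewpoint):
    """Return the set of viewpoints permitted in the matching set."""
    return ADJACENT.get(query_viewpoint, {query_viewpoint})

def filter_matching_set(query_viewpoint, candidates):
    """Keep only candidates whose viewpoint is adjacent to the query's."""
    allowed = allowed_viewpoints(query_viewpoint)
    return [c for c in candidates if c["viewpoint"] in allowed]
```

So a “front” query keeps front, front-left and front-right candidates but drops a plain “right”, which is exactly the mix described in the match result above.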
Looking at the image you are matching, we can see how difficult it is to apply a single viewpoint label to an animal: if I look only at the left half of the image, the head, chest and leg position make it look clearly like a front-facing image. However, if I looked only at the right half of the image, I would see a left or front-left viewpoint.
Okay, that makes sense, thanks. FYI, when you mention the potential matching viewpoints for left side, for example, including down-left sounds academic, if that makes sense? Earlier this week I found out from @tanyastere that when trying to add an annotation manually, the viewpoint options only include “down”, with no other down-variations: https://community.wildbook.org/t/missing-viewpoints-in-add-annotation-functionality/303
Does the system apply other “down” variations when it processes detection automatically? Is it only when I try to do a manual “add annotation” that the down options are limited?
Whether or not there are “down-left” and other secondary viewpoints in the database, we will still look for them in the matching set if an appropriate primary viewpoint like “left” is found.
The viewpoint adjacency code is not model- or species-specific. In many cases for terrestrial species there are very few examples of any secondary down viewpoints like “down-left” or “down-front” available for training, and based on this sparse information the model's best guess ends up being simply “down”.
If there are only “down” viewpoints and no secondaries like “down-left” in the database, the only annotations that will match against a “down” are other downs.
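A toy demonstration of that last point, with made-up data (not real database contents): even though the adjacency rule would accept composite down-* viewpoints, if the detector only ever labels “down”, a “down” query can only match other plain “down” annotations.

```python
# Hypothetical example: the adjacency set for "down" accepts composite
# down-* viewpoints, but if none exist in the database, only plain "down"
# annotations can match.

DOWN_ADJACENT = {"down", "down-left", "down-right", "down-front", "down-back"}

# Simulated database viewpoints: no composite down-* labels were ever produced.
database_viewpoints = ["down", "down", "left", "right"]

matches = [v for v in database_viewpoints if v in DOWN_ADJACENT]
# Only the plain "down" annotations survive the filter.
```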