Hey @ACWadmin1,
We started digging into this one last night and realized that it is also a bug. There are odd cases where viewpoints are not being filtered correctly: multiple animals appear in a picture with different viewpoints, yet individuals are being matched between the images. We’re working on getting a solid understanding of what is happening, because this should absolutely not be occurring.
We’re tracking this work under WB-866.
We’ve found the issue here. We do always filter by viewpoints, but there was an exception case that allowed some undesired ones through. @parham is working on a fix that will be deployed soon, and we will deliver an update when it is live.
The images that were affected had each been sent to the system in the past, whether through a test encounter submission, the training session you mentioned, or something else. Any new imports containing images the system has not received before will not see this issue.
The fix for this issue has been deployed to the image analysis software, so this should not happen in the future.
If there are existing match results that include annotations with viewpoints that do not make sense, running another identification job against the same matching set should show only relevant viewpoints, provided they were detected or manually annotated correctly.
What you are seeing here is different. The annotation you are matching has a set viewpoint of ‘front’.
When we gather the annotations for a matching job, we also allow ‘adjacent’ viewpoints, because the physical flexibility of some species makes some viewpoint labels inherently subjective.
For matching a left-side image, we allow left, up-left, down-left, front-left, and back-left. For a front-facing image, this allows front, front-left, front-right, up-front, and down-front.
What you are seeing with this matching result is not a right matching a left, but rather a mix of front, front-left, and front-right annotations matching against a front.
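To make that adjacency rule concrete, here is a minimal sketch in Python. This is not the actual Wildbook/WBIA source; the ALL_VIEWPOINTS list, the hyphenated viewpoint strings, and the adjacent_viewpoints function are illustrative assumptions based on the behavior described above.

```python
# Hypothetical sketch of the viewpoint-adjacency rule, not the real
# implementation. Viewpoint strings are assumed to be hyphenated
# combinations of the six axis words.
ALL_VIEWPOINTS = [
    "up", "down", "front", "back", "left", "right",
    "up-left", "up-right", "up-front", "up-back",
    "down-left", "down-right", "down-front", "down-back",
    "front-left", "front-right", "back-left", "back-right",
]

def adjacent_viewpoints(primary: str) -> set:
    """Viewpoints allowed to match against a single-axis viewpoint.

    A candidate counts as adjacent if it shares the primary axis word,
    so 'left' admits left, up-left, down-left, front-left, back-left.
    """
    return {vp for vp in ALL_VIEWPOINTS if primary in vp.split("-")}

print(sorted(adjacent_viewpoints("front")))
# ['down-front', 'front', 'front-left', 'front-right', 'up-front']
```

Treating adjacency as “shares an axis word with the query viewpoint” reproduces both the left and front examples above.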
Looking at the image you are matching, we can see how difficult it is to apply a single viewpoint label to an animal: if I look only at the left half of the image, the head, chest, and leg positions make it look clearly like a front-facing image. However, if I look only at the right half of the image, I see a left or front-left viewpoint.
Whether or not there are ‘down-left’ and other secondary viewpoints in the database, we will still look for them in the matching set if an appropriate primary viewpoint like ‘left’ is found.
The viewpoint adjacency code is not model- or species-specific. For many terrestrial species there are very few examples of secondary down viewpoints like ‘down-left’ or ‘down-front’ available for training, and based on this sparse information the model’s best guess ends up being simply ‘down’.
If there are only ‘down’ viewpoints and no secondaries like ‘down-left’ in the database, the only annotations that will match against a ‘down’ are other downs.
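Continuing the hypothetical sketch, this is how the database contents interact with the allowed set: secondaries like ‘down-left’ are always permitted, but they only show up in the matching set if such annotations actually exist. The annotation tuples and the build_matching_set helper below are illustrative, not the real implementation.

```python
# The allowed set for a 'down' query under the adjacency rule sketched
# above; the secondaries are permitted whether or not any exist.
DOWN_ADJACENT = {"down", "down-left", "down-right", "down-front", "down-back"}

def build_matching_set(allowed, annotations):
    """Filter (annotation_id, viewpoint) pairs to the allowed viewpoints."""
    return [(aid, vp) for aid, vp in annotations if vp in allowed]

# An illustrative database containing only plain 'down' and 'left' labels.
database = [("ann1", "down"), ("ann2", "down"), ("ann3", "left")]
print(build_matching_set(DOWN_ADJACENT, database))
# -> [('ann1', 'down'), ('ann2', 'down')]
# 'down-left' and friends are allowed, but none exist in this database,
# so the matching set contains only the plain downs.
```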