I’ve taken a look at this image and the annotations detected on it. Instead of left or right, this annotation has been detected as a back viewpoint. Like all viewpoints in the system, this annotation will be compared against other annotations with the same viewpoint, as well as neighboring ones like backleft and backright.
This is something that will happen occasionally within the trained model’s margin of error. I’d personally call this a strong backleft; a detection as such would have it matching against left and back images, but not backright. Since this information is produced by the model, the only way to influence decisions like this is to provide more true examples of back viewpoints, possibly adjust some of the already annotated back and back-adjacent viewpoints, and then retrain the model.
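The neighbor-matching rule described above can be sketched roughly as follows. This is only an illustration under assumed names: the viewpoint labels and their circular ordering are assumptions, not the actual system’s code.

```python
# Hypothetical sketch of viewpoint-neighbor matching (viewpoint names
# and their circular ordering are assumptions for illustration).
VIEWPOINTS = [
    "front", "frontright", "right", "backright",
    "back", "backleft", "left", "frontleft",
]

def comparable_viewpoints(viewpoint):
    """Return the viewpoint itself plus its two neighbors on the compass."""
    i = VIEWPOINTS.index(viewpoint)
    n = len(VIEWPOINTS)
    return {VIEWPOINTS[(i - 1) % n], viewpoint, VIEWPOINTS[(i + 1) % n]}

# A "back" detection compares against back, backleft, and backright,
# while a "backleft" detection compares against back, backleft, and left.
print(comparable_viewpoints("back"))
print(comparable_viewpoints("backleft"))
```

Under this sketch, a photo mislabeled back instead of backleft picks up backright as a comparison candidate and drops left, which is why the viewpoint label matters for match quality.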
Thank you, Colin, for looking into this and for the explanation! That totally makes sense. However, it turns out that Nerida has been having this issue often, with photos that don’t have a back viewpoint. I’m going to try to find some more examples, since I know you need those to look into it. Will try to provide those soon! I just wanted to keep you in the loop, as we aren’t sure how much it matters that the system is storing incorrect matches. Anyway, will be back in touch with more details soon. Thanks!
I’d be happy to look at more examples. It’s important to know that viewpoint is still decided by the model, so any attempt to change the output would need new manually annotated training data, including a larger proportion of non-typical viewpoints.
These viewpoint decisions will not influence your other left/right matching at all.
For some background, here are the current left/right and back-adjacent annotation viewpoints in the database: