Missing viewpoints in Add annotation functionality

In which Wildbook did the issue occur? ACW

What operating system were you using? Win 10
What web browser were you using? latest chrome

What is your role on the site? admin

What happened?
I’m adding annotations to an image and realized that the viewpoint dropdown does not include any “down” variations, like “up” has.

What did you expect to happen?
I expected to be able to select “downright”, for example, as well as the other down variations.

We have sorted out that only the viewpoints that currently exist on a wildbook display in that list. This is intended to avoid a long list of viewpoints that may never get used (for example, top and bottom and all of their related viewpoints don’t exist on Zebra Wildbook because those viewpoints never get used).
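For illustration, the filtering behavior described above could be sketched like this (a minimal sketch; `ALL_VIEWPOINTS` and `dropdown_options` are hypothetical names for illustration, not actual Wildbook code):

```python
# Hypothetical sketch of the viewpoint-dropdown filtering described above.
# ALL_VIEWPOINTS and dropdown_options are illustrative, not Wildbook internals.

ALL_VIEWPOINTS = [
    "up", "upfront", "upback", "upleft", "upright",
    "down", "downfront", "downback", "downleft", "downright",
    "front", "back", "left", "right",
]

def dropdown_options(used_viewpoints):
    """Show only viewpoints already present on this wildbook,
    preserving the canonical ordering."""
    used = set(used_viewpoints)
    return [vp for vp in ALL_VIEWPOINTS if vp in used]

# On a wildbook where no "down" annotations exist yet,
# the down variations never appear in the dropdown:
print(dropdown_options(["left", "right", "up", "upleft"]))
```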

I guess my question is, do you have a picture where you need to use a down variation? Or is it just an inconsistency that’s bugging you? (both are fair answers but change how we look at the problem).

I guess it comes down to the way the system leverages viewpoints for matching. ACW species like to lie down a lot, on their sides, but when the photographer’s vantage point is from above (looking down from a vehicle, for example), many down-viewpoint photos will still show enough of the animal’s side to enable matching.

Since the down viewpoint alone isn’t used for matching in any of our species, I guess what we could do is simply use the correct side viewpoint, and only use down when there is no other viewpoint that would facilitate matching.

This approach assumes that the down viewpoint has effectively no value in the system other than to exclude it from matching against other viewpoints. Is that correct?

This approach doesn’t assign or exclude value for any viewpoint; we’re just trying to limit the display to viewpoints already used by IA in the system. It sounds like you’re concerned with a viewpoint not being available when it would be the appropriate classification, which can be easily solved by populating the entire list of viewpoint options.
Does that sound right?

Yes, publishing the list of viewpoints so that all down variations are available would make sense for us. Meanwhile, I just saw “ventral fluke” in our keywords list, so I need to do a cleanup there too. Can you remind me what the difference is between “ambiguous” and “general” in the dog body & tail categories? It seems like something we could collapse into a single category, maybe?

I know there’s a reason there are two, but I can’t swear to what the reason is. @colin or @jason or @parham can probably help? :sweat_smile:

That being said, I’m going to treat the post topic as a feature request. Especially with young wildbooks, I can see the viewpoint situation being limited, particularly if training used data from a curated catalog and the wildbook then gains access to photographs taken in a wider range of styles.
We’ll be tracking this under WB-960.


A “general” wild dog body or tail is intended to be a placeholder for when the color code or tail code cannot be determined by the user. We run into this scenario when a body is clearly present and defined, but it is cut off, too blurry, too occluded, or has some other visual issue that prevents a confident decision on the color or tail code.

At the end of the carnivore Wildbook phase 1 development, we added a new feature to blend the outputs of 3 distinct models, each of which focuses on a different strength: bodies vs. tails, all body color codes, and all tail color codes. The new feature uses a multi-tier blending of the three models’ responses, and as a direct result we needed to add three new classes (the two species classes “wild_dog_ambiguous” and “wild_dog+tail_ambiguous”, and the viewpoint “ambiguous”). These classes are invoked whenever the models disagree and a resolution cannot be made automatically.
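The disagreement fallback can be sketched roughly like this (assumed logic for illustration only; the real multi-tier blending in IA is more involved, and `resolve_species` is a hypothetical name):

```python
# Hypothetical sketch of the "models disagree -> ambiguous class" behavior
# described above. Not the actual IA blending implementation.

def resolve_species(body_vs_tail, body_color, tail_color):
    """Each argument is the label proposed by one of the three models.
    Return the agreed label, or an ambiguous class when they conflict."""
    votes = {body_vs_tail, body_color, tail_color}
    if len(votes) == 1:
        return votes.pop()           # all three models agree
    return "wild_dog_ambiguous"      # disagreement, no automatic resolution

print(resolve_species("wild_dog", "wild_dog", "wild_dog"))
print(resolve_species("wild_dog", "wild_dog+tail", "wild_dog"))
```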

Moving to a unified model in the future will eliminate the need for explicit conflict resolution, since it will happen implicitly inside the network. We will make that move once we have enough data to justify annotating more examples and retraining the system.