HotSpotter matching backgrounds in Whiskerbook

In which Wildbook did the issue occur? Whiskerbook

What operating system were you using? MacOS 11.6.1

What web browser were you using? Chrome

What is your role on the site? Researcher

What happened? I am seeing some issues in Whiskerbook with the segmentation/background subtraction on some of the images when I do matching. It isn’t happening in every image, but it occurs for a decent number of them. Below are a few examples; the first is one of the more extreme ones I’ve encountered.

https://tier1.dyn.wildme.io:5014/api/query/graph/match/thumb/?extern_reference=lcnfydckgfbvufva&query_annot_uuid=88a01d96-b91b-40c5-95eb-16a903e8942b&database_annot_uuid=9f5d9783-b104-44ec-b0ac-808c7ef1a8a7&version=heatmask

https://tier1.dyn.wildme.io:5014/api/query/graph/match/thumb/?extern_reference=wwipujpqwpvwgwfh&query_annot_uuid=21a0bb58-664f-4492-8796-1468ed1aac8e&database_annot_uuid=399a73c5-ea4a-4d36-8419-a5fee2197d90&version=heatmask

https://tier1.dyn.wildme.io:5014/api/query/graph/match/thumb/?extern_reference=jkgromwflurrpobk&query_annot_uuid=05b90a75-c140-4f1a-9067-9f22f71b5a7d&database_annot_uuid=d099463f-ab84-404c-a1c0-b60df1a4674e&version=heatmask

What did you expect to happen? Not matching the backgrounds.

We are exploring whether this is avoidable with the current detector or not. We are checking whether background subtraction exists and is being used. At first glance, we would expect a high amount of background matching for:

  1. Images taken very close together in time
  2. Small patches near the body, where background subtraction (which is learned, not an actual ML segmentation) is less certain about the animal/background boundary.

Thanks,
Jason

Thanks Jason,
It does look like a majority of the images where I’m seeing this issue fit the cases you’ve listed, but it does concern me how much of the background is being matched in cases like the first one above. I also came across another example, linked below, where almost all of the matching appears to be on the background, not the individual. I know the case below is the same individual in both images, so I would expect something like the first case above, where both the leopard and the background are matched, but that doesn’t appear to be happening here.

https://tier1.dyn.wildme.io:5014/api/query/graph/match/thumb/?extern_reference=jquhfqdvlzmaaxps&query_annot_uuid=0fdbe680-6586-4c29-93c3-6f3c80a33dca&database_annot_uuid=839e6db1-cfd2-434c-9027-90487aa8a2ee&version=heatmask

Hi @cotron.1,

I have looked into this issue for you on the backend computer vision server, and I can confirm two things:

  1. The HotSpotter ID algorithm was and is using the background subtraction code as expected for the first three leopard matches you posted (4 days ago), and
  2. The most recent match you posted (4 hours ago) did NOT use the background subtraction code. I have pushed a fix to our open-source repository to address this issue; the fix should be ready and deployed on the Whiskerbook platform by the end of the week. Please re-submit any ID jobs to use the updated code.

For number 1, all of the annotations used for those matches had the species label set as leopard. This is ideal because the computer vision code will automatically apply the appropriate background subtraction model when it sees the species leopard or panthera_pardus. However, as @jason has already described, the challenge is that the background segmentation algorithm is not perfect and will still let through some background from time to time.

Moreover, the background subtraction algorithm does not black out the background pixels; instead, it down-weights them. A very strong background match (e.g., from images taken seconds apart) will sometimes still outweigh a poor foreground match, even after background subtraction has been applied. Below is a perfect example of this case, taken from your original match examples. We can see that the original images (blue box) are given to the background subtraction model, which produces a foreground/background mask (yellow box) where light pixels correspond to “likely leopard” and black pixels to “likely background.” After the mask is generated, the keypoints found by HotSpotter are weighted based on the brightness of the mask pixels they cover.

This is hard to see in the yellow highlighted match visualization (red box, top) because the individual matching keypoints are converted into a heatmap. When we visualize the actual matching hotspots (red box, bottom), we can see an overwhelming correspondence between these two images. This is not surprising, as the time between these two images is only a few seconds. What we can see in the bottom visualization, however, is that the color of the individual hotspots (and their correspondence lines) is based on the background mask: we color the ellipses bright yellow when they are determined to be likely leopard and dark red when they are likely background. I have added a curve overlay to the bottom image to show where the matching keypoints transition from foreground matches to background matches, which predictably follows the curve of the leopard’s spine. (HotSpotter visualizations by Dr. Crall.)

The other matches show more clearly how the background segmentation is down-weighting background keypoints (dark red).
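To make the down-weighting concrete, here is a minimal sketch of the idea in Python. This is illustrative only and not the actual HotSpotter/WBIA implementation; the function name, parameters, and the 0–255 mask convention are assumptions for the example.

```python
import numpy as np

def downweight_matches(match_scores, keypoints_xy, fg_mask):
    """Scale keypoint match scores by the foreground mask brightness.

    match_scores : (N,) raw match scores for N keypoint correspondences
    keypoints_xy : (N, 2) keypoint centers in image coordinates (x, y)
    fg_mask      : (H, W) uint8 mask from the background model, where
                   bright (255) means "likely leopard" and dark (0)
                   means "likely background"
    """
    keypoints_xy = np.asarray(keypoints_xy)
    xs = keypoints_xy[:, 0].astype(int).clip(0, fg_mask.shape[1] - 1)
    ys = keypoints_xy[:, 1].astype(int).clip(0, fg_mask.shape[0] - 1)

    # Background pixels are NOT zeroed out; scores are only scaled down,
    # so a very strong background correspondence (e.g., two frames taken
    # seconds apart) can still outweigh a weak foreground one.
    fg_weight = fg_mask[ys, xs].astype(float) / 255.0  # 0.0 .. 1.0
    return match_scores * fg_weight
```

Because the weight only scales each score instead of removing it, a dense cluster of near-identical background correspondences can still dominate the overall match, which is exactly what you are seeing in the example above.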


For number 2, it appears that those two annotations had the species panthera_pardus_fusca, corresponding to the “Indian leopard” species. While the backend computer vision server has been configured for leopards, it was not configured for that specific species of leopard. I have updated the code to make that change permanent, and all future HotSpotter ID matches will use the pre-existing leopard background model on Indian leopard images.
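Conceptually, the change amounts to extending the species-label-to-model mapping so that the Indian leopard label resolves to the existing leopard background model. The snippet below is only a sketch of that idea; the names and structure are assumptions, and the real WBIA configuration looks different.

```python
# Hypothetical species-label -> background-model mapping (illustrative only).
# Several species labels can share one trained background model.
SPECIES_TO_BACKGROUND_MODEL = {
    'leopard': 'leopard',
    'panthera_pardus': 'leopard',
    'panthera_pardus_fusca': 'leopard',  # Indian leopard: reuse the leopard model
}

def background_model_for(species_label):
    """Return the background model to apply, or None if the species has no
    model configured (in which case keypoints are not down-weighted at all)."""
    return SPECIES_TO_BACKGROUND_MODEL.get(species_label)
```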

The match you posted is challenging for background subtraction because the leopard is seen behind tall grass. This will likely confuse the background model to a degree, and the matching keypoints will probably focus more on the tree branches, twigs, and grass details. For now, I’ve manually run those images through the leopard background model so you can see what it produces. See below:

Image 1


Image 2


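If you want to eyeball a background mask against its source image yourself, a simple blended overlay works well. The sketch below is a generic OpenCV example with assumed file paths; it is not the tooling that produced the figures above.

```python
import cv2

def overlay_foreground_mask(image_path, mask_path, out_path, alpha=0.5):
    """Blend a foreground/background mask over the original image so the
    model's animal/background decision can be inspected by eye.

    The mask is assumed to be single-channel, with bright pixels meaning
    "likely animal" and dark pixels meaning "likely background".
    """
    image = cv2.imread(image_path, cv2.IMREAD_COLOR)
    mask = cv2.imread(mask_path, cv2.IMREAD_GRAYSCALE)
    mask = cv2.resize(mask, (image.shape[1], image.shape[0]))

    # Colorize the mask (dark red = background, yellow/white = animal) and blend.
    heat = cv2.applyColorMap(mask, cv2.COLORMAP_HOT)
    blended = cv2.addWeighted(image, 1.0 - alpha, heat, alpha, 0.0)
    cv2.imwrite(out_path, blended)

# Example (hypothetical file names):
# overlay_foreground_mask("leopard.jpg", "leopard_fg_mask.png", "overlay.jpg")
```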
Because the background model was not used for this HotSpotter match, we can see clearly that the background is matching with high confidence (bright yellow keypoints). Unfortunately, this match was not found by comparing the leopard’s body texture at all, but this behavior will improve once the background model is turned on for Indian leopards.


The fix is tracked as SAGE-505.

@cotron.1 Can you please respond here after running some more of your experiments and let us know whether the background matching is generally better?

Thanks
Jason

Thanks, the explanation makes a lot of sense. I just tried to run a few more matches to see if there is any difference, and I am getting errors for each one. Looks like they won’t run and are throwing an “unknown error.” I’ve linked what I am getting below.

This was a WBIA library component upgrade error. Now resolved. Please retry your matches.

Thanks,
Jason

Just rechecked a couple; the matching looks much better. I still see some minor spots matching background, but it sounds like that is expected. Even my last example with all of the grass looks somewhat better. Thanks!
