Completed match pair turking of Cape mountain zebra WildBook installment

What Wildbook are you working in?

Cape mountain zebra Wildbook - We are currently training the Wildbook annotation software on Cape mountain zebra (Equus zebra zebra) with the Wildbook team, Professor Daniel Rubenstein, and Professor Susanne Shultz.

Can you describe what the issue is you’re experiencing?

I have completed the turking of match pairs for this set of images; there are now approximately 17,000 manual annotations. I believe that when I reached this stage with a smaller initial image set, Jason Parham proceeded to the detection training task. Would that be the next step here?

Can you provide steps on how to reproduce what you’re experiencing?

There are no issues to report; I would just like advice on how to proceed.

Best wishes,
Jake Britnell

Hello Jake,

First, apologies for the long delay.

Second, this is an amazing level of progress! I can confirm a few details of this database:

  • Images: 2,165
  • Annotations: 4,068
  • Names: 1,764
  • Reviews: 17,985

I’ve taken a look at the distribution of names by viewpoint. In total, there are 1,607 sightings of right-side zebras and 1,420 of left-side zebras that are also of passing photometric quality. For a simple feasibility study, we now want to take the human-reviewed IDs and measure how accurately the system predicts the same ID associations that the humans determined to be correct. To do this, we first need to filter out the names with only one sighting. Here is the breakdown of the number of annotations per name:

  • 1: 1278
  • 2: 186
  • 3: 72
  • 4: 59
  • 5: 27
  • 6: 28
  • 7: 15
  • 8: 15
  • 9: 16
  • 10: 5
  • 11: 8
  • 12: 6
  • 13: 2
  • 14: 7
  • 15: 3
  • 16: 4
  • 17: 3
  • 18: 2
  • 19: 6
  • 20: 1
  • 23: 2
  • 24: 3
  • 25: 1
  • 26: 1
  • 29: 4
  • 31: 2
  • 32: 2
  • 33: 1
  • 34: 1
  • 35: 1
  • 36: 1
  • 39: 1
  • 42: 1

This means 1,278 names have only one sighting, so we exclude them from the automated study. We also require that each remaining name has at most 5 sightings (and at least 2) so that frequently seen names do not bias the results. To prevent obviously easy matches from also biasing the results, we further filter the sightings so that no two sightings of a given name in the study were taken within 1 hour of each other. After all of these filters are applied, there are 146 right-side sightings and 130 left-side sightings, which we will use as the basis for the preliminary study.

Left side

Right side

Above are the ID results for each viewpoint separately. They show a high similarity between the performance of the left-side and right-side viewpoints, each peaking around 90% recall at top-12. If only one match result is allowed to be returned, the system achieves around 60-70% correct recall.

Lastly, I’ve taken these latest mountain zebra bounding boxes and metadata and have started training new detector, labeler, and background models. They should be ready for review tomorrow and may be used to automate this workflow. As for the ID match data, we can also look into automating the yes/no match pair decisions with this level of ground-truth data.

Training progress

As for the next steps, we will take a look at the performance results of the updated detector models once they are trained. If you would like to upload more data, we can either annotate it for more ground-truth data or begin to run the automated models on the new data as a part of the Flukebook platform.

@JakeBritnell

The detection pipeline for mountain zebra has been successfully trained. We trained this as a standalone collection of models; in the future, we may combine these mountain zebra sightings with our Grevy’s and plains zebra sightings and train a unified set of models for all the zebra species our system supports.

The detection and classification results are good. I’ve attached a random selection of the automated bounding box regression and background classifications.

Human boxes
bboxes_1049_ground_truth
bboxes_450_ground_truth
bboxes_2090_ground_truth

Predictions
bboxes_1049_prediction
bboxes_450_prediction
bboxes_2090_prediction

Backgrounds


@parham ,

This looks fantastic! Really exciting! Brilliant to see that the models are performing well.

Looking forward to seeing how well the yes/no match pair automation works with this level of data.

Thank you so much for all your work; the Wildbook system continues to impress me.

If you need me to do anything, please let me know.

With best wishes,
Jake


Hi @parham and the rest of the Wildme Team,
I hope you had a lovely holiday season and a good new year!

I’ve completed uploading our entire directory to the CMZ Wildbook. The total image count is now 14,231, spanning multiple reserves across the species’ range from 2012 to 2020. Fingers crossed, this will provide a good foundation for the installment.

It would be great if we could run the trained detection pipeline and test the yes/no match pair automation (Jason mentioned previously using the Flukebook platform?) on this new dataset. Would this be possible?

With best wishes for 2021!
Jake


Hi @JakeBritnell,
Happy new year to you as well! We have a ticket to get this going for you as soon as we are able (WB-1345). We are expecting to kick this off in about 2-3 weeks, and will reach out to you here as soon as it is available to work with.

Thanks,
Tanya

@JakeBritnell

Apologies for taking three weeks to get back to you on your post.

We can confirm that your database has new images, now totaling 14,231, adding just over 12k to the existing 2,165 we had there before. We successfully ran the detection pipeline we had trained on this new data and automatically created 22,050 annotations for mountain zebras (originally there were 4,064 zebra annotations). This is more than a fivefold increase in the number of annotations available in the dataset for this species. Below are 15 randomly selected example detections on images the detector had never seen before.

We also were able to run the labeler model to automatically verify and assign viewpoints to the zebras. This is the breakdown of how many viewpoints the system now contains:

'left'       : 8313
'frontleft'  : 1625
'front'      : 2742
'frontright' : 1868
'right'      : 8166
'backright'  : 1093
'back'       : 1307
'backleft'   : 1000
'upfront'    : 1
'upright'    : 1

As for the decisions, you and your team compiled a set of just over 17k pairwise review decisions. We trained a random forest classifier ensemble (VAMP) that can try to automate these decisions going forward. We have balanced the runtime performance of the new VAMP model so that it makes positive, negative, and “cannot tell” decisions with independent thresholds, selected so that the false positive rate (FPR) is 1% on held-out test data. The model was deployed to our model CDN in Azure, the source code was updated to support the new ID algorithms for mountain zebras, and everything was built into our latest development Docker image for deployment.
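The independent-threshold idea can be sketched as follows. The function names (`pick_thresholds`, `decide`) are illustrative and the real VAMP implementation differs, but choosing separate auto-positive and auto-negative cutoffs at a target FPR on held-out verifier scores looks roughly like this:

```python
import numpy as np

def pick_thresholds(scores, labels, target_fpr=0.01):
    """Choose independent auto-positive / auto-negative score cutoffs.

    scores: verifier P(match) on held-out pairs
    labels: 1 for true matches, 0 for true non-matches
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    # Auto-positive cutoff: only target_fpr of true non-matches exceed it
    t_pos = np.quantile(scores[labels == 0], 1.0 - target_fpr)
    # Auto-negative cutoff: only target_fpr of true matches fall below it
    t_neg = np.quantile(scores[labels == 1], target_fpr)
    return t_neg, t_pos

def decide(score, t_neg, t_pos):
    """Auto-decide confident pairs; defer the rest to a human reviewer."""
    if score >= t_pos:
        return "positive"
    if score <= t_neg:
        return "negative"
    return "cannot tell"
```

Pairs that fall between the two cutoffs come back as “cannot tell” and are routed to human review, which is what keeps the automated error rate bounded at the chosen FPR.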

Your database is now running the newest WBIA deployable image and is processing (and caching) the ID results for an “all-vs-all” match using the GraphID algorithm. I will email you a direct (private) link to these matches once this initial compute is complete and there are new match candidates to inspect. Expect the link by the end of this week (there is a lot of data to compute).

As for the new dataset, you originally had 4k annotations in your ID graph, and it required roughly 17,000 reviews to resolve. The new graph has 26k annotations, so the expected number of pairwise reviews is estimated to be around 100k. Note that the automated VAMP decisions will not eliminate the need for reviews, since the 1% error rate will lead to some inconsistencies that must be resolved, but we expect the review workload to decrease by at least half; with VAMP enabled, this leaves around 50k decisions. Luckily, we restart the new graph with the decisions you made previously, so the estimate for this new graph to reach consistency is around 30k reviews. This estimate will fluctuate based on how identifiable the animals are, how many resightings exist in the underlying sightings data, and how well VAMP is able to automate the decisions presented to it. The level of “consistency” is also configurable and, by design, the system should quickly drive the results towards an approximate answer. Lastly, we can reduce the workload by focusing on only left- or right-viewpoint sightings, which should significantly cut the number of reviews while still giving a representative understanding of your population dynamics.
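As a back-of-envelope check on the numbers above, using the thread's own figures (the linear scaling and simple subtraction are rough assumptions for illustration, not the system's actual estimator):

```python
# Figures from the thread
baseline_annots, baseline_reviews = 4_000, 17_000   # original graph
new_annots = 26_000                                  # new graph size

# Naive linear scaling in annotations lands near the quoted ~100k estimate
# (pairwise effort actually grows superlinearly; GraphID keeps it tractable)
linear_scale = baseline_reviews * new_annots // baseline_annots  # 110,500
est_reviews = 100_000

# VAMP automation is expected to cut the workload at least in half
after_vamp = est_reviews // 2                        # ~50k

# Re-seeding the graph with the ~17k prior human decisions reduces it further
est_remaining = after_vamp - baseline_reviews        # ~33k, i.e. "around 30k"
```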

Thanks,

Jason Parham

P.S. If the images fail to load, we migrated the domain for this community website. Please use this link instead: Completed match pair turking of Cape mountain zebra WildBook installment


Dear @parham, @tanyastere

As I said in the email, thank you so much for all the work! I’ll get to work to try and get all this review process completed over the next couple of weeks.

I think I may have encountered an error. The CMZ Wildbook seems to have a red box around the job, and the queue size isn’t decreasing (see attached image). This might be due to the quality control component, but I thought I’d let you know in case something is wrong.

With best wishes and I cannot express how grateful I am for all your help and support,
Jake

Hello Jake,

This is indeed an error on our side, and I have pushed a fix for it. You encountered this problem because the system was incorrectly writing your feedback at the same time automated decisions were being added. That behavior is standard; the error was not.

As for the reviews, the system has made 12,438 automated decisions alongside your 18,391 human decisions. The graph is not currently in an inconsistent state, so the data should be relatively reliable. The review queue currently sits at 3,658 remaining, but after that number reaches 0 the system will add more reviews as it continues to check the consistency of the data. Your graph contains 26,114 annotations across all viewpoints and is currently tracking 11,314 unique individuals. This number is a bit misleading, since it counts unique animals only within comparable viewpoints; in other words, if you had 1,000 animals in the population, each with left and right photographs, the best number the system could produce is 2,000 unique individuals, because it would be unable to merge the left and right shots of the same individual using visual data alone. This is why, at the end of the analysis, we will suggest looking at left-only or right-only data to get a more reliable count of individuals.

The link you received via email last week should still be working. I recommend using that link and continuing with the review process.

Dear Jason,

I hope everything is going well. I apologize for not contacting you sooner; I’ve been completing lab work. I’m happy to confirm that we have completed the match turking for this final set of mountain zebra images.

I believe it is now at the stage where we can start extracting data. I thought I would update you on our progress in case there are any further steps the Wildbook team would like to complete or test before I begin.

Additionally, are there any pre-existing analysis pipelines (such as co-occurrence analysis) that we could use on the generated data? If so, they would be really helpful and great to know about.

Look forward to hearing back from you.

With best wishes,
Jake Britnell

P.S. This is a repeat message from the email I sent but thought it may also be good to reach out here too.