In which Wildbook did the issue occur?
What operating system were you using? (e.g. MacOS 10.15.3)
What web browser were you using? (e.g. Chrome 79)
What is your role on the site? (admin, researcher, etc)
Researcher, REST, Machine Learning
- Matching process gives me this error: “there was an error parsing results for task 26d079b8-bdde-47c9-9ba6-2cba073fb4bd”
( Internet of Turtles )
- Matching process stays on “Waiting for results” with the spinning wheel in front of it. It never returns a match (when it should) or a no-match result.
( Internet of Turtles )
What did you expect to happen?
To find a match (these are pretty good side profile pictures of a turtle that I had previously entered to be matched to)
What are some steps we could take to reproduce the issue?
Upload photos and try to match a turtle
If this is a bulk import report, send the spreadsheet to email@example.com with the email subject line matching your bug report
Thanks for reporting this.
The first link provided seems to have resolved and shows results:
The second indeed did not complete. I think this may have just been a timing issue: we had to physically move some of our image analysis infrastructure due to hot summer conditions, and there were some brief outages.
This image has been run again and the new results link is here:
Matching jobs individually run pretty quickly against a location’s data set (slowing down a bit once you are matching against many thousands of images), but during high-traffic times, or while a large bulk import is processing, they can take longer. If a job is taking more than 10 minutes there is likely a queue, and it’s best to just close the results window and come back a little later through the link on the encounter page.
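If you ever script this check rather than watching the spinner, the pattern above amounts to polling with a cutoff. Here is a minimal sketch; the `check` callable is a hypothetical stand-in for however you fetch task status (there is no specific API call implied here), and the 10-minute cutoff comes from the guidance above:

```python
import time

def wait_for_results(task_id, check, timeout_s=600, poll_s=15):
    """Poll a matching task until it completes or the cutoff passes.

    `check(task_id)` is a placeholder for your own status lookup and
    should return True once results are ready. Past ~10 minutes (the
    default timeout) there is likely a queue, so give up and revisit
    the job later via the link on the encounter page.
    """
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        if check(task_id):
            return True
        time.sleep(poll_s)
    return False
```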
Thanks, hope this helps.
Thank you for the quick response! It does seem to happen frequently today. But some of them resolve themselves after a bit.
When it finds multiple matches, is it best to select the highest match? (even though a lower percentage match is the same individual)
If you have some jobs that don’t seem to resolve after a few hours or overnight, please send them to me and I’ll do a deeper dive. A single job should never take more than a few minutes, but if another user starts a large number of jobs consecutively, or the platform is very busy, it can add up.
For selecting a match take a look at the ‘inspect’ link next to each candidate. It will bring up side by side images highlighted with what the algorithm considers important for you to review. This is very useful for eliminating any false positives. Here’s a great example from the second link above:
This is the top result, and these highlighted areas look the same to me.
You can also click the above button to switch from ‘Individual Scores’ to ‘Image Scores’. If multiple images of the same animal against different backgrounds rank highly (assuming such images exist in the set you are comparing against), that is a great indicator.
Finally, keep in mind that these scores are unbounded: they are not on a scale of 1-100 or anything like that. I see good matches happening in the 2.0-7.0 range pretty often, and most negatives will be below this. Rapid-fire images of the same animal taken less than a second apart can have wild scores up in the hundreds. This is only a rough idea of scale and can vary quite a bit as data is added and IDs are assigned.
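As a rough illustration only, here is that rule of thumb as code. The band edges come straight from the numbers above; they are not official thresholds, and you should always use the inspect view rather than trusting a cutoff:

```python
def describe_score(score: float) -> str:
    """Informal interpretation of an unbounded match score.

    Bands mirror the rough ranges above: good matches often land
    around 2.0-7.0, most negatives fall below that, and near-duplicate
    rapid-fire frames of the same animal can score in the hundreds.
    Not official cutoffs; always inspect the candidates visually.
    """
    if score >= 100:
        return "near-duplicate territory (e.g. rapid-fire frames)"
    if score >= 2.0:
        return "range where good matches often appear"
    return "range where most negatives fall"
```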
Hope this helps!
I have been trying to run matching jobs as described above, but I still have issues with the matching process: mostly it just keeps loading. I have closed the results and come back later throughout the day via the link on the encounter pages, over multiple days, and still run into these issues where nothing happens.
Is there anything I should try or change?
I’ve looked at the two examples you sent, and there does indeed seem to have been a service outage that blocked them. I’ve sent them back through the system, which currently has 56 jobs in queue and a rough wait-time estimate of 80 minutes. The other jobs in queue appear to be returning results normally.
If there are any other encounters that did not process from this batch let me know and I will reprocess them manually as well.
I do see in the server logs that there are many results pages left open currently. I cannot see which users are looking at the pages, but if there are long wait times it might be helpful to close results pages and revisit them from the encounter page, in case they time out or are cached by your web browser.