Thank you, but this is unfortunately still unclear to me. As far as I can see, the automatic detection process did not detect any jaguar in my roughly 1000 images, so I must do all of them manually. Many of my images are of course unclear, but many are very sharp and clear and should be easy for the automatic system to detect! Could you check and point out which images in my uploads were actually run through the automatic annotation and identification process?
It appears to me that all of your images from the import were processed through detection.
Sharper and more distinct images in the import have received automatic annotations:
A couple of failed images where I cannot even find the animal myself:
And there are some of in-between quality where I can see the animal but detection did not, although the quality would likely be very difficult for the identification algorithms to use:
It is possible the latter could be picked up by retraining the detection model, but that is not a trivial step and would require more data and dedicated development time.
The detections are not the problem I was addressing in my original reply; it is the post on Feb 6th where Tanya replied with ticket number WB-1440 to your screenshot of the results list:
For future large uploads the results display should be much easier to handle.
If you have a few examples of images like the first two I posted, crisp good-quality images that did not receive a detection box, that is a separate issue. Please provide links to those specifically and I will ensure that detection was correctly run on them.
Thank you for the further information! I realize now that many annotations have been done by the system. Those I have done manually must then be from the pictures that were unclear and could not be picked up by the system. I also know there are pictures that were uploaded without any visible jaguar. That was actually part of the reason why I initially wanted to remove the whole upload in February and start from scratch. But looking at the automatic matching results I still experience difficulties. For example:
When I click on Match results in the hamburger menu, it runs for 15 minutes without any result:
Then I tried Start another match, but that one also ran for 15 minutes without results before I gave up.
I performed Match result: the process was never-ending
Match result: no results after 15 minutes
Start another match: stopped with this result:
Why do I experience these difficulties?
These jobs must have required additional time. If other people are working and the queue of image analysis tasks in Whiskerbook is large, or if the set of images you are comparing against is large, it can take a while.
I can see on the server that most ID tasks take 2-8 minutes, but there have been a couple of short spikes of activity where the wait time was a couple of hours; usually this is from one or more researchers starting a large number of jobs at once to come back to later.
All of the links that you have provided above currently show matching results when I visit them, so I think they were just waiting in queue.
Thank you very much. I am glad to hear that my images have been processed. I do understand that this is time-consuming and also depends on how many people are working on the system at the same time. Since I am working on a database of about 1000 images and can hope to do 10 in an hour, my understanding is that I have to spend 100+ hours to go through them all. Is that about correct?
I certainly appreciate all your support!!
If you want to preserve the work you’ve already done, the remaining jobs must be started independently, as you mention.
I don’t think it should take 100 hours, however, and especially not 100 hours of your attention. You should be able to quickly initiate several ID processes, and since they will be executed in sequence you can return later to inspect the results.
Even starting from the beginning again with the 970 images in this import, I’m seeing a processing time of slightly over 2 minutes for most identification jobs to return results, with a rare one taking up to 8 minutes and some taking less. This comes to a maximum of 30-35 hours of processing time that does not require your attention, plus the results review, which would require the same time and attention regardless of processing. Since some of your images did not result in a detection, this time will be less.
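As a quick sanity check on that estimate (just a back-of-envelope sketch, assuming all 970 images get an ID job at roughly 2 minutes each; the real per-job time varies as noted above):

```python
# Rough estimate of unattended ID processing time for this import.
num_images = 970        # images in the import (from the discussion above)
minutes_per_job = 2.0   # typical observed time per identification job

total_hours = num_images * minutes_per_job / 60
print(f"~{total_hours:.0f} hours of queue time")  # lands in the 30-35 hour range
```

Images without a detection never enter the ID queue, so the true total should be somewhat lower.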
Anecdotally, I often see processing times speed up as a large number of images are matched, and I hope this will be the case for your data too.
Thank you again! I thought I actually had to do one at a time, and that doing many would slow down the system even more! So I will continue as you suggest! I learn all the time!
I will just let you know that after I removed about 100 bad-quality images, the ID process suddenly ran as quickly as you described above. From my perspective, it seemed as if the bad images in a way “blocked” the ID processing.
I want to retry a bulk import with automatic annotations and viewpoint assignments of my jaguar images, to compare with those done one year ago. This time I will use only 260 images. I wish to see new identifications of individuals independent of the identifications done before. Since all these images are already in WB, would it in this case be smart to add a prefix to all image names to distinguish them from those already in WB? Once imported, will I be able to send them through the pipeline myself, or do I need to ask for help from you? Please give any other advice you feel is appropriate. Thank you for your help!