Based on the data below, we have another problem altogether.
None of my imports has this many images (media assets), and I would expect at minimum as many encounters as images (counting the trivial detection), so the data in the table below looks off.
I did some more benchmarking and updated the code. Calculating the counts (via a query) and evaluating their accuracy are tricky because there are several numbers to consider:
- Number of imported spreadsheet rows
- Number of Encounters, which may not equal the number of rows for social species, since finding multiple annotations in an image spawns new Encounters
- Number of MediaAssets, which for solo species may be close to the number of Encounters, but may be much smaller for social species because one photo can have many annotations
And the above may not match exactly if there is any photo duplication in the bulk import spreadsheet rows.
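To make the relationship between the three counts concrete, here is a minimal sketch. The row/annotation structure is invented for illustration and is not the actual import schema; it only shows why Encounters can exceed rows while MediaAssets (deduplicated on the photo) can fall below both.

```python
# Hypothetical import rows: each references one photo by filename and
# may yield several annotations (one Encounter per annotation).
rows = [
    {"photo": "a.jpg", "annotations": 3},  # social group: 3 animals in one photo
    {"photo": "b.jpg", "annotations": 1},  # solo animal
    {"photo": "a.jpg", "annotations": 3},  # duplicate photo reference in the spreadsheet
]

num_rows = len(rows)
# One Encounter per annotation, so Encounters can exceed the row count.
num_encounters = sum(r["annotations"] for r in rows)
# MediaAssets deduplicate on the photo itself, so duplicate rows collapse.
num_media_assets = len({r["photo"] for r in rows})

print(num_rows, num_encounters, num_media_assets)  # → 3 7 2
```

The duplicate `a.jpg` row is why the numbers "may not match exactly": it inflates the row and (naively) the Encounter tallies while leaving the MediaAsset count unchanged.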
Going through the current image-count calculations, I see the table now matching each Import Task page's number of media assets. So I think we're closer, if not there, in the estimation. I welcome feedback, as the calculation has some complexity to it.