Across all of your examples, these are 0-score results, meaning there is no evidence of a match. When we ask PIE to match an annotation against another set of annotations (e.g., all snow leopard annotations, or just a subset for a location), it returns a ranked list of potential matches sorted by score in descending order. However, PIE has no way to order potential matches that score 0 (no corresponding features), so those entries come back as an effectively random, unsorted list of non-matches. How (or whether) we should display 0-score results from PIE is worth discussing internally (Hotspotter does filter out 0-score matches, I believe), but for this use case 0 scores should simply be ignored.
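For illustration only, here is a minimal sketch (hypothetical names and result structure, not the actual PIE or Hotspotter API) of what "ignore 0 scores" could look like on the caller side:

```python
# Hypothetical sketch: filter and rank (annotation_id, score) pairs as a
# caller might receive them from a PIE-style matcher. Not the real PIE API.
from typing import List, Tuple

def rank_matches(results: List[Tuple[str, float]]) -> List[Tuple[str, float]]:
    """results: assumed (annotation_id, score) pairs from the matcher."""
    # Drop 0-score entries: they carry no matching evidence and their
    # relative order is arbitrary, so displaying them is misleading.
    nonzero = [(aid, score) for aid, score in results if score > 0]
    # Present the remaining candidates in descending score order.
    return sorted(nonzero, key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    example = [("annot_17", 0.0), ("annot_03", 0.82),
               ("annot_41", 0.0), ("annot_09", 0.55)]
    print(rank_matches(example))
    # [('annot_03', 0.82), ('annot_09', 0.55)]
```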