Right now both scripts/analyzeMLObjectLevel.js and scripts/analyzeMLSequenceLevel.js can analyze Megadetector independently, or they can analyze the results of using both Megadetector and a classifier together.
If assessing the latter, a true positive would mean that (a) the object detector correctly identified the object, and (b) the classifier correctly identified the class. So a false negative could mean that the object detector correctly identified the object, but the classifier incorrectly identified the class.
This is fine if you're interested in assessing how a pipeline is performing end-to-end, i.e. to answer questions like, "If a rodent trips a camera, how likely is it that both my object detector and my classifier will work as expected and I will receive an alert?"
However, it's not as useful for assessing the performance of the classifier independently from the object detector. To analyze classifier performance independently, you'd need to filter the objects being evaluated to only include those that met the Automation Rule conditions required to get run through the classifier. In most cases this would just mean that the object had a Megadetector prediction of "animal".
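The filtering step above could be sketched roughly as follows. This is a hypothetical illustration only: the object shape, field names (`labels`, `mlModel`, `category`), and the `'megadetector'`/`'animal'` values are assumptions, not the actual Animl schema, and a real implementation would check the project's Automation Rule config rather than hard-code the condition.

```javascript
// Hypothetical sketch: keep only objects that would have been routed to the
// classifier, i.e. objects with a Megadetector "animal" prediction.
// Field names and label values are assumed, not taken from the real schema.

function passedToClassifier(obj) {
  return obj.labels.some(
    (l) => l.mlModel === 'megadetector' && l.category === 'animal'
  );
}

// Toy data standing in for objects returned from the database
const objects = [
  {
    id: 1,
    labels: [
      { mlModel: 'megadetector', category: 'animal' },
      { mlModel: 'classifier', category: 'rodent' },
    ],
  },
  {
    id: 2,
    labels: [{ mlModel: 'megadetector', category: 'vehicle' }],
  },
];

// Only objects that met the (assumed) Automation Rule condition are evaluated
const classifierEvalSet = objects.filter(passedToClassifier);
console.log(classifierEvalSet.map((o) => o.id));
```

With a filter like this applied before computing the confusion matrix, a misclassified "vehicle" detection would no longer count against the classifier, since the classifier never saw that object.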