How Well a Model Predicts: Measuring Results


  • Precision in an AI model is the equivalent of the Positive Predictive Value (PPV) in medicine.
  • PPV is the proportion of true positives among all positive test results (the probability of having the condition when the test is positive).
  • Precision thus measures the ability to return true positives among all positive results, i.e. how valid the positive results are.


  • Recall in an AI model is the equivalent of sensitivity in medicine.
  • It measures how complete the results are.
  • A high recall implies that the test detects most affected individuals, i.e. few cases are missed (few false negatives).
  • This is not to be confused with the positive predictive value (PPV), which gives the proportion of positive results that are true positives.
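The two definitions above reduce to simple ratios over confusion-matrix counts. A minimal sketch (function names are illustrative, not from the article):

```python
def precision(tp: int, fp: int) -> float:
    """Precision (PPV): true positives out of all predicted positives."""
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    """Recall (sensitivity): true positives out of all actual positives."""
    return tp / (tp + fn)

# Hypothetical medical test: 90 true positives, 10 false positives,
# 30 affected individuals missed (false negatives).
print(precision(90, 10))  # 0.9 -> 90% of positive tests are correct
print(recall(90, 30))     # 0.75 -> 75% of affected individuals are found
```

Note that the two measures use different denominators: precision divides by everything the model flagged, recall by everything it should have flagged.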

Examples (Wikipedia)

  • 1. Suppose a computer program for recognizing dogs (the relevant element) in photographs identifies eight dogs in a picture containing ten cats and twelve dogs, and of the eight it identifies as dogs, five actually are dogs (true positives), while the other three are cats (false positives). Seven dogs were missed (false negatives), and seven cats were correctly excluded (true negatives). The program’s precision is then 5/8 (true positives / all positives) while its recall is 5/12 (true positives / relevant elements).
  • 2. When a search engine returns 30 pages, only 20 of which are relevant, while failing to return 40 additional relevant pages, its precision is 20/30 = 2/3, which tells us how valid the results are, while its recall is 20/60 = 1/3, which tells us how complete the results are.
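The arithmetic in both examples can be checked with exact fractions:

```python
from fractions import Fraction

# Example 1, dog recognizer: 5 true positives, 3 false positives,
# 7 dogs missed (false negatives).
dog_precision = Fraction(5, 5 + 3)       # 5/8
dog_recall = Fraction(5, 5 + 7)          # 5/12

# Example 2, search engine: 20 relevant pages out of 30 returned,
# 40 additional relevant pages not returned.
search_precision = Fraction(20, 30)      # 2/3
search_recall = Fraction(20, 20 + 40)    # 1/3

print(dog_precision, dog_recall, search_precision, search_recall)
```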
