
Z-Score Ranking

  • LISEF normalizes judges’ scores to calculate each project’s position relative to the other projects in the same category. As an individual judge’s scores are entered, a mean and a standard deviation are calculated over all of that judge’s projects, and these are used to determine a Z-score for each project that judge evaluated, measuring how far each score deviates from that judge’s mean.
  • An average Z-score is calculated for each project by averaging the Z-scores from all of the judges who scored that project.
  • The projects are then ranked based upon the Z-score averages. The project with the highest Z-score average ranks the highest in the category.

Each judge uses a different range of scores. A sample scoring (pdf) of a category with 5 judges demonstrates how the range of scores a judge uses for their assigned projects affects the outcomes; a minimal computational sketch of the same calculation appears below.
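The sketch below is a minimal illustration of the calculation described above. The judge names, project IDs, and scores are hypothetical, and the population standard deviation is used purely for illustration (the page does not specify which variant LISEF uses).

```python
from statistics import mean, pstdev
from collections import defaultdict

# Hypothetical raw scores: judge -> {project: score}.
# Judge A scores in a high, narrow range; Judge B in a wide, lower range.
raw_scores = {
    "Judge A": {"P1": 95, "P2": 90, "P3": 85},
    "Judge B": {"P1": 70, "P2": 55, "P3": 40},
}

# Step 1: convert each judge's scores to Z-scores using that judge's own
# mean and standard deviation, so each judge's scoring range is normalized away.
z_by_project = defaultdict(list)
for judge, scores in raw_scores.items():
    mu = mean(scores.values())
    sigma = pstdev(scores.values())
    for project, score in scores.items():
        z_by_project[project].append((score - mu) / sigma)

# Step 2: average the Z-scores each project received from its judges.
avg_z = {project: mean(zs) for project, zs in z_by_project.items()}

# Step 3: rank projects; the highest average Z-score ranks first in the category.
ranking = sorted(avg_z, key=avg_z.get, reverse=True)
print(ranking)   # ['P1', 'P2', 'P3']
print(avg_z)     # P1 ≈ +1.22, P2 = 0.0, P3 ≈ -1.22
```

Although the two hypothetical judges use very different raw-score ranges, both assign the same Z-scores to the same projects, which is what the normalization is meant to achieve.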


Advancement of Projects

  • On Day 1, we determine the top 25% (3 or more rounds of judging) or top 33% (2 rounds of judging) to invite to Round 2, a more intimate fair. While we try to winnow out the best projects on Day 1, we want to be even more discerning on Day 2.
  • Day 2 is the day we determine which projects will advance to the International Science & Engineering Fair. The Day 1 and Day 2 average Z-scores for each student are combined to determine our winners, with the Day 2 average Z-score weighted more heavily according to the following rules (see the sketch after this list):
    1. If the ratio of rounds of judging on Day 1 to rounds of judging on Day 2 is less than 0.5, the Day 1 average Z-score counts for 15% and the Day 2 average Z-score counts for 85%.
    2. If the ratio is equal to 0.5, the Day 1 average Z-score counts for 25% and the Day 2 average Z-score counts for 75%.
    3. If the ratio is greater than 0.5, the Day 1 average Z-score counts for 33 1/3% and the Day 2 average Z-score counts for 66 2/3%.
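
The rules above amount to a small weighting function. The sketch below is an illustrative implementation, assuming the ratio is computed directly from the round counts; the function name and the example numbers are hypothetical, not LISEF's.

```python
def combined_z(day1_avg_z: float, day2_avg_z: float,
               day1_rounds: int, day2_rounds: int) -> float:
    """Weight the Day 1 and Day 2 average Z-scores per the rules above."""
    ratio = day1_rounds / day2_rounds
    if ratio < 0.5:
        w1, w2 = 0.15, 0.85        # rule 1
    elif ratio == 0.5:
        w1, w2 = 0.25, 0.75        # rule 2
    else:
        w1, w2 = 1 / 3, 2 / 3      # rule 3: 33 1/3% and 66 2/3%
    return w1 * day1_avg_z + w2 * day2_avg_z

# Example: 2 rounds of judging on Day 1 and 4 rounds on Day 2 gives a ratio
# of exactly 0.5, so Day 1 counts for 25% and Day 2 for 75%.
print(combined_z(day1_avg_z=1.10, day2_avg_z=0.40,
                 day1_rounds=2, day2_rounds=4))   # 0.575
```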