LISEF Judges – ZScore Ranking

LISEF normalizes judges’ scores to calculate a project’s position relative to other projects in the same category. As an individual judge’s scores are entered, the mean and standard deviation of all of that judge’s scores are calculated; these are used to determine a Zscore for each project that judge scored, measuring how far each project deviates from that judge’s own mean.
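Concretely, this is the standard z-score formula, z = (x − mean) / standard deviation, applied per judge. A minimal Python sketch (the function name is illustrative, not LISEF's actual software; the population standard deviation is assumed here, though LISEF may use the sample version):

```python
from statistics import mean, pstdev

def judge_zscores(scores):
    """Convert one judge's raw scores into Zscores.

    Each Zscore measures how far a project's raw score sits from
    that judge's own mean, in units of that judge's standard
    deviation (population std; an assumption of this sketch).
    """
    m = mean(scores)
    s = pstdev(scores)
    return [(x - m) / s for x in scores]

# A judge who scored four projects 92, 88, 85, 79:
print(judge_zscores([92, 88, 85, 79]))
```

Projects above the judge's mean get positive Zscores, projects below it negative ones, and the Zscores of any one judge always average to zero.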

An average Zscore is calculated for each project by averaging the Zscores from all of the judges who scored that project.

The projects are then ranked based upon the Zscore averages. The project with the highest Zscore average ranks the highest in the category.

Each judge uses a different range of scores. A sample scoring (pdf) of a category with 5 judges demonstrates how the range a judge uses to score their assigned projects impacts the outcomes.
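The averaging and ranking steps above can be sketched as follows (the data shapes and names are illustrative; in practice each judge sees only a subset of the category's projects):

```python
from collections import defaultdict
from statistics import mean

# Per-judge Zscores, keyed by project ID (hypothetical data).
zscores_by_judge = {
    "judge_1": {"P1": 1.2, "P2": -0.3, "P3": -0.9},
    "judge_2": {"P1": 0.8, "P2": 0.1},
    "judge_3": {"P2": -0.5, "P3": 1.1},
}

# Collect every Zscore each project received, then average them.
per_project = defaultdict(list)
for scores in zscores_by_judge.values():
    for project, z in scores.items():
        per_project[project].append(z)

avg_zscore = {p: mean(zs) for p, zs in per_project.items()}

# The highest average Zscore ranks first in the category.
ranking = sorted(avg_zscore, key=avg_zscore.get, reverse=True)
print(ranking)  # → ['P1', 'P3', 'P2']
```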
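The effect is easy to see with two hypothetical judges whose raw ranges differ but whose relative orderings agree: after normalization their Zscores are identical, so neither judge's scale dominates the ranking.

```python
from statistics import mean, pstdev

def zscores(scores):
    m, s = mean(scores), pstdev(scores)
    return [round((x - m) / s, 4) for x in scores]

generous = [90, 85, 80]   # scores high, in a narrow band
tough    = [70, 60, 50]   # scores low, across a wide band

print(zscores(generous))  # [1.2247, 0.0, -1.2247]
print(zscores(tough))     # [1.2247, 0.0, -1.2247]
```

Only when a judge's *ordering* of projects differs from the other judges' does that judge shift the final ranking.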
Advancement of Projects

On Day 1, we determine the top 25% (3 or more rounds of judging) or top 33% (2 rounds of judging) to invite to Round 2, a more intimate fair. While Day 1 winnows the field down to the best projects, we want to be even more discerning on Day 2.
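As a rough sketch of the Day 1 cutoff (the function name is illustrative, and rounding the count up is an assumption; the source does not state how fractional cutoffs are rounded):

```python
import math

def day1_invitations(num_projects, judging_rounds):
    """Number of projects invited to Round 2.

    Top 25% with 3 or more rounds of judging, top 33% with 2.
    Rounding up is an assumption, not a stated LISEF rule.
    """
    fraction = 0.25 if judging_rounds >= 3 else 1 / 3
    return math.ceil(num_projects * fraction)

print(day1_invitations(40, 3))  # → 10
print(day1_invitations(40, 2))  # → 14
```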

Day 2 is the day we determine which projects will advance to the International Science & Engineering Fair. Each project's Day 1 and Day 2 average Zscores are combined to determine our winners, with the Day 2 Zscore weighted more heavily using the following rules:

If the ratio of the number of rounds of judging on Day 1 to the number on Day 2 is less than 0.5, the Day 1 average Zscore counts for 15% and the Day 2 average Zscore for 85%.

If the ratio is exactly 0.5, the Day 1 average Zscore counts for 25% and the Day 2 average Zscore for 75%.

If the ratio is greater than 0.5, the Day 1 average Zscore counts for 33 1/3% and the Day 2 average Zscore for 66 2/3%.
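The three weighting rules can be combined in one helper (the function name is illustrative; the source only specifies behavior for ratios below, at, and above 0.5):

```python
def combined_zscore(day1_avg, day2_avg, day1_rounds, day2_rounds):
    """Weighted combination of Day 1 and Day 2 average Zscores.

    The Day 1 weight depends on the ratio of Day 1 to Day 2
    judging rounds: 15% below 0.5, 25% at exactly 0.5, and
    33 1/3% above 0.5. Day 2 receives the remaining weight.
    """
    ratio = day1_rounds / day2_rounds
    if ratio < 0.5:
        day1_weight = 0.15
    elif ratio == 0.5:
        day1_weight = 0.25
    else:
        day1_weight = 1 / 3
    return day1_weight * day1_avg + (1 - day1_weight) * day2_avg

# 1 round on Day 1, 2 rounds on Day 2 -> ratio 0.5 -> 25/75 split
print(combined_zscore(1.0, 0.5, 1, 2))  # → 0.625
```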