2014-2015 Statistics of Scores:
Hey there, scoresheet junkies! I'm always asking questions of our scores and processes. I'm a quality control professional in my "real life," so this is kinda how I think about everything. I looked at 3 major questions about our scores this year, and I've included the 2014-15 Season Score Trending for you to download and look at the details. If details aren't your thing, I've explained a bit below on each topic.
Topic 1: How often do ranks impact the final placement instead of total points?
We show total points on the score summary, but we use rank points as the final say in who placed where. I wondered how often ranks did not agree with our straight, no-drop total points. I guessed we'd see about 5% of all results differ, and it turns out it was 14.4%! Wow. That's often (to me). About half the time that change was "in your favor," and half the time it wasn't. A small number of cases, usually in very tight meets, changed results by more than one place. It shows that if we want to follow the judges' "intention," then rank points are a necessary complication nearly 15% of the time. What would be interesting to see: if we dropped the high and low points out of our total points, would we be more in line with the outcome the rankings suggested?
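To see how the two methods can disagree, here's a toy sketch in Python. The team names and scores are invented, and the rank-point rule is an assumption on my part: each judge's scores are converted to ranks (1 = best), and a team's rank points are the sum of its ranks across judges, lower being better. One judge scoring a team far below the others is exactly the kind of case where totals and ranks split.

```python
# Hypothetical illustration: placement by raw total points vs. placement
# by summed per-judge ranks ("rank points"). All scores are made up.
scores = {  # team -> per-judge scores, one entry per judge
    "Orange": [56, 56, 30],  # wins two judges, tanks with the third
    "Blue":   [55, 55, 52],
    "Green":  [50, 50, 58],
}

teams = list(scores)
n_judges = len(next(iter(scores.values())))

# Placement by straight, no-drop total points (higher total = better)
by_total = sorted(teams, key=lambda t: -sum(scores[t]))

# Rank points: each judge ranks the teams 1..N by their own scores,
# then each team's per-judge ranks are summed (lower sum = better)
rank_points = {t: 0 for t in teams}
for j in range(n_judges):
    ordered = sorted(teams, key=lambda t: -scores[t][j])
    for rank, t in enumerate(ordered, start=1):
        rank_points[t] += rank

by_rank = sorted(teams, key=lambda t: rank_points[t])

print("By total points:", by_total)   # Blue, Green, Orange
print("By rank points: ", by_rank)    # Orange, Blue, Green
print("Placements differ:", by_total != by_rank)
```

Here Orange takes first on rank points by winning two of three judges, but the one low score drags its total below both other teams, so the two methods produce completely different orders.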
Topic 2: Do large meets with more than 12 teams reduce the points spread between teams? Is that impacting the result?
For small meets with 2-6 teams (in your division), the spread between each team per judge averages about 4.6 points.
For medium-sized meets with 7-12 teams, that spread drops to 3.7 points per judge between each team.
For large meets with 13+ teams in the category, the spread is only 2.4 points between teams on average.
What does that mean for us? I think we still see accurate placements at large meets, but the scores lose some of their value as feedback because they get closer and "tighter" between teams the more of you there are. Team Orange might have been 5 points ahead of Team Blue on Friday night, but by Saturday's mega-meet, that winning margin dropped to about 2-3 points. Did Team Blue dance better? Maybe, maybe not; either way, not enough to beat Team Orange. What this tells me as a coach is not to get too invested in "we're getting closer to beating them!" You either beat that team or you didn't, and being closer or further away points-wise isn't a fully reliable signal.
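As a sketch of how a "points spread" stat like the one above could be computed: for each judge, sort that judge's scores and average the gaps between adjacent teams, then average across judges. This is my assumed definition, not necessarily the one behind the numbers above, and the scores are invented.

```python
# Hypothetical per-judge spread calculation; all scores are invented.
def avg_spread_per_judge(scores_by_team):
    """Average gap between adjacent teams, per judge, averaged over judges."""
    teams = list(scores_by_team)
    n_judges = len(next(iter(scores_by_team.values())))
    judge_spreads = []
    for j in range(n_judges):
        col = sorted(scores_by_team[t][j] for t in teams)
        gaps = [b - a for a, b in zip(col, col[1:])]
        judge_spreads.append(sum(gaps) / len(gaps))
    return sum(judge_spreads) / len(judge_spreads)

small_meet = {  # 4 teams, two judges: scores are well separated
    "A": [60, 58], "B": [55, 54], "C": [50, 49], "D": [46, 44],
}
large_meet = {  # 8 teams crowded into a similar score range
    "A": [60, 58], "B": [58, 56], "C": [56, 54], "D": [54, 52],
    "E": [52, 50], "F": [50, 48], "G": [48, 46], "H": [46, 44],
}
print(round(avg_spread_per_judge(small_meet), 2))  # 4.67
print(round(avg_spread_per_judge(large_meet), 2))  # 2.0
```

With twice the teams packed into roughly the same score range, the average gap shrinks, which is the "tightening" effect described above.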
Topic 3: Do scores follow a generally upward (linear) trend during the season?
Generally, teams I talk to during the year feel their improvement over a season should be both fairly linear and reflected in the judges' scores, within a margin of statistical error. I used an example of state-level A and AA teams, who should have had fairly smooth seasons from a starting score to a final score, demonstrating improvement as they went. In about half the cases in kick, and even more in jazz, teams did not see their scores follow an upward trend from November to Sections. In other words, the numerical results were, to a certain extent, not indicative of the progress made, assuming these teams all felt they improved over the year. This is another example of the subjectivity of the scoresheet making it difficult to read feedback and progress from the numbers.
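One simple way to check "did my season trend upward?" is to fit a least-squares line to (meet number, total score) and look at the slope. The sketch below does this by hand with two invented score series; it's an illustration of the idea, not the method used for the trending tab.

```python
# Hypothetical season-trend check: least-squares slope of score vs. meet
# number. Both score series below are invented examples, not real teams.
def slope(ys):
    """Least-squares slope of ys against meet index 0, 1, 2, ..."""
    xs = range(len(ys))
    n = len(ys)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

steady_team = [44, 46, 47, 49, 51, 53]  # scores climb meet to meet
jumpy_team = [50, 44, 58, 45, 57, 46]   # big swings, no clear trend

print(round(slope(steady_team), 2))  # clearly positive
print(round(slope(jumpy_team), 2))   # near zero
```

A clearly positive slope says the scores, at least on average, moved upward over the season; a slope near zero with big swings is the "wins with 58 one week, 45 the next" pattern.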
What our scoresheet does do is determine the winner that day, of that group (which, reminder: that's all it was designed to do!). About half the time, it hasn't been effective at charting a team's progress from average up to excellent in a meaningful way. Rank points, video review, and self-evaluation become even more important when you can win with 58 points one day and 45 points the next, without a poor performance to the coach's eye.

The table on tab 3 looks at average total score only, not individual categories, so it's pretty general. However, it's likely we would see the same trends (or lack thereof) in individual point categories over the season. Being at a "large meet" was also checked as a factor, but it was found not to affect whether a score was on trend. So the "squeezing" together of scores at large meets isn't what's throwing off your trend; you are just as likely to have an "off" (high or low) score at a mini meet, conference event, or Sections. In fact, many people had fairly low Sections scores this year, something I've personally seen almost every year as a class A coach.

I also notice jazz seems "more subjective" than kick tends to be. That agrees with my personal experience, and it's nice to see that subjectivity come to life in a real, meaningful way in scoresheet trending. I left out AAA teams because I didn't think we could trust their data to be linear, since they tend to change dances or take extended breaks from competing. You can of course look at your own scores and see whether they were more or less "onward and upward" and whether they were numerically accurate.
What I do with weekly meets to get feedback is always to compare to the other teams that day, not so much to yourself from last weekend. If you want to know what to work on, see where you're behind compared to the division; a score could have been your highest mark in that category all year and still be your biggest area for improvement. Funny how that works: it would be nice if turning a 5 into a 6 meant you got better, but it doesn't always, not if your friends all got 7s and you didn't. Makes you wish you had a math degree, huh? Or at least a magic 8 ball.
Happy analysis friends!!