Ranking spreads: is there a solution to the issue of far-apart judges?
By Andrew Berthoff
The 2004 World Pipe Band Championships, true to form, had its share of piping judges being nine, 12 and, in one case, 21 places apart in how they ranked certain bands. The usual beer-tent and back-room discussions have taken place, generally by people unwilling to discuss the matter in public, or to put their name to their thoughts, for fear of political reprisal against themselves or their band.
But the matter deserves to be brought out into the open and, if Piper & Drummer Online won’t do it, who will?
First, let’s consider the basics of the issue. (We refrain from calling it a “problem,” since our game is subjective, and it is conceivable that two judges can honestly deem the same pipe section to be first and 22nd.) When two judges give the same band a dramatically different ranking the inevitable result is the band feeling confused. Was their pipe section really first, or was it really 22nd? Or is it actually more deserving of somewhere in between? What margin of separation between piping judges is acceptable? Were judges’ actions motivated by anything besides what they heard?
All of those questions are difficult to answer. The acceptable margin, or, more aptly, the threshold at which difference of opinion becomes a concern, depends on the number of bands in a competition. In a contest of 10 bands, piping judges who have a band within two or even three placings generally don’t raise eyebrows. Four, five, or more rankings apart in a 10-band event, and people will talk.
Similarly, a 20-band contest can generally withstand separation of four-to-six placings, but eight-to-10 makes for lively beer tent chatter. The 21-place variance in the World’s 24-band 3A event has resulted in tremendous hue and cry and consternation.
How can such a thing happen? While 21-place spreads aren’t usual, they are plausible when one remembers how subjective the exercise is.
We asked about 10 prominent judges and pipe-majors for their take on the ranking-spreads issue. As usual, those who responded requested that their names not be included. Some preferred not to comment at all, presumably fearing for their political necks. Perhaps a day will come when prominent people aren’t afraid to put their names to their words and, remarkably, that fear is part of the issue at stake here, too. More on that later.
But how can such big ranking spreads happen?
“A number of things come into play and all of them are based on the subjective opinion of the adjudicator,” says one experienced adjudicator. “That’s all based on the judge’s experience and training, which includes their natural bias regarding music. For example, some judges may dislike syncopated hornpipes or harmonies in marches. One judge may hear significant errors on the side of the band where they stand, or, conversely, heavenly playing and sound, while the other judge may have missed it due to their position while listening.”
“It can happen for a number of reasons,” says an RSPBA adjudicator who requested that his comments be kept anonymous. “A few years ago one piper in a prominent band lost the clamp on the back of his Ross bag in the middle of the performance. Both piping judges caught it and put the band last. But, if one judge had missed it and had placed the band quite high in the list, the matter could have been sorted out in the consultative process that was used then.”
But it’s not always as clear-cut as a piper dummying for all or part of a performance. Just as judging pipe bands is subjective, so too are “mistakes” subject to personal interpretation. A missed attack could be relatively inconsequential to one judge or a calamity to another. One judge may feel one or two mistakes from 18 pipers in a seven-minute medley are forgivable, while another may home in on them and punish severely. Each approach is within the realm of acceptability.
Since the RSPBA and the Pipers & Pipe Band Society of Ontario did away with allocating a set number of points to each element of the performance (e.g., 10 possible points for the attack, 15 for tone, and so on), penalization has become even more nebulous from judge to judge, and judgment even less black-and-white.
Is it reasonable for bands to accept such big spreads?
“I don’t think we have a choice,” continues one adjudicator. “Judging is subjective to the nth degree. This is music, this is art, and spreads are a fact of life as long as we hire experienced, qualified folks with strong artistic opinions. I’d hate to think we would hire, or try to create, a type of judge who didn’t have a strong picture in his or her mind of what good music sounds like or, worse, have some kind of arbitrary ‘musical’ template legislated as to what is a good and what is a bad band performance.”
But aren’t there certain fundamentals – like tone, tuning, and unison – that are either right or wrong? Chanters are either in-tune or out-of-tune, right? Drones are either steady or they’re not, right? Note intervals are correct or they’re out of balance, right?
Well, not so fast. Again, while some elements of the instrument and how it’s played are generally accepted as “correct,” there are always differing opinions, particularly with those – like judges – who have been around the longest and heard the most. One judge may prefer the quieter ring of a certain blackwood chanter over the more robust tone of a plastic model. Both sounds may be “correct,” but it always depends on whom one asks. A good judge will never be afraid to express an opinion, however different it might be, and thus ranking spreads occur.
Further, the overall standard of each grade improves every year, especially at the top tier. As quality improves, more and more hairs will be split, and ranking spreads between judges can understandably widen. Judges will increasingly assess relatively minuscule nuances, because the big basics, like tuning, tone, and unison, become a given from band to band. One judge’s pet likes and pet peeves are often not even considered by his clipboard colleague.
The real impact of spreads
It should not be forgotten that the real impact of wide ranking spreads isn’t always on the band that had them; it’s often on another band altogether.
Another RSPBA adjudicator says, “One of the concerns when this happens is that if three judges place the band quite low and only one judge places them high, as was the case in the World’s Grade 3A contest, then the band doesn’t make the prize list and keeps a band that should make the list out of the top six. Conversely, if three judges place a band relatively high and only one places them low, it could potentially keep a band that should be in the top six out of the list and result in a band that probably should be just out of the list, popping in for a bottom-end prize. The contest in question this year was the former – only one judge had the band high up, while the others had them quite low. So, it wasn’t the band from Edinburgh that was denied a prize that is the problem. It is the band that was out of the prize list that was denied a prize because of the first-place that was at odds with the rest of the adjudicators’ results.”
It’s clear that ranking spreads can have a far wider impact than just the confusion felt by the band that received them. Other bands’ results are usually directly affected by the ranking variance, even if their own performance was ranked relatively closely by the same two judges.
The other impact is fun. Everyone we know participates in pipe band competitions because they enjoy them; they have fun. When competitors are confused or dissatisfied, it diminishes their enjoyment. Associations should never lose sight of the fact that we’re all here to have fun.
In the 1990s the RSPBA introduced consultative judging. After all competing bands had played, the judges would get together to discuss their results, with the aim of sharing opinions and pointing out the pros and cons each had heard. Adjudicators then had the opportunity to alter their marks, and rank bands accordingly.
While the consultative judging system was often effective, it just as often ran into problems. Stronger personalities and more senior or experienced judges were thought to be browbeating newer or more impressionable members of the panel into altering their marks. There were whispers of unpleasant scenes in the consultations themselves, and reports of judges who refused to alter their marks or take part in the process, or who simply bypassed the consultation session altogether.
“I always heard horror stories in the past of how it didn’t work,” says a judge. “You know, ‘This famous, bull-necked pipe-major browbeat me into changing my result,’ or ‘The strong personalities always prevail.’ I do think there is merit in trialing it again if there was an impartial and qualified observer to assess the pilot. That way, the trial might be assessed with data that is not so personality-driven. The concept of consultative judging looks really good on paper – to me, at least.”
What about the competitive solo piping world? There, every competition with a “bench” – that is, two or more judges assessing an event – concludes with the adjudicators retiring off-stage to discuss their thoughts. Judges and competitors accept the fact that there will be differing opinions even when it comes to single performers who can be easily seen and heard. The bench comes to an acceptable agreement on a final prize-list and they and the competitors live with it. A judge who has a piper first can often accept placing that piper third, fourth, or even fifth if the judge’s two counterparts have the same piper far down, or not even in, their lists.
The consultative judging approach has worked in solo piping contests for 150 years, so why not in pipe band competitions? After all, assessing a pipe section of 18 can be far more challenging than judging a solo piper. What goes on at one side of a band can be missed by a judge standing 50 feet away, the sound blocked by space and drumming.
Pipe band judges who have experienced top solo piping competitions as competitors and judges are familiar with the consultative process, and these are the people who are probably best suited to promote and teach it. It is understandable, then, that a pipe band judge who has never been a top solo competitor would find the consultative system foreign, and might even exploit it to serve his or her own self-interest.
One prominent judge summed the World’s Grade 3A example up nicely: “I’m sure that they would have sorted it out in the consultative process. So, to that end I say, bring it back!”
Should there be an inquiry?
After a contest with wide ranking spreads between piping judges, bands are inevitably left confused. If a contest is to be satisfying to those competing, it needs clarity. And since we have established that, because of art’s subjective nature, differences of opinion are inevitable, ranking spreads will happen.
When asked if associations should have follow-on inquiries, another prominent piping judge says, “Definitely. There must be a standard facility in place that allows assessment of adjudicating ‘surprises.’ Bands need to know that something is, at least, seen to be done to monitor and address wild swings of opinion. There’s a conspiracy theory at every turn in this competitive pipe band game and a process of assessment could easily be put in place. Assessment might mean a group representing the music experts of a given organization looks at the appropriate score sheets, listens to available recordings, has a standard set of questions that are posed to each adjudicator and, finally, consults with the event’s ensemble judge.”
But that still doesn’t solve the problem of a band’s confusion. Were they 22nd or were they first or should they have been 11th from both judges? This is where associations and their music boards should step in, not to change the results, but to assess the situation and report back to the band in question.
We suggest that any time there is more than, say, a 50 per cent divide between two judges (e.g., a seven-rank difference in a 15-band contest, a 15-rank spread in a 30-band contest, and so on), the matter should be addressed as soon as possible after the contest by the association running the competition. The association’s music board should listen to any recordings that might be available, read the score sheets, hear other comments from those who were judging, and get back to the band with a brief report on why the discrepancy occurred.
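For the mathematically inclined, the suggested trigger amounts to a one-line check: flag any pair of rankings whose gap exceeds half the field. The function name and structure below are purely illustrative, a sketch of the proposal rather than any association’s actual procedure:

```python
def needs_review(rank_a: int, rank_b: int, field_size: int,
                 threshold: float = 0.5) -> bool:
    """Return True if two judges' rankings of the same band differ by
    more than `threshold` (expressed as a fraction of the field size)."""
    spread = abs(rank_a - rank_b)
    return spread > threshold * field_size

# The World's Grade 3A example: 1st vs. 22nd in a 24-band contest
print(needs_review(1, 22, 24))  # True: a 21-place spread exceeds 12
```

A two- or three-place split in a 10-band event would fall well under the threshold and pass without comment, which matches the intuition described earlier in the article.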
As with most pipe band associations, improved communication between membership, judges, music board members, and elected officials would solve most big problems, and would go a long way toward assuaging the issue of wide ranking spreads. If we’re unhappy with the way things are, it’s generally because we don’t discuss them in open and intelligent ways, as we’re trying to do in this article.
It is a fact of pipe band and solo piping and drumming competition that, because of its subjective nature, there will be differences of opinion. As a result, a competition of any size with every competitor leaving completely happy after the results are announced is a near-impossibility.
But we can improve the level of competitor satisfaction simply by improving the way we communicate. Ranking spreads – even ones of 21 places – will happen, but they can be mitigated by:
1. Encouraging a post-event consultative process monitored by a third party to ensure that powers are not abused.
2. Initiating a post-competition inquiry by the music board of the contest’s governing association, considering the various sides of the situation.
3. Reporting back to bands impacted by ranking spreads so that a resolution may be reached, and so the matter can be closed.
4. Discussing problems not in hushed tones, or by anonymous people in beer tents, band halls, chat rooms and forums, but in open, intelligent ways by people not afraid of rocking some political boat.
Ranking spreads will never go away, but they can conceivably be narrowed by the methods discussed here. The first step is to open up the dialogue, to ensure that an association’s members don’t fear the association itself. Once everyone starts talking – judges, competitors, music boards, and executives – the issue will gain clarity. Until then, confusion will continue to abound.
What do you think? We always want to hear from our readers, so please use our comment system to provide your thoughts!
Do you have an idea for a feature story that you would like to read or write? Be sure to send your concept to the pipes|drums. We can’t report what we don’t know about! Please remember to support the businesses that advertise and make the not-for-profit p|d possible.