This post has been percolating in my head for a while now, ever since I read, in Mark Lawrence’s blog post about the final round of this year’s SPFBO, that… Well, I’ll just quote it here:
I’m also encouraging bloggers to use the range of marks since if they mark all the books between 7 and 8 they will have a smaller impact on the final result than a blogger who scores between 2 and 9. (the range is 1 to 10).
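To see why scoring range matters so much, here’s a quick back-of-the-envelope sketch (purely illustrative; the book names and scores are made up, not from the actual contest):

```python
# Two hypothetical judges scoring the same two finalists.
# Judge A compresses all scores into the 7-8 band; Judge B uses 2-9.
narrow = {"book_x": 8, "book_y": 7}   # Judge A: spread of 1 point
wide   = {"book_x": 9, "book_y": 2}   # Judge B: spread of 7 points

# Combined totals, as in a simple add-up-the-scores contest:
total_x = narrow["book_x"] + wide["book_x"]  # 17
total_y = narrow["book_y"] + wide["book_y"]  # 9

# Of the 8-point final gap, 7 points came from Judge B alone.
# Even if Judge A had preferred book_y, Judge B's wider spread
# would swamp that preference in the final tally.
print(total_x - total_y)  # 8
```

In other words, a judge who uses the whole scale effectively casts a louder vote than one who clusters everything between 7 and 8 — which is exactly the imbalance the quoted advice is trying to avoid.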
There was also a comment there (and I’ve seen it suggested in a few places elsewhere, too) that bloggers might instead rank the books from 1 to 10 in order of preference, to similarly avoid clusters of near-identical scores from multiple reviewers.
I’ve been thinking about the positives and negatives of both systems. Ranking can work when you’re dealing with a fixed set of books, like the SPFBO finalists. It clearly wouldn’t work for regular book reviews, which cover an indefinite number of books over an indefinite period of time. I suppose I could theoretically keep track of everything and constantly renumber my reading preferences with every book I read, but that’s just ridiculous, and a waste of time I could spend reading another potentially good book.
So ranking does work for something like the SPFBO.
But I hate the idea of it.
Simply put, I think that ranking the books in order of preference would tell too little and also do some books a disservice. Let’s face it: one book has to come dead last. Say all the books in the final round are excellent, and under normal circumstances I’d have rated that last-place book 8/10.
Coming in last gives a much worse impression of the book than that 8/10 would.
Ranking like that gives no real indication of whether a book is good, or even whether the reader enjoyed it. It says which books the reader enjoyed more or less; all it really provides is comparative information. It doesn’t say at a glance whether I enjoyed a book, thought it had merit, or thought it was a terrible piece of writing.
It would make it easier to tell if a certain book stood out in the minds of the SPFBO judges, and for that reason, I can definitely see why some people would want a scoring system like that in place. If there’s a book that keeps hitting high on everyone’s lists, then chances are you know which book is going to be the winner. It’s an easy decision.
In theory. There’s always the possibility that all of us could split evenly between the same two books for the top spot and second spot on the list, resulting in a tie, and then we’re back at square one trying to find a fair way to decide the winner. Unlikely. But possible.
But as much as I dislike the idea of ranking the books, I can’t deny that the current system of scoring poses its own challenges. We currently rate books from 1 to 10, not as an indicator of where they fall on a personal preference list but more akin to the star-rating system you see on every website ever. Most book reviewers have some form of this system. We use it as a general indicator of quality, and usually we’ve got our guidelines pretty clearly posted. When we give a book a rating, you generally know what it means in terms of how we judged the book’s quality.
This is where Mark Lawrence’s comment above comes into play, encouraging us to use the full range of numbers for our ratings. It’s not so easy. We all have our different preferences and ways of judging what we read, but for starters, every book in the final round has been vetted by someone and judged the best of its initial batch. Sure, that might not mean the book they chose to pass forward is objectively good, but it’s still a book that someone has said, “Yes, the rest of you should read this.” There’s already a skew toward the positive end of the rating scale.
And yes, that means a lot of books are going to get similar numbers.
But that’s partly why I think it’s difficult to encourage us to use a wider variety of numbers. It mostly only works on paper. It’s easy to say, “Don’t be afraid to rate books low if you don’t like them,” (admittedly, advice I should probably have paid closer heed to last year), but if all the books are pretty decent ones that you enjoyed? It might come down to nitpicking small things in order to artificially create that wider range that makes the final score more varied and easier to pull a winner from.
Both methods have their ups and downs, especially in different circumstances. It was pointed out that a ranking system would require either no updates from judges until we’d read all the books, or else constant updating every time we read a new book in order to shuffle the positions on the ranking scale. The first is less entertaining for people following the challenge (and, I imagine, fairly nerve-wracking for the authors), and the second requires extra work. Ratings are no less tricky since, as previously said, if we all rate most of our books within the same narrow range, we run the risk of multiple books ending up with the same final score, which makes it difficult to choose a winner.
Myself, I’m fond of ratings over rankings. But that’s just personal preference, and it’s also partly based on what I’m used to doing. I don’t think either one is generally superior to the other. I guess it all comes down to preference and suitability for the task at hand.
But I am definitely open to opinions and discussion on it all here, especially as it pertains to the SPFBO. How would you like to see the books dealt with: ranked or rated?