Annual reviews: It's high time we rid the world of this insanity

There is no way a human could have invented such a devilish system

A couple of minutes to fill in a form? Think again... better book next month off

Nevertheless, the main reason a wave of dread sweeps through the office when the annual review season begins is that the whole thing takes ages and entails a tremendous amount of work for everyone involved.

You review yourself; your pool of reviewers all do the same; your manager writes your final review, taking all the above into account; you and your manager sit down and go over your final review; you’re given the opportunity to enter comments in response to your manager’s review if there’s anything you disagree with; and finally, you “accept” your review.

It’s perfectly understandable that the reviewing period always runs for several months, since writing just one full review is no small task, and people can have stacks of them to complete — in addition to their regular daily work.

I’ve known people who had to write reviews for thirty people or more. Many of them booked several days off so they could get through the reviews at home, undisturbed. It’s practically unheard of for anyone to finish all his or her reviews before the final days of the review season, since the process takes so much time and people inevitably procrastinate for as long as possible.

On top of being painfully drawn-out and taxing, the financial industry’s review process is outright unfair in several ways. For one thing, if you take issue with anything in your review you have no recourse: there are no appeals, and not “accepting” the review is literally not an option. (You can officially note your disagreements in your comments, but this has no tangible effect on anything.)

Bosses looking for curves - and not in a good way

There’s also the more fundamental flaw that, underneath all the consultant-designed “precision” and “objectivity”, the process can’t prevent subjectivity from creeping in. People are people, and there’s always the chance that personal feelings, grudges, and rivalries may influence what they say. It’s also possible that one or more of your reviewers will have incomplete knowledge of what you do from day to day.

The system can also be “gamed” in subtle ways: managers will sometimes work backwards, deciding on what a team member’s final score “should” be and then entering ratings in the individual sections that will generate that final number.
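To make the backwards arithmetic concrete, here’s a toy sketch. It assumes a hypothetical scheme in which the final score is simply the mean of equally weighted sections rated in half-point steps; the function name and parameters are invented for illustration, not any real firm’s system.

```python
# Toy illustration of "working backwards": the manager fixes the final
# score first, then back-fills section ratings that average out to it.
# Hypothetical scheme: final score = plain mean of equally weighted
# sections, rated in half-point steps.

def backfill_sections(target_final: float, n_sections: int, step: float = 0.5):
    """Return section ratings whose mean lands on (or just above) target_final."""
    base = (target_final // step) * step      # nearest step at or below the target
    sections = [base] * n_sections
    while sum(sections) / n_sections < target_final - 1e-9:
        sections[sections.index(min(sections))] += step   # bump the lowest section
    return sections

# The "right" answer has already been decided: 4.4 out of 6.
print(backfill_sections(4.4, n_sections=5))   # -> [4.5, 4.5, 4.5, 4.5, 4.0]
```

The individual section ratings, supposedly the considered output of the review, are just whatever numbers happen to hit the predetermined total.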

But the action that, above all others, can make a mockery of the whole review process is the “curve-fitting” that some firms employ. Senior management generally expects the ratings in any manager’s team to roughly fit a curve: say, 10 per cent with a rating between 1.0 and 1.5, 40 per cent between 1.6 and 3.0, 40 per cent between 3.1 and 4.5, and 10 per cent between 4.6 and 6.0.

If a manager’s ratings are skewed in one direction — such as too many people in the 4.6-to-6.0 group, too few in the 1.0-to-1.5 group — he or she may be asked to move some people from one group to another to get the percentages closer to the ideal curve. I’ve seen this kind of thing at several firms, and it’s not fun: after spending days going over a team member’s accomplishments, and reading reviews of that team member from six other peers, a manager must change the final score he or she came up with simply by order of a more senior manager.
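For the curious, here is roughly what that check amounts to in a few lines of Python. The bands are the example percentages above; the team’s ratings and the skew threshold are invented for illustration.

```python
# Toy illustration of the curve check: bucket a team's ratings into the
# example bands above and compare against the "ideal" percentages.
# The team, the bands and the skew threshold are all invented numbers.

BANDS = [  # (low, high, expected share of the team)
    (1.0, 1.5, 0.10),
    (1.6, 3.0, 0.40),
    (3.1, 4.5, 0.40),
    (4.6, 6.0, 0.10),
]

team = [4.7, 4.9, 5.2, 4.6, 3.8, 4.1, 2.9, 2.2, 4.8, 5.5]

for low, high, target in BANDS:
    actual = sum(low <= r <= high for r in team) / len(team)
    flag = "  <-- will be asked to 'fix' this" if abs(actual - target) > 0.05 else ""
    print(f"{low}-{high}: target {target:.0%}, actual {actual:.0%}{flag}")
```

Note what a check like this actually measures: the distribution of scores, not whether any individual score is right. A flagged manager ends up editing the numbers rather than the judgments behind them.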

A 4.7 score now becomes a 4.4, so what was the point of putting all that work into the original review?

This exercise can also introduce odd discrepancies: while the manager may have been forced to change the employee’s score, the text-based portion of the review will usually remain unchanged, reflecting the old higher score. (No one would go so far as to ask the manager to add unfounded criticisms of the employee to justify the lower score.)

Employees will often notice this and remark: “This seems like a 5.0 writeup to me. How could I have been given a numeric score of 4.5?” “Well, um,” is the response.

And then there’s outright falsification: I once heard a story, from someone who claims to have witnessed it, of a senior manager who wanted to fire an employee and single-handedly changed all the scores he had been given to justify the firing. All the “exceeds expectations” rankings this person had received were lowered to “fails to meet expectations”, and this was at a firm where all the scores everyone gives an employee are used in computing the employee’s final rating.

I have only the word of the person who told me this story to go by, but if it’s true this would be a shocking case of fraud, and would call the soundness of the whole annual-review process into question. Even if it’s not true, it’s hard to have complete confidence in the procedure, since it’s so riddled with flaws and imprecisions: subjectivity, ignorance, curve-fitting, and the fact that a 4.6 score may be treated the same as a 5.9 (if “groups” rather than absolute scores are what matters for the purposes of deciding on bonuses).

Most important of all, I’ve never once seen a manager who wanted to fire an employee, for whatever reason, be prevented from doing so by the ratings on that person’s annual reviews. So what’s the point of the thousands of man-hours wasted on this wildly bloated exercise year after year? Overly cautious legal advice? Super-persuasive Annual-Review System Consultants? Bureaucracy gone wild? “Every other financial firm is doing it”? Possibly all of these. But it’s hard to imagine that anyone would miss it if it disappeared tomorrow. ®
