
Annual reviews: It's high time we rid the world of this insanity

There is no way a human could have invented such a devilish system

By Dave Mandl

Posted in Jobs, 28th November 2012 12:15 GMT

An inescapable and widely dreaded fact of life for people employed in the financial industry is the annual review. Unlike the way this process might have worked a few decades ago, and still does in most other industries, it’s not a simple matter of sitting down with your manager at the end of the year for a casual discussion of the work you’ve done.

A pat on the back for your accomplishments in the previous twelve months, and constructive criticism on your general areas of weakness and specific failures? On the contrary, at large banks the annual review now consists of a ridiculously complex set of actions and analyses potentially involving many people and sometimes taking literally months.

It’s clearly a good thing for both employers and employees in the industry to have some kind of annual review mechanism. For a firm, it’s an element of the banking world’s endlessly touted focus on “meritocracy”, an attempt at an objective measure of your performance to guarantee that you’re judged solely on the quality of your work, which in turn ensures that the firm retains only the “best and brightest”.

It also provides a tangible paper trail to be used as supporting evidence if you’re sacked and decide to fight that decision. (“You’ve had consistently low scores on your reviews for the past three years - an open-and-shut case!”)

For you, the employee, an annual review serves the purpose of formally acknowledging your achievements and helping you to recognise and correct your faults. In addition — and, not surprisingly, more important to most people — the results are used to determine what the discretionary portion of your salary (misleadingly referred to as your “bonus”) will be, and whether or not you’ll get the nod when you become eligible for promotion.

The importance of both those things cannot be overstated: your bonus can comprise a significant portion of your total compensation, and your title is a very public recognition of your importance to the firm, which in turn helps determine what your compensation will be. These reasonable motives notwithstanding, it’s not clear that the thousand-tentacled procedure in place at most banks is much better than a simple one-on-one review from your manager would be.

Let's see how deep this rabbit hole goes

Here’s how the process works: there will usually be an online evaluation form with such vaguely defined categories as “Operational Excellence”, “Business Success”, “Franchise Building”, and, for technology people, “Technical Knowledge”.

For each of these, your manager will write a free-form summary of how you performed in the previous year, and also give you a score (from 1 to 6, say) on a number of more precise sub-categories: how good you are at “building relationships”, delivering on your commitments to your users, familiarising yourself with the technology tools available at the firm so as not to reinvent the wheel, and (if you yourself are a manager) leading people.

The numeric scores are averaged to give you a final score, and, while the written portion of the review is duly noted, that final number is all-important. It determines everything from the size of your bonus (which might turn out to be zero) to whether you’ll be promoted or whether you’ll be placed on probation and become a candidate for termination if you don’t start showing some improvement.

But it gets much more complicated. At most financial firms, several coworkers besides your immediate manager will do a similar evaluation of you. This number can get big, especially for higher-ranking employees, and it’s not unusual to have ten or more people — peers, more senior colleagues, and more junior colleagues — reviewing you. At some firms the numeric scores you get from all your reviewers are used to compute your final score, and at others the reviews and ratings you get from the “additional reviewers” are mainly there to give a more complete picture to your manager, who is still solely responsible for your official writeup and numeric rating.

(Your manager may also choose to select significant comments, positive or negative, from the other reviewers and include them in the text portion of your final writeup.)

You are almost always required to go through this same process for yourself, and again, depending on the firm, the number you come up with may or may not be used in computing your final score. (I was once told by a manager that my score was brought down by my own middling ranking of myself, an attempt at anti-egomania that blew up in my face.)
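To make the arithmetic concrete, here is a minimal sketch of how such a final score might be computed. Everything in it is illustrative rather than any particular firm’s system: the 1-to-6 scale, the sub-category names, and the question of whether the additional reviewers’ and self-review scores count at all.

```python
# A minimal sketch of the final-score arithmetic described above.
# All names, scales and rules here are hypothetical, not any
# particular firm's actual system.

from statistics import mean

def reviewer_score(subcategory_ratings: dict[str, float]) -> float:
    """Average one reviewer's sub-category ratings (1-6 scale, say)."""
    return mean(subcategory_ratings.values())

def final_score(manager: dict[str, float],
                additional_reviewers: list[dict[str, float]],
                self_review: dict[str, float],
                others_count: bool = True) -> float:
    """Combine everyone's ratings into the all-important final number.

    Whether the additional reviewers' and self-review scores are
    baked into the calculation, or merely shown to the manager,
    varies by firm; the others_count flag stands in for that policy.
    """
    scores = [reviewer_score(manager)]
    if others_count:
        scores += [reviewer_score(r) for r in additional_reviewers]
        scores.append(reviewer_score(self_review))
    return round(mean(scores), 1)

ratings = {"building relationships": 4, "delivery": 5,
           "reuse of firm tools": 3, "leadership": 4}
print(final_score(ratings, additional_reviewers=[], self_review=ratings,
                  others_count=False))  # 4.0
```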

You generally select your “additional reviewers” yourself, but it’s not uncommon for your manager to add a few people to your reviewer pool, some of whom you might not really want reviewing you. To be fair, I’ve rarely seen cases of outright vindictiveness, but it can still be frightening knowing that your arch-enemy is evaluating you, and possibly contributing to your final score.

I can’t be absolutely certain, but I’m pretty sure I was once a victim of a fairly senior manager who simply didn’t like me, or in any event thought I was utterly useless. She gave me the lowest scores I’d ever seen anyone get, and though in this case they were not baked into my calculated final number, my manager was forced to take her devastating review into account when he came up with that number.

While the reviews you receive from everyone but your direct manager are kept anonymous by the system where they’re entered, it’s sometimes easy to tell who was responsible for a particular review. This can be particularly worrisome when you’re reviewing your own manager.

A friend of mine was once sacked shortly after giving his manager a very negative review, and he swore that this was in retaliation for what he’d said. I asked how that could possibly be, given the system’s anonymity. He pointed out that he was the only native English speaker in his group, so it would have been glaringly obvious which of the team’s comments had come from him.

A couple of minutes to fill in a form? Think again... better book next month off

Nevertheless, the main reason that a wave of dread wafts through the office when the annual review season begins is that the whole thing takes ages and entails a tremendous amount of work for everyone involved.

You review yourself; your pool of reviewers all do the same; your manager writes your final review, taking all the above into account; you and your manager sit down and go over your final review; you’re given the opportunity to enter comments in response to your manager’s review if there’s anything you disagree with; and finally, you “accept” your review.

It’s perfectly understandable that the reviewing period always runs for several months, since writing just one full review is no small task, and people can have stacks of them to complete — in addition to their regular daily work.

I’ve known people who had to write reviews for thirty people or more. Many of them took multiple days off from work so they could work on them at home undisturbed. It’s practically unheard of for anyone to finish all his or her reviews before the final days of the review season, since the process takes so much time and people inevitably procrastinate for as long as possible.

On top of being painfully drawn-out and taxing, the financial industry’s review process is outright unfair in several ways. For one thing, if you take issue with anything in your review you have no recourse: there are no appeals, and not “accepting” the review is literally not an option. (You can officially note your disagreements in your comments, but this has no tangible effect on anything.)

Bosses looking for curves - and not in a good way

There’s also the more fundamental flaw that, underneath all the consultant-designed “precision” and “objectivity”, the process can’t prevent subjectivity from creeping in. People are people, and there’s always the chance that personal feelings, grudges, and rivalries may influence what they say. It’s also possible that one or more of your reviewers will have incomplete knowledge of what you do from day to day.

The system can also be “gamed” in subtle ways: managers will sometimes work backwards, deciding on what a team member’s final score “should” be and then entering ratings in the individual sections that will generate that final number.
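That back-to-front calculation is trivially easy to express. A hypothetical sketch, reusing the averaged sub-scores from earlier:

```python
def back_solve(target_final: float, n_subcategories: int,
               scale: tuple[float, float] = (1.0, 6.0)) -> list[float]:
    """Produce sub-category ratings whose average lands exactly on a
    predetermined final score. Purely illustrative of the trick."""
    lo, hi = scale
    ratings = [target_final] * n_subcategories
    # Nudge a pair of ratings apart so the column doesn't look
    # suspiciously uniform; the average is unchanged.
    if n_subcategories >= 2 and lo <= target_final - 0.5 and target_final + 0.5 <= hi:
        ratings[0] = round(target_final - 0.5, 1)
        ratings[1] = round(target_final + 0.5, 1)
    return ratings

print(back_solve(4.4, 4))  # [3.9, 4.9, 4.4, 4.4] -> average 4.4
```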

But the action that, above all others, can make a mockery of the whole review process is the “curve-fitting” that some firms employ. Senior management generally expects the ratings in any manager’s team to roughly fit a curve: say, 10 per cent with a rating between 1.0 and 1.5, 40 per cent between 1.6 and 3.0, 40 per cent between 3.1 and 4.5, and 10 per cent between 4.6 and 6.0.

If a manager’s ratings are skewed in one direction — such as too many people in the 4.6-to-6.0 group, too few in the 1.0-to-1.5 group — he or she may be asked to move some people from one group to another to get the percentages closer to the ideal curve. I’ve seen this kind of thing at several firms, and it’s not fun: after spending days going over a team member’s accomplishments, and reading reviews of that team member from six other peers, a manager must change the final score he or she came up with simply by order of a more senior manager.
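In effect, the “ideal curve” is just a target histogram that a team’s scores are expected to fill. Here is a sketch of the check a senior manager is implicitly running, using the hypothetical bucket boundaries and percentages from the example above:

```python
from collections import Counter

# Bucket boundaries and target shares from the example above (hypothetical).
BUCKETS = [(1.0, 1.5), (1.6, 3.0), (3.1, 4.5), (4.6, 6.0)]
TARGET_SHARE = [0.10, 0.40, 0.40, 0.10]

def bucket_of(score: float) -> int:
    for i, (lo, hi) in enumerate(BUCKETS):
        if lo <= score <= hi:
            return i
    raise ValueError(f"score {score} falls outside every bucket")

def curve_report(team_scores: list[float]) -> list[tuple[float, float]]:
    """Each bucket's actual share of the team versus the target share.
    A skewed report is what triggers the order to 'move some people'."""
    counts = Counter(bucket_of(s) for s in team_scores)
    n = len(team_scores)
    return [(counts[i] / n, TARGET_SHARE[i]) for i in range(len(BUCKETS))]

print(curve_report([4.7, 4.8, 5.1, 3.2, 3.9, 2.5, 2.8, 4.0]))
# The top bucket holds 3/8 of the team against a 10 per cent target:
# expect a call from senior management.
```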

A 4.7 score now becomes a 4.4, so what really was the point of putting all that work into the original review?

This exercise can also introduce odd discrepancies: while the manager may have been forced to change the employee’s score, the text-based portion of the review will usually remain unchanged, reflecting the old higher score. (No one would go so far as to ask the manager to add unfounded criticisms of the employee to justify the lower score.)

Employees will often notice this and remark: “This seems like a 5.0 writeup to me. How could I have been given a numeric score of 4.5?” “Well, um,” is the response.

And then there’s outright falsification: I once heard a story, from someone who claims to have witnessed it, of a senior manager who wanted to fire an employee and single-handedly changed all the scores he had been given to justify the firing. All the “exceeds expectations” rankings this person had received were lowered to “fails to meet expectations”, and this was at a firm where all the scores everyone gives an employee are used in computing the employee’s final rating.

I have only the word of the person who told me this story to go by, but if it’s true this would be a shocking case of fraud, and would call the soundness of the whole annual-review process into question. If it’s not true, it’s hard to have complete confidence in the procedure anyway, since it’s so riddled with flaws and imprecisions: subjectivity, ignorance, curve-fitting, or the fact that a 4.6 score may be treated the same as a 5.9 (if “groups” rather than absolute scores are what matters for the purposes of deciding on bonuses).

Most important of all, I’ve never once seen a manager who wanted to fire an employee, for whatever reason, be prevented from doing so by the ratings on that person’s annual reviews. So what’s the point of the thousands of man-hours wasted on this wildly bloated exercise year after year? Overly cautious legal advice? Super-persuasive Annual-Review System Consultants? Bureaucracy gone wild? “Every other financial firm is doing it”? Possibly all of these. But it’s hard to imagine that anyone would miss it if it disappeared tomorrow. ®