
RBS MELTDOWN LATEST: 'We'll be the bank we should be ... next YEAR maybe'

Bank schtum, but cockup has similarities with 2012

RBS Banking Group is still refusing to say what went wrong with IT systems in its third major outage in less than two years, even as customers continue to face problems with their accounts today.

In a mea culpa statement, chief exec Ross McEwan said that the systems failure on Cyber Monday was "unacceptable" and admitted that the group had failed to invest properly in IT "for decades".

"It will take time, but we are investing heavily in building IT systems our customers can rely on," he said.

"I'm sorry for the inconvenience we caused our customers. We know we have to do better. I will be outlining plans in the New Year for making RBS the bank that our customers and the UK need it to be. This will include an outline of where we intend to invest for the future."

The investment pot is the same one that RBS talked about this summer when it said it would be spending an additional £450m on top of its £2bn annual IT spend for a "complete refresh of the mainframe".

Although the banking group, which includes Natwest and Ulster Bank, got systems back online yesterday, many customers found that their accounts were not in the same state they'd left them in.

After being locked out of online banking, ATMs and card payments on Monday, many customers who regained access to their accounts yesterday found that money previously credited, such as salary payments, had disappeared, causing payments to bounce and rack up the accompanying charges.

RBS is promising to reimburse those who are "out of pocket" after the outage, but customers were still complaining on Twitter today.

And the group opened over a thousand branches of RBS and Natwest early this morning to help customers with ongoing problems.

The fact that previously registered transactions like incoming salaries and outgoing payments had disappeared from accounts suggests that the problem was once again related to the batch processing software. In essence, the bank may have had to reset things to a point before the problem occurred, leaving a backlog of transactions to go through again.
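To see why a reset of that kind leaves salaries and payments missing until the backlog clears, here is a minimal, purely illustrative sketch of checkpoint-and-replay batch processing. It has nothing to do with RBS's actual systems: the names and the toy pence-denominated ledger are hypothetical, and serve only to show how rolling back to a pre-failure checkpoint makes recent credits vanish from balances until the batch is replayed.

```python
# Purely illustrative sketch of checkpoint-and-replay batch processing.
# Not RBS's system: Transaction, run_batch and the toy ledger are hypothetical.

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Transaction:
    account: str
    amount_pence: int    # positive = credit (e.g. salary), negative = debit


def run_batch(balances: Dict[str, int],
              batch: List[Transaction]) -> List[Transaction]:
    """Apply a batch of transactions to account balances.

    On any failure, restore the pre-batch checkpoint and hand the whole
    batch back as a backlog to be replayed once the fault is fixed.
    """
    checkpoint = dict(balances)               # snapshot before the run
    try:
        for tx in batch:
            balances[tx.account] = balances.get(tx.account, 0) + tx.amount_pence
        return []                             # success: nothing to replay
    except Exception:
        balances.clear()
        balances.update(checkpoint)           # reset to the pre-batch state
        return list(batch)                    # backlog awaiting replay


if __name__ == "__main__":
    accounts = {"alice": 10_000}
    backlog = run_batch(accounts, [Transaction("alice", 150_000)])  # salary in
    # Had the run failed, accounts would revert to {"alice": 10_000} and the
    # salary credit would sit in `backlog` until it was reprocessed.
    print(accounts, backlog)
```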

The banking group's previous major outage in the summer of 2012 was caused by a human error with the CA-7 batch processing software. In that case, an upgrade to the tool went wrong, which ordinarily wouldn't be a problem, as IT staff would simply back out the update. But sources told The Reg at the time that a massive mistake was made during the back-out, when an inexperienced tech cleared the whole queue, erasing all the scheduling.

All the wiped scheduling information then had to be re-entered into the system and the backlog reprocessed before the system came right again.
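For the curious, the toy sketch below shows the difference between the two paths in scheduler terms. It is emphatically not CA-7 or its real interface; ToyScheduler and its methods are invented purely to illustrate why backing out to a saved copy of the schedule is recoverable, while clearing the live queue leaves nothing to back out to.

```python
# Illustrative only: not CA-7 or any real scheduler's API.

from copy import deepcopy
from typing import Dict, List, Optional

# A toy "schedule": job name -> list of jobs that must run after it.
Schedule = Dict[str, List[str]]


class ToyScheduler:
    def __init__(self, schedule: Schedule) -> None:
        self.schedule = schedule
        self._backup: Optional[Schedule] = None

    def begin_upgrade(self) -> None:
        # Safe practice: keep a copy of the schedule before touching it.
        self._backup = deepcopy(self.schedule)

    def back_out(self) -> None:
        # Correct back-out: restore the pre-upgrade schedule.
        if self._backup is None:
            raise RuntimeError("no backup taken before upgrade")
        self.schedule = self._backup

    def clear_queue(self) -> None:
        # The catastrophic path: wipes every job and its dependencies,
        # so the scheduling data has to be re-entered by hand.
        self.schedule = {}
```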

RBS refused to tell The Reg whether it was the same software that had gone wrong this time and would only issue this statement:

"It is too early to speculate on the cause. Our priority and focus has been to fix the problem."

A shorter outage early this year was caused by a hardware fault in the mainframe, taking online banking, cash machines and card payments offline for three hours. ®

If you have any details on this latest outage at RBS Group, drop The Reg a note or call the London office on 020 3189 4620.
