NatWest suffers calamitous online banking breakdown
But flack insists system never actually died on arse
NatWest customers have struggled to access the bank's online banking, ATM, telephone and even branch systems over the past few hours, after it was hit by an unspecified "technical issue" this morning.
A spokeswoman at the firm denied that the bank's systems were hit by an outage, and insisted to The Register that NatWest had simply suffered "slow response times but it never went down".
We called NatWest's telephone banking line, which seemed to tell a different story.
"We are currently experiencing a technical issue," said a recorded message that added NatWest had marked the matter as "urgent" and hoped to "resolve it shortly".
When pressed on what had gone wrong, the NatWest spokeswoman told us: "I've been able to access my own account this morning."
Sadly, for many other NatWest customers, the banking system has done a pretty good job of flicking them the V-sign today.
NatWest is owned by taxpayer-backed RBS, so maybe keeping its cash inaccessible is good for UK plc in some strange way.
The Twitterati are out in force, meanwhile, with the site bristling with reports that show the strength of feeling about NatWest's banking system on what is, for many people, payday.
"Dear Natwest Online Banking, I don't mind the old design, or the fact it's a bit clunky. I do mind it NOT LOADING. Thanks, Caius," says one of the more polite posts.
There are plenty more complaints here, but according to the NatWest spokeswoman "there are no problems now at all, actually".
So that's alright then. ®
They're making progress...
I can now put in my customer number... Maybe by next Tuesday I can view my balance...
Bill G, cos his software must have something to do with this...
"NatWest suffers calamitous online banking breakdown"
Not the slightest vestige of exaggeration there?
People can't access their bank account for an hour or two?
The end of the world is nigh. Reboot the universe...
Live / Live
Please go away and learn how to set up proper IT systems
I'm always amazed at the "IT Specialists" who make stupid comments on here about DR plans, backups, AV, etc. There are BIG organisations out there who know FAR more about running these services than your piddly little IT group that supports 20 servers and 300 users.
I think you'll find most of the really critical services run live/live out of multiple locations, so a DR plan typically doesn't have a big red invoke button, but simply a "let's recover the bit that's failed in a controlled manner while the rest of the system picks up the load" - and yes, that plan is written, and tested twice a year as part of their licence from the FSA.
Enterprise IT is pretty good at making sure the infrastructure works, and since banks have been amongst the biggest and longest-standing users of IT, they generally have all the bases covered (yes, yes, occasionally some Business idiot manages to squeeze in some "service" that doesn't work properly).
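For the curious, here's a minimal sketch of the live/live idea the commenter describes: traffic served from several sites at once, with a failed site drained in a controlled manner while the survivors pick up its load. Everything here (Site, LiveLiveRouter, the datacentre names) is hypothetical, purely to illustrate the pattern - not anything NatWest actually runs:

```python
import random

class Site:
    """One datacentre serving live traffic (hypothetical model)."""
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity   # share of traffic this site can absorb
        self.healthy = True

class LiveLiveRouter:
    """Spread requests across all healthy sites - no big red DR button."""
    def __init__(self, sites):
        self.sites = sites

    def mark_failed(self, name):
        # Controlled recovery: drain the broken site; the others absorb the load.
        for site in self.sites:
            if site.name == name:
                site.healthy = False

    def route(self):
        live = [s for s in self.sites if s.healthy]
        if not live:
            raise RuntimeError("total outage - now you need the DR plan")
        # Weight by capacity so remaining sites share the failed site's traffic.
        weights = [s.capacity for s in live]
        return random.choices(live, weights=weights, k=1)[0]

router = LiveLiveRouter([Site("dc-north", 50), Site("dc-south", 50)])
router.mark_failed("dc-north")     # one site drops out of rotation...
print(router.route().name)         # ...and dc-south quietly picks up the load
```

The point of the sketch is that losing one site degrades capacity rather than availability, which is why nobody has to "invoke DR" for the service to stay up.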
Any financial institution has to fail key systems over to DR once a year to prove it works (I believe this is a regulatory requirement), so there would be no need to "give it a try". Also, when recovering a complex distributed system like an online banking application, you don't just randomly move bits of it to run in different datacentres on the off chance it helps - you risk making the situation worse.
The recovery management team will be looking into what is causing the problem; it may well be that moving to a disaster recovery site wouldn't resolve it. Possibly it's a problem with the application code; possibly it's a problem outside the datacentres, with the internet itself. It may be a firmware bug on some backend system running identical hardware/firmware to the DR site, so failover wouldn't help. Hell, they may even be under a DDoS. Failover to DR isn't always the answer.
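To illustrate that triage, here's a hypothetical sketch of the kind of question a recovery team asks before invoking DR. The fault categories come straight from the comment above; the function itself is purely illustrative, and real runbooks are rather more involved:

```python
def failover_would_help(fault):
    """Crude triage: does moving to the DR site fix this class of fault?"""
    shared_faults = {
        "application_bug",   # the same code runs at the DR site
        "shared_firmware",   # DR site runs identical hardware/firmware
        "ddos",              # attackers simply follow the traffic
        "internet_issue",    # the problem is outside both datacentres
    }
    site_local_faults = {
        "power_loss", "storage_failure", "network_partition",
    }
    if fault in shared_faults:
        return False   # failover just moves the problem, or changes nothing
    if fault in site_local_faults:
        return True    # the DR site is unaffected, so failing over helps
    raise ValueError(f"unknown fault class: {fault}")

print(failover_would_help("ddos"))        # False - DR won't save you
print(failover_would_help("power_loss"))  # True - this is what DR is for
```

Failover only pays off when the fault is local to the stricken site; anything the DR site shares with production travels with you.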
I've been using it for several years
and normally it's rock-solid, hence the consternation at today's outage.