Google's Postini Fail pinned on bad filter, hardware glitch

Oh, and 'malformed types of messages'

The extreme email delays that plagued users of Google's Postini message management service earlier this week were caused by a shoddy email-filter update and a power-related hardware failure involving the company's database storage servers.

Today, the Mountain View Chocolate Factory released an "incident report" to Postini users, saying the "severe mail flow issues" began at 11:30pm Pacific time on Monday and extended through at least 12:30am Pacific on Wednesday. That puts the email snafu past the 24-hour mark.

The report does not say how many users were affected. Google tells us the problem was limited to customers on Postini's "System 7," one of several systems running the hosted email security and spam-filtering service, but at least one customer says the problem extended to System 5 as well.

"My company is on System 5 and our email was pretty much non-existent until we switched to a backup system. Once we pulled Postini out of the loop, all of that deferred mail hit our system (along with quite a lot of spam)," said Russ Meyer of the US-based Midland Paper.

At one point, Google rerouted traffic to another data center, which could explain the delays seen by Meyer.

Unlike so many on System 7, however, Meyer and Midland never had problems visiting the service's web-based admin console, which Google switched off for some customers in an effort to boost mail flow.

On Monday evening, after Google's monitoring systems detected the problem, engineers rerouted mail traffic to what the company calls a secondary data center. But this didn't help. So they returned some of the traffic to the primary facility "to maximize processing resources." Then, at least for some users, they shut off the admin console and some other web interfaces in an effort to reduce the strain on those resources.

Eventually, Google engineers decided the problem was down to three things:

  • A new filter update appears to have inadvertently impacted the mail processing systems.
  • Unusual malformed types of messages triggered protracted scanning behavior, and its interaction with the filter update affected mail delivery.
  • A power-related hardware failure with database storage servers reduced input/output rates. The latency in database access reduced our overall processing capacity.

Which sounds like two things to us. Surely, it's the service's duty to deal with "malformed types of messages" - whatever those are.

"The combination of these conditions resulted in high failure rates for mail processing and the deferral of new connections from sending mail servers," Google's report says.

On Tuesday evening, a day after the delays first hit, engineers replaced the faulty hardware - with help from the vendor - and at 11pm Pacific, Google says, database disk throughput returned to normal. Then, an hour later, Google removed the offending filter update, and according to the company, mail processing was back on track.

Google continued to process traffic across both data centers for another hour. The company does say, however, that users may still experience delays. "Although mail processing was at normal speed and capacity, some users may have seen delayed messages continue to arrive in their inboxes. These potential delays occur when the initial or subsequent delivery attempt is deferred and the sending server waits up to 24 hours before resending the same message." This explains complaints we received on Wednesday afternoon.
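The behaviour Google describes is ordinary SMTP retry handling: when a receiving server answers with a temporary 4xx error (or simply can't be reached), the sender queues the message and keeps retrying on a schedule until a maximum queue lifetime expires - commonly around 24 hours, which is why deferred mail can trickle in for a day after the fault is fixed. A rough sketch of that behaviour, with illustrative host names and timings rather than anything from Google's or Postini's systems:

```python
import smtplib
import time

MAX_QUEUE_LIFETIME = 24 * 60 * 60   # give up after 24 hours, as in the report
RETRY_INTERVAL = 15 * 60            # retry every 15 minutes (illustrative value)

def send_with_deferral(message: str, sender: str, recipient: str, host: str) -> bool:
    """Retry a deferred delivery until it succeeds or the queue lifetime
    expires -- a simplified sketch of what a sending mail server does."""
    deadline = time.time() + MAX_QUEUE_LIFETIME
    while time.time() < deadline:
        try:
            with smtplib.SMTP(host, 25, timeout=30) as smtp:
                smtp.sendmail(sender, recipient, message)
            return True                       # receiving server accepted it
        except smtplib.SMTPResponseException as exc:
            if not 400 <= exc.smtp_code < 500:
                raise                         # 5xx is permanent: bounce, don't retry
            time.sleep(RETRY_INTERVAL)        # 4xx deferral: try again later
        except OSError:
            time.sleep(RETRY_INTERVAL)        # connection failed: also defer
    return False                              # lifetime exceeded; message bounces
```

On that model, any message deferred during the outage simply waits for its next scheduled attempt, which is consistent with the report's claim that nothing was bounced or deleted.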

The report says no messages were bounced or deleted.

Originally, Google indicated the problem was limited to US users, but yesterday, the company acknowledged that at least some European users were affected as well. ®
