Tier 1 Compromise
Ultimately, however, I think the big vendors will decide that there's too much work involved to make such low-power gear worth their while. First off, you have the extra R&D, testing and support costs for what can be considered a niche market. Then, for this type of operation to make sense, you have to assume that enough customers will do the type of software work done at Google, tailoring code for tens of thousands of low-power boxes and often-failing components.
Even if the big vendors did decide such systems could lead to profits, they would no doubt release mediocre, compromise-rich gear. Such is the nature of the beasts.
That's why there's room for a start-up to take hold of this market.
You have to believe that Microsoft and others detest the idea of Google beating them from day one on data center economics. How can you do proper battle in search and Web-based apps when it costs you more than your major rivals to ship results and code to end users?
The service providers setting out to deliver so-called RedShift applications need a way of matching Google on cost. Surely, a plucky start-up could arise to serve that need by mimicking Google's approach and catering to the cutting edge of the service provider set.
Other companies such as Cobalt with its server appliances and then RLX and Egenera with their blades did something similar in the past. Cobalt made easy-to-use boxes for the web hosting set. RLX took that idea to the next level by shoving laptop chips in servers, and Egenera made systems specifically for the needs of the financial services community. (Sun bought Cobalt for $2bn; RLX kind of died before HP bought its remains for $20m; and Egenera lives on today.)
Sadly, the Tier 1 server vendors latched onto the blade concept and then watered it down as much as possible. The big boys' blades place very little emphasis on density and performance per watt gains and instead focus on cabling and management improvements.
So we still have a vacuum present for a radical web-friendly design.
Of course, no venture capitalist in their right mind would fund a server start-up of this sort.
But the good news is that there are plenty of batshit crazy VCs.
In order to get money for this venture, you'll need to disguise the operation as a Web 2.0 firm. You show up at a VC's office, pitching a Facebook application or Google toolbar add-on, and then explain how you will beat rivals by delivering this software in the most economical fashion possible. (That is, if the VC even bothers to ask.) You're going to build your own servers to host the software just like everyone's darling Google.
"Did you say, Google?"
"Yes, we did."
Grab that $30m check and get cranking on the hardware. Ignore the questions about the expected arrival of the Facebook application for as long as possible.
Should you get rich off this venture - and you will - unless you don't - then I expect a substantial payment. If you flop, I'll be sure to write about it in as humorous a manner as possible. ®
Register editor Ashlee Vance has just pumped out a new book that's a guide to Silicon Valley. The book starts with the electronics pioneers present in the Bay Area in the early 20th century and marches up to today's heavies. Want to know where Gordon Moore eats Chinese food, how unions affected the rise of microprocessors or how Fairchild Semiconductor got its start? This is the book for you - available at Amazon US here or in the UK here.
Re: The Mac Mini
> AppleTV, at 16-20W
> you're unlikely to get anything with a lower current draw
My 1 GHz VIA C7-M webserver, with solid-state hard drive, takes about the same. The key is finding an efficient power supply: even ones that claim high efficiency turn out to be rather poor if you're this far below their design power.
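The efficiency point is worth a bit of arithmetic. A sketch of the effect, with purely illustrative percentages (the 80% and 60% figures below are assumptions, not measurements of any particular supply):

```python
# Illustrative only: the efficiency figures here are hypothetical,
# chosen to show why low-load efficiency matters, not measured values.

def wall_draw(load_w: float, efficiency: float) -> float:
    """Power drawn at the wall for a given DC load and PSU efficiency."""
    return load_w / efficiency

# A supply that manages 80% efficiency near its design load can drop
# to around 60% when running at a small fraction of it.
near_rated = wall_draw(20, 0.80)   # 25.0 W at the wall
light_load = wall_draw(20, 0.60)   # ~33.3 W at the wall

print(f"{near_rated:.1f} W vs {light_load:.1f} W at the wall")
```

In other words, a supply that looks efficient on its spec sheet can quietly add a third or more to the wall draw of a 20 W box, which is exactly the regime these little servers live in.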
If that's not low enough for you, these people: http://www.embedian.com/ run their web site from one of their own 400 MHz ARM-based servers. (The site's a bit slow; not sure if that's due to the server or the damp string linking Britain and Taiwan.)
I have the impression that many people have massively over-spec co-located servers for their web sites (but of course I have no data to back up that claim). Unfortunately, the service providers make more profit from hosting over-spec systems, and the cost difference is sufficiently small that your typical medium-sized business will play it safe and choose the bigger box. Software bloat (*cough* PHP *cough*) is also largely to blame.
Anyway, as for Ashlee's proposition: yes, good plan. Personally I'd run my massively-parallel web application on boxes full of VIA chips. I think the biggest challenge is persuading people to pay a profitable price for it: if you're offering a "high performance" solution, with the right marketing you can trigger some sort of visceral reaction in the customer that will make them pay a premium. Selling low-power kit is much less "sexy", and you'll really need to sell it based on the bottom-line numbers in your spreadsheet. That gives you less space for profit.
I didn't read anything after "fiber"...
...this is theregister.co.UK after all... this misspelling has no place here.
As if it didn't kill me enough having to write Serializable and living with the fact that "tabify" is considered to be a word.
I demand the article is withdrawn immediately and the author subjected to some serious waterboarding action (video-taped of course) or I will withdraw my subscription to The Register forthwith.
Excellent, another example of the South Park Underpants Gnomes' three-step guide to economic success:
1. Build cheap low-power servers out of commodity components.