Sun pitches new cloud as 'Open Platform'

Like Linux, for grids

The Demo

Tucker demoed the Sun Cloud, and this is the first time we have seen the graphical cloud creation tool that Sun got when it acquired Belgian data center management tool maker Q-Layer in early January. As we explained earlier today, the Sun Cloud has a Compute Service, which will comprise a mix of Sparc T, Xeon, and Opteron blade servers running OpenSolaris and the xVM interpretation of the Xen hypervisor.

Xen will allow Windows or Linux as well as Solaris to be run on the x64-based blades, while OpenSolaris will run in a virtualized manner on the Sparc-based blades, presumably using logical domain (LDom) partitioning. The Storage Service will run on Sun's "Amber Road" storage arrays (which are in turn Solaris servers with the Zettabyte File System and a whole lot of disk drives) and will support the file-level WebDAV protocol as well as an object-based storage protocol that is compatible with Amazon's S3 storage service.
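To make the two storage interfaces concrete, here is a minimal sketch of how the same object might be addressed through each: WebDAV treats storage as files reachable at an HTTP path, while an S3-compatible API addresses objects as bucket/key pairs. The host names and path layouts below are invented for illustration, not Sun's actual endpoints.

```python
# Illustrative URL builders for the two storage protocols described
# above. Hosts and path conventions are assumptions, not Sun's real ones.

def webdav_url(host, folder, filename):
    """File-level WebDAV: an ordinary HTTP PUT/GET against a file path."""
    return "https://{}/dav/{}/{}".format(host, folder, filename)

def s3_style_url(host, bucket, key):
    """S3-compatible object storage: objects live under bucket/key."""
    return "https://{}/{}/{}".format(host, bucket, key)

print(webdav_url("storage.example.com", "docs", "report.odt"))
print(s3_style_url("storage.example.com", "docs", "report.odt"))
```

The practical difference is that the WebDAV view lets any DAV-aware client (including a desktop file manager) mount the store, while the S3-style view lets tools already written against Amazon's API talk to Sun's service unchanged.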

The Q-Layer software, which Sun is calling the Virtual Data Center, is what does the provisioning of servers from a library of possible virtual machine stacks. Tucker demonstrated how easy it was to start with a public network IP address and use the tool to plug in Web servers supporting a MediaWiki application, complete with load balancers and two back-end database servers. He then proved they were live on the Sun Cloud by opening a Web browser, surfing to each Web server, and forcing refreshes.
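The kind of stack Tucker assembled could plausibly be described declaratively and expanded into individual machines by a provisioner. The sketch below is illustrative only; the field names and format are invented, not the actual Virtual Data Center representation.

```python
# An invented declarative description of the demoed MediaWiki stack:
# a public IP, a load balancer, two web servers, two back-end databases.
stack = {
    "public_ip": "192.0.2.10",
    "elements": [
        {"role": "load-balancer", "count": 1},
        {"role": "web-server", "image": "mediawiki", "count": 2},
        {"role": "database", "image": "mysql", "count": 2},
    ],
}

def provision_plan(spec):
    """Expand element counts into the individual VMs a provisioner
    would create, named role-1, role-2, and so on."""
    return [
        "{}-{}".format(e["role"], i + 1)
        for e in spec["elements"]
        for i in range(e["count"])
    ]

print(provision_plan(stack))
```

The dashboard's drag-and-drop canvas is, in effect, an editor for a description like this one; clicking deploy hands it to the provisioner.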

The underpinning of the Open Cloud Platform that Sun will be pitching to developers is a set of cloud APIs, which are being created under Project Kenai and have been released under a Community Commons open source license. Sun wants lots of feedback on the APIs and wants them to become a standard, too, hence the open license. These APIs describe how virtual elements in a cloud are created, started, stopped, and hibernated using HTTP commands such as GET, PUT, and POST.
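The lifecycle operations named above might map onto HTTP verbs roughly as follows. This is a hedged sketch of the general RESTful pattern; the endpoint paths and field names are invented, not the actual Sun Cloud API.

```python
import json

# Hypothetical mapping of VM lifecycle operations to HTTP verbs and
# paths, in the spirit of the Sun Cloud APIs (details are invented).
LIFECYCLE = {
    "create":    ("POST", "/vms"),
    "start":     ("PUT",  "/vms/{id}/state"),
    "stop":      ("PUT",  "/vms/{id}/state"),
    "hibernate": ("PUT",  "/vms/{id}/state"),
    "inspect":   ("GET",  "/vms/{id}"),
}

def build_request(op, vm_id=None, **fields):
    """Return the (verb, path, JSON body) triple for a lifecycle op."""
    verb, path = LIFECYCLE[op]
    if vm_id:
        path = path.format(id=vm_id)
    body = json.dumps(fields) if fields else None
    return verb, path, body

verb, path, body = build_request("start", vm_id="vm-42", state="started")
print(verb, path, body)
```

State changes riding on PUT and inspection on GET is the conventional REST idiom the article's verb list suggests; the open license means the community can argue these details out before anything is frozen.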

The upshot is that these APIs will allow programmatic access to virtual infrastructure from Java, PHP, Python, and Ruby, which means system admins can script how virtual resources are deployed. The APIs, as co-creator Tim Bray explains in his blog, are written in JavaScript Object Notation (JSON), not XML. The Q-Layer software is a graphical representation of what is going on down in the APIs, and you can move virtual resources into the cloud with a click of a mouse using the dashboard or programmatically using the APIs from those four languages. (PHP support is not yet available, but will be.)
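The scripting angle can be sketched in a few lines of Python: a deployment script walks a list of resources and emits the HTTP/JSON requests it would issue, with no real network calls. Endpoints and payload fields here are assumptions for illustration, not the actual API.

```python
import json

# A hedged sketch of scripting deployment against an HTTP/JSON cloud
# API. Rather than sending traffic, it returns the requests a script
# would issue, so the plan can be inspected or logged first.
def plan_deployment(resources, base="/cloud/v1"):
    requests = []
    for res in resources:
        # One POST per resource to create it (payload is JSON, not XML).
        requests.append(("POST", base + "/vms", json.dumps(res)))
    # A final PUT to bring the whole cluster up.
    requests.append(("PUT", base + "/cluster/state",
                     json.dumps({"state": "started"})))
    return requests

reqs = plan_deployment([{"role": "web-server"}, {"role": "database"}])
for verb, path, body in reqs:
    print(verb, path, body)
```

Because the wire format is plain JSON over plain HTTP, any language with an HTTP client and a JSON parser gets this for free, which is presumably why Sun leads with four scripting languages rather than an SDK.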

Rich Wolski, the director of the Eucalyptus project at the University of California at Santa Barbara, was trotted out to say that the project would be supporting Sun's cloud APIs in addition to the Amazon Web Services APIs that it already supports. The Eucalyptus project is trying to create a management framework for public clouds like Amazon's AWS and Sun's Cloud that can also be extended into private clouds that companies will surely build. As already reported, the Ubuntu distro of Linux is supporting the Eucalyptus framework and wants to position itself as the platform on which to build Amazon-compatible private clouds.

Tucker also demonstrated that OpenOffice and StarOffice will be tweaked to have two new commands in their drop-down menus: Load from Cloud and Save to Cloud. The WebDAV support is going to be embedded right into the software so that if you have a Sun Cloud account, you can save files right to the cloud instead of your disk drive. You can bet that Sun wants to charge for that storage, and millions of people will probably want to back up to some cloud somewhere because, really, who trusts a laptop drive all that far?

Neither Douglas nor Tucker would talk about when the Sun Cloud would go commercial, but they reiterated that over the summer the company will have a controlled, private beta for the developer community. More details on that, and presumably pricing for the compute and storage services as well as the Q-Layer tools, will be announced at the CommunityOne West event in June. They were similarly tight-lipped on pricing. "We are well aware of pricing in the market and we intend to be competitive," Douglas said. ®
