Solaris 11 due mid-2010

But Solaris 10 5/09 is here today

The number and gee-whizness of features Sun Microsystems is putting into updates to both the Solaris 10 commercial operating system and the related OpenSolaris development release are slowing. That's the best indication that Nevada - the code name for Solaris Next or Solaris 11 or whatever you want to call it - is getting closer to release.

Closer doesn't mean close, however. According to sources speaking to The Reg, Sun is quietly telling customers that Solaris 11 is targeted for launch sometime around the middle of 2010.

But they add that Sun is also telling customers that this date is "not set in stone," and depends on the development effort, market conditions, and Sun's status as a unit of Oracle - or as a freestanding company should Oracle's $5.6bn acquisition of Sun evaporate.

Sun hasn't said much publicly about Solaris 11's launch date. But as we reported back in October, when the Solaris 10 10/08 update shipped, Dan Roberts, director of marketing for Solaris, said that once a few releases of the OpenSolaris Project Indiana distro were out and the feature set had been expanded, Sun would take a snapshot of OpenSolaris and harden it into the commercial release - which people will call Solaris 11 even if Sun calls it something else.

The expectation - and undoubtedly Sun's desire - was to get the launch of Solaris 11 in sync with its 16-core Rock UltraSparc-RK processors. That doesn't seem to be happening unless Rock-based systems slip to the middle of next year or Solaris 11 is pulled into this year - and neither appears to be the plan.

The last time Sun talked publicly about Rock, back in January, Sun's president and CEO said that Rock-based Supernova servers were expected by year's end. If they do appear by then, they'll come to market 12 to 18 months after their originally expected launch date.

While Oracle has been clear to Sun's employees that it will keep Sun's hardware business going, that doesn't mean schedules and priorities won't shift. So until this deal is done - or undone - it is hard to say when any Sun product, be it hardware or software, will appear.

In the meantime, Sun continues to kick out semi-annual updates to the current Solaris 10 commercial release, and today the 5/09 update appeared. You can get the release notes for the update here. Many of the features that Sun talked about in an interview are not in this document, so it's by no means a comprehensive release note.

While the October update had support for Intel's Nehalem EP Xeon 5500 processors, today's 5/09 brings out a richer set of support, including all the tweaks for predictive self-healing and power management that were rolled into OpenSolaris 2008.11 last fall. The Reg already went over the Nehalem-specific features in detail back when the Xeon 5500s launched at the end of March.

It is not clear if the Solaris 5/09 update has support for the six-core Istanbul Opteron chips that AMD pulled forward last week by several months, with chip shipments now scheduled to start in May to OEM customers and servers packing the six-shooters expected to hit the streets in June. Sun was still checking on Istanbul support as we went to press.

In addition to the Xeon 5500 enhancements, the Solaris 10 5/09 update has tweaks in support of the logical domain (LDom) virtualization partitions used on Niagara Sparc T processors, allowing virtual disks to be larger than 1 TB. LDom virtual networking also supports jumbo frames, which can be used to boost virtual network throughput.
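For the curious, an LDom setup using both features might look like the sketch below. This is a hypothetical session, not from the article: the domain and device names ("ldg1", "primary-vds0", "primary-vsw0") are illustrative, and the exact syntax for the mtu property depends on the LDoms Manager version in use.

```shell
# Export a backend larger than 1 TB to a guest domain as a virtual disk
ldm add-vdsdev /dev/dsk/c2t0d0s2 bigvol@primary-vds0
ldm add-vdisk bigdisk bigvol@primary-vds0 ldg1

# Enable jumbo frames on a guest's virtual network device
# (mtu support requires a sufficiently recent LDoms Manager)
ldm add-vnet mtu=9000 vnet1 primary-vsw0 ldg1
```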

Sun's Zettabyte File System (ZFS), which became a supported root file system for Solaris 10 with the prior update, picks up various tweaks from the open storage projects at OpenSolaris to boost the performance of solid state disks (SSDs) when used on Sun's Sparc and x64 servers as well as other x64 machines.
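The article doesn't spell out how those SSD tweaks are surfaced, but the usual way to put an SSD to work under ZFS is as a separate intent log or a cache device. A minimal sketch, with hypothetical pool and device names:

```shell
# Use an SSD as a separate ZFS intent log, to speed up synchronous writes
zpool add tank log c3t0d0

# Use a second SSD as an L2ARC cache device, to speed up random reads
zpool add tank cache c3t1d0

# The log and cache vdevs show up alongside the pool's data devices
zpool status tank
```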

Sun has also integrated the process of cloning Solaris containers with ZFS file cloning, allowing them to be done in one fell swoop. This is not only simpler and faster, it also gets rid of the whole problem of having to de-dupe files once you make a clone of a container. ZFS does that all by itself. ®

