If your SSD sucks, blame Vista, says SSD vendor
SanDisk pledges next-gen Flash disks will be better
It's Windows Vista's fault that solid-state storage isn't performing as well as its proponents predicted. So said SanDisk CEO Eli Harari, but at least he didn't go as far as saying it's Microsoft's problem to fix.
SSDs are viewed as the heir apparent to the hard disk, particularly for laptops and other mobile computers. SSDs are way more shock-resistant and consume less power. Theoretically, they should deliver better performance.
Alas, many tests reveal that they don't.
SSD "performance in the Vista environment falls short of what the market really needs", admitted Harari at the company's earnings conference this week.
Why not? According to Harari, it's because "Vista is not optimised for Flash memory solid-state disks".
But isn't that the disk makers' problem? Despite pointing the finger at Vista, Harari tacitly admitted as much by signalling that what's needed are new Flash memory controllers that can be built into the SSDs and "compensate for Vista shortfalls".
We'd say they're the SSD's shortfalls. Vista works the way it does because of its long hard disk heritage. If SSD makers want their products to replace HDDs, it's up to them to develop drives that can be slotted into existing systems and deliver real benefits. Grumbling that it's Microsoft's fault isn't going to help.
The problem surely stems from Windows' use of hard disk space for memory caching, something all modern and not-so-modern operating systems do. So it's not like the SSD manufacturers didn't have any warning this could be an issue.
Small, Cheap Computers will continue to benefit from Flash storage, Harari said, because they have "relatively unsophisticated and undemanding requirements" - they're either running very basic Linux apps or, when they come with Windows XP, have virtual memory disabled.
SCCs will provide a role for SanDisk's current SSDs while the company works on next-gen controllers better suited to Vista. Those controllers won't appear, however, until late 2008 or early 2009, and then only in sample quantities, Harari said.
Is it Jewson's fault that bricks make crappy wheels?
Flash-based media was never intended to be used for swap and shouldn't be used for swap (wear factors), so why are people surprised that it's a crappy swap medium?
A far better engineered solution would be to build a non-Flash RAM disc specifically for swap, or (God forbid anyone try doing it properly) place an interface in the next generation of PCs that incorporates REALLY slow memory into the memory system architecture. Of course, then you'd have the PC world sheep getting their knickers in a twist about CPU speed vs cache vs L2 cache vs main memory vs boost memory.
If an O/S had to compensate, update and change for each and every type of peripheral on, or soon to be on, the market, the O/S would never get released. It is a lot easier for each company to work on its own product than it is for one company to work on everyone else's. Get real.
Nothing wrong with SSDs...with the right O/S!
I have an EEE 701, with a 4GB SSD and 1GB of memory, and I've added a 16GB SSD. I run full Ubuntu 7.10, a copy of Oracle 10g (Enterprise Edition), a GNOME desktop and an Apache web server for my Perl/CGI dev work! That, my friends, is the power of an O/S running efficiently, and Ubuntu isn't even the most well-tuned of the distros!
I run databases and my swap partition on a bog standard SSD, nothing wrong with them!
The ideal solution would be, in effect, a hybrid of swap and caching.
Here's how I envisage it working - Windows, when idle, would copy (not swap) pages from RAM to disk, effectively creating a swapped page which is pre-cached in RAM for next use.
Then, if those pages are needed again in the near future, you don't have to wait for them to be swapped back from disk and, if the pages were written to, Windows would delete the disk copies and mark the pages for re-copying next time it was idle.
If, on the other hand, Windows received a program request for more RAM than was free, it would free up RAM by trashing the RAM copies of the least recently used pages, which should already have been copied to the swap file. In caching terms, it would eliminate the least recently used data from the cache.
I don't know if this is how it works, but it is, to me, the logical solution.
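The commenter's scheme can be sketched as a toy simulation. To be clear, this is an illustration of the idea as described above, not of how Windows' memory manager actually works; the class and method names are invented for the example:

```python
from collections import OrderedDict

class HybridSwapCache:
    """Toy model of the 'copy, don't swap' scheme described above.

    Pages live in RAM. When the system is idle, dirty pages are
    copied (not moved) to the backing store, so they are both on
    disk and pre-cached in RAM. Under memory pressure, the least
    recently used pages whose disk copies are current can be
    dropped from RAM instantly, with no write needed.
    """

    def __init__(self, ram_capacity):
        self.ram_capacity = ram_capacity
        self.ram = OrderedDict()   # page_id -> data, ordered by recency
        self.dirty = set()         # pages modified since their last disk copy
        self.disk = {}             # page_id -> data (the swap file)

    def write(self, page_id, data):
        self.ram[page_id] = data
        self.ram.move_to_end(page_id)   # most recently used
        self.dirty.add(page_id)         # any disk copy is now stale...
        self.disk.pop(page_id, None)    # ...so delete it, per the scheme

    def read(self, page_id):
        if page_id in self.ram:         # pre-cached in RAM: no disk wait
            self.ram.move_to_end(page_id)
            return self.ram[page_id]
        data = self.disk[page_id]       # cache miss: swap back from disk
        self.ram[page_id] = data
        return data

    def idle_copy(self):
        """While idle, copy dirty pages to disk so they become droppable."""
        for page_id in list(self.dirty):
            self.disk[page_id] = self.ram[page_id]
            self.dirty.discard(page_id)

    def make_room(self, pages_needed):
        """Free RAM by dropping LRU pages whose disk copies are current."""
        for page_id in list(self.ram):  # iterates least recent first
            if len(self.ram) + pages_needed <= self.ram_capacity:
                break
            if page_id not in self.dirty:
                del self.ram[page_id]   # free drop: already on disk
```

The key property is in `make_room`: eviction under memory pressure costs no disk write at all, because the idle-time copying already paid it.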
As a refinement, Windows could allow programs to request that certain pages not be removed from RAM. For example, a game might do this with graphics textures that it has preloaded, knowing they are likely to be required in the next room. Again, it wouldn't surprise me if this has already been implemented.
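This refinement does exist: Windows exposes it as the `VirtualLock` call, and POSIX systems as `mlock`, both of which pin a region of memory so the kernel may not page it out. A minimal sketch of the POSIX variant via `ctypes` (on Windows the equivalent would be `kernel32.VirtualLock` with the same address/size arguments):

```python
import ctypes
import ctypes.util
import mmap

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

buf = mmap.mmap(-1, mmap.PAGESIZE)                # one anonymous page
ref = (ctypes.c_char * mmap.PAGESIZE).from_buffer(buf)
addr = ctypes.addressof(ref)

# Pin the page: while locked, reads from it never stall on a swap-in,
# which is exactly what you'd want for preloaded game textures.
rc = libc.mlock(ctypes.c_void_p(addr), ctypes.c_size_t(mmap.PAGESIZE))

# ... latency-sensitive work on buf goes here ...

if rc == 0:
    libc.munlock(ctypes.c_void_p(addr), ctypes.c_size_t(mmap.PAGESIZE))
del ref
buf.close()
```

Note that locked memory is a limited resource (`RLIMIT_MEMLOCK` on POSIX, the process working-set quota on Windows), so the call can fail and pinning should be used sparingly.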
SSDs aren't a lot of the things makers make us believe
Many of the current SSDs use multi-level cell (MLC) memory, which automatically means slower transfer speeds, higher power consumption and lower cell endurance than single-level cell memory delivers - and than what SSDs "should" be like, i.e. what manufacturers try to make us think they are like.
SSDs can also be more sensitive to magnetic fields and electric or static charges than HDDs. And, unbelievable as it may seem, they can even cut the battery life of your laptop (http://www.tomshardware.com/reviews/ssd-hard-drive,1968-11.html).
So, while the way OSes use drives may have something to do with how they perform (though random writes are SSDs' weakest point, and regular HDDs are also slower at random operations than at sequential ones), the cruel truth is that current SSDs don't live up to the hype simply because they can't.