Being jitter-free is important
DataDirect discusses Apple's Final Cut, disses Isilon
DataDirect's Fellinger is explaining why his company's S2A9900 storage box is so great. Apparently other storage products can support as many as three Apple Final Cut Mac workstations doing uncompressed high-definition video editing work in real time. Not a lot really - these workstations are storage bandwidth hogs. The DataDirect S2A9900, on the other hand, can support 30 Final Cuts. It does this through sheer speed and with very scalable block-level storage of up to 1.2PB in one box (taking up two data center floor tiles).
Isn't clustered Isilon kit used for this kind of thing? Yes and no, says another DataDirect man. In his experience broadcast and post-production customers use, at most, 8-10 Isilon nodes in production. These users say they couldn't keep anywhere near the theoretical Isilon maximum of 96 nodes in production, no matter what they tried. In any case the energy costs would be astronomical and might even present a fire hazard, our DataDirect fella claims. Isilon's kit just isn't used, he says, to support Apple Final Cut Pro workstations.
He quotes a Wachovia analyst: Isilon IQ-series systems run at 100MB/sec read and 50MB/sec write levels with a 1GB/sec internal bandwidth. DataDirect's box puts out 6GB/sec of both read and write I/O and has 24GB/sec of internal bandwidth.
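Taking those quoted figures at face value, a back-of-envelope calculation shows why uncompressed HD workstations are such bandwidth hogs. The stream parameters below (10-bit 4:2:2 1080p at 30fps) are illustrative assumptions, not vendor numbers:

```python
# Back-of-envelope: how many uncompressed HD streams fit in the quoted
# bandwidth figures. Stream parameters are assumptions for illustration.
width, height = 1920, 1080
bits_per_pixel = 20        # 10-bit 4:2:2 (assumed editing format)
fps = 30                   # assumed frame rate

bytes_per_frame = width * height * bits_per_pixel / 8
stream_mb_s = bytes_per_frame * fps / 1e6     # MB/s for one stream

for name, gb_s in [("Isilon IQ node (read)", 0.1), ("DDN S2A9900", 6.0)]:
    streams = gb_s * 1e3 / stream_mb_s
    print(f"{name}: room for ~{streams:.0f} streams of {stream_mb_s:.0f} MB/s")
```

At roughly 155MB/s per stream, 6GB/s leaves headroom for upwards of 30 concurrent streams, which is consistent with DataDirect's 30-workstation claim.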
Fellinger sums it up: "When running multiple Final Cut Pro sessions in parallel, only DataDirect Networks ensures that users experience extremely low latency response times for jitter-free playback while concurrently performing full-bandwidth ingests."
Isilon has a different take on this. Ninety-six nodes is a real-world cluster size, it says, and it runs its own internal 96-node cluster known as the Whopper.
Apples and oranges
Jay Wampold, Isilon's senior marcomms director, thinks DataDirect (DDN) doesn't understand how Isilon clusters work: "Comparing the performance of one Isilon node to that of a DDN box is comparing apples to oranges. Isilon delivers a truly symmetric clustered storage architecture where all nodes in a cluster serve as simultaneous pathways out to the network, so when comparing Isilon to a DDN box, you must compare aggregate performance (the performance of all nodes in a cluster) to that of DDN. Isilon can deliver 10GB/sec of performance from a cluster (single file system, single volume) surpassing the 6GB/sec of DDN."
The DDN view of Isilon use in post-production environments is wrong too: "Isilon has hundreds of customers in the media and entertainment industry to power a number of production applications, including Apple Final Cut Pro. Depending on the application, a typical deployment in the production segment of media and entertainment will range from 15-25 nodes. Again, we have a number of customers in this arena who have deployed much larger clusters in (the) 40-70 node range."
He reckons Isilon delivers an integrated HW + SW system whereas DDN ships out HW, "and depends on third party file system integration ... essentially outsourcing the intelligence of their system, to ... their customers. ... (who) are left to manage an extremely complex, difficult to scale science project that requires constant care and feeding to continue to operate. ... Media IT managers don't want to hold a PhD in storage to manage a DDN build-your-own system."
This area of the market for very high-performance and extremely scalable storage kit is giving manufacturers the jitters. The market is set to boom as web 2.0-type unstructured file storage needs go through the roof. No-one wants to cede superiority to anyone else. DDN is chasing an IPO. Isilon is recovering from management misfortunes, and Atrato has announced product. Waiting in the wings are HP's ExDS9100, IBM's XIV, and the hulking presence of EMC. ®
Mountains out of molehills
I was dealing with the 3Gb/s video problem as long as three years ago, before Final Cut Studio went HD. What I learned was that the video business likes to make headaches for itself.
3Gb/s video is extremely stupid since there is no transport format for it. HD-CAM SR, the transport format for high-end broadcast high definition, stores video at a peak of 880Mbit/s AVC, which is 110 megabytes per second. The decks cost way too much and suffer generational loss left and right, since you can't read/write the 880Mbit/s stream directly; instead you have to read it as a 3Gb/s signal transmitted as uncompressed RGB frames for full quality. So, to copy from one deck to another 1:1, the video is first decompressed and later recompressed for storage.
In a professional video network, you would instead benefit from using 600Mbit/s, which fits comfortably over a gigabit Ethernet adapter (use two bonded channels to guarantee quality). Then NAS storage becomes trivial. I used an HP wx8400 workstation loaded up with SAS controllers connected to large numbers of drives via SCSI expanders, talking to the network over 10GbE. This configuration gave me more than sufficient bandwidth to handle five high-definition workstations in full 4:4:4 1080p. Off the top of my head, I believe I could have expanded to an additional 20 workstations and up to 5 petabytes by adding another 10GbE channel or two to the workstation.
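A quick sketch of the commenter's capacity claims, using his 600Mbit/s-per-stream figure (the per-link headroom numbers are derived from that assumption, not measured):

```python
# Capacity check for 600Mbit/s HD streams over 10GbE (assumed figures).
stream_mbit = 600                 # per-workstation stream rate
link_gbit = 10                    # one 10GbE port

streams_per_link = link_gbit * 1000 // stream_mbit
print(f"~{streams_per_link} streams per 10GbE link")

extra_gbit = 20 * stream_mbit / 1000   # 20 additional workstations
print(f"20 more stations need {extra_gbit:.0f} Gbit/s, "
      "i.e. one or two extra 10GbE channels")
```

Five stations at 600Mbit/s is only 3Gbit/s, well inside one 10GbE link, and 20 more would need about 12Gbit/s, which squares with the "another 10GbE channel or two" estimate.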
On new technology... I've been experimenting at home with a small 10 terabyte storage system to see how an OpenFiler SAN scales with Final Cut Studio. So far, I'm under the impression that a single 8-core server with 32GB of RAM should be able to handle 20-40 machines alone. Using big iron from Sun or HP would stretch much further.