Clouds mass over data warehousing
Comment Suddenly the data warehousing sector seems to be hotting up. There's EMC's new competency centre and now Kognitio's in-memory data warehouse, which threatens to brush storage array vendors aside if the idea gets adopted big time. How does that one work?
The story goes like this: cluster lots of servers together in a shared-nothing architecture and use parallelising data-warehouse software - WX2 in this case - to treat them as a single but very parallel resource. The servers all execute different threads of queries against data stored in their DRAM as an in-memory database. Everything else, such as query results or any fraction of the warehouse that isn't in memory, is stored on disk - the servers' directly-attached disk, not a networked disk resource such as a SAN or a NAS box.
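To make the pattern concrete, here is a toy Python sketch of the scatter-gather idea - hypothetical names throughout, not WX2's actual internals, which Kognitio does not publish. Each node aggregates its own memory-resident partition and a coordinator merges the partial results:

from concurrent.futures import ThreadPoolExecutor

# Each "node" owns one partition of the fact table in its own DRAM.
partitions = [
    [{"store": "NY", "sales": 120.0}, {"store": "LA", "sales": 80.0}],
    [{"store": "NY", "sales": 45.0}, {"store": "TX", "sales": 200.0}],
    [{"store": "LA", "sales": 60.0}, {"store": "TX", "sales": 10.0}],
]

def node_query(partition):
    # Each node scans only its local, memory-resident slice.
    totals = {}
    for row in partition:
        totals[row["store"]] = totals.get(row["store"], 0.0) + row["sales"]
    return totals

def coordinator(parts):
    # Scatter the query fragment to every node, then merge the partials.
    merged = {}
    with ThreadPoolExecutor(max_workers=len(parts)) as pool:
        for partial in pool.map(node_query, parts):
            for store, total in partial.items():
                merged[store] = merged.get(store, 0.0) + total
    return merged

print(coordinator(partitions))  # {'NY': 165.0, 'LA': 140.0, 'TX': 210.0}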
Generally, with a disk-based data warehouse, only a fraction of the data is held in memory, so queries executed against it look at a sample of the data rather than the full warehouse. Results from a full-warehouse query are statistically far more likely to be correct.
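The sampling point is easy to demonstrate with an illustrative Python sketch on made-up data - an average computed over a five per cent memory-resident sample drifts from the true full-warehouse answer:

import random

random.seed(1)
warehouse = [random.paretovariate(2.0) for _ in range(1_000_000)]  # skewed measure

full_avg = sum(warehouse) / len(warehouse)   # the full-warehouse answer
sample = random.sample(warehouse, 50_000)    # the 5% that fits in memory
sample_avg = sum(sample) / len(sample)       # the sampled estimate

print(f"full scan: {full_avg:.4f}  5% sample: {sample_avg:.4f}")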
Roger Gaskell, the chief technology officer of Kognitio, says the firm is currently bidding for a 40TB data warehouse, and its bid is less expensive than the installed DW system based on storage arrays and many servers. But how can a 40TB memory-based system be cheaper?
It's cheaper in memory than on disk
The prospective customer, a large US business with a retail interest, currently has a 600TB data warehouse stored on a Fibre Channel-accessed modular drive-array resource, with queries processed by high-cost servers. Kognitio's bid is for 600 servers in a cluster - or, more accurately, a grid set-up - which collectively have 40TB of DRAM and 600TB of disk, but as server direct-attached disk, not modular arrays.
The servers are low-cost Dell or HP x86 servers, and the cost of this set-up will be around $4,000,000, whereas the installed system cost $5,000,000. Gaskell said that because the servers are so cheap, "The disk storage is almost free."
Gaskell told The Reg that the Kognitio system will be radically faster in answering queries - up to 80 times faster - than the disk-based system. The customer is looking to replace or augment the existing array-based DW system because complex queries can now take four or more hours; on the in-memory Kognitio warehouse they will be answered in three to six minutes.
If this is true - that is, if the proposed system really is 80 times faster and a fifth less expensive - then it's a steal. Gaskell wouldn't identify the prospective customer because that company didn't want to upset its incumbent vendors. You can see why: Kognitio's technology renders DW use of storage arrays redundant. This customer still gets 600TB of disk, but pays much lower server-drive prices rather than storage-array prices. Gaskell says: "You can get a terabyte of disk for about $400 on an HP rack-mount server."
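The arithmetic behind those claims holds up on the back of an envelope - all figures in this Python snippet are as quoted above:

bid_cost, incumbent_cost = 4_000_000, 5_000_000
print(f"saving: {(incumbent_cost - bid_cost) / incumbent_cost:.0%}")  # 20% - a fifth

disk_cost = 600 * 400                         # 600TB at Gaskell's ~$400/TB
print(f"disk: ${disk_cost:,}")                # $240,000
print(f"share of bid: {disk_cost / bid_cost:.0%}")  # 6% - "almost free"

print(f"speed-up: {4 * 60 / 3:.0f}x")         # a four-hour query in three minutes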
Why not use flash storage instead of DRAM - wouldn't it be cheaper still? Yes, it would, said Gaskell, but as a drive-array substitute it would only be two to three times faster than disk, not 80 times, and the whole reason for going in-memory is to achieve the speed needed for real-time response to queries.
Why not use a single big chunk of DRAM, like a TMS RamSan? "We have a shared-nothing architecture for reliability," said Gaskell. "If a server goes down we can work around that," meaning that if the links to the RamSan, or the RamSan itself, go down then, oops, your real-time response just went dead.
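A toy Python sketch of that failover argument - node and partition names are entirely hypothetical: with each partition held on a primary and a replica node, a coordinator can route around one dead server, whereas a single shared memory appliance is all-or-nothing:

partitions = {
    "p0": {"primary": "node-a", "replica": "node-b"},
    "p1": {"primary": "node-b", "replica": "node-c"},
    "p2": {"primary": "node-c", "replica": "node-a"},
}
down = {"node-b"}  # one server in the grid has failed

def route(name):
    # Prefer the primary owner; fall back to the replica if it is down.
    owners = partitions[name]
    for role in ("primary", "replica"):
        if owners[role] not in down:
            return owners[role]
    raise RuntimeError(f"{name} unavailable: all owners down")

for p in partitions:
    print(p, "->", route(p))  # p1 quietly falls back to node-c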
Power and reliability?
This looks like 600 servers with 64GB of memory each. Assuming these are dual-chip machines with disks and so on, then allow say 500W per server (including air-con overheads etc.). That's 300kW (or perhaps £250K per year in electricity at UK prices).
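For what it's worth, here are the sums behind that, assuming roughly 10p/kWh:

servers, watts_each = 600, 500          # 500W includes cooling overhead
total_kw = servers * watts_each / 1000  # 300kW
kwh_per_year = total_kw * 24 * 365      # 2,628,000kWh
print(f"£{kwh_per_year * 0.10:,.0f} per year")  # about £263K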
I think I'd ask questions about resilience too. With 600 servers there are going to be lots of failures - software and hardware. Unless this solution is inherently resilient to both, service levels will be appalling - unless the type of access pattern allows for a high degree of data partitioning, so that only some queries fail. Duplicating memory is going to be expensive, although I guess other approaches could be taken.
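Back-of-envelope, assuming purely for illustration a five per cent annual failure rate per box:

servers, afr = 600, 0.05           # assumed 5% AFR, hardware plus software
failures_per_year = servers * afr  # 30 failures a year
print(f"roughly one failure every {365 / failures_per_year:.0f} days")  # ~12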
Personally I would go along with flash being faster, more reliable and much more power efficient. Just to do a rule-of-thumb calculation: 45MB/s-read flash is available (retail) for about $10 per GB, so 40TB of this stuff is approximately $400K. Double it up for resilience, add a bit more for luck, and you can have 40TB for $1M. Potential bandwidth is massive, as (based on available devices) 40TB would give a theoretical bandwidth of about 12GB per second, and you could read the whole lot in under an hour. If the queries don't require every byte to be read you can do better (reading and processing 40TB in a database would take a lot of CPU power).
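Spelling that out in Python - the 150GB device size is an assumption picked to match the 12GB per second figure:

capacity_gb, cost_per_gb = 40_000, 10    # 40TB of retail flash at $10/GB
device_mbps, device_gb = 45, 150         # assumed per-device speed and size

base = capacity_gb * cost_per_gb         # $400,000 raw
mirrored = 2 * base                      # $800,000 doubled for resilience
devices = capacity_gb // device_gb       # ~266 devices
agg_gbps = devices * device_mbps / 1000  # ~12GB/s aggregate
scan_min = capacity_gb / agg_gbps / 60   # ~56 minutes for a full scan
print(f"${mirrored:,} mirrored, {agg_gbps:.0f}GB/s, full scan ~{scan_min:.0f} min")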
Of course this is all theory, and the hardware and software to connect this many commodity flash devices maybe isn't there yet, but it does show the potential in the technology.
Incidentally, commodity flash is very poor at random small writes, but the 45MB/s devices I refer to can write at that rate sequentially.
Interesting but a bit flawed.
Not really a green solution.