Google forges BigTable-based NoSQL datastore

Takes out BigTable, thwacks Amazon DynamoDB over the head

Google I/O If you're Google, building cloud services for the public must be frustrating – after spending a decade crafting and stitching together software systems for internal use, when you try to sell them to the outside world you need to unpick them from one another.

It seems more like butchery than creation, but that's the name of the cloud game, and so on Wednesday Google further fragmented its services by ripping a scalable NoSQL datastore away from Google App Engine (GAE) and making it into a standalone service named Google Cloud Datastore.

This strategy of designing integrated products and fragmenting them for the general public runs throughout Google's cloud history. For example, its cloud portfolio started out with platform services via GAE, but after Amazon started raking in vast amounts of cash from IaaS services on AWS, Google separated out basic VM services into the Google Compute Engine infrastructure cloud.

With Datastore, Google has taken another bit of App Engine and stuck it on its own plinth. The service is a columnar datastore which supports ACID transactions, has high availability via multi-data center replication, and offers SQL-like queries.

It will compete with Amazon's DynamoDB NoSQL row-based datastore. Though roughly equivalent in terms of capability, the two services have some architectural differences that influence how they work: BigTable-based Datastore has strong consistency for reads and eventual consistency for queries, whereas DynamoDB offers people a choice of eventual or strong consistency, depending on pricing. Both systems are heavily optimized for writes.

The systems' storage substrates also differ. DynamoDB runs on SSD-backed hardware, but Google indicated its Datastore may use both flash and spinning disk. "We do use them [SSDs], we sort of use them behind the scenes," Greg DeMichillie, a Google Cloud product manager, told The Register. "Frankly we think what people really want is a certain performance level but they really couldn't care whether it's this technology or that behind it. We don't surface inside the storage stack where we happen to be using SSDs and where we don't."

The base cost for Google storage is $0.24 per gigabyte per month, with writes charged at $0.10 per 100,000 operations and reads charged at $0.07 per 100,000. This compares favorably with DynamoDB, which costs $0.25 per GB per month, plus $0.0065 per hour for every 10 units of write capacity, or $0.0065 per hour for every 50 units of read capacity. Harmonizing these two pricing approaches is difficult due to the labyrinthine price structure Amazon uses.
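To see why the two schemes are hard to harmonize, here is a rough sketch of the arithmetic at the rates quoted above. The workload figures (100GB stored, 10 million writes and 50 million reads a month, and the DynamoDB peak-throughput numbers) are hypothetical, chosen only to illustrate that Datastore bills per operation while DynamoDB bills per provisioned capacity-hour:

```python
# Hypothetical workload, for illustration only
GB_STORED = 100
WRITES_PER_MONTH = 10_000_000
READS_PER_MONTH = 50_000_000
HOURS_PER_MONTH = 730  # average hours in a month

# Google Cloud Datastore: pay per operation actually performed
datastore_cost = (
    GB_STORED * 0.24                        # storage, $/GB/month
    + (WRITES_PER_MONTH / 100_000) * 0.10   # $0.10 per 100k writes
    + (READS_PER_MONTH / 100_000) * 0.07    # $0.07 per 100k reads
)
print(f"Datastore: ${datastore_cost:.2f}/month")  # $69.00

# DynamoDB: pay per provisioned capacity-hour, so you size for peak
# throughput rather than total operations. Assumed peaks: 40 writes/s
# and 200 reads/s (hypothetical).
write_units, read_units = 40, 200
dynamodb_cost = (
    GB_STORED * 0.25                                    # storage
    + (write_units / 10) * 0.0065 * HOURS_PER_MONTH     # write capacity
    + (read_units / 50) * 0.0065 * HOURS_PER_MONTH      # read capacity
)
print(f"DynamoDB:  ${dynamodb_cost:.2f}/month")
```

The key difference: under-provision DynamoDB capacity and requests get throttled at peak; with Datastore a quiet month simply costs less, since you pay only for operations actually executed.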

For both services, transferring data in is free, but moving it out to other storage or services can sting you, with Google charging $0.12 per gigabyte of outgoing bandwidth and Amazon charging on a sliding scale from $0.12 down to $0.05 – or even lower, if you move a ton of data.

"With Datastore we certainly will continue to evolve over time onto latest and greatest versions," DeMichillie said. "It's really just a matter of timing and sequencing." ®
