Google adds storage buckets to plug cloud gap
Plonk your data where you like
Google has introduced a feature that lets administrators locate their cloud data next to their rented servers to reduce latency, bringing the Chocolate Factory's cloud closer to its rivals in terms of capability.
The Regional Buckets feature was announced by Google on Monday along with Object Lifecycle Management and Automatic Parallel Composite Uploads, as the Chocolate Factory tries to give administrators greater control of their IT resources within Google's monolithic software stack.
"With a tiny bit of upfront configuration, you can take advantage of these improvements with no changes to your application code," Google developer programs engineer Brian Dorsey wrote on Monday.
Re-coding is probably unnecessary because Google's cloud is a layer pasted over the Chocolate Factory's notoriously advanced monolithic software system, so the changes can be made by introducing new API commands for users that dovetail into the secret underlying infrastructure.
With Regional Buckets, developers can co-locate data stored in Google's low-cost storage service (Durable Reduced Availability storage) inside the same region as their Google Compute Engine instances. Letting the data and compute sit on the same network fabric in this way can lead to more predictable performance, and Google says this is "particularly appropriate for data-intensive computations."
The new feature lets you choose from among five data centers to locate your data, whereas previously it only let you select a preference for "US" or "Europe" facilities.
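As a sketch of how this works in practice, a region-pinned bucket of this kind would be created with gsutil's `mb` command, using the `-c` flag for the Durable Reduced Availability storage class and `-l` for the location; the bucket name and region below are placeholders, not values from Google's announcement:

```shell
# Create a Durable Reduced Availability bucket pinned to a specific
# region, so it can sit alongside Compute Engine instances there.
# "my-analysis-bucket" and US-CENTRAL1 are illustrative placeholders.
gsutil mb -c DRA -l US-CENTRAL1 gs://my-analysis-bucket
```

Once the bucket exists in the same region as the compute instances, no application-level changes are needed to benefit from the co-location.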
Google also announced a new feature named Object Lifecycle Management, which lets admins create data retention and deletion policies to reduce the cost of storing data. "For example, you could configure a bucket so objects older than 365 days are deleted, or only keep the 3 most recent versions of objects in a versioned bucket," Dorsey wrote.
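Dorsey's two examples map onto a lifecycle configuration document attached to a bucket. A hedged sketch of what such a policy might look like, assuming the JSON rule/action/condition format used by the Cloud Storage lifecycle API (`age` and `numNewerVersions` are the relevant conditions):

```json
{
  "rule": [
    {
      "action": {"type": "Delete"},
      "condition": {"age": 365}
    },
    {
      "action": {"type": "Delete"},
      "condition": {"numNewerVersions": 3}
    }
  ]
}
```

The first rule deletes objects older than 365 days; the second deletes any version of an object that already has three newer versions, which keeps only the three most recent in a versioned bucket.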
A new feature for Google's gsutil command-line cloud controller, called Automatic Parallel Composite Uploads, also reduces the time it takes to load data into the Google cloud by splitting large files into component pieces that are uploaded in parallel across available network connections.
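The split-upload-compose technique itself is straightforward. The sketch below is not Google's gsutil code; it is a minimal, self-contained Python illustration of the idea, with an in-memory dictionary standing in for the component objects that would live in a storage bucket:

```python
import concurrent.futures

CHUNK_SIZE = 4  # bytes per component; real uploads use far larger pieces


def split_into_chunks(data, chunk_size=CHUNK_SIZE):
    """Split a byte string into fixed-size component pieces."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]


def upload_component(index, chunk, store):
    """Stand-in for uploading one component object over the network."""
    store[index] = chunk
    return index


def parallel_composite_upload(data, store):
    """Upload component pieces in parallel, then compose them in order."""
    chunks = split_into_chunks(data)
    with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(upload_component, i, chunk, store)
                   for i, chunk in enumerate(chunks)]
        for future in futures:
            future.result()  # propagate any upload errors
    # "Compose" the components back into a single object, in index order.
    return b"".join(store[i] for i in sorted(store))
```

The index-ordered compose step at the end is what lets the pieces travel out of order across multiple connections without corrupting the final object.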
The three features bring Google's cloud closer in capability to its key competitors Amazon and Microsoft, both of which have offered versioning, lifecycle management, and granular regional selection for data for some time. ®