AWS fattens EC2 cloud for big data, in-memory munching

Data Pipeline service links data sources, automates chewing and spitting

re:Invent Werner Vogels, the Amazon CTO who also knows a thing or two about the Amazon Web Services cloud, kicked off the second day of the re:Invent partner and customer conference in Las Vegas with a long lecture about how cloud computing lets people think about infrastructure differently and code applications better. In among his umpteen Cloud Coding Commandments, Vogels sprinkled some news: two new EC2 compute cloud instances and a new data service that will make it easier for AWS shops both to move data around inside the cloud and to put that data to work.

The first new EC2 slice is called the Cluster High Memory instance, or cr1.8xlarge in the AWS lingo, and it comes with 240GB of virtual memory and two 120GB solid state drives associated with it.

Vogels did not reveal the number of EC2 compute units (ECUs) of relative performance that this Cluster High Memory instance has associated with it, and the online AWS EC2 specs have not yet been updated to reveal its pricing, either. But given its name, it is probably just a Cluster Compute 8XL image (which has 88 ECUs of oomph) with extra memory and some SSDs attached to it.
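For the curious, requesting one of these slices once they go live should look no different from launching any other EC2 instance. Here is a minimal sketch using Python and the boto3 SDK; the AMI, key pair, and region are placeholders, and it assumes AWS sticks with the cr1.8xlarge designation Vogels used on stage.

    # Minimal sketch: launch a single Cluster High Memory (cr1.8xlarge) instance.
    # The AMI ID, key pair, and region are placeholders -- swap in your own.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.run_instances(
        ImageId="ami-xxxxxxxx",      # placeholder AMI
        InstanceType="cr1.8xlarge",  # 240GB of memory, 2 x 120GB SSD
        KeyName="my-keypair",        # placeholder key pair
        MinCount=1,
        MaxCount=1,
    )

    print(response["Instances"][0]["InstanceId"])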

"If you run any app with large-scale in-memory processing, this is the instance type you want to use," said Vogels of the high-memory virtual slice.

The second new instance is also an extreme one, but one that's heavy on the disk drives and moderately heavy on the virtual memory. The High Storage instance, or hs1.8xlarge by feature designation, has 117GB of virtual memory with two dozen 2TB drives for a total of 48TB of capacity associated with it. Given that it is an 8XL derivative, hs1.8xlarge probably has 88 ECUs of performance as well – but again, AWS has not yet said.

This one is obviously aimed at big data/analytics jobs, and Vogels said it was particularly well-suited as an instance on which to build a virtual cluster to run Amazon's Elastic MapReduce service, which runs the Hadoop data muncher as a service.
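If you fancy trying that yourself, spinning up an EMR cluster on the storage-heavy slices should be the same job-flow call EMR already exposes, just pointed at the new instance type. The boto3 sketch below is hedged accordingly: the EMR release label, IAM roles, and node counts are placeholders.

    # Hedged sketch: an Elastic MapReduce cluster whose core nodes are the
    # storage-heavy hs1.8xlarge type, putting HDFS on the 24 local 2TB drives.
    import boto3

    emr = boto3.client("emr", region_name="us-east-1")

    cluster = emr.run_job_flow(
        Name="big-data-muncher",
        ReleaseLabel="emr-x.y.z",                # placeholder EMR release
        Instances={
            "MasterInstanceType": "m1.xlarge",   # placeholder master node
            "SlaveInstanceType": "hs1.8xlarge",  # 117GB memory, 48TB of disk
            "InstanceCount": 5,                  # 1 master plus 4 core nodes
            "KeepJobFlowAliveWhenNoSteps": True,
        },
        JobFlowRole="EMR_EC2_DefaultRole",       # assumed default IAM roles
        ServiceRole="EMR_DefaultRole",
    )

    print(cluster["JobFlowId"])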

AWS did not say when these new EC2 instance types would be available, or at what price.

Save everything, waste nothing

One of Vogels' commandments is to put everything – and he means everything – into log files, and to save all of the data you generate.

You would expect someone who is peddling raw capacity and services to chew on it to say that, of course. But his point goes beyond self-interest: you can only analyze the data you keep and the things you measure.

Amazon has its Relational Database Service (based on MySQL, Oracle, and SQL Server databases) for transactional data, the DynamoDB NoSQL data store for certain kinds of unstructured data, the Elastic MapReduce service and its Hadoop Distributed File System (HDFS) store, and the S3 object storage and EBS block storage services as well.

On Wednesday, AWS launched the Redshift data warehousing service, which will store data in yet another container on the AWS cloud. Companies have their own data sources beyond this. And moving all this data around, and converting it from one form to another, is a real pain in the neck.

And to that end, AWS is rolling out a new service called Data Pipeline to automagically move data between the various Amazon data services and various homegrown and third-party analytics tools.

The AWS Data Pipeline service automates data movement and reporting

The service will also connect to third-party data sources and link into data that is stored in your own systems if you run analytics out on the AWS cloud. The Data Pipeline details were a little sketchy, but Vogels said it was an orchestration service for data-driven workflows that would be pre-integrated with Amazon's various data services.
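Details were thin on the ground, but judging by how AWS exposes its other services, driving Data Pipeline from code would presumably look something like the boto3 sketch below: register a pipeline, push a definition into it, and switch it on. The objects in the definition (a DynamoDB source, an S3 sink, and an EMR activity gluing them together) are illustrative placeholders, not a documented recipe.

    # Hedged sketch: create, define, and activate a Data Pipeline with boto3.
    # The pipeline objects below are illustrative placeholders.
    import boto3

    dp = boto3.client("datapipeline", region_name="us-east-1")

    pipeline_id = dp.create_pipeline(
        name="dynamodb-to-s3-summary",
        uniqueId="dynamodb-to-s3-summary-001",   # idempotency token
    )["pipelineId"]

    dp.put_pipeline_definition(
        pipelineId=pipeline_id,
        pipelineObjects=[
            {"id": "Default", "name": "Default",
             "fields": [{"key": "scheduleType", "stringValue": "ondemand"}]},
            {"id": "Source", "name": "Source",
             "fields": [{"key": "type", "stringValue": "DynamoDBDataNode"},
                        {"key": "tableName", "stringValue": "clickstream"}]},
            {"id": "Sink", "name": "Sink",
             "fields": [{"key": "type", "stringValue": "S3DataNode"},
                        {"key": "directoryPath",
                         "stringValue": "s3://my-bucket/summaries/"}]},
            {"id": "Summarise", "name": "Summarise",
             "fields": [{"key": "type", "stringValue": "EmrActivity"},
                        {"key": "input", "refValue": "Source"},
                        {"key": "output", "refValue": "Sink"}]},
        ],
    )

    dp.activate_pipeline(pipelineId=pipeline_id)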

Here's the workflow that Matt Wood, chief data scientist at AWS, showed off in a live demo:

Screenshot of AWS Data Pipeline using Hadoop to move data to S3

In this particular case, a slew of data stored in DynamoDB was pulled out by the Elastic MapReduce service, which used Hadoop's Hive query feature to dump summary data into the S3 object store. That summary data was then made available to analytics applications, which could kick out reports based on it.
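For flavour, the Hive end of a job like that typically looks like the sketch below: expose the DynamoDB table and an S3 location as external Hive tables, then write the summary from one into the other. The table names, columns, and bucket are invented for illustration; the storage handler class is the one EMR's DynamoDB connector provides.

    # Hedged sketch of the Hive script behind such a job. In practice this
    # would be saved to S3 and submitted to the EMR cluster as a Hive step.
    # Table names, columns, and the bucket are invented for illustration.
    HIVE_SUMMARY_SCRIPT = """
    CREATE EXTERNAL TABLE ddb_clicks (user_id string, clicks bigint)
    STORED BY 'org.apache.hadoop.hive.dynamodb.DynamoDBStorageHandler'
    TBLPROPERTIES ("dynamodb.table.name" = "clickstream",
                   "dynamodb.column.mapping" = "user_id:user_id,clicks:clicks");

    CREATE EXTERNAL TABLE s3_click_summary (user_id string, total_clicks bigint)
    ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
    LOCATION 's3://my-bucket/summaries/';

    INSERT OVERWRITE TABLE s3_click_summary
    SELECT user_id, SUM(clicks) FROM ddb_clicks GROUP BY user_id;
    """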

It is not clear how Data Pipeline will do format conversions between all of the data sources, or if Amazon intends to charge for the pipes. It could turn out that Amazon gives the service away for free or for a nominal fee, with the expectation that companies will make heavier use of EC2 and the various data stores if the movement of data between them is automated.

That leaves the Commandments of Cloud Computing, which Vogels brought down from on high in the Seattle Space Needle. They are:

  • Thou shalt use new concepts to build new applications
  • Decompose into small, loosely coupled, stateless building blocks
  • Automate your application and processes
  • Let business levers control the system
  • Architect with cost in mind
  • Protecting your customer is the first priority
  • In production, deploy to at least two availability zones
  • Build, test, integrate, and deploy continuously
  • Don't think in single failures
  • Don't treat failure as an exception
  • Entia non sunt multiplicanda praeter necessitatem (aka Occam's razor)
  • Use late binding
  • Instrument everything, all the time
  • Inspect the whole distribution, not just the mean
  • Put everything in logs
  • Make no assumptions, and if you can, assume nothing

If you understood all of that, then you are ready to launch your startup on AWS and go chase some VC money. And even if you didn't understand it all, don't let that stop you. ®
