
AWS shovels compute smarts into Snowball Edge. How about piling it into a Stack, eh?

Transient-use edge data collector and processor upgraded

Amazon has slipped some extra compute options into its Snowball Edge data transfer box.

The original Snowball proposition was simple: pick up all your flakes of internet edge data, squash them together into a 50TB snowball, roll in more to make 80TB, and then a little more to make a 100TB one. Toss the snowball to an AWS data centre and, hey presto, your data is uploaded.


The max you can send to AWS boxed up this way is currently 100TB per device and it takes about a week for the data to be despatched by truck to Amazon and made available online.

This is much faster than squirting it up through a network pipe. One table we have seen suggests it could take 50 days to transmit 1TB of data using a 2Mbit/s link, and five days with a 20Mbit/s line. Sending the full 100TB up even that 20Mbit/s link would take around 500 days.
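If you fancy checking those sums, it's simple arithmetic. A quick back-of-the-envelope sketch in Python, assuming ideal sustained throughput and decimal terabytes:

# Rough transfer-time arithmetic behind the figures above, assuming
# ideal sustained throughput and decimal terabytes (1TB = 10^12 bytes).
def transfer_days(terabytes: float, megabits_per_sec: float) -> float:
    bits = terabytes * 1e12 * 8                   # payload in bits
    seconds = bits / (megabits_per_sec * 1e6)     # ideal line rate
    return seconds / 86400                        # seconds per day

print(transfer_days(1, 2))      # ~46 days  - "about 50" for 1TB at 2Mbit/s
print(transfer_days(1, 20))     # ~4.6 days - about five at 20Mbit/s
print(transfer_days(100, 20))   # ~463 days - roughly 500 for the full 100TB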

Amazon added compute facilities via Snowball Edge boxes with Lambda serverless functions, and then added virtual CPUs (vCPUs) running EC2 Amazon Machine Image (AMI) instances. These executed on a Xeon D CPU running at 1.8GHz and could run any combination of instances consuming up to 24 vCPUs and 32GiB of memory.
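The device exposes an EC2-compatible endpoint on the local network, so launching one of these on-box instances looks much like an ordinary API call aimed at the device rather than at an AWS region. A rough boto3 sketch – the endpoint address, region label, AMI ID and instance size are illustrative placeholders, not tested values:

import boto3

# EC2-compatible endpoint served by the Snowball Edge itself; the IP,
# port, region label, AMI ID and instance type are placeholders.
ec2 = boto3.client(
    "ec2",
    endpoint_url="http://192.0.2.0:8008",   # hypothetical device address
    region_name="snow",                     # placeholder region label
)

response = ec2.run_instances(
    ImageId="s.ami-0123456789abcdef0",      # image pre-loaded onto the box
    InstanceType="sbe1.medium",             # one of the device's sbe1 sizes
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])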

Here we are, four months later, and the Bezos machine has added yet more compute options via a Snowball Edge Compute Optimised product and a Snowball Edge Compute Optimised with GPU variant. Shall we use the SECO and SECOG acronyms? Er, maybe not.

Snowball Edge boxes ... Storage Optimised left, Compute Optimised right

The original Snowball Edge box is now called Snowball Edge Storage Optimised to unify the branding.

The two Compute Optimised boxes, which will be made available soon, have 42TB of S3-compatible storage and 7.68TB of NVMe SSD capacity. According to an AWS blog, you can run any combination of instances that consume up to 52 vCPUs and 208GiB of memory.
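That S3-compatible storage is reachable with the usual SDKs pointed at the box's local interface rather than the public cloud. Another hedged sketch – the endpoint, credentials and bucket name below are placeholders for illustration only:

import boto3

# S3-compatible interface on the Snowball Edge; endpoint, credentials
# and bucket name are placeholders, not real values.
s3 = boto3.client(
    "s3",
    endpoint_url="https://192.0.2.0:8443",  # hypothetical device address
    region_name="snow",                     # placeholder region label
    aws_access_key_id="LOCAL_ACCESS_KEY",
    aws_secret_access_key="LOCAL_SECRET_KEY",
    verify=False,                           # device presents its own cert
)

s3.put_object(Bucket="edge-ingest",
              Key="sensors/2018-07-17.parquet",
              Body=b"...locally collected data...")

for obj in s3.list_objects_v2(Bucket="edge-ingest").get("Contents", []):
    print(obj["Key"], obj["Size"])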

Snowball Edge Compute instance types

An sbe-g instance is used to gain access to the GPU. We don't yet have the physical CPU and GPU specs for the Compute Optimised boxes.

Amazon suggested using the GPU variant for real-time full-motion video analysis and processing, machine learning inferencing, and other highly parallel compute-intensive work.

In effect, these are composable server/storage systems, but they are still meant for transient use. Load up the data, process it locally, then despatch the boxes to AWS for data upload and further processing there.

+Comment

These are not continuous use Azure Stack-like boxes, which extend the Azure public cloud to cover a portion of a customer's data centre.

Snowball Edge is more like a one-time vacuum cleaner: suck up the data dust, pre-process it, and send the thing back to its depot. Surely AWS will eventually build a Snowball Edge box with a removable disk magazine – like Quantum's Mobile Storage products for autonomous vehicle testing?

Then you could carry on doing your edge IT processing AWS-style without having to do it in stop-start fashion, punctuated by box despatches to AWS. Come on, Amazon: build an AWS Stack... you know it makes sense. ®
