
Amazon's self-driving AI robo-car – THE TRUTH (it's a few inches in size)

Cloud cash cow expands its menu with accelerator chip, machine learning stuff, and more

re:Invent Rent-a-cloud biz AWS has cooked up a melange of still more AI-oriented bit bundles to serve pay-as-you-go customers, topped with the promise of AI-enhancing hardware and a throwable self-driving car.

It already has quite a few smart code confections: Rekognition, Lex, Polly, Transcribe, Comprehend, Translate, SageMaker, and Greengrass, among others.

At its re:Invent gathering in Las Vegas today, AWS threw a handful of new flavors into the mix, among them: Elastic Inference, SageMaker Ground Truth, SageMaker RL, SageMaker Neo, Personalize, Forecast, Textract, and Comprehend Medical.

It also teased a machine-learning inference chip called Inferentia, and a small radio-controlled car called DeepRacer for executing autonomous driving models in the real world and terrifying pets.

First, the car. It's a 1/18th scale race car that's ostensibly intended to help people understand and implement reinforcement learning. It may also help with customer acquisition, retention, and spending.

The windowless vehicle sports an Intel Atom processor running Ubuntu Linux 16.04, Robot Operating System (ROS), and Intel's OpenVINO computer vision toolkit, supported by 4GB of RAM and 32GB of storage (expandable). It also carries a 4MP camera with 1080p resolution, multiple ports (4x USB-A, 1x USB-C, 1x Micro-USB, 1x HDMI), separate compute and drive batteries good for about two hours of use, integrated sensors (accelerometer and gyroscope), and 802.11ac Wi-Fi.

With a list price of $399, Amazon is offering the car for pre-order at $249. It's supposed to ship March 6 next year.

AWS expects would-be robo racers will create reinforcement learning models using SageMaker, test them in its RoboMaker cloud-based simulator, and load them into the vehicle to collide with reality. What's more, it's coordinating a DeepRacer League for competitive races at future AWS events, which could also spark interest among hackers.
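For the simulation leg of that workflow, kicking off a RoboMaker simulation job from boto3 looks roughly like the sketch below. The application ARN, IAM role, launch package and file, and S3 bucket are all hypothetical placeholders, not anything DeepRacer-specific published by AWS.

```python
# Hedged sketch: launching an AWS RoboMaker simulation job with boto3 to
# exercise a trained driving model. The application ARN, IAM role, launch
# package/file, and S3 bucket are hypothetical placeholders.
import boto3

robomaker = boto3.client("robomaker", region_name="us-east-1")

job = robomaker.create_simulation_job(
    maxJobDurationInSeconds=3600,
    iamRole="arn:aws:iam::123456789012:role/RoboMakerRole",
    simulationApplications=[
        {
            "application": "arn:aws:robomaker:us-east-1:123456789012:simulation-application/racetrack/1",
            "applicationVersion": "1",
            "launchConfig": {
                "packageName": "deepracer_simulation",  # assumed ROS package name
                "launchFile": "race_track.launch",      # assumed ROS launch file
            },
        }
    ],
    outputLocation={"s3Bucket": "my-bucket", "s3Prefix": "sim-output/"},
)
print(job["arn"])  # ARN of the running simulation job
```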

Amazon's back-office biz also plans to offer a machine learning chip called Inferentia, so named because it's intended to make inferences – the predictions from machine learning models that follow the arduous training stage – more efficient and affordable.

Inferentia, not to be confused with Life of Brian character Incontinentia, will support the Apache MXNet, PyTorch, and TensorFlow deep learning frameworks, and models that rely on the ONNX format.

The hardware should go well with the newly introduced Elastic Inference service, which lets developers attach GPU-powered inference acceleration to any Amazon EC2 instance rather than renting a costly GPU instance. AWS claims it can reduce inference expenses by as much as 75 per cent compared to a dedicated GPU.
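To give an idea of how that attachment works, here's a hedged boto3 sketch of launching a plain CPU instance with an accelerator bolted on. The AMI ID, instance type, and accelerator size are illustrative assumptions, not recommendations from AWS.

```python
# Hedged sketch: launching an EC2 instance with an Elastic Inference
# accelerator attached. The AMI ID, instance type, and accelerator size
# ("eia1.medium") are illustrative placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical Deep Learning AMI
    InstanceType="c5.large",           # ordinary CPU instance
    MinCount=1,
    MaxCount=1,
    ElasticInferenceAccelerators=[
        {"Type": "eia1.medium"}        # fractional GPU-powered acceleration
    ],
)
print(resp["Instances"][0]["InstanceId"])
```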

AWS Inferentia is intended for use cases where workloads require an entire GPU or demand low latency. The plan is to make it available in conjunction with SageMaker, EC2, and Elastic Inference.

Speaking of SageMaker, AWS' managed service for building, training, and deploying machine learning models saw some expansion.

SageMaker Ground Truth provides a way to automate the labeling of input data for data sets used in text and image classification, object detection, semantic segmentation, and user-defined tasks. The point, AWS contends, is that data labeling costs can be cut by as much as 70 per cent: the automation handles the straightforward examples and routes the ambiguous ones to humans, whether Mechanical Turk workers, private corporate workforces, or third-party contractors.
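For a flavour of the mechanics, here's a hedged boto3 sketch of starting a labeling job with automated labeling switched on. Every name, ARN, and S3 path is a placeholder, and the human-workforce configuration is abbreviated.

```python
# Hedged sketch: starting a SageMaker Ground Truth labeling job via boto3
# with automated data labeling enabled. Every name, ARN, and S3 path is a
# hypothetical placeholder, and HumanTaskConfig is abbreviated.
import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

sm.create_labeling_job(
    LabelingJobName="street-sign-labels",
    LabelAttributeName="sign-type",
    InputConfig={
        "DataSource": {
            "S3DataSource": {"ManifestS3Uri": "s3://my-bucket/input.manifest"}
        }
    },
    OutputConfig={"S3OutputPath": "s3://my-bucket/labels/"},
    RoleArn="arn:aws:iam::123456789012:role/GroundTruthRole",
    LabelingJobAlgorithmsConfig={
        # auto-labels the easy examples, sends the ambiguous ones to humans
        "LabelingJobAlgorithmSpecificationArn": (
            "arn:aws:sagemaker:us-east-1:123456789012:"
            "labeling-job-algorithm-specification/image-classification"
        )
    },
    HumanTaskConfig={
        # workforce ARN (Mechanical Turk, private, or vendor), UI template,
        # pre- and post-processing Lambdas, pricing, and so on go here;
        # omitted for brevity in this sketch
    },
)
```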

SageMaker RL augments SageMaker with pre-built reinforcement learning toolkits, such as Intel Coach and Berkeley's Ray RLlib, that can interface with simulation environments like OpenAI Gym, Amazon's own RoboMaker and Sumerian, or environments created with other RL libraries.
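The SageMaker Python SDK surfaces this through an RLEstimator. The sketch below is a rough illustration: the training script, toolkit version, role ARN, and instance details are assumptions, and parameter names vary a little between SDK versions.

```python
# Hedged sketch: kicking off a reinforcement learning training job with the
# SageMaker Python SDK's RLEstimator. The entry-point script, toolkit version,
# role ARN, and instance type are assumptions; parameter names differ slightly
# across SDK versions.
from sagemaker.rl import RLEstimator, RLFramework, RLToolkit

estimator = RLEstimator(
    entry_point="train_coach.py",   # hypothetical Coach training script
    toolkit=RLToolkit.COACH,        # Intel Coach; RLToolkit.RAY covers Ray RLlib
    toolkit_version="0.11.0",       # assumed supported version
    framework=RLFramework.TENSORFLOW,
    role="arn:aws:iam::123456789012:role/SageMakerRole",
    instance_type="ml.c5.2xlarge",
    instance_count=1,
)

estimator.fit()                     # launches the managed training job
```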

SageMaker Neo (to be open sourced soon) represents another new trick for AWS SageMaker. It compiles models, with their framework-specific instructions, into a common format so they can be executed on a device using an efficient runtime. The idea is to let SageMaker machine learning models be trained once and run anywhere, with performance optimizations tied to the underlying hardware.

As an example of Neo's efficiency, AWS claims the Neo runtime takes up only 2.5MB of storage while a framework-dependent deployment might require as much as 1GB of storage. It works with several popular frameworks and algorithms – Apache MXNet, PyTorch, ONNX, TensorFlow, and XGBoost – and Arm, Intel, and Nvidia hardware, with Cadence, Qualcomm, and Xilinx support planned.
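Compilation is driven through a SageMaker API call. The hedged boto3 sketch below uses placeholder S3 paths, a guessed input shape, and an arbitrary target device.

```python
# Hedged sketch: compiling a trained model with SageMaker Neo via boto3.
# S3 locations, role ARN, input shape, and target device are placeholders.
import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

sm.create_compilation_job(
    CompilationJobName="resnet50-for-jetson",
    RoleArn="arn:aws:iam::123456789012:role/SageMakerRole",
    InputConfig={
        "S3Uri": "s3://my-bucket/models/resnet50.tar.gz",
        "DataInputConfig": '{"data": [1, 3, 224, 224]}',  # framework-specific input shape
        "Framework": "MXNET",
    },
    OutputConfig={
        "S3OutputLocation": "s3://my-bucket/compiled/",
        "TargetDevice": "jetson_tx2",                     # one of the supported targets
    },
    StoppingCondition={"MaxRuntimeInSeconds": 900},
)
```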

AWS also introduced a managed service called Amazon Personalize, to allow developers, in their own apps, to provide the sort of baffling product recommendations presented to Amazon.com customers.

Personalize aims to simplify a process that could already be accomplished with SageMaker, the appropriate algorithms, and lots of parameter fiddling. It relies on AutoML (a technique for automatically picking and tuning the best-performing algorithm for the data, not to be confused with Google's Cloud AutoML) to build a recommendation model using a handful of API calls rather than the more arduous configuration required when there's less handholding.
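That handful of calls looks roughly like the following boto3 sketch, assuming interaction data has already been imported into a dataset group. The names and ARNs are ours, and in real life you would wait for each resource to go ACTIVE before the next step.

```python
# Hedged sketch: building and querying a Personalize recommender with boto3,
# assuming interaction data is already imported into a dataset group.
# All names and ARNs are hypothetical placeholders.
import boto3

personalize = boto3.client("personalize")
runtime = boto3.client("personalize-runtime")

# Let AutoML pick the best recipe for the imported interaction data
solution = personalize.create_solution(
    name="shop-recs",
    datasetGroupArn="arn:aws:personalize:us-east-1:123456789012:dataset-group/shop",
    performAutoML=True,
)
version = personalize.create_solution_version(solutionArn=solution["solutionArn"])

# Stand up a campaign (a hosted endpoint) once the solution version is ACTIVE
campaign = personalize.create_campaign(
    name="shop-recs-live",
    solutionVersionArn=version["solutionVersionArn"],
    minProvisionedTPS=1,
)

# Fetch recommendations for a user
recs = runtime.get_recommendations(
    campaignArn=campaign["campaignArn"], userId="user-42"
)
print([item["itemId"] for item in recs["itemList"]])
```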

Amazon Forecast is a managed service for using machine learning to create time-series forecasts of future events, the sort of thing stock market analysts do all the time, with varying degrees of success. Also making their debut are Amazon Textract, which uses machine learning to read documents and extract text without manual review (for those who want to live dangerously), and Amazon Comprehend Medical, for siccing machine learning-powered natural language processing on medical records.
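For the curious, wiring the last two together from boto3 looks roughly like this hedged sketch; the bucket, document, and region are invented.

```python
# Hedged sketch: pulling text out of a scanned document with Textract,
# then running Comprehend Medical's entity detection over it.
# Bucket name, object key, and region are hypothetical.
import boto3

textract = boto3.client("textract", region_name="us-east-1")
medical = boto3.client("comprehendmedical", region_name="us-east-1")

# Extract printed text, line by line, without manual review
doc = textract.detect_document_text(
    Document={"S3Object": {"Bucket": "my-bucket", "Name": "discharge-note.png"}}
)
text = "\n".join(
    block["Text"] for block in doc["Blocks"] if block["BlockType"] == "LINE"
)

# Pick out medications, conditions, and other medical entities
entities = medical.detect_entities(Text=text)
for entity in entities["Entities"]:
    print(entity["Category"], entity["Type"], entity["Text"])
```

Whichever you pick, you pay only for what you use, according to Amazon.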

The bill could also include court-awarded damages if there's a sufficiently grievous error. ®

In other re:Invent news... AWS is previewing a new Security Hub to manage alerts and compliance stuff.
