What you need to know from re:Invent – FPGAs-as-a-service and more
Just give it to me straight
AWS re:Invent At its re:Invent conference in Las Vegas today, Amazon Web Services tipped its hand to reveal its battle plan for invading new markets.
The Jeff Bezos cash machine has kicked out a laundry list of new services and virtual machine instances for AI applications, databases, and software that requires specialized hardware acceleration – plus low-cost virtual private servers. These are set to come online in the coming months, using many of Amazon's in-house platforms.
"We have always tried to take whatever we can use ourselves and make it exploitable for you," said AWS CEO Andy Jassy. "We want every company to have the same access to services and infrastructure as you can achieve with the largest companies in the world."
Here's a rundown of what was announced today.
Artificial intelligence on Bezos' servers
Amazon's Alexa personal assistant will be opened up to developers and organizations as a cloud service. The "Lex" platform will allow programmers to integrate speech-recognition features into their own apps, specifying commands and responses, all processed on Amazon's systems.
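Lex itself is a managed service with its own console and SDK, so there's nothing to show of its internals – but the intent-and-slot model a bot developer defines can be sketched in miniature. Everything below (the `INTENTS` table, `match_intent`, the sample utterances) is made up for illustration and is not part of the Lex API:

```python
import re

# Toy illustration of the intent/slot idea behind a Lex-style bot.
# A developer specifies intents, sample utterances, and slots to fill;
# the service matches incoming speech or text against them.
INTENTS = {
    "OrderPizza": {
        "utterance": re.compile(r"order an? (?P<size>small|medium|large) pizza"),
        "response": "Okay, one {size} pizza coming up.",
    },
    "CheckStatus": {
        "utterance": re.compile(r"where is my order"),
        "response": "Your order is on its way.",
    },
}

def match_intent(text: str) -> str:
    """Match user text against each intent's sample utterance, filling slots."""
    for name, intent in INTENTS.items():
        m = intent["utterance"].search(text.lower())
        if m:
            return intent["response"].format(**m.groupdict())
    return "Sorry, I didn't understand that."

print(match_intent("Please order a large pizza"))
```

The real service does the hard part – turning raw audio into text and handling phrasing the developer never anticipated – on Amazon's servers; the developer only supplies the intents and responses.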
In addition to speech recognition, Amazon will offer AI services for image recognition and text-to-speech with the Rekognition and Amazon Polly services, respectively. Again, all the processing is done off-site in the cloud on Amazon's computers.
Jassy says the AI services were a natural move for Amazon, given the investments it has made internally through Alexa and other tools it uses for its retail operation.
"A lot of customers don't realize the heritage Amazon has in the AI space," the bigwig said. "We have thousands of people dedicated solely to AI."
Unlucky, Digital Ocean – and welcome aboard, VMware
Meanwhile, AWS is looking to take on the likes of Digital Ocean and Linode with a cheap service for virtual private servers. Dubbed Lightsail, it will allow companies to easily configure and spin up Linux servers for internal use with strict security and access permissions.
AWS also continued its victory lap over VMware in the hybrid cloud space, bringing the hypervisor giant's CEO Pat Gelsinger on stage to talk up the virtues of AWS. Jassy then kicked out a series of on-premises offerings, including a 100PB scale-up of the Snowball office-to-cloud storage box – one so large it has to be delivered by semi-truck, under the name "Snowmobile."
For those looking to operate on a smaller scale, AWS will be adding new instances to its T micro-server, C scientific computing, R memory-intensive, and I I/O-intensive families of hosted virtual machines, each offering larger memory capacities and improved CPU performance. AWS is also working on elastic GPU instances.
FPGAs – chips that can be reprogrammed to run algorithms at high speed in silicon – will also be offered by AWS as a service for the first time. The F1 instances will let developers upload custom hardware-acceleration code to an Amazon-hosted gate array attached to a compute machine and test it all out – something Jassy believes will prove highly useful to companies looking to move into hybrid cloud setups.
Here's the spec of the FPGA instance – and there are up to eight gate arrays per instance:
- Xilinx UltraScale+ VU9P fabricated using a 16nm process.
- 64 GiB of ECC-protected memory on a 288-bit wide bus (four DDR4 channels).
- Dedicated PCIe x16 interface to the CPU.
- Approximately 2.5 million logic elements.
- Approximately 6,800 Digital Signal Processing (DSP) engines.
- Virtual JTAG interface for debugging.
- In the host connected to the FPGAs, Intel Xeon E5-2686 v4 (Broadwell) processors (2.3GHz base speed, 2.7GHz Turbo mode on all cores, and 3GHz Turbo mode on one core), up to 976GB of RAM, and up to 4TB of NVMe SSD storage.
The F1 instances can be fired up in "developer preview" form today in the US East (Northern Virginia) region, and will become generally available in 2017, we're told. Essentially, developers can spin up beefy compute nodes and have the attached FPGAs run code written in Verilog to process data at high speed in silicon. That should accelerate encryption, machine learning, and similar intensive operations.
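For a sense of what gets offloaded: bit-parallel inner loops, like the XOR step at the heart of a stream cipher, are exactly the sort of thing that, rewritten in Verilog, an F1 gate array can lay out as wide parallel logic and chew through at line rate. A software rendition of that step – purely illustrative, no AWS APIs involved – looks like this:

```python
def xor_stream(data: bytes, keystream: bytes) -> bytes:
    """XOR each data byte with a keystream byte -- the core step of a
    stream cipher. In software this runs byte by byte; on an FPGA the
    same logic can be implemented as wide parallel XOR gates in silicon."""
    return bytes(d ^ k for d, k in zip(data, keystream))

ciphertext = xor_stream(b"attack at dawn", b"\x13" * 14)
# XOR with the same keystream decrypts, recovering the plaintext.
plaintext = xor_stream(ciphertext, b"\x13" * 14)
print(plaintext)
```

A CPU handles this a word at a time; an FPGA can process an entire bus width of data every clock cycle, which is why such workloads are candidates for the F1 treatment.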
You can read more about FPGAs in the cloud over here on our sister site, The Next Platform.
And so on
Oracle was among Jassy's favorite targets: he used the introduction of a new analytics platform, dubbed Athena, plus full PostgreSQL support in AWS Aurora, to take a shot at the database giant, which he branded a "hostile" vendor.
Last but not least was a push into the IoT space, with the introduction of Lambda functions for embedded devices. Designed for serverless application setups, Lambda functions allow for certain tasks and commands to be programmed and executed by Amazon systems without the need to spin up and manage dedicated backend EC2 server instances.
Basically, you write some code that reacts to an API, your devices or apps call that remote API, your code gets run, and you don't have to worry about maintaining the underlying server – just the functions that kick off on demand.
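In practice a Lambda function is just a handler taking an event and a context object. The sketch below mimics that contract locally – the top-level invocation is our own stand-in for what AWS does on demand, and the event fields are hypothetical:

```python
import json

def handler(event, context):
    """A Lambda-style handler: a pure function of the incoming event.
    In production, AWS invokes this on demand -- there is no server
    for the developer to provision or manage."""
    name = event.get("device", "unknown device")
    reading = event.get("temperature")
    status = "overheating" if reading is not None and reading > 90 else "ok"
    return {"statusCode": 200, "body": json.dumps({"device": name, "status": status})}

# Simulate an invocation locally; in production AWS supplies event and context.
result = handler({"device": "sensor-7", "temperature": 95}, context=None)
print(result["body"])
```

The appeal for embedded kit is that the device only needs to fire an event at an API; all the backend logic lives in functions like this, billed per invocation rather than per always-on server.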
AWS really hopes industrial equipment and embedded device makers will pick this up and hook their products into Amazon-hosted apps.
"True to its history, AWS grows by a customer-driven process of evolving capabilities. This has meant that AWS constantly grows in capabilities and refinements of existing capabilities," IDC analyst Al Hilwa told The Register.
"This year, the AWS cloud gets richer with different ways to support even more diverse workloads. Each of the new instances has its uses. A clear focus this year is for compute workloads driven by the need to train machine learning models and other interesting streaming computations."
He added: "Developers will love having slices of GPU instead of paying for it all the time the app is not using it. The new capabilities for face recognition and speech processing and understanding are nicely illustrated by Alexa and will likely be popular in this important and revolutionary new style of workload. Machine learning will revolutionize over time the way we interact with devices providing joint use cases across general computing devices and the world of IoT.
"The FPGA instances will be used for highly customized compute workloads, typically using floating point numbers. Gaming and other types of testing applications are the biggest examples today, but the change versus a couple of years ago is the increase in image, video and audio-stream processing, often done in the context of preparing data for machine learning." ®