Google rents out Nvidia Tesla GPUs in its cloud. If you ask nicely, that'll be 70 cents an hour, bud
AWS, GCE price war looming?
Google will this week start offering Nvidia Tesla K80 GPU-equipped virtual machines for its Compute Engine and Cloud Machine Learning hosted services.
Under a beta program launched on Tuesday, the Chocolate Factory will let customers spin up GPU-based instances out of the us-east1, asia-east1, and europe-west1 regions using the command-line tool. You must first request access to the GPUs before diving in, though. "If your project has an established billing history, it will receive [GPU] quota automatically after you submit the request," according to Google.
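For the curious, firing one up looks roughly like this. This is a sketch based on Google's beta documentation at launch, not a verbatim command from the announcement: the instance name, machine type, and zone below are placeholders you would swap for your own.

```shell
# Hedged sketch: create a Compute Engine VM with two K80 GPU dies attached.
# The --accelerator and --maintenance-policy flags are from the beta docs;
# "my-gpu-box" and the zone are illustrative, not from the announcement.
gcloud beta compute instances create my-gpu-box \
  --machine-type n1-standard-8 \
  --zone us-east1-d \
  --accelerator type=nvidia-tesla-k80,count=2 \
  --maintenance-policy TERMINATE \
  --restart-on-failure
```

Note the TERMINATE maintenance policy: GPU instances can't be live-migrated during host maintenance, so they're stopped instead.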
The Nvidia-equipped machines have been in the works since last November, when Google first said it would be incorporating the Nvidia graphics boards into its cloud compute centers. (Google will soon also tout AMD FirePro S9300 x2 GPUs in the cloud for remote-desktop workloads.)
The hope is that customers will opt to use the Nvidia-powered cloud instances for GPU-intensive tasks rather than spend the up-front cash needed to build an on-premises cluster.
"If you need extra computational power for deep learning, you can attach up to eight GPUs (four K80 boards) to any custom Google Compute Engine virtual machine," wrote Google senior product manager John Barrus.
"GPUs can accelerate many types of computing and analysis, including video and image transcoding, seismic analysis, molecular modeling, genomics, computational finance, simulations, high-performance data analysis, computational chemistry, finance, fluid dynamics and visualization."
Transcoding and AI and yes, yes, all that stuff. Plus...
"$5.60 an hour for a CUDA based 8xGPU cloud hash cracking rig from Google https://t.co/rabPtiKdep" — Hacker Fantastic (@hackerfantastic) February 21, 2017

In addition to Compute Engine, the Mountain View ads giant says it will make the GPU-equipped VMs available to customers of its Cloud Machine Learning service, which runs TensorFlow code. As with Compute Engine, the hope is that the parallel number-crunching capabilities of Nvidia's GPU hardware will dramatically speed up code running on the machine-learning service.
GPU hardware has long been seen as a key component for AI and machine-learning projects, as the chips are particularly well suited to the highly parallel matrix arithmetic that dominates training and running today's increasingly complex neural networks.
Google says the GPU processing for both services will be priced at $0.70 per GPU die per hour in the US, and $0.77 per GPU die per hour in Asia and Europe.
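That per-die pricing is where the $5.60 figure in the tweet above comes from: a K80 board carries two GPU dies, so a maxed-out VM with four boards has eight dies on the meter. A quick back-of-envelope check:

```python
# Back-of-envelope cost check using the rates from Google's announcement:
# $0.70 per GPU die per hour in the US, $0.77 in Asia and Europe.
US_RATE = 0.70       # dollars per GPU die per hour, US regions
EU_ASIA_RATE = 0.77  # dollars per GPU die per hour, Asia/Europe regions

def hourly_gpu_cost(dies: int, rate: float) -> float:
    """Hourly GPU surcharge for a VM with `dies` K80 dies attached."""
    return dies * rate

# A maxed-out VM: four K80 boards = eight dies, US pricing.
print(f"${hourly_gpu_cost(8, US_RATE):.2f}/hour")       # → $5.60/hour
print(f"${hourly_gpu_cost(8, EU_ASIA_RATE):.2f}/hour")  # → $6.16/hour
```

Note the rate covers the GPU dies only; the underlying virtual machine is billed on top at the usual Compute Engine rates.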
Meanwhile, Amazon's AWS offers Nvidia GPU-powered instances of its own, via its P2 and G2 virtual machine families. A basic P2 instance with one Nvidia K80 GPU costs $0.90 per hour in selected regions. ®