Cisco shoves more GPUs in AI server for deep learning, still doesn't play Crysis

More power and faster interconnect

Cisco has beefed up its C480 AI/machine learning server, adding a faster GPU interconnect and more GPU slots while losing two CPU sockets.

The C480 M5 is a 4U rackmount 4-socket Xeon modular server with up to 6TB of memory, six AMD S7150 x2 graphics chips or Nvidia Tesla P40 and Tesla P100 GPUs, up to 32 drives and 12 NVMe SSDs – 44 drives in all. It was revealed in June 2017.

Since then, NetApp and Pure Storage have introduced all-flash storage systems paired with Nvidia's DGX-1 GPU server, whose GPUs are interconnected by NVLink. Pure's system can have four DGX-1s.

Cisco has announced an updated C480 ML M5 – "ML" of course standing for machine learning – with 2 x Xeon SP CPUs, up to 3TB of memory, 8 x NVLink-connected Nvidia Tesla V100-32G GPUs, up to 24 x SAS/SATA HDDs/SSDs (up to 182TB), plus 6 x NVMe drives – 30 drives in all – and up to 4 x 100G VICs (Virtual Interface Cards).

Cisco UCS C480 M5 ML

Cisco said it is a server for deep learning – a compute-intensive form of machine learning that uses neural networks and large data sets to train computers for complex tasks.

It has been working with Hortonworks to validate Hadoop 3.1 in a design where the UCS C480 ML is part of the big data cluster, storing data on the C480 ML disk drives, and supporting Docker containers running analytic workloads such as Apache Spark and Google TensorFlow that require both CPUs and GPUs. There is also a project with Cloudera.

More C480/Cisco AI ecosystem work involves Cisco contributing code to the Google Kubeflow open-source project, which integrates TensorFlow with Kubernetes, announcing its DevNet AI Developer Center for developers, operators, and data scientists, a DevNet Ecosystem Exchange, and working with Anaconda so data scientists and developers can collaborate on machine learning using languages such as Python.

Unlike NetApp and Pure, Cisco did not provide ResNet or AlexNet test results, saying such benchmarks don't reflect the reality of real-world AI/machine learning projects. It pointed out that the storage drives in the C480 ML are inside the enclosure rather than connected across a storage network.

Cisco envisions the C480 being used in harness with C280 Hadoop systems, as well as other UCS servers, and said all UCS servers can be managed by the cloud-based Intersight systems management platform.

Cisco expects the enterprise market for AI to grow significantly. It cited a Gartner report* saying "only 4 per cent of CIOs worldwide report that they have AI projects in production" at present. However, it also quoted a 2017 McKinsey AI report that predicts:

  • 75 per cent of developer teams will include AI functionality in one or more applications in 2018
  • 40 per cent of all digital transformation initiatives will be enabled by AI by 2019
  • 100 per cent of all effective IoT efforts will be supported by AI capabilities by 2019

AI and machine learning look set to be a huge new market for server, GPU and flash storage array vendors over the next few years.

The UCS C480 ML M5 Rack Server will be available from Cisco partners in Q4 2018, along with a range of AI and ML support capabilities from Cisco Services that span analytics, deep learning and automation. ®

* Gartner Inc. Hype Cycle for Artificial Intelligence, 2018, Svetlana Sinclair and Kenneth Brant, July 24, 2018.
