
Democratic AI in the Hybrid Cloud

AI for everyone

Sponsored You’ve got an application or workflow that needs to do lots of repetitive work of the kind people used to do by hand. Or you want to branch out into some new area of digital business or customer experience.

It might just be a perfect fit for an artificial intelligence algorithm, perhaps using a form of machine learning or a rules-based system.

AI is a long-promised technology, but it has been held back by, among other factors, a lack of the raw computing power required to process vast amounts of data and crunch complex algorithms.

It’s only now, thanks to the cloud, that these resources are becoming available, as Intel and service providers deliver this kind of power through their massive server farms. The barriers to entry and the cost of setup have never been lower.

Developers can now, for example, take advantage of cloud-based services that help them get up to speed with AI. In September, Intel launched its Nervana DevCloud, which gives about 200,000 developers access to the hardware and software they need to create cloud-based AI applications. The service is based on the company’s Xeon processors, which it is promoting along with its FPGA devices.

With so many options, Karl Freund, senior analyst for high performance computing and deep learning at analyst firm Moor Insights and Strategy, has identified two routes to AI in the cloud.

“The first thing someone has to do when they’re considering an AI project is to determine whether the scope of that project requires the development of a custom neural network or model, or whether you can use existing neural networks which have already been trained for you,” he says.

A custom neural network covers those times when you need more flexible AI chops for your business case. For example, detecting damage on car door panels for insurance applications will probably need its own custom neural network.

The first step in creating a custom cloud-based neural network is selecting a framework. This is a collection of software libraries that enables you to describe what your neural network will do, and train it. “Each cloud service provider has their favourite,” says Freund.

Amazon released MXNet, while Microsoft has its Cognitive Toolkit (CNTK). Google open sourced its TensorFlow framework, which other vendors also support. IBM has its PowerAI framework for creating custom machine learning systems. Intel offers its own fast deep-learning framework, called neon, along with support for Theano, TensorFlow, Caffe and the Keras library.
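To make this concrete, here is a minimal sketch of what "describing" a neural network looks like, using the Keras library (one of those Intel supports) on a TensorFlow backend. The layer sizes and the 64x64 image input are illustrative assumptions for the car-door example, not recommendations:

```python
# A minimal sketch of describing and compiling a small image classifier
# with Keras; the architecture and 64x64 input shape are illustrative
# assumptions, not a recommendation.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(64, 64, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # binary: damaged / undamaged
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
```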

The underlying hardware used to train your cloud-based neural network matters here. While service providers have been active in supporting different tools and frameworks, Intel’s role has been an important one. The goal is simple: to tune the software for Intel’s widely used processors. Google, for example, formed a strategic alliance with Intel that includes producing an optimized version of Google’s TensorFlow framework for Intel processors, with Intel claiming performance gains of more than 70x across its architectures.

Once you have selected a framework, the next step is to train the neural network using tagged data. This supervised learning involves loading the framework with examples of positive matches for the data you’re dealing with (pictures of damaged car doors, say) and negative matches (pictures of undamaged doors).
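Continuing the hypothetical car-door sketch above, that supervised training step might look like this in Keras, assuming the tagged images are sorted into damaged/ and undamaged/ folders (an illustrative layout, not a required one):

```python
# Continuing the sketch above: supervised training from tagged images.
# The train/damaged and train/undamaged folder layout is a hypothetical
# example of positive and negative matches.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_data = ImageDataGenerator(rescale=1.0 / 255).flow_from_directory(
    "train",                # one subfolder per class: damaged/, undamaged/
    target_size=(64, 64),   # matches the model's input shape
    batch_size=32,
    class_mode="binary",
)

model.fit(train_data, epochs=10)  # iterate until accuracy is acceptable
```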

In the past, you’d have trained these models using your own equipment, installing racks of GPUs for the purpose. This quickly becomes cost-prohibitive unless AI is your core competency and you’re constantly thrashing that silicon. The hyperscale cloud providers all offer the chance to train your models directly on their systems, usually with GPU power.

Cloud service providers supply the frameworks and infrastructure resources to get you started. Amazon’s Deep Learning AMI, for example, supports MXNet, TensorFlow and Microsoft’s CNTK. Available through the AWS Marketplace, it gives you access to GPU-backed EC2 P2 instances. Behind many of these cloud-based AI services sits Intel. To pick on AWS again, the provider has optimized its deep learning engine using Intel’s Math Kernel Library (MKL) on Xeon Scalable processors for its C5 instances. Intel also underpins many other cloud-based AI services: Google, for example, was the first to launch Xeon Scalable-based cloud instances, back in February 2017.
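As a rough illustration of how little setup is now involved, here is a sketch of spinning up a GPU-backed P2 instance from a Deep Learning AMI using AWS’s boto3 library; the AMI ID, key pair and region are hypothetical placeholders you would substitute with your own values:

```python
# A hedged sketch of launching a GPU training box from a Deep Learning
# AMI with boto3. ImageId and KeyName are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical Deep Learning AMI ID
    InstanceType="p2.xlarge",         # GPU-backed P2 instance
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",            # hypothetical SSH key pair
)
print(response["Instances"][0]["InstanceId"])
```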

Training your neural net is an iterative process that may need continuous revision, and cloud providers also offer inference services. These let you use your trained neural net to process new data in the cloud. You pay for the inference processing, typically per query, as part of your account. Here, again, speed counts, and service providers have worked to tune their services on Intel hardware.
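As an example of what pay-per-query inference can look like in practice, here is a hedged sketch using Amazon SageMaker’s runtime API against a hosted model; the endpoint name and input file are hypothetical placeholders:

```python
# A sketch of pay-per-query inference against a hosted model endpoint,
# using SageMaker's runtime API as one concrete example. The endpoint
# name "damage-classifier" is a hypothetical placeholder.
import boto3

runtime = boto3.client("sagemaker-runtime", region_name="us-east-1")

with open("car_door.jpg", "rb") as f:
    result = runtime.invoke_endpoint(
        EndpointName="damage-classifier",   # hypothetical deployed model
        ContentType="application/x-image",
        Body=f.read(),
    )

print(result["Body"].read())  # the model's prediction payload
```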

Microsoft’s cloud-based AI service, for example, employs Intel Stratix 10 FPGA chips to accelerate its machine learning in a project it calls Brainwave. The project will push the company further toward real-time AI computation, it said.

Instead of training your own neural network in the cloud, you can also call canned APIs that cover some of the most commonly used AI functions. Many of these services run on Intel Xeon chips under the hood.

Microsoft has APIs covering everything from image recognition through voice and natural language to video. The video processing API handles tasks such as image stabilization and detecting and tracking faces. There are other APIs to detect emotions and identify what is in a video frame. You can test these APIs ad hoc (Microsoft includes an online console) or use them via an Azure subscription.
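Calling these canned APIs amounts to a simple REST request. Here is a rough sketch against Microsoft’s Computer Vision analyze endpoint; the region, API version, subscription key and image URL are assumptions you would swap for your own:

```python
# A rough sketch of calling Microsoft's image-recognition (Computer
# Vision) REST API; region, key and image URL are placeholders.
import requests

endpoint = "https://westus.api.cognitive.microsoft.com/vision/v1.0/analyze"
headers = {"Ocp-Apim-Subscription-Key": "YOUR_KEY"}   # hypothetical key
params = {"visualFeatures": "Description,Faces"}
body = {"url": "https://example.com/photo.jpg"}       # image to analyse

response = requests.post(endpoint, headers=headers, params=params, json=body)
print(response.json())  # tags, captions and detected faces as JSON
```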

Amazon also provides an interactive console for its own API services. Lex is a speech processing and conversational API, giving poorly trained designers the chance to land us all in chatbot hell. Polly does text-to-speech, almost but not quite getting us across the uncanny valley (we’d prefer our own version of Holly, but sadly, no).
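For illustration, a text-to-speech call to Polly is a few lines of boto3; the voice choice and sample text here are arbitrary:

```python
# A small sketch of Polly's text-to-speech call via boto3; the voice
# and sample text are arbitrary, and the audio is saved locally.
import boto3

polly = boto3.client("polly", region_name="us-east-1")

speech = polly.synthesize_speech(
    Text="Your claim has been received.",
    OutputFormat="mp3",
    VoiceId="Joanna",  # one of Polly's stock voices
)

with open("speech.mp3", "wb") as f:
    f.write(speech["AudioStream"].read())
```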

Finally in this trinity of service categories, Amazon’s Rekognition service has several underlying APIs, ranging from facial matching to identifying elements in images and creating thumbnails of each person in a shot. You can do other things too, like blocking adult content, estimating a person’s age, and even spotting celebrities. Access these by defining a function in Amazon’s Lambda serverless cloud service. It returns data about the image as a JSON file, which can be put into Elasticsearch so people can query those characteristics. Google offers a variety of APIs around video, image and text analysis, and translation. There is also an AI job search API that uses data points like location, job titles and skills to match jobs with candidates.
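A hedged sketch of the Rekognition label-detection call, with a hypothetical S3 bucket and image, shows the JSON-style output that could then be indexed into Elasticsearch:

```python
# A hedged sketch of Rekognition's label detection; the S3 bucket and
# object key are hypothetical. The response is plain JSON-style data
# that could be indexed into Elasticsearch for querying.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

labels = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "my-photos", "Name": "shot.jpg"}},
    MaxLabels=10,
    MinConfidence=80,
)

for label in labels["Labels"]:
    print(label["Name"], label["Confidence"])
```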

IBM’s Watson provides a set of APIs for users. In its conversational API, you define your own entities (‘store’, ‘blouse’, etc) and your own locations (‘London’, ‘UK’). You can then craft set responses for when users ask about these things in combination.
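A sketch of what that looks like over Watson’s conversation REST API; the workspace ID, credentials and version date are placeholders for your own Watson service instance:

```python
# A sketch of sending a user utterance to a Watson conversation
# workspace; workspace ID, credentials and version date are placeholders.
import requests

url = ("https://gateway.watsonplatform.net/conversation/api/v1/"
       "workspaces/YOUR_WORKSPACE_ID/message")

response = requests.post(
    url,
    params={"version": "2017-05-26"},
    auth=("YOUR_USERNAME", "YOUR_PASSWORD"),  # service credentials
    json={"input": {"text": "Do you have blouses in stock in London?"}},
)
print(response.json())  # detected intents and entities, plus the reply
```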

There are other APIs for visual recognition, personality insights, and training a knowledge base by studying uploaded content. The latter pulls relationships and meanings out of text and uses them to understand new content it sees.

AI and machine learning have been promised for the better part of half a century. Their realisation has been frustrated by limitations in memory and processing power.

Public cloud has put them within reach of the many, as service providers put at our disposal servers that can be spun up and down as needed to satisfy these heavy workloads.

Underpinning this, however, have been breakthrough refinements from Intel that have seen whole families of languages, frameworks and workloads optimised.

We’ve seen this before in other fields, of course, with APIs and code for the web, browsers and open source tuned to the chip.

Those technologies are now firmly established. It seems AI is next.

SPONSORED BY: Intel
