Artificial intelligence is good for at least one thing – making hardware important again

Latest compute craze turns the tide on system trends

Red Hat Summit If you're cynical about artificial intelligence, here's one ray of sunshine for you: it's got engineers around the globe focusing on improving number-crunching and computing performance right down to the silicon level.

Rather than throwing thousands upon thousands of generic boring servers at problems, techies are now doubling down on accelerating particular workloads – such as training neural-network models and AI inference – with faster processors, highly customized chips, FPGAs, GPUs, and similar technology.

Daniel Riek, senior director of AI for Red Hat, said on Wednesday that the machine-learning software explosion has forced developers, and hardware and system designers, to go back and look at boosting per-chip throughput rather than scaling out platforms over warehouses of boxes.

"Performance wasn't a key differentiator, it was scale that mattered. We traded off performance for scale," Riek said of the days before enthusiasm in AI was recently rekindled.

"With AI, we are back to where performance actually matters. Suddenly there is a direct business impact, and we have people talking about instruction sets and processor speeds again."

For Red Hat, the challenge with AI was two-fold: integrating machine-learning technology within its own development tools, while, at the same time, adding support for AI-based applications and platforms to its enterprise IT offerings.

One of the key findings, Riek told folks at the Red Hat Summit in San Francisco, is that AI projects start off as unconventional, but as they move closer to deployment, their needs and requirements become just like those of any other product or service. And those needs have to be met as per usual.

"We also observe that it is very loose and fast moving early on," he said. "When you get to the point where it is business critical, suddenly you have the same requirements for stability and security."

Riek said once artificially intelligent and trained programs are ready for deployment, they should be treated not as tools, but rather as something akin to a newbie colleague.

"Treat the AI as a team member," he recommended. "Look at it as junior team members we train. Obviously they are very special team members in narrow areas, but it goes beyond a tool."

In other words, keep teaching them, anticipate and deal with errors, and improve them over time, just like you'd coach and develop a coworker. ®
