
No humans allowed: How would a machine-centric data centre look?

This isn't a sci-fi premise: it'll influence how we segment our kit a few years down the line

Even by the most conservative estimates, the number of connected devices has surpassed the number of humans. Machines now communicate more with other machines than they do with humans.

It's therefore reasonable to assume that, eventually, data centres will exist specifically to cater to machine communication, and those data centres are likely to look quite a bit different from the ones that cater to humans.

At present, data centres serve both machines and humans. As such, the technologies that provide services to machines look a lot like those designed to serve humans. This makes sense; we're only just beginning to understand how machine needs differ from human needs. As we learn more about machine use cases, however, this will change.

Being human

Barring a Neuralink-class technological leap, human/technology interface use cases are pretty well defined. Humans are massively parallel pattern matching systems with high latency data input and glacial data output. While machines aren't (yet) as good at the pattern matching part, they absolutely crush us on the rest.

Our limitations and abilities mean that, to be considered acceptable, certain workloads have upper and lower bounds on capability. There are hard upper limits on how fast you can send us data. Our pattern matching also means there are limits on things like audiovisual latency and graphic distortion before we slip into the uncanny valley, and we are hard-wired to reject anything we feel has strayed into that territory.

This means, for example, that your data centre is doing its job if it can serve X audio streams with a maximum latency of Y and jitter no worse than Z.
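
To make that concrete, the target can be written down as a simple service-level check. Here's a minimal sketch in Python; the threshold numbers are invented placeholders standing in for X, Y and Z, not figures from any real deployment.

```python
# Minimal sketch of a human-facing service-level check.
# The thresholds below are hypothetical placeholders for X streams,
# Y latency and Z jitter; real targets come from perception research.

MAX_STREAMS = 10_000     # X: concurrent audio streams
MAX_LATENCY_MS = 150     # Y: end-to-end latency budget
MAX_JITTER_MS = 30       # Z: acceptable jitter

def meets_human_slo(streams: int, latency_ms: float, jitter_ms: float) -> bool:
    """True if the data centre is 'doing its job' for human listeners."""
    return (
        streams <= MAX_STREAMS
        and latency_ms <= MAX_LATENCY_MS
        and jitter_ms <= MAX_JITTER_MS
    )

print(meets_human_slo(streams=8_000, latency_ms=120, jitter_ms=12))  # True
```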

More than human

Machine capabilities are far more diverse than those of humans. Some machines require latencies far lower than humans do. This need is driving the creation of another layer of the internet: the edge. The edge involves placing Bulk Data Computational Analysis (BDCA) tools closer to the devices that will call on them, in order to reduce latency.

Further driving this trend is that we are not only asking machines to be as good at matching patterns as we are; we frequently ask them to be better. We would like them to see more clearly, at resolutions we cannot achieve. Or we ask them to see around corners, or to coordinate their activity with thousands of other machines.

The coordination of thousands or millions of machines will require data centres with memory at scales we haven't even dreamed of yet. Not storage, but memory. Exceptionally low-latency memory that CPUs can act on without having to drag data off slow storage. We may even turn to memory that can perform calculations without a CPU.

So it's easy to imagine a world where machines need ultra-fast, ultra-low latency data centres that perform faster and retain more information than puny humans. But what about going in the other direction, where machines need data centres that provide less than they currently receive from us?

Batch jobs that can be done overnight are one workplace example. On the home front, in the IoT world, think of jobs that can wait until there's enough spare power in the network, like scheduling the washing machine to run during a weekday rather than in the evening or at the weekend.
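
As a rough sketch of that "wait for spare capacity" idea – with the spare_capacity() signal entirely hypothetical, standing in for whatever a utility or home hub might actually expose – a deferrable job could look something like this:

```python
import random
import time

def spare_capacity() -> float:
    """Hypothetical stand-in for a grid or home-hub signal (0.0-1.0 headroom)."""
    return random.random()

def run_when_quiet(job, threshold: float = 0.7, poll_seconds: int = 60) -> None:
    """Defer a non-urgent job (say, a washing-machine cycle) until
    there's enough spare capacity in the network."""
    while spare_capacity() < threshold:
        time.sleep(poll_seconds)   # the machine is perfectly happy to wait
    job()

# Example: kick off the wash whenever the (pretend) grid has headroom.
run_when_quiet(lambda: print("Starting wash cycle"), poll_seconds=1)
```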

Data centres for machines, then, will look different. We will likely have entire data centres of bleeding-edge, ultra-high-end equipment. Equally, one could use bottom-bin components and technologies and find a market among the more sedate and laid-back machine consumers.

But what technologies are we talking about?

Event-driven computing – or "serverless" – is one movement that could put efficient, inexpensive and low-power servers to use. It relies on a listener that can spend the majority of its time idle. Depending on the workload, that listener might see intermittent low-demand events, handle bursty workloads, or just run at the red line all day long.

Even with the efficiencies containerization can bring, there's no need for listeners that experience intermittent low-demand events, or handle bursty workloads, to be on power-hungry CPUs.

ARM chips containerize just fine, and if you get the balance of sleepy listeners to bursty ones right you can run a lot more of these event-driven backends on a SeaMicro-like server, more efficiently, than you can on your standard dual-CPU Xeon.
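
For the sake of illustration, here's a minimal sketch of that sleepy-listener pattern, tied to no particular serverless platform: handlers that spend most of their time parked on an await, wake briefly when an event arrives, then go back to sleep. It's exactly this sort of mostly-idle work that packs nicely onto low-power cores.

```python
import asyncio

async def listener(queue: asyncio.Queue, name: str) -> None:
    """Event-driven handler: idle until an event arrives, do a small
    unit of work, then go back to waiting."""
    while True:
        event = await queue.get()   # sleeps here most of the time
        if event is None:           # shutdown signal
            break
        print(f"{name} handled {event}")

async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue()
    # Many sleepy listeners can share one low-power core.
    tasks = [asyncio.create_task(listener(queue, f"listener-{i}")) for i in range(3)]
    for event in ("sensor-ping", "door-open", "batch-done"):
        await queue.put(event)
    for _ in tasks:                 # one shutdown signal per listener
        await queue.put(None)
    await asyncio.gather(*tasks)

asyncio.run(main())
```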

Custom data centres

Machine-to-machine communication means we're going to see a lot more ASICs and FPGAs in cloud workloads, both in the core and at the edge. Intel has already figured this out, investing heavily in both ASIC services and a wide range of FPGAs. Microsoft is already getting in on the FPGA action.

ASICs will be useful for accelerating specific BDCA tools. If you have a mature algorithm that's been trained enough to be commercially viable, then baking it into custom silicon can help a great deal. FPGAs, however, are far more interesting.

With FPGAs-as-a-service, machines could "choose their own brain". Imagine a low-power listener running on a tiny ARM CPU that serves as a load balancer for a select set of machine clients. A machine client connects, specifies the kind of work it wants to perform, and thousands of FPGAs are reconfigured to meet its requirements.

Because the workloads handled by this cluster aren't time sensitive, the ARM-based listener informs any pending clients that the FPGA array is busy, and could they please wait their turn. When the current client gets their results the listener accepts the next client and reconfigures the cluster to suit. I suspect we've only scratched the surface of what can be done here, especially when efficiency is the focus instead of real-time results.
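
A back-of-the-envelope sketch of that "choose your own brain" flow is below. Everything in it is invented for illustration – reconfigure_fpgas() and the bitstream names are hypothetical, and a real array would take far longer to reprogram – but it shows the shape of the thing: one lightweight listener, one client served at a time, the array reshaped between jobs.

```python
from collections import deque

def reconfigure_fpgas(bitstream: str) -> None:
    """Hypothetical: push a new bitstream out to the FPGA array."""
    print(f"Reconfiguring array for '{bitstream}'...")

def run_workload(client: str, bitstream: str) -> str:
    """Hypothetical: run the client's job on the freshly configured array."""
    return f"results for {client} via {bitstream}"

def arm_listener(requests: list[tuple[str, str]]) -> None:
    """Low-power listener: queue clients, serve one at a time,
    reconfigure the array between jobs. Nothing here is time-sensitive."""
    pending = deque(requests)
    while pending:
        client, bitstream = pending.popleft()
        print(f"Array busy; {len(pending)} client(s) asked to wait their turn")
        reconfigure_fpgas(bitstream)
        print(run_workload(client, bitstream))

arm_listener([
    ("vision-node-17", "object-detection-v2"),
    ("fleet-controller", "route-optimiser"),
])
```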

Ultimately, the design of machine-centric data centres is something of a guessing game. Still, it's an interesting thought exercise, and one that will likely have implications for how we segment our on-premises data centres before the decade is finished. ®
