C'mon, edgelords: The APIs are ours to command – do we do good or evil?

Edge computing is awesome and scary

Edge computing is the pendulum swinging away from big, centralised servers and back towards distributed systems. It's the idea that, instead of centralising all of our workloads in big clouds, we bring the compute closer to the devices requesting it.

The idea is that edge computing unlocks whole new classes of workloads for which the latency of the cloud is simply too high. The power of the edge is greater than the sum of its parts.

The driverless car is held up as a good use case for edge computing: not just the vehicle as a device, but the vehicle festooned with devices. These cars will scan their surroundings and communicate with one another, as well as with countless other machines emitting beacons of various types.

While driverless cars will have a limited capacity to analyse problems in real time, there are real-world limits to how much compute power we can practicably and efficiently cram into them.

If all cars in a given area had the ability to stream some or all of their data to a physically proximate server farm, then that server farm could greatly enhance the decision-making capabilities of those vehicles.

Let's say we placed a server farm with enough oomph to handle all the cars within a square kilometre in a difficult patch of urban landscape. It might have narrow roads, lots of blind spots, sharp corners and pedestrians randomly darting into traffic: the sort of high-collision area that regularly poses a challenge for humans.

A driverless car isn't going to do much better than a human; it can't see around corners all that much better than we can. The driverless car around the corner, however, can see what's going on in its vicinity. And multiple cars around multiple corners provide enough data to know what's what, what's where and start making predictions about the vectors of all the moving pieces. Maybe we even throw in some extra sensors on lampposts and the like to make life easier.

Cloud data centres can be tens or even hundreds of milliseconds away. At 60km/h, 100ms of latency is 1.67 metres of travel. That's more than enough to kill someone. The speed of light is unforgiving that way. Place a local number cruncher in there and your 100ms round trip becomes 5-10ms.
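For the arithmetic-inclined, here's a quick back-of-the-envelope sketch in Python of how far a vehicle travels while waiting on a round trip. The speeds and latencies are the ones above; nothing here reflects any particular vendor's numbers:

```python
# How far does a car travel while waiting on a round trip?
def distance_during_latency(speed_kmh: float, latency_ms: float) -> float:
    """Metres travelled at speed_kmh during latency_ms."""
    metres_per_second = speed_kmh * 1000 / 3600
    return metres_per_second * latency_ms / 1000

print(distance_during_latency(60, 100))  # ~1.67m: a 100ms cloud round trip
print(distance_during_latency(60, 5))    # ~0.08m: a 5ms edge round trip
```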

There are fewer accidents. This is a good thing.

Of course, edge computing isn't just about cars. There's a strong Big Brother camp, too. They've been popping up at conferences promising to track patients in hospitals, children in schools, and prisoners in jails. I've even seen a stealth-mode edge startup proposing to build a rapidly deployable fleet of peekaboo drones: drones that can see through walls using Wi-Fi and mobile signals, and that will likely even emit some signals of their own.

With enough time to build and train Bulk Data Computational Analysis (BDCA) algorithms, those drones might well be able to start making predictions about what those distorted reflections in the various radio signals we call people are doing.

Enter developers

Cloud is becoming the way of doing business. When you disregard the sysadmin-facing Infrastructure as a Service (IaaS) and the user-facing Software as a Service (SaaS) portions of cloud computing today, what you are left with is Platform as a Service (PaaS). PaaS provides a pre-canned environment for developers to code in with no need for sysadmins. Alongside PaaS we have proprietary public cloud services ranging from serverless to BDCA tools like machine learning, artificial intelligence and so forth.

Today's modern applications work something like this: a script instantiates a series of PaaS environments. Application code is injected into these environments, creating a series of microservices. These microservices listen for input data and then either farm that data directly out to a BDCA tool and store the results, or store the data and then run it through BDCA tools. Or both. The results are then made available to humans or machines to act on.
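A minimal sketch of that pattern, assuming entirely hypothetical endpoints (no real vendor's API is being described here):

```python
import requests  # assumes the requests package is available

# Hypothetical endpoints standing in for a BDCA tool and a results store.
BDCA_ENDPOINT = "https://bdca.example.com/v1/analyse"
STORE_ENDPOINT = "https://store.example.com/v1/results"

def handle_input(payload: bytes) -> None:
    # Farm the raw data out to the BDCA tool...
    analysis = requests.post(BDCA_ENDPOINT, data=payload, timeout=10).json()
    # ...then store the results for humans or machines to act on.
    requests.post(STORE_ENDPOINT, json=analysis, timeout=10)
```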

These BDCA tools essentially fill a similar role to code libraries. Except instead of providing a simple function to convert XML to JSON, they provide voice recognition or computer vision as a service, or stitch together a dozen different enterprise identity services to provide Single Sign-On (SSO).
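To illustrate the contrast, assuming a made-up speech-to-text service: the library-era helper runs locally, while its BDCA-era equivalent is a network call:

```python
import json
import xml.etree.ElementTree as ET
import requests

def xml_to_json(xml_text: str) -> str:
    """The old-style library helper: a simple local conversion."""
    root = ET.fromstring(xml_text)
    return json.dumps({root.tag: root.text})

def transcribe(audio: bytes) -> str:
    """The BDCA-era equivalent: the 'library' now lives behind an API."""
    # speech.example.com is a hypothetical endpoint, not a real service.
    resp = requests.post("https://speech.example.com/v1/transcribe",
                         data=audio, timeout=30)
    return resp.json()["text"]
```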

The edge builds on this same idea. Instead of a workload sitting in the cloud calling BDCA tools, there would be a series of latency-sensitive services exposing APIs for consumption by proximate devices. Your driverless car, on-premises patient-tracking system or peekaboo drone would take the place of that collection of cloudy PaaS-based microservices.
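A rough sketch of what consuming such a latency-sensitive edge API might look like from the device side, with an assumed local edge node and an invented /v1/hazards endpoint:

```python
import requests

EDGE_NODE = "http://edge.local:8080"  # assumed proximate edge node, milliseconds away

def nearby_hazards(lat: float, lon: float) -> list:
    # Fail fast: if the answer takes longer than ~50ms it's already stale.
    resp = requests.get(f"{EDGE_NODE}/v1/hazards",
                        params={"lat": lat, "lon": lon},
                        timeout=0.05)
    return resp.json().get("hazards", [])
```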

The point of all this, however, is that the edge is envisioned as a series of API-delivered services. Most edge services are likely to be so heinously complex that, by the time they're ready to go, even the developers who wrote and trained the algorithms won't truly know how they work. This is already a problem.

These services will be designed by experts. They will take years to train to commercial viability. In the short term, the big money is going to be in mostly benign applications. And the infrastructure on which those services live will be built out and owned by existing tech companies.

Yes, the peekaboo drones will be creepy and intrusive, but getting the services to support them added to existing edge infrastructure is relatively easy to envision. Google, Facebook and the other cloudy titans are already under immense pressure to use their technology to help root out extremism and hate.

But given the fight the tech industry is putting up over the blurry line between combating extremism and suppressing dissent, it's a bit of a stretch to believe any of them would enable an edge service that lets peekaboo drones hunt down whoever has been deemed delinquent.

We are already standing on the edge

The edge is already in our workplaces and our homes. I mentioned vehicles and drones, but we also have Google's Nest and even Amazon's Alexa as early manifestations. With Nest, various Internet of Things devices report back to a central device. This device does some local decision making where real-time decisions matter and it farms the rest of the number crunching out to Google's cloud.

Alexa is a little bit more primitive. It doesn't (yet) control a fleet of IoT devices in our homes, but instead contains just enough intelligence to make decisions about how to interact with us. The real-time decision Alexa makes might be nothing more complicated than "send these sounds to the mothership for processing", but it's still very much an extrusion of Amazon's proprietary cloud services.

The API presented to us is the voice interface. The latency-sensitive portion of that API may consist of nothing more than "Hello, Alexa", but for it to be useful to humans, and not fall into some uncanny valley of response times, it can't stream everything we say to the cloud all the time and wait for Amazon to decide whether or not we're trying to address it.
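As a toy sketch of that split, with text fragments standing in for audio (real wake-word systems match sound, not strings, and these names are illustrative):

```python
from typing import Iterable

WAKE_PHRASE = "hello, alexa"  # as characterised above

def wake_word_detected(fragment: str) -> bool:
    # Stands in for a small on-device model: cheap, local, low latency.
    return WAKE_PHRASE in fragment.lower()

def listen_loop(fragments: Iterable[str]) -> None:
    for fragment in fragments:
        if wake_word_detected(fragment):
            # Only now does anything stream to the mothership.
            print("streaming to the cloud for processing...")
        # Everything else stays on the device.

listen_loop(["what's the weather", "Hello, Alexa, play a song"])
```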

In Alexa's case, humans are the machines it is receiving input from. Instead of helping us see around corners and avoid pedestrians, Alexa helps us order takeout or Rickroll our friends. Because of course that would be one of the first things we did with this technology.

I have become

Some argue that edge computing is the true beginning of The Singularity. Machines already know things we'll never understand. These individuals view the edge as a missing intermediate link in distributed machine learning: one that bridges the gap between low-powered, real-time decision-making capability and the big number-crunching capacity that centralised, batch-job-style clouds can offer.

These are the sorts of people who talk a lot about Elon Musk's Neuralink, a concept of which I've only seen one decent explanation.

The truth is, we don't know what the edge will become, because we are the ones who will make that choice. The edge could enable machines to make our societies more efficient and capable than we can even imagine today.

Developers hold all the cards here. The services that make up the edge can't be made without developers, and for the time being those services will also require developers to take advantage of them. While progress is being made on BDCA tools that can write BDCA algorithms on their own, that's a while from practical utility yet.

We can no more prevent the creation and exploitation of edge computing than we can unsplit the atom. But we're the ones who make the choice about what to do with it. The APIs are yours to command. What will you ask them to do? ®
