
We sent a Reg vulture to RSA to learn about the future of AI and security. And it's no use. It's bots all the way down

We'd call this blue-sky thinking but the sky is thick with swarms of drones

AI algorithms will, in the future, form and direct swarms of physical and virtual bots that will live among us... according to this chap speaking at 2019's RSA conference in San Francisco on Tuesday.

Thomas Caldwell, founder and president of the League of AI, a consultancy focused on security and robots, talked about the rapid rise of machine learning tools that can be used to develop systems with so-called swarm intelligence. He described this as a group of computer-controlled agents with the ability to “collaborate, communicate, and reach a consensus” in order to complete a specific task.
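
To make that a bit less abstract: consensus-by-gossip can be sketched in a few lines of Python. This toy is ours, not Caldwell's, and every name and number in it is invented; a handful of agents start with noisy readings and average their way to agreement.

    # Toy swarm-consensus sketch: agents repeatedly average their estimates
    # with a random peer until the group converges on one value.
    # Purely illustrative; nothing here is from Caldwell's talk.
    import random

    NUM_AGENTS = 8
    ROUNDS = 200

    # Each agent's noisy initial estimate of some shared quantity,
    # say "how suspicious is this file, on a scale of 0 to 1?"
    estimates = [random.uniform(0.0, 1.0) for _ in range(NUM_AGENTS)]

    for _ in range(ROUNDS):
        # Pick two agents at random and have them split the difference.
        i, j = random.sample(range(NUM_AGENTS), 2)
        midpoint = (estimates[i] + estimates[j]) / 2
        estimates[i] = estimates[j] = midpoint

    print("Consensus value: %.3f" % (sum(estimates) / NUM_AGENTS))
    print("Spread after gossiping: %.6f" % (max(estimates) - min(estimates)))

Real swarm coordination has to layer on lossy communication and misbehaving members; the averaging loop is just the kernel of the idea.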

This digital collective smartness could be used to launch a wide offensive attack, or defend strategically against an incoming assault, physical or virtual: the agents might exist as real, physical robots, or virtual entities within devices.

“AI bots can live inside a robot, drone, be virtual in a cloud, or be resident on an edge device like a Raspberry Pi or an Nvidia TX2,” he gushed. “They could be the brains of new-age security management tools or they could apply attack vectors.”


He went on to describe a future in which robots could band together and work alongside humans. These pack droids could patrol schools or accompany police officers. They could be decked out with cameras that identify suspicious items, such as guns and knives, or with microphones that listen out for gunshots and tell-tale screams. Sending a swarm of these machines into grim events like school shootings, to locate or perhaps even protect kids, might be a safer option than sending in humans.

When it comes to virtual bots, Caldwell pictured scenarios in which they could infiltrate systems by masquerading as benign software to evade detection, springing into action when needed. They could, as a group, attack AI models by poisoning input data, or training datasets, to trick the neural networks into performing the wrong actions. The agents could also band together to fuzz code for exploitable vulnerabilities, working in parallel and at scale.

(You can, of course, do this right now with regular software: it doesn't need to operate as a swarm, unless you're seeking to achieve some kind of huge parallel scale, perhaps.)
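
For a flavor of the parallel-fuzzing angle, here is a back-of-the-envelope Python sketch (ours, not anything demoed at RSA) in which a pool of worker processes mutates a seed input and hurls the results at a deliberately buggy stand-in parser. The parser, the seed, and the one-byte-flip mutation strategy are all invented for the example.

    # Embarrassingly parallel fuzzing sketch: workers mutate a seed input
    # and report any input that crashes the target function.
    import random
    from multiprocessing import Pool

    SEED_INPUT = b"HDR:hello"

    def buggy_parser(data: bytes) -> None:
        # Stand-in for real code under test: chokes on one particular byte.
        if len(data) > 4 and data[4] == 0xFF:
            raise ValueError("parser blew up")

    def mutate(data: bytes) -> bytes:
        # Crudest possible mutation: flip one random byte.
        buf = bytearray(data)
        buf[random.randrange(len(buf))] = random.randrange(256)
        return bytes(buf)

    def fuzz_once(_: int):
        candidate = mutate(SEED_INPUT)
        try:
            buggy_parser(candidate)
        except Exception as exc:
            return candidate, repr(exc)  # a crash worth triaging
        return None

    if __name__ == "__main__":
        with Pool() as pool:
            for hit in pool.imap_unordered(fuzz_once, range(10_000)):
                if hit:
                    print("Crash found:", hit)
                    break

Swap the toy parser for a real target and the byte-flipper for coverage-guided mutation, and you have, roughly, what fuzzers like AFL industrialized years ago, no swarm required.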

Researchers, we were told, are interested in creating roving network-inspecting bots that can study past cyber-attacks, identify patterns in the intruders' data accesses and methods, and use that knowledge to detect future network compromises. The agents would also, presumably, learn what's normal on the network to avoid falsely flagging up harmless connections, users, and applications.
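
As a crude illustration of the learn-what's-normal part, here's a short Python sketch that baselines some invented connection records with scikit-learn's off-the-shelf IsolationForest anomaly detector, then flags an outlier. The features and figures are made up; nothing here came from the talk.

    # Baseline "normal" traffic, then flag sessions that don't fit.
    # IsolationForest is a real scikit-learn class; the data is invented.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Per-session features: [bytes transferred, duration (s), distinct ports]
    normal_sessions = np.column_stack([
        rng.normal(50_000, 5_000, 1_000),  # typical transfer sizes
        rng.normal(30, 5, 1_000),          # typical session lengths
        rng.integers(1, 4, 1_000),         # most sessions touch few ports
    ])

    detector = IsolationForest(contamination=0.01, random_state=0)
    detector.fit(normal_sessions)

    # One ordinary-looking session, one that smells like exfiltration.
    new_sessions = np.array([
        [52_000, 28, 2],
        [5_000_000, 600, 40],
    ])
    print(detector.predict(new_sessions))  # 1 = normal, -1 = flagged

The hard part, as ever, is the false-positive rate: a model that cries wolf on every big backup job gets switched off.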

Caldwell described the idea of “an oracle, able to look into the past, that analyzes the present to predict the future.” Swarms of virtual bots patrolling computer systems could look out for unexpected or previously seen malicious behavior, to clock and thwart miscreants sneaking in, or rogue employees seeking to cause damage.

These swarms of software agents might, one day, even be able to communicate coherently with humans via Slack, Alexa, or other mobile apps.
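
If you're wondering what talking to humans via Slack amounts to in practice, the unglamorous answer is usually an incoming webhook: an HTTP POST with a JSON payload. A minimal sketch, with a placeholder URL (you'd mint your own in Slack's admin console) and an invented alert message:

    # Post a bot's alert to a Slack channel via an incoming webhook.
    # The URL below is a placeholder, not a real endpoint.
    import json
    import urllib.request

    WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"

    def post_to_slack(message: str) -> None:
        payload = json.dumps({"text": message}).encode("utf-8")
        request = urllib.request.Request(
            WEBHOOK_URL,
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(request)

    post_to_slack("Swarm alert: unusual outbound transfer flagged on host db-7")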

Your imagination is the limit, really. Just use your imagination. Because, right now, it's admittedly mostly imagination. ®
