Military brainboxes ponder 'UK needs you' list of AI boffins

We're falling behind, shout Shrivenham sorts

Rise of the Machines The Ministry of Defence wants to compile a list of AI boffins with UK security clearance who can be hired to help build Britain's inevitable robotic military future.

The ministry's latest publication on artificial intelligence and the armed forces, titled Human-Machine Teaming, sets out its vision for what the Rise of the Machines could look like in practice.

While it ruled out weaponised AI, if only because of perceived problems with agreeing "a common definition" for lethal autonomous weapon systems*, the "concept note" did set out the MoD's view of how Britain is falling behind in the race to militarise self-thinking robots.

"The impact is a shift in the relative rates of innovation from defence to commercial firms with the best systems already, and remaining, in the civilian sector. Military access to the best technologies will become a challenge, except in national crisis situations," wailed the note's authors, who hail from the MoD's Development, Concepts and Doctrine Centre at Shrivenham, the Wiltshire location where the armed forces' future strategic thinkers are hothoused.

Worse, from the military point of view, the AI industry as a whole is already very wary of inviting comparisons with the Terminator film franchise by being seen to get into bed with the military, as the document's authors observed:

"Some Western commercial entities have publicly declared policies stating they will not contract with defence or security agencies which may compound the challenges facing the MoD. This is in stark contrast to other states which have enshrined access rights to expertise, technology and data in their national legislation."

The solution to this problem is to be "innovative to secure access to subject matter expertise," as well as nurturing "sufficient in-house knowledge and understanding to generate intelligent customer capabilities".

Translated into English, this means making sure the MoD's people understand what AI can realistically do before signing away large chunks of the defence budget on an impossible (but commercially lucrative) contract.

More explicitly, the report suggested: "[T]he MoD could maintain a register of security cleared UK nationals with AI and robotics skills." One can easily imagine such people then being gently leaned on in times of national crisis, in much the same way that certain cybersecurity folk in the commercial world also hold clearances from GCHQ for working on things that the spy agency deems to be in its interests.

Otherwise, the main benefits of AI to the military that the MoD is prepared to admit publicly, for now, are in information processing and in highlighting relevant material to the personnel who need to act on it – surveillance imagery analysis is the obvious use case. Rather than waste man-hours gazing at CCTV footage, one can set off a suitably trained machine learning tool or 'AI' to pore through it and flag up anything of interest – a commercial capability that already exists today at companies such as the US's Insitu, makers of surveillance drones.
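
For the curious, the general pattern is simple enough to sketch. The snippet below is a minimal, purely hypothetical illustration – the detect_objects function stands in for whatever trained model a vendor actually ships, and none of this is Insitu's real pipeline:

```python
# Minimal sketch of automated footage triage: score each frame with a
# trained detector and flag the interesting ones for a human analyst.
# detect_objects() is a stand-in for a real model - purely hypothetical.

from dataclasses import dataclass


@dataclass
class Detection:
    label: str         # e.g. "vehicle", "person"
    confidence: float  # model's confidence, 0.0 to 1.0


def detect_objects(frame: bytes) -> list[Detection]:
    """Placeholder for a trained ML detector returning labelled detections."""
    return []  # a real implementation would run the model here


def triage(frames: list[bytes], threshold: float = 0.8) -> list[int]:
    """Return the indices of frames worth a human analyst's attention."""
    flagged = []
    for i, frame in enumerate(frames):
        if any(d.confidence >= threshold for d in detect_objects(frame)):
            flagged.append(i)  # queue this frame for human review
    return flagged
```

The point being that the machine does the tedious watching, and a human only sees the handful of frames that cross the confidence threshold.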

Additionally, the technically minded authors set out a proposed high-level system architecture for a future AI-enabled army battalion headquarters:

"The whole system could be built on a federated, disaggregated and self-organising peer-to-peer command, control, communications, computers and intelligence (C4I) network – effectively a combat cloud. Such a system should be able to draw on reachback access to cloud-based servers, but be capable of resilient operation provided by command and control applications across a variety of in-theatre platforms. From an operator’s perspective such a system will handle user requests for information and data passage as an intelligent assistant service."

The full note can be read on the government website (PDF, 70 pages). ®

Botnote

* Take the Phalanx point-defence system. It's a sextuple-barrelled machine gun with a built-in radar and processing unit. You turn it on and it blasts away at anything that moves within the radar and guns' range, the general idea being that you bolt the complete unit to things like warships and watchtowers. Does that count as AI weaponry? What if you connected the ship's main computers to the Phalanx so the system had access to target classification data, i.e. it picked out which ones to blast into eternity rather than making like an attack dog on steroids? Such scenarios, while unrealistic, are indicative of the types of conversations being had around the banning of weaponised AI. A toy sketch of the distinction follows below.
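
To make that distinction concrete, here is a toy sketch – all names invented, and real CIWS logic is obviously nothing this simple. The first mode engages anything in range; the second consults classification data first:

```python
# Toy illustration of the footnote's distinction, with invented names.
# Mode 1: engage anything in range (the "attack dog" behaviour).
# Mode 2: consult target classification data before engaging.

def engage_anything(contacts_in_range: list[str]) -> list[str]:
    """Standalone mode: every radar contact in range gets engaged."""
    return contacts_in_range


def engage_classified(contacts_in_range: list[str],
                      classifications: dict[str, str]) -> list[str]:
    """Networked mode: engage only contacts classified as hostile."""
    return [c for c in contacts_in_range
            if classifications.get(c) == "hostile"]
```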
