GCHQ, the UK spy agency, is preparing to use artificial intelligence to combat cyber attacks, identify state-backed disinformation, and help track criminal networks around the globe.
The move, announced on Wednesday, reflects growing anxiety that adversaries such as Russia and China are already weaponising AI technology against Britain and its allies.
While security officials are keen to distance the UK from unethical applications of machine learning — such as facial recognition and the mass creation of fake online identities in troll farms — they say they are “on the cusp” of using new algorithms to boost national security.
In an article for the Financial Times, GCHQ’s director Jeremy Fleming said “good AI” would enable spies to work in different ways, “allowing analysts to deal with ever-increasing volumes and complexity of data, improving the quality and speed of decision-making.”
He added that the applications of AI are broad, “from identifying and countering ‘troll farms’ peddling disinformation to mapping and tracking international networks that are helping to traffic people, drugs or weapons”.
For many years, spies have used simple AI functions such as translation, but security officials said more recent advances in the speed of data processing, together with increases in the availability of the data needed to train algorithms, mean GCHQ can deploy machine learning more ambitiously.
Possible applications to counter disinformation include machine-assisted fact checking to identify fabricated audio and video known as “deepfakes”, as well as automatic detection and blocking of botnets and other sources of misleading content online. AI could also be used to actively defend against cyber attacks, by helping spies find malicious software and trace it to its source, security officials said. GCHQ could also analyse complex chains of financial transactions to uncover the involvement of hostile states or terrorists.
Fleming insisted that the UK’s use of this technology would be “legal, proportionate and ethical”.
“In the hands of an adversary with little respect for human rights, such a powerful technology could be used for oppression,” he wrote. “Inaction can let those who build the technology of tomorrow — whether a country or company — project their values or interests by stealth, poor design or inadequate diversity. The consequences are hard to overstate.”
The use of AI is authorised under the Investigatory Powers Act, and is overseen by both ministers and the Investigatory Powers Commissioner’s Office.
Alexander Babuta, a research fellow in National Security and Resilience at the Royal United Services Institute, said the problem for British spies was that adversaries “will undoubtedly use AI to attack the UK, but they are not bound by the same legal and ethical framework”.
“The UK government’s requirement to develop AI capabilities is all the more pressing in the context of emerging AI-enabled security threats from hostile state actors — most notably Russia and China,” he said.
However, ever since Edward Snowden, a former contractor at the US National Security Agency, revealed GCHQ’s bulk data collection programme in 2013, the organisation has come under legal challenge from privacy organisations and battled to persuade the public that it can be trusted with data.
Megan Goulding, a lawyer at the human rights campaign group Liberty, suggested GCHQ’s need to deploy AI reflected the growing volumes of data it had been given permission to collect.
“The increased reliance on algorithms when it comes to our sensitive information should raise alarm bells over the sheer scale of snooping currently carried out on us,” she said.