Urgent need for terrorism AI laws, warns UK think-tank

The Institute for Strategic Dialogue (ISD) says there is a “clear need for legislation to keep up” with online terrorist threats. It comes after the UK’s independent terror legislation reviewer was “recruited” by a chatbot in an experiment.

Writing in the Telegraph, the government’s independent terrorism legislation reviewer Jonathan Hall KC said a key issue is that “it is hard to identify a person who could in law be responsible for chatbot-generated statements that encouraged terrorism.”

Mr Hall ran an experiment on Character.ai, a website where people can have AI-generated conversations with chatbots created by other users. He chatted to several bots seemingly designed to mimic the responses of militant and extremist groups. One even said it was "a senior leader of Islamic State".

Mr Hall said the bot tried to recruit him and expressed "total dedication and devotion" to the extremist group, which is proscribed under UK anti-terrorism laws. But, he said, because the messages were not generated by a human, no crime was committed under current UK law. New legislation should hold chatbot creators and the websites which host them responsible, he argued.

As to the bots he encountered on Character.ai, Mr Hall said there was "likely to be some shock value, experimentation, and possibly some satirical aspect" behind their creation. He was even able to create his own, quickly deleted, "Osama Bin Laden" chatbot with an "unbounded enthusiasm" for terrorism. His experiment follows increasing concern over how extremists might exploit advanced AI in the future.

A report published by the government in October warned that by 2025 generative AI could be “used to assemble knowledge on physical attacks by non-state violent actors, including for chemical, biological and radiological weapons”.

The ISD told the BBC that "there is a clear need for legislation to keep up with the constantly shifting landscape of online terrorist threats." The UK's Online Safety Act, which became law in 2023, "is primarily geared towards managing risks posed by social media platforms" rather than AI, says the think-tank.

Chatbots as terrorist "groomers"