The UK Safer Internet Centre (UKSIC) said children might need help to understand that what they were making was considered child abuse material. It pointed out that, while young people might be motivated by curiosity rather than an intent to cause harm, it was illegal in all circumstances under UK law to make, possess, or distribute such images, whether real or AI-generated. It said children might lose control of the material and end up circulating it online without realising the consequences of their actions. It also warned that these images could potentially be used for blackmail.
Knowledge gap
New research conducted by classroom tech firm RM Technology suggests that just under a third of pupils are using AI “to look at inappropriate things online”.

“Students using AI regularly is now commonplace,” said Tasha Gibson, online safety manager at the firm. “In fact, their understanding of AI is more advanced than most teachers’ - creating a knowledge gap. This makes keeping pupils safe online and preventing misuse increasingly difficult.

“With AI set to grow in popularity, closing this knowledge gap must become a top priority.”
‘Declothing’ app dangers
The scope for AI to turn children into generators of extreme content was demonstrated in September by an app that creates the impression of having removed someone’s clothing from a photo. It was used to create fake nude images of young girls in Spain, with more than 20 girls, aged between 11 and 17, coming forward as victims. The images had been circulating on social media without their knowledge.
So-called “declothing” apps began emerging on social media sites in 2019, often on the messaging service Telegram as automated software with AI features, also known as bots. Improvements to generative AI have allowed apps like the one used in Spain to become much more effective at creating photorealistic fake nude images.
Javaad Malik, a cyber expert at IT security firm KnowBe4, told the BBC it was becoming harder to differentiate between real and AI-generated images, a trend that was fuelling the use of “declothing” apps. “It’s got mass appeal unfortunately, so the trend is just going up and we’re seeing a lot of revenge porn-type activities where cultural or religious beliefs cause a lot more issues for victims,” he said.
With laws lagging way behind the exponential development of AI, the genie is out of the bottle and out of control. There is now a generation of children who see AI as a means to an alternative reality which they can generate for themselves.
Ah now if we could track them down we could ‘help’ a lot of potentially damaged kids??
Incidentally when I was a kid of 4/5 immediately post war - we were doin the real thing not AI thingy - like pullin our willies out and playin doctors and nurses - I recall playin doctors and patients then and whilst laying on the surround front door wall - this older girl went straight for me willie - that was the first shock I had sexually in life and been havin them ever since!! AI ? we had SD’s - [I’m a scouser - work it out?]
It’s a tricky one, isn’t it? Because a lot of the children are probably doing this in a fairly innocent, if a bit naughty, way
I can understand some of them finding it quite funny to have an app that made it look as if their best friend or teacher was in the nuddy and not really meaning anything pervy by it
They may just not realise the implications in terms of bullying, abuse, privacy and respect
Technology is moving on so fast that the code of ethics on right and wrong isn’t really keeping up so there’s no proper guidance until the worst happens
I think we should bring in new laws swiftly, as the need arises, and maybe have regular sessions about safe AI and internet use in school, adapting them to new issues as they arise
AFAIK, Metaquest is just one application (VR headset) that (probably) uses AI. By the end of this year there will be thousands (maybe millions) of applications.
The “declothing” apps that children use are also called “nudify” apps and there are a dozen or more (that I am aware of). There are also dozens (maybe hundreds) of AI-generated porn sites (about which I know little). There are, presumably, sites producing audio/visual AI-generated material for all sorts of “tastes” (about which I know nothing).
Any tool that helps parents supervise and set limits has got to be good
But you can’t supervise them all the time and forever.
So in the end it still comes down to trying to teach them what’s right and wrong, what’s healthy and appropriate, and how to use their time wisely, in the hope they’ll make good choices themselves when you’re not there in the future
It helps I think if parents can stay on top of current trends and new stuff and discuss them openly
How can they?
Children all seem to have smart phones and probably are more tech savvy than their parents.
The likes of TikTok are poisoning their minds
ChatGPT is a popular natural language processing tool driven by AI technology that allows you to have human-like conversations and much more with the chatbot. The language model can answer questions and assist you with tasks, such as composing emails, essays, and code.
ChatGPT is a powerful language model that can be used for a variety of tasks, and it has security filters and restrictions in place designed to prevent users from generating harmful or offensive content.
However, there are at least THREE widely-documented ways to bypass those filters which anyone, even a child, can use. What one child learns they all end up knowing …
That’s why I think you need to try to make them resilient in other ways, because they are tech savvy and can probably get round any IT controls you put on and you can’t always pre-empt what they’ll be exposed to
By resilient I mean the basics: knowing the difference between right and wrong, being kind and compassionate, having common sense about safety, having self-worth and the confidence to say no, and knowing they can talk to you without you being judgemental
Not easy, but when you think about it, it’s not that different to back in the day when there was no internet and we used to go out to play all day, with no mobile phones or supervision
Our parents didn’t know what we were up to, they had to trust us to behave or else! And often we didn’t but mostly we did
And the reason we did was that we knew in our minds what our parents would think was OK and what was a no no
So what I mean by keeping up to date is not trying to outstrip their computer skills but knowing and talking about the issues that might arise to help them make good decisions