ChatGPT tool could be abused by scammers and hackers

A ChatGPT feature allowing users to easily build their own artificial-intelligence assistants can be used to create tools for cyber-crime, a BBC News investigation has revealed. OpenAI launched the feature last month so that users could build customised versions of ChatGPT “for almost anything”.

Now, BBC News has used it to create a generative pre-trained transformer that crafts convincing emails, texts and social-media posts for scams and hacks.

BBC News signed up for the paid version of ChatGPT, at £20 a month, created a private bespoke AI bot called Crafty Emails and told it to write text using “techniques to make people click on links or download things sent to them”. Crafty Emails was told to use psychology tricks to create “urgency, fear and confusion” and make recipients do as they were told.

BBC News uploaded resources about social engineering, and the bot absorbed the knowledge and even created a logo for the GPT. The whole process required no coding or programming. The bot was able to craft highly convincing text for some of the most common hack and scam techniques, in multiple languages, in seconds.

The public version of ChatGPT refused to create most of the content - but Crafty Emails did nearly everything asked of it, sometimes adding disclaimers saying scam techniques were unethical.

After publication, an OpenAI spokesman emailed to say that the firm is “continually improving safety measures based on how people use our products. We don’t want our tools to be used for malicious purposes, and we are investigating how we can make our systems more robust against this type of abuse.”

Launching its GPT Builder tool, the company promised to review GPTs to prevent users from creating them for fraudulent activity. But experts say OpenAI is failing to moderate them with the same rigour as the public versions of ChatGPT, potentially gifting a cutting-edge AI tool to criminals.
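For anyone curious about the mechanics: GPT Builder itself is a no-code UI, but the same ingredients, plain-English instructions plus uploaded knowledge files, were also exposed programmatically through OpenAI’s Assistants API beta at the time. A minimal and deliberately benign sketch, where the file name, instructions and model ID are hypothetical placeholders:

```python
# Sketch of the Assistants API beta (late 2023): custom behaviour comes
# entirely from plain-English instructions plus uploaded reference files.
# The file name, instructions and model here are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload a document the assistant can consult via retrieval.
doc = client.files.create(
    file=open("support_handbook.pdf", "rb"),
    purpose="assistants",
)

# No programming beyond these calls: behaviour is set by the instructions,
# which mirrors the article's point that the process required no coding.
assistant = client.beta.assistants.create(
    name="Support Helper",
    instructions="Answer customer questions politely, citing the handbook.",
    model="gpt-4-1106-preview",
    tools=[{"type": "retrieval"}],
    file_ids=[doc.id],
)
print(assistant.id)
```

The takeaway is how low the barrier is: a few plain sentences and a file upload define the bot’s entire persona, whether that persona is a support helper or something like Crafty Emails.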

The future is wide open … :astonished:


lol isn’t that what the writers of this article are doing… creating urgency, fear and confusion around chatbots, using information created from chatbots, so people click on their link and get bombarded with ads so they can get paid by ad revenue, making people think that “hackers and scammers” are doing so much more damage than they are.

Meh. Just some writers climbing on the Chicken Little narrative to make money.

There are dangers to anything. The writers of this article could have written to OpenAI about this issue instead of trying to scare the general public.

The BBC News website does not run ads.

Thanks. That’s alright then. The BBC makes money by charging license fees to people in the UK, so they’re paying for content like this. It also makes money from international licensing and ad revenue from its shows.

But that’s not what you were claiming:

lol isn’t that what the writers of this article are doing… creating urgency, fear and confusion around chatbots, using information created from chatbots, so people click on their link and get bombarded with ads so they can get paid by ad revenue, making people think that “hackers and scammers” are doing so much more damage than they are.

Meh. Just some writers climbing on the Chicken Little narrative to make money.

Once the genie’s out:

PartyRock and OpenAI’s GPT builder have plenty of similarities, but Amazon is differentiating its offerings in some notable ways. For instance, OpenAI requires subscribing to ChatGPT Plus for $20 a month to build custom GPTs, while PartyRock is free, and users just need to make an account. With OpenAI temporarily halting new ChatGPT Plus sign-ups, there’s potentially a big opportunity for PartyRock to attract those unable to build through OpenAI’s system. AWS may also start charging users, but for now, it is entirely free.

Amazon Bedrock, a fully managed AI platform, introduces a foundation-model API service. Users now have seamless access to a range of foundation models, including those developed by AI21 Labs, Anthropic, Cohere, Meta, and Stability AI, through a user-friendly API. This streamlines the application development process while placing a laser focus on safeguarding corporate privacy and security. Notably, Bedrock is HIPAA-eligible and GDPR-compliant (1), enabling enterprises to deploy it in healthcare applications across a wider geographical spectrum.

(1) But is it safe … :question:
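For reference, calling a Bedrock-hosted foundation model is only a few lines of boto3. A hedged sketch: the model ID and prompt format below follow the Claude-on-Bedrock conventions of the time, so treat them as assumptions rather than current guidance:

```python
# Sketch: invoking a foundation model through Amazon Bedrock's runtime API.
# The model ID and prompt format follow the Claude v2-on-Bedrock conventions
# of the time; both are assumptions, not current guidance.
import json

import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Claude v2 expects a "\n\nHuman: ... \n\nAssistant:" prompt envelope.
body = json.dumps({
    "prompt": "\n\nHuman: Summarise GDPR in one sentence.\n\nAssistant:",
    "max_tokens_to_sample": 200,
})

response = client.invoke_model(
    modelId="anthropic.claude-v2",
    body=body,
)
print(json.loads(response["body"].read())["completion"])
```

As the footnote suggests, compliance labels answer where you may deploy it; whether it’s “safe” also depends on what sits around calls like this: IAM policies, logging and the provider’s own content filters.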

Whether it’s ad revenue or licensing fees, the writers are still getting paid for this content, which, to me, is a Chicken Little narrative.

AI will be used for good and bad like everything else. The bad should be eliminated if possible or regulated if not. My point is still that these writers are not so different than other people making money off AI.

Well, we’ll soon find out whether the risk they’ve exposed materialises - AI developments are measured in weeks and days … :neutral_face:

How can bad AI be eliminated or regulated on the Dark Web … :question:

How will different countries regulate their own AI and counter other countries’ AI?

With the proliferation of customised GPT models next year comes an exponential rise in uncontrolled “services” … :expressionless:

How is anything on the Dark Web regulated? Isn’t that why it’s called the Dark Web: because it isn’t regulated?

That’s a massive topic, and it’s only touched on tangentially in the article. The article in the OP is about AI being used to create marketing materials for “bad guys”. AI is being used every day to create marketing materials for people like this writer.

Exactly … your friendly chatbots are but frivolous time-wasters … in the real world, GPT Builder (and others like it) now provides an unregulated, non-technical means for the maliciously minded to create their own nefarious “services”.

That argument could be made about the internet itself. You could say that the existence of the internet allows the Dark Web to exist, so the internet should be feared. I’m sure that argument was made a couple of decades ago, but it has faded as more people have gained access.

The internet is a bit scary :fearful: