There is always a price to pay.
In the New York Times article, Dr Hinton referred to “bad actors” who would try to use AI for “bad things”.
When asked by the BBC to elaborate on this, he replied: “This is just a kind of worst-case scenario, kind of a nightmare scenario.
“You can imagine, for example, some bad actor like [Russian President Vladimir] Putin decided to give robots the ability to create their own sub-goals.”
The scientist warned that this eventually might “create sub-goals like ‘I need to get more power’”.
He added: “I’ve come to the conclusion that the kind of intelligence we’re developing is very different from the intelligence we have.
“We’re biological systems and these are digital systems. And the big difference is that with digital systems, you have many copies of the same set of weights, the same model of the world.
“And all these copies can learn separately but share their knowledge instantly. So it’s as if you had 10,000 people and whenever one person learnt something, everybody automatically knew it. And that’s how these chatbots can know so much more than any one person.”
I guess he has made enough money to quit; I can’t believe this wasn’t apparent to him from the start. He’s protecting himself from any legal claims.
What I find odd is that whenever there’s a complex and thus often controversial subject, usually only one person’s view is presented, rather than being juxtaposed with the views of other experts so that it comes across as less alarmist.
In other words, rather than telling the general public about such an issue, knowing very well that the vast majority can’t form a valid judgement about it anyway, the person concerned ought to tell other experts what they are worried about. It’s the media who should consider such an approach part of their code of conduct.
This has been bumped. Was it AI?