The so-called “godfather of AI” continues to warn about the dangers of artificial intelligence weeks after he quit his job at Google.
In a recent interview with NPR, Geoffrey Hinton said there was a “serious danger that we’ll get things smarter than us fairly soon and that these things might get bad motives and take control.”
He asserted that politicians and industry leaders need to think about what to do regarding that issue right now.
Hinton cautioned that this is no longer science fiction: these technological advances pose a serious problem that is probably going to arrive very soon.
For example, he told the outlet the world might not be far away from artificial general intelligence, which has the ability to understand or learn any intellectual task that a human can.
“And, I thought for a long time that we were like 30 to 50 years away from that,” he noted. “Now, I think we may be much closer. Maybe only five years away from that.”
While some people have compared chatbots like OpenAI’s ChatGPT to autocomplete, Hinton said the AI was trained to understand – and it does.
“Well, I’m not saying it’s sentient. But, I’m not saying it’s not sentient either,” he told NPR.
“They can certainly think and they can certainly understand things,” he continued. “And, some people by sentient mean, ‘Does it have subjective experience?’ I think if we bring in the issue of subjective experience, it just clouds the whole issue and you get involved in all sorts of things that are sort of semi-religious about what people are like. So, let’s avoid that.”
He said he was “unnerved” by how smart Google’s PaLM model had gotten, noting that it understood jokes and why they were funny.
Google has since released PaLM 2, the next-generation large language model with “improved multilingual, reasoning and coding capabilities.”
The release of such AI systems has stirred fears about job displacement, political disputes and the spread of disinformation.
While some leaders – including Elon Musk, who has his own stake in the AI sphere – had signed an open letter to “immediately pause for at least six months the training of AI systems more powerful than GPT-4,” Hinton does not think it’s feasible to stop the research.
“The research will happen in China if it doesn’t happen here,” he explained.
He highlighted that AI would bring many benefits and asserted that leaders need to put substantial resources and effort into determining whether it is possible to "keep control even when they're smarter than us."
“All I want to do is just sound the alarm about the existential threat,” he said, noting that others had been written off “as being slightly crazy.”