4 dangers that most worry AI pioneer Geoffrey Hinton
Comment of the Day

May 03 2023

Commentary by Eoin Treacy

This article from Associated Press may be of interest. Here is a section: 

Researchers have long noted that artificial neural networks take much more time to absorb and apply new knowledge than people do, since training them requires tremendous amounts of both energy and data. That's no longer the case, Hinton argues, noting that systems like GPT-4 can learn new things very quickly once properly trained by researchers. That's not unlike the way a trained professional physicist can wrap her brain around new experimental findings much more quickly than a typical high school science student could.

That leads Hinton to the conclusion that AI systems might already be outsmarting us. Not only can AI systems learn things faster, he notes, they can also share copies of their knowledge with each other almost instantly.

“It’s a completely different form of intelligence,” he told the publication. “A new and better form of intelligence.”

Eoin Treacy's view

I attended Hinton’s presentation at the EmTech Digital conference today. He expressed a great deal of caution about the evolution of AI, while also saying that the reasoning capability necessary for outright competition with human thought is not yet in place. He is worried that when one model learns something, every model can receive that knowledge instantly, so the rate of improvement can only accelerate. Taken to its logical conclusion, the ability of AI models will eventually exceed that of the human mind.

The fears of an omniscient AI taking over are largely based on projecting human intention onto a computer system. I’m not sure that is the correct conclusion. If a dog were to suddenly achieve an IQ of 200, would it stop sniffing around trees, for example?

The big difference for a computer program is that it has no innate intention and therefore depends on what it is programmed to want. Of course, that is exactly the danger as AI systems are weaponized. “Destroy this list of targets” is one instruction; “kill everything that resembles a human within this geographic area” is quite another.

For now, companies like Nvidia and IBM are very excited about the promise of deploying big data in materials science. That is not new, but the excitement around large language models has given the sector a new lease on life.

There is significant potential for innovation in the chemical and raw materials sectors because they contain large well-defined data sets which are optimal for AI deployment. 
