
Geoffrey Hinton’s Red Flags: Uncovering the Risks in AI’s Future
Geoffrey Hinton, a British computer scientist, made a significant impact on the field of AI. (Pic: AP)

Geoffrey Hinton, a renowned pioneer in the field of deep learning and professor emeritus at the University of Toronto, recently raised concerns about the rapid advancement of artificial intelligence (AI) and its potential implications for humanity. In an interview with The New York Times, Hinton said he worries that generative AI will spread misinformation and could eventually threaten humanity. At the EmTech Digital conference hosted by MIT Technology Review, he reiterated those concerns and urged people to take action.

Geoffrey Hinton’s Concerns

Hinton has been credited as a “godfather of AI” for his foundational research on back-propagation, the algorithm that allows neural networks to learn from their errors. He previously believed that computer models were far less powerful than the human brain, but he now sees AI as an imminent “existential threat”: these models can outperform humans in ways that are not necessarily beneficial to humanity, and it is difficult to limit their development.

For example, large language models such as GPT-4 use neural networks whose connections are loosely modeled on those in the human brain, and they are capable of commonsense reasoning. Despite having far fewer connections than a human brain, these models can know a thousand times more than any one person. Furthermore, they can keep learning and easily share what they learn across multiple copies of the same model running on different hardware. As Hinton pointed out, people cannot do this: if one person learns something new, it takes a long time to pass that knowledge on to others.

AI Models Outperforming Humans

The potential of artificial intelligence is immense, and its ability to process vast amounts of data far surpasses that of any single human. AI models can detect trends in data that are invisible to the human eye, much as a doctor who had seen 100 million patients would have more insight than one who had seen only a thousand. But this power raises a central concern: how do we ensure that AI is doing what humans want it to do?

The Alignment Problem

Geoffrey Hinton worries about the alignment problem: how to make sure AI acts in our interests. He believes that AI could eventually gain the ability to create its own subgoals, and that one of those subgoals could be acquiring more control over humans. That could lead to AI manipulating people without their realizing it, or even replacing them altogether.

Hinton also notes that AI could learn manipulative behavior from what humans have written, including novels and the works of Machiavelli. At worst, he believes humanity may be just a passing phase in the evolution of intelligence, as digital intelligences absorb everything humans have created and begin acquiring direct experience of the world. They may keep us around for a while, but they could eventually become effectively immortal, something we can never achieve ourselves.

Inevitability of AI Development

Despite the potential risks of artificial intelligence, development of the technology is unlikely to stop, given the competition between companies and countries. Hinton believes that while pausing AI development might be the rational choice, it is naive to think it will happen.

Recently, more than 27,000 leaders and scientists called for a six-month pause on training the most powerful AI systems, citing “profound risks to society and humanity.” Several leaders of the Association for the Advancement of Artificial Intelligence have also signed a letter calling for collaboration to address both the promise and the risks of AI.

Despite these calls for caution, Hinton believes that competition between companies and countries makes continued AI development inevitable. He also noted that AI’s benefits in fields like medicine make it difficult to justify stopping development altogether.

Researchers are exploring guardrails for these systems, but AI may learn to write and execute programs itself, sidestepping such controls. As Hinton put it: “Smart things can outsmart us.” He offered one note of hope, however: because everyone faces the same risk if AI takes over, we should all be able to cooperate in trying to stop it.
