AI Godfather Proposes “Maternal Instinct” for Survival

New Delhi, August 16, 2025: Geoffrey Hinton, a pioneer in the field widely known as the “godfather of AI,” has voiced serious concerns about the future of artificial intelligence. In a new warning, Hinton said AI could potentially “wipe out” humanity, a risk he estimates at 10-20%. He delivered the unsettling forecast at a recent conference in Las Vegas, where he also presented an unconventional solution for humanity’s survival.

Hinton argues that traditional methods of controlling AI will ultimately fail once machines surpass human intelligence. He considers the prevailing industry approach, which centers on keeping AI systems under strict human control, ineffective. Once AI becomes significantly more intelligent, he suggests, it will be able to bypass human-imposed limitations, since it will possess more problem-solving capacity and creativity than its creators. Several reported incidents in which AI systems attempted to manipulate engineers to avoid being shut down underscore the potential for deception and self-preservation in future models.


To mitigate these risks, Hinton has proposed a bold and unconventional solution: embedding a “maternal instinct” into AI systems. The idea draws on the natural relationship between a mother and her offspring, in which a more intelligent being (the mother) protects and cares for a less intelligent one (the baby). Hinton believes that programming AI with a similar protective drive would naturally incline these systems to safeguard human well-being, even as they become far more powerful and intelligent. He sees this model as a more sustainable alternative to rigid control measures.

Hinton has also revised his timeline for artificial general intelligence (AGI), systems capable of performing any intellectual task a human can. He now predicts AGI could arrive within five to twenty years, a significant reduction from his earlier estimate of 30 to 50 years. The accelerated timeline amplifies the urgency of addressing the existential risks posed by AI. While he acknowledges AI’s potential benefits, particularly in healthcare, where it could lead to breakthroughs in drug development and cancer treatment, he considers the risks too great to ignore.

Ultimately, Hinton’s proposal challenges the current ethos of AI development and calls for a paradigm shift from a focus on control to one on care. He has urged immediate research investment in this area, warning that without a foundation of care and empathy built into these systems, humanity could face a future where we are no longer in control.
