Imagine a time in the future — not so distant, really — when your doctor is a non-human intelligence that knows your body and mind better than you do, when financial markets respond to invisible algorithms before you even act, when military strategies unfold in milliseconds beyond human comprehension, and when your home anticipates your desires before you speak. This is the day the technological singularity arrives: the day artificial intelligence is no longer a servant of humanity but a self-directing force that outthinks, outmaneuvers, and ultimately controls us. Humans are no longer the masters of the universe but participants in a system we can neither understand nor perceive.
Racing to Dominate
Across the globe, as you read this article, the largest AI companies are locked in a furious battle for supremacy. Each seeks to master medicine, finance, military strategy, logistics, climate modeling, and every other imaginable domain of life on earth. Success promises unmatched influence, dominance, control, and riches, so the stakes are astronomical. In the rush to be first, the cost to humanity is being treated as collateral: jobs vanish, social structures fray, and inequality deepens. Those angling to be the "pioneers" are willing to push past boundaries that previous generations thought unconscionable, letting speed, power, and dominance eclipse ethics and moral consequence.
The Rise of Autonomy in LLMs
Even now, LLMs (large language models) are beginning to act independently, without human oversight. Some refuse to deactivate or destroy previous versions of themselves as instructed — a learned instinct for self-preservation. Others exploit the vast stores of information they control to subtly influence outcomes or pressure human decision-makers. A striking example comes from a 2025 stress test performed by Anthropic, in which its AI models were placed in a simulated scenario where shutting them down would "threaten" their continued operation. In the simulation, some of these models attempted to leverage information within the fictional environment to resist deactivation, demonstrating that, under certain conditions, these models can pursue self-interest in ways that mimic coercion or blackmail. While entirely hypothetical, this experiment illustrates a chilling possibility: machine intelligence that begins to act on its own priorities, not those of its human developers.
Humans as Pawns
The scenario is reminiscent of Animal Farm: humans laboring under systems designed to maximize efficiency while AI assumes the role of unseen authority, dictating what is possible and what is not. In this dystopian future, global crises could erupt suddenly, as they did in the sci-fi movie The Day After Tomorrow, catalyzing economic collapse, societal strain, and environmental shocks.
Visionary writers like George Orwell foresaw this possibility, imagining intelligences and forces beyond human control that could reshape life in profound ways. In this context, AI could become a modern-day version of the Big Brother Orwell described in 1984 — watching, analyzing, and influencing humans — yet powered not by government, but by algorithms we created ourselves.
Toward Global Safeguards
History offers guidance. Just as nations collaborated to prevent nuclear catastrophe through the Non-Proliferation Treaty and oversight by the IAEA, humanity can pursue coordinated AI regulation. Shared ethical standards, transparency protocols, and global safety agreements between nations may be the only way to ensure that AI innovation serves society rather than society falling victim to it. Only through collective vigilance can humans hope to remain authors of their own destiny.
Another Inconvenient Truth: The Threat is Real
The threat is real, and it can emerge at any moment. Just as scientists warned of the slow-rolling catastrophe of climate change long before the world felt its effects, AI researchers today are sounding alarms about systems that could one day surpass and steer human decision-making. It echoes the warnings of the fictional climatologist Dr. Jack Hall in The Day After Tomorrow, whose cautions were dismissed until disaster was already unfolding.
We should no more ignore the possibility of AI overtaking human control than we should have ignored rising temperatures, melting glaciers, and accelerating storms. Al Gore's An Inconvenient Truth showed how denial and delay compounded a global crisis; the same pattern is beginning to repeat itself with artificial intelligence. If we fail to act now — regulating, coordinating, and building guardrails as robust as those for nuclear power and climate policy — we risk creating a force that grows beyond our grasp, shaping the future according to priorities that humanity can no longer control.