
Artificial Intelligence (AI) is everywhere today. At the top of my Google search results is a summary of my query, generated by a computer program and labeled “AI Summary.” My word processor wants to activate an AI helper to improve my writing. Professors fear that students are using computer programs to write their homework, while using the same tools to automate their own email. A calm, reassuring voice on my iPad wants to be my friend. It goes on and on.
The question is not whether a computer program is good or bad; it is a growing fear: could humans be superseded by other intelligent beings?
In his 1952 book City, Clifford Simak explores the demise of humans on Earth. The book contains eight linked short stories, and one thread deals with ants. In that thread, “A mutant called Joe invents a way for ants to stay active year-round in Wisconsin, so that they need not start over every spring. Eventually, the ants form an industrial society in their hill. The amoral Joe, tiring of the game, kicks over the anthill. The ants ignore this setback and build bigger and more industrialized colonies.” In the last story, “… the ever-growing Ant City … is taking over the Earth.” By this point in the book, humans have left to live on Jupiter, and an intelligent dog species is left to deal with the ants.
There are many possible futures for humans. Another is seen in the 1999 film The Matrix: “The Matrix is a 1999 science fiction action film. … It depicts a dystopian future in which humanity is unknowingly trapped inside the Matrix, a simulated reality created by intelligent machines. Believing computer hacker Neo to be ‘the One’ prophesied to defeat them, Morpheus recruits him into a rebellion against the machines.”
I asked Karthik Prathaban (Jude in Second Life) to comment on the potential of artificial intelligence to take over the world.
“Movies in popular culture have portrayed AI as an existential threat, whether in The Matrix, where machines enslave humanity, or The Terminator, where an automated defense network launches nuclear annihilation. I believe that the threats today’s AI algorithms pose are subtler, but no less real. Present state-of-the-art large language models and decision systems lack consciousness or autonomous intent, but their misuse through disinformation, weaponization, or systemic bias can destabilize society (Bender et al., 2021). In this context, the danger lies less in AI ‘choosing’ to destroy humanity, and more in humans deploying poorly designed and unregulated systems at scale.
“As a PhD candidate working on AI in medicine, I believe that it is crucial to ensure that these developments are deployed responsibly. For example, clinical AI algorithms should operate within secure, regulated environments, drawing only from contained data banks specific to each use case. Their outputs should inform, rather than replace, human experts capable of contextual reasoning. Moreover, the fact that large language models (LLMs) depend on energy-hungry data centers and specialized hardware reminds us that sustainability is part of responsible governance (Jegham et al., 2025). Ultimately, AI should remain a tool that augments human expertise, rather than an autonomous entity capable of causing harm in an unregulated manner.”
I always remember that what we call AI is actually just programs running on a computer. Programs written by humans. Until humans imagine, design, and build an organic, sentient being, I’m not going to lose any sleep over the decline and fall of the human race.
References
- Simak, C. D. (1952). City.
- The Matrix (1999). Warner Bros.
- Vermeer, M. J. D. (2025, May). Could AI really kill off humans? Scientific American.
- Marche, S. (2025, August). A.I. bots or us: Who will end humanity first? The New York Times.
- Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021, March). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610–623).
- Jegham, N., Abdelatti, M., Elmoubarki, L., & Hendawi, A. (2025). How hungry is AI? Benchmarking energy, water, and carbon footprint of LLM inference. arXiv:2505.09598.

