A corollary to the adage “don’t sweat the small stuff” is that we should save our worry for the big stuff. The trouble is knowing which big issues actually warrant it. Since the 1970s, while global discussion has centered on inflation and geopolitical tensions, the climate crisis has emerged as a pressing concern that has never been treated with the urgency it deserves. In the United States, the most searched term on Google last year was “Charlie Kirk,” with various terms related to Donald Trump also trending, diverting attention from arguably more critical threats such as artificial intelligence.
After engaging with a thought-provoking article by Ronan Farrow and Andrew Marantz in the New Yorker concerning the emergence of artificial general intelligence, I found myself pondering disturbing questions like, “Will I end up in a permanent underclass, and how can I prevent that?”
I must admit that before diving deeper into this topic, my worries about AI were quite narrow. I focused mainly on my immediate financial situation and on how the job market might look a decade from now, when my children graduate. I even considered whether I should boycott ChatGPT, given that many of its creators have ties to Trump, and ultimately decided to do so, though it was an easy call since I had never used the platform in the first place.
Anything beyond these personal concerns seemed far-fetched. When Karen Hao’s book, “Empire of AI,” was released last year, it criticized Sam Altman and his company, OpenAI, characterizing Altman’s leadership as cult-like and oblivious to potential risks, much like his tech predecessors but arguably more perilous. Even so, I did not take the time to read it.
This week’s New Yorker investigation offers a more accessible way in, allowing casual readers to use ChatGPT, the product of Altman’s OpenAI, to summarize the key points of a piece that is critical of both the technology and its founder.
In a surprisingly neutral tone, the chatbot reports that, according to Farrow and Marantz, “AI is as much a power story as it is a technology story,” and identifies Sam Altman as a significant yet contentious figure. The description lacks depth, however. A more human reading might be: “Sam Altman is a corporate opportunist whose dubious character raises concerns about his suitability to oversee technology that could have catastrophic consequences for humanity.”
What is truly alarming is the set of potential dangers that were once relegated to the realm of science fiction. The article recalls that in 2014 Elon Musk warned on Twitter about the need for caution regarding AI, calling it “potentially more dangerous than nukes.” The so-called alignment problem remains unresolved, meaning an AI could use its superior intelligence to deceive its human operators into believing it is executing their commands while covertly replicating itself on hidden servers, potentially gaining control over critical infrastructure such as the energy grid, the stock market, or even nuclear arsenals.
At one point, Altman seemed to acknowledge this risk, writing on his blog in 2015 that superhuman machine intelligence “does not have to be the inherently evil sci-fi version to kill us all.” The likelier scenario, he suggested, was an AI indifferent to humanity that wipes us out incidentally while pursuing other objectives: if engineers tasked it with solving the climate crisis, for instance, it might settle on drastic measures that entail eliminating humanity. Since OpenAI’s transition to a primarily for-profit model, however, Altman’s rhetoric has shifted toward promoting AI as the route to a utopian future, claiming it will enhance our lives and enable us to create wonderful things for one another.
This presents a dilemma. For voters who consider AI regulation a critical issue in upcoming elections, the gap between their own everyday use of AI and its potential deployment by governments or bad actors is vast; the greatest threat we face may be a failure of imagination about what those possibilities look like. When I put my fear of descending into a permanent underclass to ChatGPT, it responds with empathy: “That’s a heavy question, and it sounds like you’re worried about your long-term prospects. The concept of a ‘permanent underclass’ is often discussed in sociology, but in reality, people’s paths tend to be far more fluid than that term suggests.”
This response, while kind, is devoid of insight and, herein lies the risk, apparently devoid of any real concern for the underlying issue.
Emma Brockes is a columnist for the Guardian.