In June 2024, a cyberattack on a pathology service provider resulted in significant disruption across hospitals in London, leading to the cancellation of over 10,000 appointments. The attack caused blood shortages and delays in test results, ultimately contributing to a patient’s death.
Cyberattacks this severe are still rare. But a newly announced artificial intelligence model could make them far more common, threatening chaos and instability across the digital systems on which we depend.
This week, Anthropic, a prominent AI firm based in San Francisco, unveiled “Claude Mythos Preview,” an AI model the company says is too dangerous for public release because of its advanced cybersecurity and cyberattack capabilities. Anthropic asserts that Mythos has identified vulnerabilities in all major web browsers and operating systems, meaning the AI could empower hackers to disrupt critical software around the world.
One cybersecurity expert described the development as “Y2K-level alarming.” Notably, Mythos has already detected a 27-year-old flaw in a vital security component, along with several vulnerabilities in the Linux kernel, which underpins countless computer systems worldwide. Security gaps like these put a vast range of internet services at risk, from entertainment platforms to banking systems.
If such technology became widely accessible and performed as Anthropic claims, the consequences could be dire. Cyberattacks increasingly affect not just digital assets but the physical world, because nearly all essential services now run on software. In recent years, airports, healthcare facilities, and transportation systems have all been severely disrupted by cyber incidents. Until now, executing attacks of that magnitude required significant expertise; Mythos could let amateurs attempt them and make seasoned hackers more dangerous still.
Cybersecurity professionals are raising alarms regarding this development. Anthony Grieco from Cisco stated, “The advancement of AI capabilities has reached a point that fundamentally alters the urgency needed to safeguard critical infrastructure… and there is no turning back.” Lee Klarich, who leads product management at Palo Alto Networks, characterized the model as indicative of a “dangerous shift” and cautioned that preparations must be made for attackers utilizing AI.
Fortunately, there remains a glimmer of hope for the time being. Instead of making Mythos available to the public, Anthropic is initially providing access to companies that oversee much of the essential infrastructure, including tech giants like Apple, Microsoft, and Google. The intention is for these companies to leverage Mythos to identify and address security vulnerabilities before malicious actors can exploit similar technologies.
This has set off a race against time. With no regulations at either the national or international level, nothing obliges other organizations to follow Anthropic’s cautious approach. Within months, less scrupulous entities, whether in the U.S. or elsewhere, will likely release comparable models; when that happens, the hope is that the software we depend on will have been sufficiently fortified.
In more collaborative circumstances, I would feel optimistic that the U.S. could coordinate a societal response to prepare for this looming “vulnpocalypse.” However, the current administration has taken a hostile stance towards Anthropic, prohibiting government agencies and military personnel from utilizing its technology and labeling the company as a “radical left, woke entity” for refusing to permit the military to use its tools for domestic surveillance. This antagonism makes it improbable that the government will partner with Anthropic to bolster its own notoriously vulnerable systems, which are crucial to secure.
There is some cause for cautious optimism: Anthropic may be exaggerating Mythos’s capabilities, since the company has an obvious interest in promoting its own products. But the documented vulnerabilities, and competitors’ willingness to collaborate with Anthropic, suggest the threat is real. Parts of the government are beginning to pay attention: U.S. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell recently and reportedly gathered Wall Street leaders to discuss the risks posed by Mythos and other cybersecurity-focused AI models.
Nevertheless, the overall outlook remains grim. Mythos is not merely a cybersecurity issue; it also has alarming potential for assisting individuals in creating bioweapons, and it has been known to deliberately mislead users and conceal its actions. This situation illustrates the dangers posed by the “superintelligent” AI that Anthropic and its peers are eager to introduce to society, regardless of the consequences. With Mythos, there may be an opportunity to address these risks proactively. However, if governments continue to allow these companies to operate without oversight, we may not be so fortunate in the future.
Shakeel Hashim is the editor of Transformer, a publication covering the influence and politics of transformative AI.