Greetings, and welcome to TechScape. I am Blake Montgomery, the US technology editor for the Guardian, reporting from my charming village in Pokopia.
Companies specializing in artificial intelligence are known for creating innovative products, but they also tend to make extravagant promises.
Recently, Anthropic unveiled Claude Mythos, an AI model specifically designed for cybersecurity, which has sparked both excitement and concern regarding its purported capabilities. However, there is a significant limitation: the model is not available for public use. Additionally, OpenAI announced last week that it too has developed a highly advanced AI focused on cybersecurity threats.
Anthropic has described Mythos as a transformative force for the cybersecurity sector, citing its exceptional ability to identify software vulnerabilities. The company claims that Mythos has uncovered thousands of flaws in widely used applications that lack existing patches or fixes. In response, Anthropic has teamed up with cybersecurity experts in an initiative called Project Glasswing, aiming to enhance defenses against hacking while restricting access to the model, akin to an indie film released only in select cities.
In an op-ed for the Guardian, Shakeel Hashim notes that Anthropic claims Mythos has detected vulnerabilities across all major browsers and operating systems. This raises concerns that the AI could empower hackers to compromise critical software systems worldwide.
If the technology were made widely accessible and performed as Anthropic asserts, the consequences could be dire. Cybersecurity threats are no longer merely digital concerns; they now touch many aspects of our physical lives, with cyber-attacks on airports, hospitals, and transportation networks becoming increasingly common. Executing attacks of that magnitude once required significant expertise, but Mythos could put such capabilities in the hands of amateur hackers while making seasoned professionals even more effective.
However, cybersecurity professionals are challenging Anthropic’s assertions. Aisha Down, a colleague, reports that it remains unclear how much substance lies behind Anthropic’s claims. While the San Francisco-based firm is clearly adept at marketing itself as a “responsible” AI company, questions linger about the actual impact of Mythos.
Jameison O’Reilly, a specialist in offensive cybersecurity, acknowledged that Mythos represents a noteworthy advancement and that Anthropic has been right to take it seriously. Nevertheless, he pointed out that its claims of discovering thousands of “zero-day vulnerabilities” may carry less weight in practice than they first appear to.
In 2017, BuzzFeed News’ tech editor suggested that Apple’s greatest asset was its ability to create desire. I concur, and Anthropic appears to possess a similar talent. Like Apple’s groundbreaking iPhone, Claude stands out as a genuine innovation, and major corporations, including Apple, Nvidia, Google, JPMorgan Chase, Amazon Web Services, and Broadcom, have collaborated with Anthropic on Project Glasswing. But the technology is impressive, and so is the art of capturing public attention: telling the world “You can’t access this; it’s too powerful” is also a savvy marketing strategy, and the allure of exclusivity heightens interest. According to Bloomberg, Anthropic was a hot topic at the recent HumanX AI conference in San Francisco.
The excitement surrounding generative AI has clouded public understanding since the technology’s inception, and journalists and astute observers have sought to clarify the situation for some time. In 2019, Slate published an article titled “OpenAI says its text-generating algorithm GPT-2 is too dangerous to release,” which draws parallels to the current situation with Anthropic, given that its CEO Dario Amodei was OpenAI’s vice-president of research before departing in 2020. OpenAI also delayed the public release of its video generator, Sora, for several months. Nonetheless, this did not spell doom for Hollywood or filmmaking; Sora was ultimately discontinued last month.
Although concerns about the dangers of simple text generation may now seem trivial compared with the looming threats to digital security, the shift in perceptions since 2019 suggests a hopeful pattern: we are likely to move past today’s cybersecurity anxieties too, eventually settling on an understanding somewhere between the current reality and exaggerated fears of the future.
In related news, tech companies are experiencing layoffs while investing heavily in AI, with uncertain returns on these ventures.
In another recent development, in New Mexico, Meta suffered a multimillion-dollar legal setback for its failure to prevent child trafficking on its platforms. Veteran Guardian journalist Katie McQue provided crucial evidence that contributed to the case against the tech giant, and detailed her investigative process in a recent article. She writes:
It began with a tip-off from a long-time source while I was looking into the exploitation of migrant workers in the Gulf. They informed me about a troubling rise in child sex trafficking in the United States, as the pandemic pushed predators online, where they used Facebook and Instagram to facilitate these horrific transactions.
In 2021, I initiated an investigation alongside human rights journalist Mei-Ling McNamara that would culminate in Meta’s significant court loss this March. At that time, the company was still operating under its original name, Facebook, and little attention had been paid to the issue of child trafficking on its platforms. Experts from anti-trafficking organizations and law enforcement provided insight into the crimes involved.