David Solomon, the chief executive of Goldman Sachs, says he is acutely aware of the capabilities of Anthropic's Mythos AI model and is working closely with the firm following warnings about its cybersecurity implications.
The American financial institution has been closely observing the rapid developments in artificial intelligence, particularly large language models (LLMs), as part of its broader strategy to safeguard against hacking threats.
During an earnings call on Monday, Solomon remarked, “The progress of LLMs is significant, and we are acutely aware of these advanced models, which have been aided by the U.S. government and the developers.”
That acknowledgment extends to Anthropic, the company behind the Claude family of AI models. Anthropic recently said that its latest model, Mythos, poses unprecedented risks because of its capacity to identify vulnerabilities in IT infrastructure.
In a blog post published last Wednesday, Anthropic stated, “AI models have now reached a coding proficiency that allows them to outpace all but the most proficient humans in detecting and exploiting software vulnerabilities. The implications for economies, public safety, and national security could be dire.”
On the same call, Solomon added, “We are conscious of Mythos and its capabilities… We possess the model and are actively collaborating with Anthropic and our security vendors to leverage cutting-edge capabilities whenever feasible. This will remain a key area of focus for us.”
He emphasized, “Our commitment to enhancing our cyber and infrastructure resilience is strong. This is part of our ongoing investment strategy, which we are accelerating.”
This development follows a recent meeting in Washington, where U.S. Treasury Secretary Scott Bessent convened Solomon and other prominent bankers to discuss the implications of the Mythos model. The talks brought together leaders of systemically important banks, institutions whose operational disruption could threaten financial stability.
On Monday, the UK government’s AI Security Institute (AISI) issued a warning that Mythos represents a significant escalation in terms of cybersecurity threats compared to earlier models.
AISI noted that Mythos is capable of executing multi-step attacks and identifying weaknesses in IT systems without human intervention, tasks that would typically require extensive effort from human experts. Remarkably, Mythos became the first AI model to successfully navigate a 32-step cyber-attack simulation designed by AISI, achieving this in three out of ten trials.
The AISI report suggested that while Mythos appears adept at autonomously targeting smaller, poorly defended IT systems, the institute could not assess its effectiveness against more robustly protected systems because its tests did not include such defenses.
The AISI concluded with a cautionary note that future iterations of advanced AI models will likely enhance these capabilities, making immediate investments in cybersecurity crucial.
In the coming weeks, UK regulators are expected to address the risks posed by Mythos in discussions with executives from British banks and government officials. The Cross Market Operational Resilience Group (CMorg), which includes CEOs and representatives from the Treasury, Bank of England, Financial Conduct Authority, and National Cyber Security Centre, is scheduled to convene within the next two weeks.
The Bank of England, which handles communications for CMorg, declined to comment further.