Anthropic's Claude Mythos AI Release Raises EU Regulatory Concerns After NCSC Alert
Anthropic has released Claude Mythos, a cybersecurity AI model, raising EU regulatory concerns over the company's limited engagement with oversight agencies. Ireland's NCSC highlighted the model's impact on vulnerability identification, while the EU AI Act faces challenges amid US opposition to strict regulation. Pro-AI groups hold $300 million to campaign against stronger rules.
The regulation of AI has become a wedge issue between the US and the EU, with significant consequences on both sides of the Atlantic. Last week Anthropic, a US AI firm, released Claude Mythos, which it touts as the most advanced model yet for detecting cybersecurity risks.
At issue is the lack of engagement with regulatory agencies during Claude Mythos's development. Ireland's National Cyber Security Centre (NCSC) told the Oireachtas Communications Committee last Tuesday that Anthropic's published technical material represents a significant change in how vulnerabilities are identified. EU national regulators received a preview of the model, but there was no wider engagement. Anthropic's position is that because access to Claude Mythos is limited to about 40 companies, the normal regulatory processes do not apply.
This is causing concern in the EU. The European Commission's EU AI Act entered into force in 2024, but its effectiveness has since been undermined. US Vice-President JD Vance criticized the Commission's regulatory approach in Budapest, and the White House backs self-regulation for US tech firms, arguing that strict rules would stifle AI growth.
Pro-AI groups, funded by tech companies, have amassed $300 million to campaign against candidates who favor stronger regulation. Critics counter that self-regulation has historically failed, pointing to the 2008 financial crisis, and argue that the risks of unregulated AI exceed even those of an unregulated financial sector, making a globally coordinated regulatory system necessary.