The US artificial intelligence (AI) firm Anthropic is looking to hire a chemical weapons and high-yield explosives expert to try to prevent catastrophic misuse of its software.
In other words, it fears that its AI tools might tell someone how to make chemical or radioactive weapons, and wants an expert to ensure its guardrails are sufficiently robust.
In the LinkedIn recruitment post, the firm says applicants should have a minimum of five years' experience in chemical weapons and/or explosives defence, as well as knowledge of radiological dispersal devices – also known as dirty bombs.
The firm told the BBC the role was similar to jobs in other sensitive areas that it has already created.
Anthropic is not the only AI firm adopting this strategy. A similar position has been advertised by ChatGPT developer OpenAI, listing a job vacancy for a researcher in biological and chemical risks, with a salary of up to $455,000 (£335,000), almost double that offered by Anthropic.
However, some experts are alarmed by the risks of this approach, warning that it exposes AI tools to detailed information about weapons – even if they have been instructed not to share it.
Dr Stephanie Hare, a technology researcher and co-presenter of the BBC's AI Decoded TV programme, raised concerns: "Is it ever safe to use AI systems to handle sensitive chemical and explosives information, including dirty bombs and other radiological weapons?" She highlighted the lack of international treaties or regulations governing such initiatives, emphasising that ethical questions are becoming more urgent as AI firms continue to innovate.
The issue has gained urgency as the US government calls on AI firms for support amid ongoing military operations, pointing to a complex relationship between technological advancement and national security.
Moreover, Anthropic is currently taking legal action against the US Department of Defense, which has labelled it a supply chain risk because of its refusal to allow its systems to be used in fully autonomous weapons or mass surveillance.
Despite these risks and ethical dilemmas, Anthropic's AI assistant, Claude, remains in use and is embedded in military systems deployed in active conflicts.