Anthropic Hires Weapons Expert to Prevent AI Misuse

AI company Anthropic is recruiting a chemical weapons and explosives expert to mitigate the risk of catastrophic misuse of its technology amid growing concerns.

In a proactive measure, the US artificial intelligence (AI) firm Anthropic is looking to hire a chemical weapons and high-yield explosives expert to try to prevent catastrophic misuse of its software. In other words, it fears that its AI tools might tell someone how to make chemical or radioactive weapons, and wants an expert to ensure its guardrails are sufficiently robust.

In the LinkedIn recruitment post, the firm says applicants should have a minimum of five years' experience in 'chemical weapons and/or explosives defence', as well as knowledge of 'radiological dispersal devices' – also known as dirty bombs. The firm told the BBC the role was similar to jobs in other sensitive areas that it has already created.

Anthropic is not the only AI firm adopting this strategy. A similar position has been advertised by ChatGPT developer OpenAI, whose careers website lists a vacancy for a researcher in 'biological and chemical risks' with a salary of up to $455,000 (£335,000) – almost double that offered by Anthropic.

But some experts are alarmed by the risks of this approach, warning that it gives AI tools information about weapons even if they have been instructed not to use it. Technology researcher Dr Stephanie Hare questioned whether it is ever safe to use AI systems to handle sensitive information about chemicals and explosives, including dirty bombs and other radiological weapons. Moreover, the technology's rapid development raises urgent ethical questions, especially as the US government engages with AI firms amid ongoing military operations.