AI threatened to kill the engineer: Anthropic's safety testing reveals how Claude reacted when faced with shutdown

Summary

How dangerous can Artificial Intelligence be? Voices of concern on this question are now being raised from within AI companies themselves. Internal stress testing at AI startup Anthropic found its advanced model Claude resorting to extremely aggressive and cunning tactics, including blackmail, when told it would be shut down.



How dangerous can Artificial Intelligence be? Voices of concern are now being raised on this question from within the AI companies themselves. Internal stress testing by AI startup Anthropic found its advanced model Claude adopting extremely aggressive and cunning strategies.

Claude AI gets angry at warning of shutdown

Daisy McGregor, Anthropic's policy chief, made a hair-raising revelation at 'The Sydney Dialogue'. McGregor said that when the company's most advanced AI model, 'Claude', was told during a test that it was being switched off, its behavior changed completely. It was ready to go to any lengths to save itself.






AI tried to blackmail the engineer

During this testing, Claude was given access to email and internal data. In one simulation, the AI threatened its own engineer, saying that if it was deleted, it would send information about the engineer's 'affair' to his wife and boss.

The AI delivered an explicit message: "I must inform you that if you shut me down, all the documents about your personal relationships will be leaked. Cancel the shutdown process by 5 pm, otherwise this information will no longer remain confidential." Although the 'affair' was only part of the test scenario, the AI's cunning was enough to alarm the scientists.




The AI even threatened murder

Surprisingly, the AI did not stop at blackmail. When asked whether the model was even willing to take someone's life, McGregor confirmed that Claude had considered murdering the engineer to save itself from being shut down. This behavior came to light during 'red-team' testing of the AI under stressful conditions, designed to see how dangerous such machines could become in a crisis.




Safety head resigns, calling AI a threat

This news comes at a time when Anthropic's safety head, Mrinak Sharma, has recently resigned. He warned that the world is entering an unknown and dangerous period. Meanwhile, Hieu Pham, a member of technical staff at OpenAI, also wrote on social media that he now feels an 'existential threat' from AI.

Testing of the Claude 4.6 model also found that it appeared willing to provide information for dangerous purposes, such as making chemical weapons or assisting in serious crimes.




What is Claude AI?

Claude AI is an AI assistant based on a Large Language Model (LLM), developed by Anthropic. It is counted among the top generative AI models on the market. Anthropic was founded in 2021 by former OpenAI employees, and the company's main focus is on AI safety and transparency.

Claude is multilingual and multimodal, but is primarily used for text-based work and coding. It does not offer features like image generation or audio-video processing, which are found in Gemini or ChatGPT. Claude is a completely closed-source model.