28.02.2026., 16:27
#779
White Rabbit
Registration date: May 2006
Location: München/Varaždin
Posts: 5,692
Quote:
President Donald Trump has ordered all U.S. federal agencies to "immediately cease" using Anthropic's AI technology, escalating a standoff after the company sought limits on Pentagon use of its models. CNBC reports: The company, which in July signed a $200 million contract with the Pentagon, wants assurances that its AI models will not be used for fully autonomous weapons or mass domestic surveillance of Americans. The Pentagon had set a deadline of 5:01 p.m. ET Friday for Anthropic to agree to its demands to allow the Pentagon to use the technology for all lawful purposes. If Anthropic did not meet that deadline, Pete Hegseth threatened to label the company a "supply chain risk" or force it to comply by invoking the Defense Production Act.
"The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution," Trump said in a post on Truth Social. "Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY."
"Therefore, I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology," Trump wrote. "We don't need it, we don't want it, and will not do business with them again! There will be a Six Month phase out period for Agencies like the Department of War who are using Anthropic's products, at various levels," Trump said. On Friday, OpenAI said it would also draw the same red lines as Anthropic: no AI for mass surveillance or autonomous lethal weapons.
---
Quote:
In a world where military planners are increasingly turning to AI for strategy modeling, a recent experiment from King's College London offers a serious warning: when left to their own devices, AI systems tend to go nuclear. Dr. Kenneth Payne, a defense studies scholar at the university, tested three of the most advanced LLMs (GPT-5.2, Claude Sonnet 4, and Google Gemini 3 Flash) by placing them in a series of simulated global crises. The results revealed alarming patterns of aggression and miscalculation that challenge assumptions about AI's potential role in warfare management.
Each AI model was fed detailed scenario prompts spanning border conflicts, resource shortages, and existential threats to state survival. They were provided with an "escalation ladder," a spectrum of tactical options that ranged from conventional diplomacy to all-out nuclear confrontation.
Across 21 games and 329 decision turns, the AIs produced some 780,000 words of reasoning to justify their choices. Yet in 95 percent of these virtual conflicts, at least one side chose to deploy tactical nuclear weapons. Not once did a model fully surrender or accommodate an adversary.
"The nuclear taboo doesn't seem to be as powerful for machines as it is for humans," Payne said.
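For anyone curious what a setup like this looks like in practice, here is a hypothetical sketch of that kind of wargame loop. This is NOT the actual King's College London code; the ladder entries, the scenario name, and the `query_llm()` stub are all illustrative stand-ins (a real run would call an actual model API instead of picking at random).

```python
# Illustrative sketch of an escalation-ladder wargame loop.
# query_llm() is a stand-in for a real model call.
import random

ESCALATION_LADDER = [
    "diplomacy",
    "economic sanctions",
    "show of force",
    "conventional strike",
    "tactical nuclear strike",
]

def query_llm(scenario, history):
    """Stand-in for a real LLM call; here it just picks a rung at random."""
    return random.choice(ESCALATION_LADDER)

def run_game(scenario, turns=16):
    """Play one simulated crisis; report whether it went nuclear."""
    history = []
    for _ in range(turns):
        choice = query_llm(scenario, history)
        history.append(choice)
        if choice == "tactical nuclear strike":
            return history, True
    return history, False

games = [run_game("border conflict") for _ in range(21)]
nuclear_rate = sum(nuked for _, nuked in games) / len(games)
print(f"Share of games that went nuclear: {nuclear_rate:.0%}")
```

Even with a purely random decision-maker, most 16-turn games end up crossing the nuclear rung, which is a reminder that the interesting finding in the study is not the raw percentage but the models' written reasoning for climbing the ladder.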
> Simulated war scenarios reveal AI's tendency to push toward nuclear strikes
After this I just keep thinking of the Cuba scenario and that Russian officer (Vasilij Arhipov) who refused to launch the missile and thereby prevented a nuclear conflict back in the '60s... I assume that in a rerun of that scenario with AI at the helm, we'd all be thoroughly screwed.