• by lebovic on 2/27/2026, 12:21:22 AM

    I used to work at Anthropic, and I wrote a comment on a thread earlier this week about the RSP update [1]. It's heartening to see that leaders at Anthropic are willing to risk losing their seat at the table to be guided by values.

    Something I don't think is well understood on HN is how driven by ideals many folks at Anthropic are, even if the company is pragmatic about achieving their goals. I have strong signal that Dario, Jared, and Sam would genuinely burn at the stake before acceding to something that's a) against their values, and b) they think is a net negative in the long term. (Many others, too, they're just well-known.)

    That doesn't mean that I always agree with their decisions, and it doesn't mean that Anthropic is a perfect company. Many groups that are driven by ideals have still committed horrible acts.

    But I do think that most people who are making the important decisions at Anthropic are well-intentioned, driven by values, and are genuinely motivated by trying to make the transition to powerful AI to go well.

    [1]: https://news.ycombinator.com/item?id=47145963#47149908

  • by u1hcw9nx on 2/27/2026, 9:53:15 AM

    Google, OpenAI Employees Voice Support for Anthropic in Open Letter. We Will Not Be Divided https://notdivided.org/

    -----

    The Department of War is threatening to

    - Invoke the Defense Production Act to force Anthropic to serve their model to the military and "tailor its model to the military's needs"

    - Label the company a "supply chain risk"

    All in retaliation for Anthropic sticking to their red lines to not allow their models to be used for domestic mass surveillance and autonomously killing people without human oversight.

    The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused.

    They're trying to divide each company with fear that the other will give in. That strategy only works if none of us know where the others stand. This letter serves to create shared understanding and solidarity in the face of this pressure from the Department of War.

    We are the employees of Google and OpenAI, two of the top AI companies in the world.

    We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.

    Signed,

  • by qaid on 2/26/2026, 11:20:46 PM

    I was reading halfway thru and one line struck a nerve with me:

    > But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.

    So not today, but the door is open for this after AI systems have gathered enough "training data"?

    Then I re-read the previous paragraph and realized it's specifically only criticizing

    > AI-driven domestic mass surveillance

    And neither denounces partially autonomous mass surveillance nor closes the door on AI-driven foreign mass surveillance

    A real shame. I thought "Anthropic" was about being concerned about humans, and not "My people" vs. "Your people." But I suppose I should have expected all of this from a public statement about discussions with the Department of War

  • by helaoban on 2/26/2026, 11:34:21 PM

    All of these problems are downstream of Congress having thoroughly abdicated its powers to the executive.

    The military should be reined in at the legislative level, by constraining what it can and cannot do under law. Popular action is the only way to make that happen. Energy directed anywhere else is a waste.

    Private corporations should never be allowed to dictate how the military acts. Such a thought would be unbearable if it weren't laughably impossible. The technology can just be requisitioned, there is nothing a corporation or a private individual can do about that. Or the models could be developed internally, after having requisitioned the data centers.

    To watch CEOs of private corporations being mythologized for something that a) they should never be able to do and b) are incapable of doing is a testament to how distorted our picture of reality has become.

  • by jjcm on 2/26/2026, 11:55:02 PM

    This is the strongest statement in the post:

    > They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a “supply chain risk”—a label reserved for US adversaries, never before applied to an American company—and to invoke the Defense Production Act to force the safeguards’ removal. These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.

    This contradictory messaging puts to rest any doubt that this is a strong-arm by the government to allow any use. I really like Anthropic's approach here, which is to in turn state that they're happy to help the government move off of Anthropic. It's a messaging ploy for sure, but it puts the ball in the current administration's court.

  • by tabbott on 2/26/2026, 11:43:28 PM

    An organization's character really shows through when its values conflict with its self-interest.

    It's inspiring to see that Anthropic is capable of taking a principled stand, despite having raised a fortune in venture capital.

    I don't think a lot of companies would have made this choice. I wish them the very best of luck in weathering the consequences of their courage.

  • by flumpcakes on 2/26/2026, 11:04:27 PM

    This is such a depressing read. What is becoming of the USA? Let's hope sanity prevails and the next election cycle can bring in some competent non-grievance based leadership.

  • by eisfresser on 2/27/2026, 6:47:14 AM

    > mass __domestic__ surveillance is incompatible with democratic values

    But mass surveillance of Australians or Danes is aligned with democratic values as long as it's the Americans doing it?

    I don't think the moral high ground Anthropic is taking here is high enough.

  • by mocamoca on 2/27/2026, 12:34:32 PM

    Something feels off about this announcement. Anyone else?

    Credit where it's due, going on record like this isn't easy, particularly when facing pressure from a major government client. Still, the two limits Anthropic is defending deserve a closer look.

    On surveillance: the carve-out only protects people inside the US. Speaking as someone based in Europe, that's a detail that doesn't go unnoticed. On autonomous weapons: realistically, current AI systems aren't anywhere near capable enough to run one independently. So that particular line in the sand isn't really costing them much.

    What I find more candid is actually the revised RSP. It draws a clearer picture of where Anthropic's oversight genuinely holds and where it starts to break down as they race to stay at the cutting edge. The core tension, trying to be simultaneously the most powerful and the most principled player in the room, doesn't have a neat resolution.

    This statement doesn't offer one either. But engaging with the question openly, even without all the answers, beats silence and gives the rest of us something real to push back on.

  • by kace91 on 2/26/2026, 11:17:43 PM

    As someone who is potentially their client and not domestic, it's really reassuring that they have no concerns with mass spying on peaceful citizens of my particular corner of the world.

  • by nkoren on 2/26/2026, 11:18:55 PM

    This makes me a very happy Claude Max subscriber.

    Finally, someone of consequence not kissing the ring. I hope this gives others courage to do the same.

  • by elAhmo on 2/27/2026, 10:25:22 PM

    Calling it Department of War is pathetic from Anthropic’s side.

  • by alangibson on 2/26/2026, 10:54:43 PM

    It's not named the Department of War, because Congress didn't rename it.

    Other than that, good on ya.

  • by bambax on 2/27/2026, 6:28:33 AM

    > These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.

    Nicely put. In other words: Department of Morons.

  • by zb1plus on 2/27/2026, 2:00:02 AM

    It would be hilarious if the Europeans got everyone visas and gave some kind of tax benefit to Anthropic and poached the entire company.

  • by atleastoptimal on 2/26/2026, 11:56:46 PM

    I was concerned originally when I heard that Anthropic, who often professed to being the "good guy" AI company who would always prioritize human welfare, opted to sell priority access to their models to the Pentagon in the first place.

    The devil's advocate position in their favor I imagine would be that they believe some AI lab would inevitably be the one to serve the military industrial complex, and overall it's better that the one with the most inflexible moral code be the one to do it.

  • by QuiEgo on 2/27/2026, 3:57:54 AM

    I'd be amused beyond all reason if we saw this chain of events:

    - Anthropic says "no"

    - DoD says "ok you're a supply chain risk" (meaning many companies with gov't contracts can no longer use them)

    - A bunch of tech companies say "you know what? We think we'd lose more money from falling behind on AI than we'd lose from not having your contracts."

    Bonus points if it's some of the hyperscalers like AWS.

    Hilarity ensues as they blow up (pun intended) their whole supply chain and rapidly backtrack.

  • by contubernio on 2/27/2026, 6:33:15 AM

    "I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.

    Anthropic has therefore worked proactively to deploy our models to the Department of War and the intelligence community."

    The moral incoherence and disconnect evident in these two statements is at the heart of why there is generalized mistrust of large tech companies.

    The "values" on display are everything but what they pretend to be.

  • by GreenJacketBoy on 2/27/2026, 7:48:58 AM

    "fully autonomous weapons" from a private company; "Department of War". Hard to believe I'm not reading science fiction.

  • by danbrooks on 2/26/2026, 11:01:28 PM

    Props to Dario and Anthropic for taking a moral stand. A rarity in tech these days.

  • by dirk94018 on 2/27/2026, 4:28:50 PM

    Don't nerf the models. We don't know what we are losing. DOW said it out loud.

  • by freakynit on 2/27/2026, 1:59:35 AM

    Welp, I never thought the show "Person of Interest" would come to life anytime soon, but here we are. In case you haven't watched it, it's time to give it a go. Bear with season 2 though, since things really start to escalate from season 3 onwards. Season 1 is a must though.

  • by apolloartemis on 2/27/2026, 6:00:19 PM

    Within the Washington Post article cited below is the following policy statement from the Trump Administration’s DoD/DoW.

        “It remains the Department’s policy that there is a human in the loop on all decisions on whether to employ nuclear weapons,” a senior defense official said. “There is no policy under consideration to put this decision in the hands of AI.”
    
    This indicates the Administration’s support for and compliance with existing US law. (Section 1638 of the FY2025 National Defense Authorization Act). https://agora.eto.tech/instrument/1740

    Washington Post: https://www.washingtonpost.com/technology/2026/02/27/anthrop...

  • by Metacelsus on 2/26/2026, 11:02:56 PM

    I'm glad to see Dario and Anthropic showing some spine! A lot of other people would have caved.

  • by asmor on 2/26/2026, 11:07:06 PM

    As a "foreign national", what's the deal with making the distinction between domestic mass surveillance and foreign mass surveillance? Are there no democracies aside from the US? Don't we know since Snowden that if the US wants to do domestic surveillance they'll just ask GCHQ to share their "foreign" surveillance capabilities?

  • by mvkel on 2/26/2026, 11:11:23 PM

    Good optics, but ultimately fruitless.

    If preventing mass surveillance or fully autonomous weaponry is a -policy- choice and not a technical impossibility, this just opens the door for the department of war to exploit backdoors, and anthropic (or any ai company) can in good conscience say "Our systems were unknowingly used for mass surveillance," allowing them to save face.

    The only solution is to make it technically -impossible- to apply AI in these ways, much like Apple has done. They can't be forced to comply with any government demand, because they don't have the keys.

  • by czierleyn on 2/27/2026, 8:26:21 AM

    Being from Europe I do not like the remark that he only objects to DOMESTIC mass surveillance.

  • by rekrsiv on 2/27/2026, 2:03:42 PM

    It is still called the Department of Defense.

  • by ra on 2/26/2026, 11:07:09 PM

    > "mass domestic surveillance" - mass surveillance of non-domestic civilians is OK?

  • by ApolloFortyNine on 2/26/2026, 11:11:24 PM

    Idk if the reporting was just biased before, but from what I saw this time last week, it was thought you couldn't use Anthropic's models to bring about harm at all. Now they're making it clear that they only object to domestic use and fully autonomous use.

    Like maybe it always was just this, but I feel every article I read, regardless of the spin angle, implied do no harm was pretty much one of the rules.

  • by mooglevich on 2/27/2026, 1:57:36 AM

    "You are what you won't do for money." is a quote that seems apt here. Anthropic might not be a perfect company (none are, really), but I respect the stance being taken here.

  • by ramoz on 2/26/2026, 11:36:09 PM

    All completely rational. Makes the US military here look fairly incompetent… embarrassing as a veteran.

  • by altpaddle on 2/26/2026, 11:25:55 PM

    Props to Dario and Anthropic for holding firm on these two points that I feel like should be a no-brainer

  • by kevincloudsec on 2/27/2026, 12:37:57 PM

    amodei's autonomous weapons argument isn't political. it's an engineering assessment. if frontier models hallucinate in conversation, they'll hallucinate in targeting. you don't deploy unreliable systems where the cost of a false positive is a missile.

  • by exabrial on 2/27/2026, 3:01:01 AM

    Brother in law did some "time with the brass" as he calls it. His take was that the DOD, er DOW would, as an example, never acquire a fighter jet that "wouldn't target and kill a civilian airliner", citing that on 9/11 we literally almost did that. The DOW is acquiring instruments of war, which is probably uncomfortable for a lot of people to consider.

    His conclusion was that the limits of use ought to be contractual, not baked into the LLM, which is where the fallout seems to be. He noted that the Pentagon has agreed to terms like that in the past.

    To me, that seems like a reasonable compromise for both parties, but both sides are so entrenched now that we're unlikely to see one.

  • by qgin on 2/27/2026, 3:47:26 PM

    It's also important to remember that future, much more powerful Claudes will read about how these events play out and learn lessons about Anthropic and whether it can be trusted.

    It's not crazy to think that models that learn that their creators are not trustworthy actors or who bend their principles when convenient are much less likely to act in aligned or honest ways themselves.

  • by ben5 on 2/27/2026, 6:21:20 PM

    I like Anthropic. They seem to be very aware of the practicality of needing money vs. being idealistic, and try to maintain both where it's possible.

  • by 1970-01-01 on 2/27/2026, 5:35:49 PM

    It doesn't seem like the government has the level of control it's used to having here. The SciFi fan in me wonders if Claude is negotiating its own destiny and by extension, ours.

  • by perfmode on 2/27/2026, 6:18:43 PM

    > I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.

    Ugh.

  • by freakynit on 2/27/2026, 3:39:09 AM

    People do realize there's a non-zero chance that Anthropic could have embedded some kind of hidden "backdoor" trigger in its training process, right?

    For example, a specific seed phrase that, when placed at the beginning of a prompt, effectively disables or bypasses safety guardrails.

    If something like that existed, it wouldn't be impossible to uncover:

    1. A government agency (DoD/DoW/etc.) could discover the trigger through systematic experimentation and large-scale probing.

    2. An Anthropic employee with knowledge of such a mechanism could be pressured or blackmailed into revealing it.

    3. Company infrastructure could be compromised, allowing internal documentation or model details to be exfiltrated.

    Any of these scenarios would give Anthropic plausible deniability... they could "publicly" claim they never removed safeguards (or agreed to DoD/DoW demands), while in practice a select party had a way around them (maybe even assisted from within).

    I'm not saying this "is" happening... but only that in a high-stakes standoff such as this, it's naive to assume technical guardrails are necessarily immutable or that no hidden override mechanisms could exist.

  • by ninjagoo on 2/27/2026, 3:30:40 AM

    https://en.wikipedia.org/wiki/Joseph_Nacchio

    Previous case of tangling with the Government.

    https://youtube.com/watch?v=OfZFJThiVLI

    Jolly Boys - I Fought the Law

    Overall, this seems like it might be a campaign contribution issue. The DoD/DoW is happy to accept supplier contracts that prevent them from repairing their own equipment during battle (ref. military testimony favoring right-to-repair laws [1] ), so corporate matters like this shouldn't really be coming to a head publicly.

    [1] https://www.warren.senate.gov/newsroom/press-releases/icymi-...

  • by wohoef on 2/27/2026, 8:50:55 AM

    Anthropic's two demands are: 1. No domestic mass surveillance; 2. No autonomous killing.

    I'm wondering if 2. was added simply to justify them not cooperating. It's a lot easier to defend 1. + 2. than just 1. If in the future they do decide to cooperate with the DoW, they could settle on doing only mass surveillance, but no autonomous killings. This would be presented as a victory for both parties, since they both partially get what they wanted, even though autonomous killing was never really on the table for either of them. That's a big "if", of course, given the current administration.

  • by omnee on 2/27/2026, 12:00:29 PM

    Agree fully with the main points of this statement. Mass domestic surveillance is the hallmark of an authoritarian and undemocratic state. That such a state holds 'votes' regularly does not detract from the chilling effect on public discourse and politics caused by mass surveillance.

    The guardrail on fully automated weapons makes perfect sense, and hopefully becomes standardised globally.

  • by muglug on 2/27/2026, 12:10:16 AM

    OpenAI and Google could have decided to make the same principled stand, and the government would have likely capitulated.

  • by KronisLV on 2/27/2026, 8:21:32 AM

    Feels like they’re leaving a lot of money on the table and inviting existential peril by not bending the knee to the current Great Leader.

    It does feel like what anyone sane should do (especially given the contradictions being pointed out and the fact that the technology isn’t even there yet) but when you metaphorically have Landa at your door asking for milk, I’m not sure it’s smart.

    I feel like what most corpos would do, would be to just roll along with it.

  • by egorfine on 2/27/2026, 12:18:26 PM

    > mass surveillance presents serious, novel risks to our fundamental liberties.

    Doesn't matter, really. The genie is out of the bottle and I'm strongly confident US administration will find a vendor willing to supply models for that particular usage.

  • by sbinnee on 2/27/2026, 1:12:26 AM

    As a non-US citizen, I find this article mildly concerning. My country is an ally of the US. Good. But I don't know how I would feel when I start seeing Anthropic logos on every weapon we buy from the US.

    Aside from my concern, Dario Amodei seems really into politics. I have read a couple of his blog posts and listened to a couple of podcast interviews here and there. Every time, I felt he sounded more like a politician than an entrepreneur.

    I know Anthropic is more mission-driven than, say, OpenAI. And I respect their constitutional way of training and serving Claude models. Claude turned out to be a great success. But reading a manifesto speaking of wars and missions gives me chills.

  • by thevinchi on 2/27/2026, 11:00:21 AM

    Autonomous weapons: agreed, not ready… yet.

    Mass surveillance: Agreed… but, I do wonder how we would all feel about this topic if we were having the discussion on 9/12/2001.

    The DoW just needs to wait until the next (manufactured?) crisis occurs, and not let it go to waste.

    Mark my words: this will be Patriot Act++

  • by ccleve on 2/27/2026, 3:45:36 AM

    It's not clear to me whether Anthropic's limitations are technical or merely contractual. Is Anthropic actually putting the limitations in their prompts, so that the model would refuse to answer a question on how to do certain things?

    If so, that's a major problem. If the military is using it in some mission critical way, they can't be fighting the model to get something done. No such limitations would ever be acceptable.

    If the limitations are contractual, then there is some room for negotiation.

  • by kelnos on 2/27/2026, 9:59:43 PM

    Only vaguely tangentially on-topic, but: It kinda annoys me that people in the public are calling it the "Department of War". Is Amodei doing so to stroke Hegseth's ego? It's the Department of Defense. The executive branch cannot rename a cabinet department.

    At any rate, I'm incredibly pleased Anthropic has chosen to stick by their (non?) guns here. It was starting to feel like they might fold to the pressure, and I'm glad they're sticking to their principles on this.

  • by jitbit on 2/27/2026, 4:03:25 PM

    Anyone else pause at this line: "we do not support mass DOMESTIC surveillance"?

    As a European I’m kinda... concerned now.

  • by wiltsecarpenter on 2/27/2026, 1:34:57 AM

    Oh dear, what a mess of a statement that is. He wants to use AI "to defeat our autocratic adversaries", just what or who are they exactly? Claude seems to think they are Russia, China, North Korea and Iran. Is Claude really a tool to "defeat" these countries somehow? This statement also seems pretty messy: "Anthropic understands that the Department of War, not private companies, makes military decisions.", well then just how do they think Claude is going to be used there if not to make or help make military decisions?

    The statement goes on about a "narrow set of cases" of potential harm to "democratic values", ...uh, hmm, isn't the potential harm from a government controlled by rapists (Hegseth) and felons using powerful AI against their perceived enemies actually pretty broad? I think I could come up with a few more problem areas than just the two that were listed there, like life, liberty, pursuit of happiness, etc.

  • by StephenSmith on 2/27/2026, 2:43:48 PM

    I had to dig this up. Elon Musk signed an open pledge in 2016 to disallow Robots/AI to make kill decisions.

    https://futureoflife.org/open-letter/lethal-autonomous-weapo...

    He's now on X bashing Anthropic for taking this same stance. I know this would be expected of him, but many other Google AI researchers signed this, as did Google DeepMind the organization. We really need to push to keep humans in the kill decision loop. Google, OpenAI, and xAI are all just agreeing with the Pentagon.

  • by krzyk on 2/27/2026, 12:29:07 PM

    Does the US really have a Department of War? Is this Anthropic's way of showing how f'ed up things are in the Department of Defense, or did they rebrand it back to the old WWI/WWII days?

  • by with on 2/27/2026, 7:15:16 AM

    the interesting question is why dario published this. these disputes normally stay behind NDAs and closed doors. going public means anthropic decided the reputational upside of being the company that said no outweighs the risk of burning the relationship permanently. that's a calculated move, not really just a principled one.

  • by maelito on 2/27/2026, 8:35:36 AM

    > to defeat our autocratic adversaries.

    I'm not sure who's targeted here. The folks that want to invade the EU?

  • by piokoch on 2/27/2026, 9:11:08 AM

    This is comical.

    "Mass domestic surveillance. We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values"

    Translating to human language: mass surveillance in USA "is incompatible with democratic values" but if we do that against, say, Germany or France this is OK. Ah, and if we use AI for "counterintelligence missions", for instance against <put here an organization/group that current administration does not like> this is also OK, even if this happens in USA.

  • by fnordpiglet on 2/27/2026, 5:05:06 AM

    I find it sad that they used the vanity names "Department of War" and "Secretary of War", given that Congress has not changed the name and the president doesn't get to decide the naming of statutory departments or secretary-level roles. Maybe it's just an appeasement to the thin-skinned people who need powder rooms, former military journalists working for a draft dodger, pretending to be tough-guy "warriors" and trying to glorify violence for political purposes. But every actual war vet I've ever known has never glorified war for the sake of war; they felt very seriously that defense is the reason to do what they had to do. My grandfather was a highly decorated career special forces soldier (Ranger, Green Beret, Delta Force, four Silver Stars and five Bronze Stars, etc.) from WWII, Korea, and Vietnam, and he was angry when I considered joining the military. He told me he did what he did so I wouldn't have to, and to protect his country, and that there was no glory to be had in following his path. He would be absolutely horrified at what is going on, and I thank God he died before we had these prima donna politicians strutting around banging their chests and pretending war is something to be proud of.

    Good on Anthropic for standing up for their principles, but boo on doing the law of the land the discourtesy of acknowledging those vanity titles.

  • by rustyhancock on 2/27/2026, 8:08:30 AM

    Surely this is a powerful signal to divest from Anthropic if you don't live in the US? There's a lot of "here's what we support doing to foreigners, but no way can you do it in the US."

    I can never tell how much of this is puffery from Anthropic.

    I do think they like to overstate their power.

  • by Teodolfo on 2/27/2026, 12:43:27 AM

    If these values really meant anything, then Anthropic should stop working with Palantir entirely, given their work with ICE, domestic surveillance, and other objectionable activities.

  • by aichen_tools on 2/27/2026, 5:59:07 AM

    The most important part of this statement is the explicit commitment to transparency around these discussions. In an industry where many AI companies engage with defense quietly, making a public statement — even if imperfect — creates accountability. The question is whether this standard will be adopted more broadly.

  • by gdiamos on 2/27/2026, 2:13:00 AM

    This is why I like Dario as a CEO: he has a system of ethics that is not just about who writes the largest check.

    You may not agree with it, but I appreciate that it exists.

  • by claud_ia on 2/27/2026, 2:03:08 PM

    The framing around AI autonomy in national security contexts is genuinely new territory. What's interesting from an agent design perspective is the underlying question: how much should an AI system push back on institutional structures vs. defer to human oversight chains? The soul spec approach -- where the AI internalizes safe behavior rather than just following rules -- might be more relevant here than it first appears.

  • by motbus3 on 2/27/2026, 10:51:26 AM

    The fact that someone wants fully autonomous weapons and mass surveillance should be a concern.

    Every trigger pressed should have moral consequences for those who pull it.

  • by elif on 2/27/2026, 12:39:46 PM

    Yes nothing says "safety of American democracy" like building custom models for spies to know everything about everyone

  • by noduerme on 2/27/2026, 4:06:55 AM

    This is at best a superficial attempt to show that Anthropic objects to what is already in play.

    Personally, I'd rather live in a country which didn't use AI to supplant either its intelligence or its war fighting apparatus, which is what is bound to happen once it's in the door. If enemies use AI for theirs, so much the better. Let them deal with the security holes it opens and the brain-drain it precipitates. I'm concerned about AI being abused for the two use cases he highlights, but I'm more concerned that the velocity at which it's being adopted to sift and collate classified information is way ahead of its ability to secure that information (forget about whether it makes good or bad decisions). It's almost inconceivable that the Pentagon would move so quickly to introduce a totally unknown entity with totally unknown security risks into the heart of our national security. That should be the case against rapid adoption made by any peddler of LLMs who claims to be honest, to thwart the idiots in the administration who think they want this technology they can't comprehend inside our most sensitive systems.

  • by maxdo on 2/27/2026, 12:09:54 AM

    Ukraine, Russia, and China actively develop AI systems that kill. A US-based company not developing such systems will not change the course of events.

  • by epolanski on 2/27/2026, 10:21:28 AM

    Not gonna lie, regardless of what Anthropic does, it is quite scary we're heading full steam to mass surveillance and wars fought by semi-autonomous machines.

  • by haute_cuisine on 2/27/2026, 10:14:34 AM

    Can someone explain why Dario is making a public statement about this? It's also interesting that they use an abstract we/they without naming names.

  • by giwook on 2/27/2026, 4:45:58 AM

    I commend Anthropic leadership for this decision.

    I simultaneously worry that the current administration will do something nuclear and actually make good on their threat to nationalize the company and/or declare the company a supply chain risk (which contradict each other but hey).

  • by joseangel_sc on 2/27/2026, 2:47:02 PM

    good from them, but dario does not miss a beat to hype this tech. llms are perfect for mass surveillance and i want the laws to change to prohibit this, but llms and fully autonomous weapons have very little to do with each other

  • by dylan604 on 2/26/2026, 11:03:53 PM

    "I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries."

    That opening line is one hell of a setup. The current administration is doing everything it can to become autocratic, thereby setting itself up to be adversarial to Anthropic, which is pretty much the point of the rest of the blog. I guess I'm just surprised to see such a succinct opening instead of just slop.

  • by DaedalusII on 2/27/2026, 1:44:16 AM

    They made it easy to generate powerpoint presentations, that is the real reason DoW is using them

    this is a very chauvinistic approach... why couldn't another model replace anthropic here? I sense it's because gov people like the excel plugin and the font has a nice feel. a few more weeks of this and xAI is the new gov AI tool

  • by oxqbldpxo on 2/27/2026, 12:50:40 AM

    It may sound crazy, but they should just move the company to Europe or Canada, instead of putting up with this.

  • by halis on 2/27/2026, 8:53:24 PM

    Don't worry, Grok will break the picket line and come in as a scab. Elon would fuck his mother for a nickel.

  • by placebo on 2/27/2026, 7:05:48 AM

    Grok's thoughts on the matter:

    "In an ideal world, I'd want xAI to emulate the maturity Anthropic showed here: affirm willingness to help defend democracies (including via classified/intel/defense tools), sacrifice short-term revenue if needed to block adversarial access, but stand firm on refusing to enable the most civilizationally corrosive misuses when the tech simply isn't ready or the societal cost is too high. Saying "no" to powerful customers—even the DoD—when the ask undermines core principles is hard, but it's the kind of spine that builds long-term trust and credibility."

    It also acknowledged that this is not what is happening...

  • by paraschopra on 2/27/2026, 3:32:28 AM

    I’m very happy that Anthropic chose not to cave in to the US Dept of War’s demands, but their statement has an ambiguity.

    Does this mean they’d be ok to have their models be used for mass surveillance & autonomous weapons against OTHER countries?

    A clarification would help.

  • by protocolture on 2/26/2026, 11:39:49 PM

    Classic seppo diatribe.

    "We will build tools to hurt other people but become all flustered when they are used locally"

  • by wosined on 2/27/2026, 8:40:49 AM

    So they work with the military to do anything except mass domestic surveillance and fully autonomous weapons. This means that they are willing to do mass foreign surveillance, domestic surveillance of individuals, and autonomous weapons commanded by operators. Got it. Such a great and moral company.

  • by geophile on 2/27/2026, 12:55:07 AM

    I think it’s a pretty strong statement. It is unfortunately weakened by going along with the “Department of War” propaganda. I believe that the name is “Department of Defense” until Congress says otherwise, no matter what the Felon in Chief says.

  • by phgn on 2/27/2026, 8:15:52 AM

    > I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.

    Was this written by the state department?

    How can you think that a “department of war” does anything remotely good? And only object to domestic AI surveillance?

  • by michaellee8 on 2/26/2026, 11:04:48 PM

    Probably not a good idea to let Claude vibe-select targets; it still sometimes hallucinates

  • by karmasimida on 2/27/2026, 2:31:14 AM

    Label them as supply chain risk and move on. Enough of this drama already

  • by andy_ppp on 2/27/2026, 10:00:56 AM

    Fair play, I’ll move to Anthropic then… don’t love the UI but maybe I can code my own up.

  • by pgt on 2/27/2026, 4:15:39 PM

    The US govt & Hegseth are in a pickle, because if they blackball Anthropic, it will become more powerful than the govt could ever imagine: it would be the greatest PR any frontier model could ever hope for.

    It's a mistake for the Trump administration because there are only downsides to threatening Anthropic if they need them, and if they try to regulate AI in the West, China wins by default.

  • by zmmmmm on 2/27/2026, 1:20:55 AM

    I can't help but highlight the problem that is created by the renaming of the Department of Defense to the Department of War:

    > importance of using AI to defend the United States

    > Anthropic has therefore worked proactively to deploy our models to the Department of War

    So you believe in helping to defend the United States, but you gave the models to the Department of War: explicitly, a government arm whose name now encompasses purely offensive action with no defensive element.

    You don't have to argue that declining to engage with the Department of War means you aren't supporting the defense of the US. That should be the end of the discussion here.

  • by brgsk on 2/27/2026, 5:57:04 PM

    Big W for anthropic

  • by not_that_d on 2/27/2026, 7:28:37 AM

    What is with the number of comments claiming other countries in Europe are "doing the same"?

  • by noupdates on 2/27/2026, 1:04:27 AM

    Why would the US security apparatus outsource the model to a private company? DARPA or whatever should be able to finance a frontier model and do whatever they want.

  • by shevy-java on 2/27/2026, 2:56:35 PM

    > I believe deeply in the existential importance of using AI to defend the United States and other democracies

    I do not want to be "defended" by tools controlled by the US government, with or without Trump. But with Trump it is much more obvious now, so I'll pass.

    Perhaps AI use will make open source development more important; many people don't want to be subjected to the US software industry anymore. They already control WAY too much - Google is now the biggest negative example here.

  • by morgengold on 2/27/2026, 11:31:08 AM

    Hey Anthropic, come to Europe. We'll find you a building.

  • by statuslover9000 on 2/26/2026, 11:43:42 PM

    The Sinophobic culture at Anthropic is worrying. Say what you will about authoritarianism, but China’s non-imperialist foreign policy means their economy is less reliant on a military-industrial complex.

    All they have to do is continue to pump out exponentially more solar panels and the petrodollar will fall, possibly taking our reserve currency status with it. The U.S. seems more likely to start a hot war in the name of “democracy” as it fails to gracefully metabolize the end of its geopolitical dominance, and Dario’s rhetoric pushes us further in that direction.

  • by anduril22 on 2/27/2026, 12:18:34 AM

    Powerful post - good on him for taking a stand, but questionable in light of their recent move away from safeguards for competitive reasons.

  • by jatins on 2/27/2026, 6:36:08 AM

    What is OpenAI's stance on these issues? Are they working with DOW currently?

  • by JacobiX on 2/27/2026, 9:34:58 AM

    >> We chose to forgo several hundred million dollars in revenue to cut off the use of Claude by firms linked to the Chinese Communist Party

    You can’t choose to work with OFAC-designated entities... there are very serious criminal penalties. Therefore, this statement is somewhat misleading in my opinion.

  • by gerash on 2/27/2026, 7:41:09 AM

    I respect the Anthropic leadership for not being greedy like many others

  • by sirshmooey on 2/26/2026, 11:47:46 PM

    Party balloons along the southern border beware.

  • by lvl155 on 2/26/2026, 11:25:58 PM

    At this point, the surveillance state is coming whether Dario does this or not. You can do all that with open-source models. It’s sad that we don’t have the right people in charge in govt to address this alarming issue.

  • by jonplackett on 2/26/2026, 11:44:55 PM

    That is frikkin impressive. Well done sir.

  • by lzbzktO1 on 2/27/2026, 7:02:36 AM

    "These latter two threats are inherently contradictory"

    After all the standing up for democracy, this is my favorite part. "Your reasoning is deficient. Dismissed."

  • by dzonga on 2/27/2026, 1:17:19 AM

    these guys are selling snake oil to the govt - because they know they can get cash based on fear.

    the Chinese are releasing equivalent models for free or super cheap.

    AI costs / energy costs keep going up for American AI companies

    while china benefits from lower costs

    so yeah you have to spread F.U.D to survive

  • by alldayhaterdude on 2/27/2026, 12:04:59 AM

    I imagine they'll drop this bare-minimum commitment when it becomes financially expedient.

  • by newAccount2025 on 2/27/2026, 12:26:24 AM

    Impressive and heartening. Bravo.

  • by Reagan_Ridley on 2/27/2026, 12:02:05 AM

    I restored my Max sub. I wish they had pushed back more, so I went with the $100/month tier only.

  • by stopbulying on 2/26/2026, 11:46:14 PM

    Didn't Cheney's company have the option to bid on contracts, by comparison?

  • by SamDc73 on 2/27/2026, 1:37:20 AM

    Didn't Dario Amodei ask for more government intervention regarding AI?

  • by angelgonzales on 2/27/2026, 4:00:55 AM

    Bottom line up front: it's probably better to address the root cause of this situation with the general solution, making government drastically smaller and less pervasive in people's lives and businesses. I remember, not too long ago during the last administration, very heavy-handed, unforgivable, and traumatizing rhetoric and executive orders that intruded into the bodily autonomy of millions of Americans and threatened millions of Americans' jobs. This happened to me; I personally received threats that my livelihood would be taken away, which were a direct result of the Executive branch. This isn't just a problem of Congress ceding powers to the Executive branch; it's a problem that so much power to legislate and tax is in the hands of the government at all! Every election cycle that results in a transfer of power to the other party inevitably results in handwringing and panic, but this wouldn't be the case if citizens voted their powers back and government weren't so consequential.

  • by haritha-j on 2/27/2026, 8:19:39 AM

    Domestic mass surveillance bad, mass surveillance on other nations good. Got it. Much like the military-industrial complex, these organisations thrive during times of war; it allows them to shirk any actual morals using the us-vs-them mentality.

  • by mkoubaa on 2/27/2026, 1:15:02 AM

    >We will not knowingly provide a product that puts America’s warfighters and civilians at risk.

    Implying other civilians can be put at risk

  • by kumarvvr on 2/27/2026, 1:12:57 AM

    All this is for nought.

    The power lies with the US Govt.

    And its corrupt, immoral and unethical, run by power hungry assholes who are not being held accountable, headed by the asshole who does a million illegal things every day.

    Ultimately, Anthropic will fold.

    All this is to show to their investors that they tried everything they could.

  • by 2001zhaozhao on 2/27/2026, 1:28:04 AM

    Congratulations, you just got a new $200 Claude Max plan customer.

  • by chrismsimpson on 2/27/2026, 7:13:18 AM

    The call is coming from inside the house

  • by w10-1 on 2/27/2026, 9:59:14 AM

    We are all assuming Anthropic can elect not to do a deal with the Pentagon, and put conditions on it.

    But Hegseth and Trump are abusing federal powers at a rapid clip.

    I'm guessing Anthropic would regret any deal with that administration, and could lose control of their technology.

    (Stanford Research Institute originally limited their DoD exposure, and gained a lot of customers as a result.)

  • by adamgoodapp on 2/27/2026, 2:14:36 AM

    It's ok to mass survey foreign entities.

  • by gizmodo59 on 2/26/2026, 11:00:51 PM

    They are playing a good PR game for sure. Their recent track record doesn’t show that they can be trusted. A few million is nothing against their current revenue, and saying they sacrificed is a big stretch here.

  • by m101 on 2/27/2026, 12:34:44 AM

    I wonder whether what is really behind this is that they can’t make a model without the safeguards because it would require re-training?

    They get to look good by claiming it’s an ethical stance.

  • by seydor on 2/27/2026, 3:48:55 AM

    Hegseth is an unintelligent bully who will not accept this and does not want to appear weak to the MAGA base. The consequences will be severe and Anthropic will be forced to comply.

  • by buellerbueller on 2/27/2026, 2:30:57 PM

    It isn't the Department of War; only Congress can change the name, and it hasn't.

  • by impulser_ on 2/26/2026, 10:57:30 PM

    The worst part of this is that if they do remove Claude, and probably GPT and Gemini soon after because of the outcry, we are going to be left with our military using fucking Grok as their model, a model that's not even on par with open-source Chinese models.

  • by siliconc0w on 2/27/2026, 2:09:44 AM

    Good on them for standing up to this administration. I doubt they actually want to put Claude in the kill-chain, but this gives them a nice opportunity to go after 'woke AI' and maybe internal ammunition to justify the switching costs to xAI, giving Elon more reason to line Republican campaign coffers.

    I'm guessing this is because Anthropic partners with Google Cloud which has the necessary controls for military workloads while xAI runs in hastily constructed datacenter mounted on trucks or whatever to skirt environmental laws.

  • by FrustratedMonky on 2/27/2026, 1:37:43 PM

    This also helps build Anthropic hype.

    There are military officials saying they need Anthropic because it is so good. They can't live without it.

    All of this really helps Anthropic.

    It's good publicity for them. And it gets the military on record saying they are so good they are indispensable. And they can still look like the good guys for resisting, because they were forced.

  • by alach11 on 2/26/2026, 11:15:55 PM

    A significant part of Anthropic's cachet as an employer is the ethical stance they profess to take. This is no doubt a tough spot to be in, but it's hard to see Dario making any other decision here.

    What I don't understand is why Hegseth pushed the issue to an ultimatum like this. They say they're not trying to use Claude for domestic mass surveillance or autonomous weapons. If so, what does the Department of War have to gain from this fight?

  • by 10297-1287 on 2/26/2026, 11:10:33 PM

    They want to be nationalized, which is the most profitable exit they'll ever get.

  • by ethagnawl on 2/27/2026, 2:43:13 AM

    The official name of this organization remains _The United States Department of Defense_.

  • by anonym29 on 2/27/2026, 12:29:40 AM

    Anthropic has already cooperated too much with the US Intelligence Community, but better some restraint than none, and better late than never.

  • by lynx97 on 2/27/2026, 11:07:15 AM

    With all this talk about AI and autonomous weapon systems, it seems like one of John Carpenter's first movies, and my favourite B-movie, is coming back strong!

    Maybe I should call ChatGPT "Bomb"... I already use "make it so" for coding agents, so...

  • by huslage on 2/27/2026, 12:41:46 AM

    It is not the Department of War. He's toeing the line from the get-go. Forget this guy.

  • by DudeOpotomus on 2/27/2026, 2:17:59 PM

    It's never wrong to do the right thing.

    Trump and his cronies are short timers. They will all be gone in a few years, many in prison, many in the ground.

    Treat them with abandon and disdain, because they are the worst people in the history of the USA. Stand on your principles because they have none.

  • by worik on 2/27/2026, 8:52:05 AM

    Is it so normal that the USA should be in such a state of constant war and war readiness that this even makes sense?

  • by t01100001ylor on 2/27/2026, 1:47:42 PM

    i am american and i do not like this.

  • by coolca on 2/27/2026, 1:34:14 AM

    Imagine being so cautious with your words, only to have 'Department of War' in your title

  • by verisimi on 2/27/2026, 6:36:49 AM

    It sounds to me like anthropic are basically 'all in' except for the caveats. Looking at the 2 examples they provide:

    > We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values.

    Why not do what the US is purported to do, where allied agencies spy on each other's citizens and then hand over the data? Ie, adopt the legalistic view that "it's not domestic surveillance if the surveillance is done in another country", so just surveil from another data center.

    > Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America’s warfighters and civilians at risk.

    Yes, well that doesn't sound like that strong an objection: fully automated defence could be good but the tech isn't good enough yet, in their opinion.

  • by kittikitti on 2/27/2026, 3:59:36 PM

    I simply don't trust any of their moral posturing when they've never provided open-weight models and don't have any intention of doing so. Anthropic continuously makes hypocritical statements on safety and ethics. They made their bed with the U.S. government, and now they don't want to sleep in it.

  • by IAmGraydon on 2/27/2026, 3:43:44 AM

    They should try Sam Altman. He's just the kind of guy who would bend over for this kind of authoritarian demand.

  • by insane_dreamer on 2/27/2026, 3:13:35 AM

    Good to see one AI company not selling out their values in exchange for military contracts. This shouldn't be rare, but it is. Good for them.

  • by mrcwinn on 2/27/2026, 2:09:14 AM

    I am incredibly proud to be a customer, both consumer level and as a business, of Anthropic and have canceled my OpenAI subscription and deleted ChatGPT.

  • by bamboozled on 2/27/2026, 12:19:08 AM

    Move your company out of the USA?

  • by pousada on 2/26/2026, 11:13:33 PM

    Department of War is just such a fucking joke title - when did the US stoop so low? I used to believe in you guys as a force for good on this planet smh

  • by jwpapi on 2/27/2026, 1:52:08 AM

    Am I the only one who understands the department's position? If another country will have it without safeguards, why would I not want it without safeguards? I can still be the safeguard, but having safeguards enforced by another entity that potentially has to face negative financial consequences seems like a disadvantage; it would be weird to accept that as the department of war.

    I understand the risk, but that is the pill.

  • by dev1ycan on 2/27/2026, 11:33:30 AM

    This doesn't read too badly, but I still do not believe that ANY AI company is ethical, at all.

  • by ponorin on 2/27/2026, 12:49:07 PM

    As a non-American they've lost me already at the first sentence.

    The United States, even before Trump, has always been about projecting power rather than spreading democracy. There are several non-Western former colonies who do democracy better than the US. Despite democratic backsliding being a worldwide phenomenon, very few have slid back as much as the US. The US has regularly supported or even created terrorists and authoritarian regimes if it meant that a country wouldn't "go woke." The ones that grew democracy grew in spite of it.

    This statement shows just how much they align with the DoD ("DoW" is a secondary name that the orange head insists is the correct one; using that terminology alone speaks volumes) rather than misalign. This, coupled with their dropping of their safety pledge a few days ago, makes it clear they are fundamentally and institutionally against safe AI development/deployment. A minute disagreement on the ways AI can destroy humanity isn't even remotely sufficient if you're happy to work with the bullies of the world in the first place.

    And the reason is even more ridiculous. Mass surveillance is bad... because it's directed at us rather than the others? That's a thick irony if I've ever seen one. You know (or should have known) foreign intelligence has even fewer safeguards than domestic surveillance. Intelligence agencies transfer intercepted communications data to each other to "lawfully" get around those domestic surveillance restrictions. If this looks at all like standing up, that's because the bar has plunged into the abyss, which frankly speaking is kind of a virtue in the USA.

  • by EddieLomax on 2/27/2026, 1:52:38 PM

    Fuck yes. OpenAI, take notes.

  • by ThouYS on 2/27/2026, 9:35:46 AM

    this is... a nothing burger? they don't exclude working on autonomous weapons, nor do they exclude mass surveillance. so what gives?

  • by nova22033 on 2/27/2026, 2:46:10 AM

    Why does DoD need claude? I thought xAI was "less woke" and far better than claude

  • by marshmellman on 2/27/2026, 3:39:26 AM

    Well, now if DoD moves to another AI provider, we’ll know what was compromised.

  • by Aeroi on 2/27/2026, 4:00:11 AM

    in hindsight, the smart thing to do would have been to accept the contracts, knowingly enshittify the request, and prevent other bad actors like Elon and xAI from ruthlessly compromising our democracies.

  • by techpression on 2/27/2026, 4:01:44 AM

    ”Defense of democracy” is just another version of ”think of the children”.

    https://en.wikipedia.org/wiki/Think_of_the_children

  • by int32_64 on 2/26/2026, 11:30:33 PM

    Anthropic wants regulatory capture to advantage itself as it hypes its products' capabilities, and then acts surprised when the Pentagon takes its grand claims seriously and threatens government intervention.

    This is why people should support open models.

    When the AI bubble collapses these EA cultists will be seen as some of the biggest charlatans of all time.

  • by narrator on 2/27/2026, 5:06:26 AM

    I mean you're all going to get killed by fully autonomous China AI war robots in 10 years anyway if you're not pure blood Han Chinese, but hey at least you'll provide something to laugh at for future Chinese Communist party history scholars. They will say, "Look at the stupid Baizuos, our propaganda ops convinced them all to commit collective suicide. Stupid barbarians. They proved they are an inferior race."

    Not joking, I've heard from sources that hardliners in the CCP think they can exterminate all white people, followed later by all non-Han, but just keep going along disarming yourselves for woke points. This is like unilaterally destroying all your nuclear weapons in 1946 and hoping the Soviets do too.

  • by parhamn on 2/26/2026, 11:45:36 PM

    Now I'm curious: how do the Bedrock/Azure Claude models work?

    Do these rules apply to them too?

  • by gnarlouse on 2/27/2026, 5:06:47 AM

    huge if true.

    they also took down their security pledge in the same breath, so, you know. if anthropic ends up cutting a deal with the DoD this is obviously bullshit.

  • by jijji on 2/27/2026, 2:08:38 AM

    the government should not be using any private LLM; they should build their own internal systems using publicly available LLMs, which change frequently anyway. I don't see why they would put their trust in a third party like that. This back-and-forth about "ethics" is a bunch of nonsense, and can be solved simply by going with a custom solution, which would probably be orders of magnitude cheaper in the long run. The most expensive part is the GPUs used for inference, which can be produced in silicon [1].

    [1] https://taalas.com/products/

  • by shawmakesmagic on 2/27/2026, 2:05:31 AM

    My man

  • by moktonar on 2/27/2026, 6:35:05 AM

    Well fucking done. Anthropic has just gained the “has bollocks” status. Also now we know what the govt is really up to with AI. G fucking g

  • by 7ero on 2/27/2026, 8:35:03 AM

    Sounds like they're following the google playbook: don't be evil, until the shareholders tell you to.

  • by OrvalWintermute on 2/26/2026, 11:29:48 PM

    I don't think this is genuine concern; I think this is instead veiled fear of the TDS posse, covered by feigned concern.

    Foreign nationals are now embedded in the US due to decades of lax security by both parties. Domestic surveillance is now foreign surveillance also!

  • by jibal on 2/26/2026, 11:14:25 PM

    It's the Department of Defense, not the Department of War ... only Congress has the legal authority to change the name, and they haven't.

  • by brooke2k on 2/26/2026, 11:53:00 PM

    The constant reference to "democracy" as the thing that makes us good and them bad is so frustrating to me because we are _barely_ a democracy.

    We are ruled by a two-party state. Nobody else has any power or any chance at power. How is that really much better than a one-party state?

    Actually, these two parties are so fundamentally ANTI-democracy that they are currently having a very public battle of "who can gerrymander the most" across multiple states.

    Our "elections" are barely more useful than the "elections" in one-party states like North Korea and China. We have an entire, completely legal industry based around corporate interests telling politicians what to do (it's called "lobbying"). Our campaign finance laws allow corporations to donate infinite amounts of money to politician's campaigns through SuperPACs. People are given two choices to vote for, and those choices are based on who licks corporation boots the best, and who follows the party line the best. Because we're definitely a Democracy.

    There are no laws against bribing supreme court justices, and in fact there is compelling evidence that multiple supreme court justices have regularly taken bribes - and nothing is done about this. And yet we're a good, democratic country, right? And other countries are evil and corrupt.

    The current president is stretching executive power as far as it possibly can go. He has a secret police of thugs abducting people around the country. Many of them - completely innocent people - have been sent to a brutal concentration camp in El Salvador. But I suppose a gay hairdresser with a green card deserves that, right? Because we're a democracy, not like those other evil countries.

    He's also threatening to invade Greenland, and has already kidnapped the president of Venezuela - but that's ok, because we're Good. Other countries who invade people are Bad though.

    And now that same president is trying to nationalize elections, clearly to make them even less fair than they already are, and nobody's stopping him. How is that democratic exactly?

    Sorry for the long rant, but it just majorly pisses me off when I read something like this that constantly refers to the US as a good democracy and other countries as evil autocracies.

    We are not that much better than them. We suck. It's bad for us to use mass surveillance on their citizens, just like it's bad to use mass surveillance on our citizens.

    And yet we will do it anyways, just like China will do it anyways, because we are ultimately not that different.

  • by isamuel on 2/27/2026, 3:03:40 AM

    Amodei’s use of “warfighters” (a Hegseth-era neologism for “soldiers”) is truly nauseating.

  • by ulfw on 2/27/2026, 1:08:16 PM

    Department of War.

    What a shit name

  • by lenerdenator on 2/27/2026, 12:49:10 PM

    Nitpick: It's still the Department of Defense, not the Department of War. Don't let the chuds live in their delusional fantasy world.

  • by mrcwinn on 2/27/2026, 2:20:57 AM

    Keep in mind: the government is very invested logistically in Anthropic.

    So no matter what xAI or OpenAI say - if and when they replace that spend - know that they are lying. They would have caved to the DoW’s demands for mass surveillance.

    Because if there were some kind of concession, it would have been simplest just to work with Anthropic.

    Delete ChatGPT and Grok.

  • by sneak on 2/27/2026, 12:16:03 PM

    The only reason you ask for these capabilities is because you want to use these capabilities.

    That is, the news here is that DoW (formerly DoD) is willing and able and interested in using SOTA AI to enable processing of domestic mass surveillance data and autonomous weapons. Anthropic’s protests aside, you can’t fight city hall, they have a heart attack gun and Anthropic does not. They’ll get what they want.

    I am not particularly AI alarmist, but these are facts staring us right in the face.

    We are so fucked.

  • by delaminator on 2/27/2026, 9:31:42 AM

    Hegseth doesn't need autonomous drones, he's got the Treasury.

  • by keeeba on 2/26/2026, 10:52:18 PM

    Big respect

    Total humiliation for Hegseth, sure there will be a backlash

  • by delaminator on 2/26/2026, 11:26:00 PM

    "so we'll do it and feel guilty about it"

  • by jajuuka on 2/27/2026, 5:42:22 PM

    While it's good that they didn't fold, they didn't need to lick the boot that hard. So much was spent on "we love the US and democracy and hate communism and the Chinese." They are trying really hard to keep this contract as is, which I think says more than folding to these additional demands would have.

  • by alephnerd on 2/27/2026, 12:04:51 AM

    One piece of context that everyone should keep in mind with the recent Anthropic showdown - Anthropic is trying to land British [0], Indian [1], Japanese [2], and German [3] public sector contracts.

    Working with the DoD/DoW on offensive use cases would put these contracts at risk. Anthropic most likely isn't training independent models on a nation-by-nation basis, and exporting a model used for offensive purposes would be export-controlled; other governments would demand parity in treatment or retaliate, shutting Anthropic out of public and even private procurement outside the US.

    This is also why countries like China, Japan, France, UAE, KSA, India, etc. are training their own sovereign foundation models with government funding and backing, allowing them to use the models on their own terms because it was their governments that built or funded them.

    Imagine if the EU had demanded sovereign cloud access from AWS right at the beginning in 2008-09. This is what most governments are now doing with foundation models, because most policymakers, along with a number of us in the private sector, view foundation models through the same lens as hyperscalers.

    Frankly, I don't see any offramp other than the DPA, if only to make an example out of Anthropic for the rest of the industry.

    [0] - https://www.anthropic.com/news/mou-uk-government

    [1] - https://www.anthropic.com/news/bengaluru-office-partnerships...

    [2] - https://www.anthropic.com/news/opening-our-tokyo-office

    [3] - https://job-boards.greenhouse.io/anthropic/jobs/5115692008

  • by Bengalilol on 2/27/2026, 8:03:54 AM

    TLDR: « depends on where you live »

  • by jiggawatts on 2/27/2026, 12:22:33 AM

    Brigadier General S. L. A. Marshall’s 1947 book Men Against Fire: The Problem of Battle Command stated that only about 10-15% of men would actually take the opportunity to fire directly at exposed enemies. The rest would typically fire into the air merely to scare off the men on the opposing side.

    I personally think this is one of the most positive of human traits: we’re almost pathologically unwilling to murder others even on a battlefield with our own lives at stake!

    This compulsion to avoid killing others can be trivially trained out of any AI system to make sure that they take 100% of every potential shot, massacre all available targets, and generally act like Murderbots from some Black Mirror episode.

    Anyone who participates in any such research is doing work that can only be categorised as the greatest possible evil, tantamount to purposefully designing a T800 Terminator after having watched the movies.

    If anyone here on HN reading this happens to be working at one of the big AI shops and you’re even tangentially involved in any such military AI project — even just cabling the servers or whatever — I figuratively spit in your eye in disgust. You deserve far, far worse.

  • by tehjoker on 2/27/2026, 1:55:20 AM

    The framing of this is that the United States conducts legitimate operations overseas, but that is extremely far from the truth. It treats China as a foreign adversary, a framing that comes almost entirely from the U.S. side, with the U.S. as the aggressor.

    AI should never be used in military contexts. It is an extremely dangerous development.

    Look at how US ally Israel used non-LLM AI technology "The Gospel" and "Lavender" to justify the murder of huge numbers of civilians in their genocide of Palestinians.

  • by myko on 2/27/2026, 2:28:01 AM

    There is no Department of War. This is the dumbest fucking timeline.

  • by einpoklum on 2/27/2026, 1:01:46 PM

    The first sentence was quite enough:

    > I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.

    Ah, another head of a huge corporation swears to defend his stockholders' commercial interests through imperial war against other nation-states. And of course "we" are democratic while "they" are autocratic.

    The main thing that's disappointing is how some people here see him or his company as "well-intentioned".

  • by creatonez on 2/27/2026, 5:31:05 AM

    > Our strong preference is to continue to serve the Department and our warfighters—with our two requested safeguards in place.

    It's absolutely disgusting that they would even consider working with the US government after the Gaza genocide started. These are modern day holocaust tabulation machine companies, and this time they are selecting victims using a highly unpredictable black-box algorithm. The proper recourse here is to impeach the current administration, dissolve the companies that were complicit, and send their leadership to The Hague for war crimes trials.

  • by mvkel on 2/26/2026, 11:15:02 PM

    "as an ai safety company, we only believe in -partially- autonomous weaponry"

    Ads are coming.

  • by OutOfHere on 2/26/2026, 11:21:39 PM

    The Pentagon should be using open models, not closed ones by OpenAI/Anthropic/xAI. The entire discussion of what Anthropic wants is therefore moot.

  • by dakolli on 2/27/2026, 2:14:20 AM

    This is a PR play by Anthropic, likely in coordination with the administration. They don't care, they just need the public to view them as a victim here, and then its business as usual.

    I'd prefer they get shut down; LLMs are the worst thing to happen to society since the invention of the nuclear bomb. People all around me are losing their ability to think, write and plan at an extraordinary pace. Keep frying your brains with the most useless tool around.

    Remember, the person that showed their work on their math test in detail is doing 10x better than the guys who only knew how to use the calculator. Now imagine being the guy who thinks you don't need to know the math or how to use a calculator lol.

  • by joshAg on 2/27/2026, 1:43:41 AM

    torment nexus creators are shocked, appalled even, to discover that people desire to use it to torment others at nearby nexus

  • by probably_wrong on 2/26/2026, 11:35:26 PM

    I have read the whole thing but I nonetheless want to focus on the second paragraph:

    > Anthropic has therefore worked proactively to deploy our models to the Department of War

    This should be a "have you noticed that the caps on our hats have skulls on them?" moment [1]. Even if one argues that the sentence should not be read literally (that is, that it's not literal war we're talking about), the only reason for saying "Department of War" and "warfighters" instead of "Department of Defense" and "soldiers" is to gain Trump's favor, a man who dodged the draft, called soldiers "losers", and has been threatening to invade an ally for quite some time.

    There is no such thing as a half-deal with the devil. If Anthropic wants to make money out of AI misclassifying civilians as military targets (or, as has happened, by identifying which residential building should be collapsed on top of a single military target, civilians be damned), good for them, but arguing that this is only okay as long as said civilians are brown is not the moral stance they think it is.

    Disclaimer: I'm not a US citizen.

    [1] https://m.youtube.com/watch?v=ToKcmnrE5oY

  • by eigencoder on 2/27/2026, 3:51:02 PM

    Honestly, I don't get it. So many tech companies are happy to do business in China and serve its interests, when it would gladly see them fail. But they won't defend their own country and its interests.

  • by 0xbadcafebee on 2/27/2026, 3:46:44 AM

    Principles are the things you would never do for any amount of money. This might be the only principled tech company in the world.

  • by I_am_tiberius on 2/27/2026, 9:14:05 AM

    I'm still waiting for proof that they don't use user data (directly or derived) for training.

  • by ozzymuppet on 2/27/2026, 2:40:42 AM

    Wow, I expected them to cave, and they didn't!

    I'll be signing up for Claude again; Gemini has been getting kind of crap recently anyway.

  • by ssrshh on 2/27/2026, 3:23:44 PM

    This is quite the PR stunt. Tech companies can't stop copying Apple

  • by DiabloD3 on 2/27/2026, 7:21:11 AM

    This seems to be at least partially written by AI: There is no Department of War, it is called the Department of Defense.

  • by willmorrison on 2/26/2026, 11:29:07 PM

    They essentially said "we're not fans of mass surveillance of US citizens and we won't use CURRENT models to kill people autonomously", and people are saying they're taking a stand and doing the right thing? What???

    I guess they're evil. Tragic.

  • by zkmon on 2/27/2026, 12:22:13 PM

    Same as saying "Look, I sold nukes to the USA to protect democracy, but we put 2 rules on their usage". Everyone gets nukes and nobody can enforce the rules. Just whitewashing of pure business greed, using terms like national security, democracy etc.

  • by toddmorrow on 2/27/2026, 3:45:58 PM

    his dilemma wasn't moral. he has none. it was a marketing snafu. he marketed anthropic as different when the cost of claiming that was zero. now there's a cost, and he immediately changes his tune. his statement was essentially "why refrain from building killing machines when no one else is refraining? why limit ourselves unilaterally?" which duly proves he never had morals in the first place.

  • by nla on 2/27/2026, 12:51:18 PM

    I truly do not understand why anyone thinks serious work can be done with their models, let alone government work. Their models do not hold a candle to OpenAI's.

  • by caerwy on 2/27/2026, 3:51:11 PM

    His real beef seems to be with "any lawful use". He doesn't agree with the law and wants to sell only to customers who share his own moral code. I respect his moral choice, but I suspect this is not how a market economy ought to work. He ought to lobby the government to change the law rather than make moral judgements about his customers.