Secretary of War Tweets That Anthropic is Now a Supply Chain Risk
Lesswrong
Secretary of War Pete Hegseth unilaterally declared Anthropic a supply chain risk after the company refused Pentagon demands to analyze bulk American data, while OpenAI simultaneously secured a replacement contract.
This is the long version of what happened so far. I will strive for shorter ones later, when I have the time to write them.
Most of you should read the first two sections, then choose the remaining sections that are relevant to your interests.
But first, seriously, read it. Do that first. I will not quote too extensively from it. You’re not allowed to keep reading this or anything else until after you do. I’m not kidding.
That’s out of the way? Good. Let’s get started.
What Happened
President Trump enacted a perfectly reasonable solution to the situation with Anthropic and the Department of War. He cancelled the Anthropic contract with a six month wind down period, after which the Federal Government would be told not to use Anthropic software.
Everyone thought the worst was now over. The situation was unfortunate for Anthropic and also for national security, but this gave us six months to transition, it gave us six months to negotiate another solution, and it avoided any of the extreme highly damaging options that Secretary of War Pete Hegseth and lead negotiator Emil Michael had placed upon the table.
Anthropic would be fine without government business, and the government would mostly be fine without directly using Anthropic. Face was saved.
I have sources that confirm that Trump’s announcement was wisely intended as an off-ramp and de-escalation of the situation, and that it was intended to be the end of it, now that everyone could breathe.
An hour after that, on his own, Pete Hegseth went rogue and Tweeted, illegally declaring that ‘effective immediately’ Anthropic was a Supply Chain Risk, and that anyone who did business with the Department of War in any capacity could not use Anthropic’s products in any capacity.
Even if it had not been issued via a Tweet, this is not how the law actually works.
If this is implemented as stated, it will cause a market bloodbath and immense damage to our national security and supply chain. It would be attempted corporate murder with a global blast radius.
Thankfully it probably won’t be anything close to that.
Probably.
The market understands that this is not how any of this works, so the reaction was for now relatively muted, as only about $150 billion was wiped from public markets in later post-close trading. I believe that is an underreaction based on the chilling effects and damage already done, but we will never know the true market impact because events have already been confounded.
I hope for the best on that front, but the danger remains.
We must be vigilant until the coast is clear, and we must prepare for the worst. Pete Hegseth cannot be allowed to commit corporate murder.
Outcomes like this usually don’t happen exactly because people realize they would otherwise happen, and prevent them.
What was that all about?
On Friday afternoon, Anthropic learned that the Pentagon still wanted to use the company’s AI to analyze bulk data collected from Americans. That could include information such as the questions you ask your favorite chatbot, your Google search history, your GPS-tracked movements, and your credit-card transactions, all of which could be cross-referenced with other information about your life.
Anthropic’s leadership told Hegseth’s team that was a bridge too far, and the deal fell apart.
Okay, what was that all about?
We don’t know. I have sources saying that Doge is driving this, and I have other speculations, but ultimately we don’t know what they want this capability for. What we do know is that they blew the whole situation up over this question. There must have been a reason.
Whatever that was, or an actual outright attempt to murder Anthropic, is what this is all about. It’s not a matter of principle.
Then, later that night, OpenAI accepted a contract with the Department of War. They claimed that very day that they had the same red lines as Anthropic, yet they seem to have accepted the deal regardless, as confirmed by Jeremy Lewin.
How did OpenAI negotiate such a deal in two days? My interpretation of OpenAI’s public statements is that they consider any action crossing their red lines to already be illegal, and thus there are no uses that they would consider both legal and unacceptable, and that it is not their place to make that determination.
But that’s not what matters. The contract terms here ultimately don’t matter.
What matters is that OpenAI and the Department of War are trusting each other. OpenAI is giving DoW a replacement that allows them to offboard Anthropic without overly disrupting national security, and trusting DoW to decide what to do with that tech and not to do anything too illegal.
DoW is trusting OpenAI to deliver a good model and let them do what they operationally need to do and not suddenly start tripping the safety mechanisms. Forward engineers and the safety stack will trust but verify, and Altman claims he stands ready to pull the plug if DoW goes too far.
All of OpenAI’s meaningful safeguards are in the security stack, and its right to choose what model to deliver and pull the plug. Which means they’re in contract language we may not ever see.
I believe that the way they presented that deal and the situation has been misleading enough to cost me and a lot of others a lot of sleep, but it now seems clear.
OpenAI’s employees need to investigate the technical provisions and ask whether the red lines they personally care about are meaningfully protected, and whether they wish to be part of what is happening given the circumstances.
Even more than that, it is not clear whether OpenAI’s attempted de-escalation of the situation de-escalated it, or escalated it further by giving Hegseth a green light.
Indeed, the New York Times thinks exactly that happened:
: Mr. Michael was unhappy with that answer, the people said. He also had an ace up his sleeve: On the side, he had been hammering out an alternative to Anthropic with its rival, OpenAI. A framework between the Pentagon and OpenAI had already been reached.
So when the Friday deadline passed, the Department of Defense did not give Anthropic more time. At 5:14 p.m., Mr. Hegseth announced that he had designated Anthropic as a security risk and that it would be cut off from working with the U.S. government. “America’s warfighters will never be held hostage by the ideological whims of Big Tech,” he wrote on social media.
Again, I don’t think that was Altman’s intention, at all. But whichever way this went, OpenAI’s employees and leadership need to make it clear that they cannot enter a relationship built on trust with DoW, if DoW actually attempts a widely scoped supply chain risk intervention against Anthropic, and attempts to kill the company.
Sam Altman has been excellent in calling for not labeling Anthropic a supply chain risk. I take him at his word that he was attempting to de-escalate.
But if OpenAI’s willingness to work with DoW is used not to de-escalate but as a way to allow escalation, then OpenAI must not abide this, and if OpenAI does abide then it would then be actively and consciously escalating the situation.
: Does the precedent that the DoW is setting by effectively blacklisting Anthropic make you concerned about what any future dispute with the Pentagon would mean for your own company’s independence and viability?
Sam Altman (CEO OpenAI): Yes; I think it is an extremely scary precedent and I wish they handled it a different way. I don’t think Anthropic handled it well either, but as the more powerful party, I hold the government more responsible. I am still hopeful for a much better resolution.
If things escalate, ‘I wish it had gone better’ and ‘hopeful’ will no longer fly.
You may have some very big ethical decisions to make in the coming days.
So might those at many other tech companies, and everyone else, if this escalates. Think about what you would do if your company is put to a decision here.
What the OpenAI deal definitely did was further invalidate the legal arguments for a supply chain risk designation and remove the need for further confrontation. But unfortunately, no matter how obvious the case looks to us, we cannot be certain the courts will do the right thing, which includes doing it fast enough to prevent damage.
Throughout this, a remarkable number of people have tried to equate ‘democracy,’ the American way, with what is actually dictatorship or communism, or the Chinese way. As in private citizens do whatever those in charge demand of them, or else. I vehemently disagree.
: All arguments against Anthropic I’ve seen from right wing posters have been a variant of the government should be allowed to seize the means of production
: As we do, and as we have future debates about the proper nexus of control over frontier AI, I encourage you to avoid the assumption that “democratic” control—control “of the people, by the people, and for the people”—is synonymous with governmental control. The gap between these loci of control has always existed, but it is ever wider now.
For now the headlines say the big destructive action launched by the Department of War that day without proper Congressional authorization was that they attacked Iran. Even with what has unfolded there I am not entirely convinced history books, if we are around to read them, will see it that way.
The house is on fire.
The question is, what are you going to do about it?
The Timeline Of Events
This is my best effort to bring together the key events in the story and recollect their sequence. I apologize for any key omissions, errors, or places where I am trusting misrepresentations. Some of this is from private sources. Some events may be out of order, I believe in ways that would not change the interpretation.
Last year: Tensions rise between the White House and Anthropic, for a variety of reasons. David Sacks (conspicuously and virtuously silent during this crisis) spent a remarkable percentage of his time attacking Effective Altruism in general, ‘doomers,’ and in particular Anthropic. Elon Musk, founder of xAI, is also repeatedly hostile to Anthropic, and creates Doge. Katie Miller goes to xAI. Nvidia is hostile to Anthropic in various ways, despite investments.
Last year: Anthropic and other companies sign government contracts with DoD for up to $200 million each, containing many restrictions on government use. Anthropic makes it a priority to be the first to be on classified networks, despite it not being a good business opportunity given the associated risks and hassles, to help in the national defense. Anthropic has an easier route because of AWS.
June 6, 2025: Anthropic launches models specialized to the needs of government and classified information.
Previously: DoW asks to renegotiate Anthropic’s contract to make it less restrictive. Anthropic agrees to do so on many fronts but draws two red lines.
January 3: Maduro is captured in a government raid. Anthropic’s Claude is widely believed to have been used in this, without incident. Everything went great.
January 9: Hegseth sends out a memo demanding, among other AI initiatives, DoW not use ‘woke AI.’
Previously: DoW circulates a story that Anthropic asked questions about the raid and was potentially unhappy and might pull its contract. I have gotten multiple unequivocal denials, saying this was entirely made up by DoW. This is part of an ongoing narrative that has no bearing on the actual situation whatsoever, and never did.
Previously: Elon Musk, he of the Doge and xAI, and hater of supposed Woke AI everywhere, starts Tweeting far more frequent hostile and ad hominem attacks against Anthropic, really quite a lot. Sources I have claim that he urged DoW to attempt to coerce or disrupt Anthropic. Katie Miller also Tweets similar material.
Previously: DoW circulates a story that Dario told them that if their system refused to provide real time missile defense (later they said drone defense), they should call him. I have an unequivocal denial, from a secondary source, that this or anything like it ever happened. This story is almost certainly fiction and makes no sense, and is at best a willful misunderstanding. We already have automated missile defenses that wisely do not use LLMs. Calling Dario in real time would do absolutely nothing, regardless of his preferences, and he could neither turn on nor off such systems on classified networks.
Previously: DoW says it sends its ‘best and final’ offer, in public, saying that it cannot let private companies refuse requests.
This Week: Agreement is announced with xAI to use Grok on classified networks, but experts express dissatisfaction with model reliability and quality.
Tuesday: Secretary of War Pete Hegseth meets with Anthropic CEO Dario Amodei, along with Feinberg, Michael, Duffey, Parnell and Matthews.
Tuesday: In addition to the threat to designate Anthropic a supply chain risk, the Department of War also threatened to invoke the Defense Production Act.
Thursday: Sean Parnell Tweets, setting the 5:01pm Friday deadline, and says ‘we will not let ANY company dictate the terms’ while dictating their terms to modify an existing contract, and while negotiating extensively with OpenAI and also Anthropic over detailed terms.
Thursday, 12:24pm: Word comes that they’re going to declare Anthropic a supply chain risk. The same source claims that using AI to conduct mass surveillance of Americans is illegal (which definitely isn’t true as such).
Thursday evening, earlier: Anthropic explains it will not agree to the terms in the ‘best and final offer.’
Thursday evening: Emil Michael says in Tweets, in response to Anthropic’s statement, that Dario Amodei is a ‘liar’ and has a ‘god complex.’
Thursday evening: Altman sends a memo to staff.
Thursday evening, 10:54pm: Emil Michael emails Dario Amodei comments.
Friday morning: Sam Altman goes on CNBC, and says he trusts Anthropic on safety and that OpenAI shares Anthropic’s red lines.
Friday afternoon: OpenAI announces its agreement with the Department of War. Altman says the government has agreed to let OpenAI build their own ‘safety stack’ of technical, policy and human controls sitting between a powerful AI model and real-world use, and if the model refuses a task they will not force the model to do that task.
Friday: send Anthropic and the Pentagon a private letter urging them to resolve the issue.
Friday, 3:47pm (1 hour 14 minutes BEFORE deadline): Trump sends a Truth Social post winding down Anthropic’s contract and direct use by government, giving everyone a reasonable way to end this while mitigating fallout and also leaving time to find another way.
Friday, 3:48pm: The rest of us assume okay, that’s it, happy weekend.
Friday, 3:51pm (1 hour 10 minutes BEFORE deadline): Dario sends an email with red lines to continue negotiation. In it, Dario was offering to allow Claude to be used for FISA, as long as it was not used for mass surveillance on unclassified commercially acquired information.
Friday, 5:01pm (AFTER the deadline): Emil attempts to call and message Dario, timing as per Emil’s Tweet.
Friday, 5:02pm: Emil makes another attempt to contact Dario, calling a ‘business partner,’ offering that a deal can still be struck as long as there are terms permitting legal mass domestic surveillance, especially analysis of previously collected data. Dario responds that he is on the phone with his executive team and needs more time, given (as per Emil’s own tweets) Emil called Dario after the supposed deadline. But of course, given Emil had intentionally let his own deadline pass, there was no actual rush.
Friday, 5:14pm (13 minutes after the attempt to contact Dario): SoW issues via Tweet an at best legally questionable order in retaliation, saying ‘the decision is final,’ that takes $150 billion off of the US stock market and would, if enforced, cause massive damage not only to Anthropic but to many major corporations and the military supply chain. There is no more official communication from DoW on this matter, at least in public.
Friday, 8:25pm: Anthropic issues a statement responding to Pete Hegseth, that includes: “We have not yet received direct communication from the Department of War or the White House on the status of our negotiations.” They announce the intention to challenge any supply chain risk designation in court, and reassure customers that even if implemented it would be far more limited in scope than Hegseth claimed. They are holding to their red lines.
Friday, 9:14pm: .
Friday, 9:56pm: to allow ‘all lawful use’ that OpenAI claims allows OpenAI to build its own ‘safety stack,’ and includes ‘technical safeguards,’ as in if OpenAI’s model refuses requests DoW agrees to respect those refusals, and that it protects the same redlines Anthropic had. He says ‘in all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome.’
Saturday, 4:30am: Initial reports that Iran has been attacked by the DoW.
Saturday morning: I encourage everyone to listen to at least that clip. Among other quotes: ‘We are patriotic Americans. Everything we have done has been for the sake of this country, for the sake of national security. … Disagreeing with the government is the most American thing in the world.’
Saturday afternoon: OpenAI defends the deal, claiming it offers robust protections stronger than Anthropic’s previous contract, which itself was much stronger than anything Anthropic was proposing during negotiations. They claim they have multi-layered protections, and share two paragraphs of legal language that do not by themselves appear to offer much protection against adversarial lawyering, given their agreement to ‘all lawful use’ and the history of such agreements.
Saturday, 4:45pm: The DoW states that ‘the DoW does not engage in any unlawful domestic surveillance with or without an AI system and always strictly complies with laws, regulations, the Constitution’s protections for American’s civil liberties. The DoW does not spy on domestic communication of U.S. people (including via commercial collection) and to do so would be unlawful and profoundly un-American.’
Saturday, 7:13pm:
Sunday afternoon: Reporting emerges, including that the confrontation was ultimately about willingness to analyze bulk data.
I Did Not Have Time To Write You A Short One
If you have time to read only a sane amount of words today about this, start with that piece. It needs to be read in full. Seriously, read that.
This piece is long. Way too long.
A running joke is I write long posts because I do not have time to write short ones.
In this case, that is literally true. I have been working around the clock all weekend, trying to write, to process the internet and also do a journalism under speed premium.
Thus, my strategy is:
This is the long post. It includes everything. I’m not trying to cut anything out of the story. It’s going to have some amount of repetition, and it’s covering a ton of different things. I did the best I could.
I will then spend time over the coming days writing shorter ones, including better presenting this material while updating for additional developments.
The Unhinged Declaration of the Secretary of War
This is the statement that blew everything up. It came at 5:14pm eastern on Friday, February 27, thirteen minutes after the self-imposed deadline of 5:01pm, and about an hour after President Trump attempted to head this off.
Pete Hegseth: This week, Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon.
Our position has never wavered and will never waver: the Department of War must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose in defense of the Republic.
Instead, @AnthropicAI and its CEO @DarioAmodei , have chosen duplicity. Cloaked in the sanctimonious rhetoric of “effective altruism,” they have attempted to strong-arm the United States military into submission – a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives.
The Terms of Service of Anthropic’s defective altruism will never outweigh the safety, the readiness, or the lives of American troops on the battlefield.
Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable.
As President Trump stated on Truth Social, the Commander-in-Chief and the American people alone will determine the destiny of our armed forces, not unelected tech executives.
Anthropic’s stance is fundamentally incompatible with American principles. Their relationship with the United States Armed Forces and the Federal Government has therefore been permanently altered.
In conjunction with the President’s directive for the Federal Government to cease all use of Anthropic’s technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic. Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service.
America’s warfighters will never be held hostage by the ideological whims of Big Tech. This decision is final.
Who wins from this?
Altman Has Been Excellent On The Question of Supply Chain Risk, But May Need To Do More
He has repeatedly, including in public, said in plain language that Anthropic is not a supply chain risk and it should not be designated as one, both before and after he agreed to the OAI contract.
Sam Altman: Enforcing the SCR designation on Anthropic would be very bad for our industry and our country, and obviously their company.
We said to the DoW before and after. We said that part of the reason we were willing to do this quickly was in the hopes of de-escalation.
I feel competitive with Anthropic for sure, but successfully building safe superintelligence and widely sharing the benefits is way more important than any company competition. I believe they would do something to try to help us in the face of great injustice if we could.
We should all care very much about the precedent.
I saw in some other tweet that I must not be willing to criticize the DoW (it said something about sucking their dick too hard to be able to say anything critical, but I assume this was the intent).
To say it very clearly: I think this is a very bad decision from the DoW and I hope they reverse it. If we take heat for strongly criticizing it, so be it.
That is an excellent statement, and it matters. Nor do I begrudge Altman saying various very generous things about the Department of War, in this situation and in basically every other context. This is the right place to spend those points.
I also want to explicitly say that I do not believe that Altman or OpenAI in any way contributed to or engineered this scenario, or engaged in foul play of any kind in their contract negotiations. They sincerely do not want any of this.
Anthropic got historically and maliciously hostile treatment, and this may escalate further, but I don’t think OpenAI had anything to do with that.
Sam Altman’s problem is that while signing the contract was intended to be de-escalatory, it could also be escalatory, if DoW now thinks it can safely attempt to kill Anthropic, and does not understand how epic of a clusterfuck this would cause. Thus, OpenAI must make clear, if only privately (which it may have already done) that delivery of models to DoW is based on trust in DoW and trust that this is a de-escalatory move, and further escalation against Anthropic would destroy that trust.
Arrogance Here Means Insisting On Meaningful Red Lines On Mass Domestic Surveillance and Lethal Autonomous Weapons
Let’s go over the above statements by Secretary Hegseth, one by one, clause by clause.
Pete Hegseth: This week, Anthropic delivered a master class in arrogance and betrayal.
The betrayal was, I presume, not giving in to the Pentagon’s position.
The arrogance was insisting that they would not sell their software to DoW unless they preserved existing contract terms disallowing two things that the DoW insists they are not doing and will not do, and that are already illegal:
Domestic mass surveillance.
Lethal autonomous weapons without a human in the kill chain, until such time as reliability is sufficient that this is a reasonable thing to do.
It is unclear to what extent autonomous weapons are illegal, but to the extent they are currently illegal everyone agrees this would be due to DoDD 3000.09. That is a directive issued by the Department of (then) Defense under the Biden administration. Hegseth could reverse it, without even Trump’s approval, at any time.
It is unclear to what extent mass domestic surveillance is illegal or is already happening, especially as it is not a defined term in American law.
The NSA is under DoW, and many believe it has in the past engaged in mass domestic surveillance, seemingly in clear violation of the Fourth Amendment. Another part of the Federal Government has recently issued subpoenas to tech companies looking for information about those who spoke critically about that government agency.
Anthropic points out that, with the advent of the current level of AI, the government could effectively engage in mass domestic surveillance of various types without technically breaking any existing laws.
OpenAI does not seem to believe such action would violate their red lines, and thus the red lines are in very different places. Which is fine, but one must notice.
Not Doing Business Is Totally Fine
As well as a textbook case of how not to do business with the United States Government or the Pentagon.
If the Pentagon wishes not to do business with Anthropic, all they have to do is terminate the contract. Or you can do what Trump did, and ban use throughout the Federal Government. Which he did. That would have been fine.
If that was all they had done, we would not be having this conversation.
Instead, Pete Hegseth is attempting to destroy Anthropic as a company, as retaliation, for daring not to give in to the demands of Emil Michael. This is not wise, proportionate, productive, legal, sane or what happens in a Republic.
The Demand For Unrestricted Access Is New And Is Selective And Fake
Our position has never wavered and will never waver: the Department of War must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose in defense of the Republic.
It is only a Republic if you can keep it.
Only hours later, OpenAI announced an agreement with the Pentagon for restricted access to OpenAI’s models. These restrictions supposedly include provision only on the cloud, so OpenAI can shut down access any time. They supposedly include accepting OpenAI’s safety filters. They supposedly include explicit restrictions on use in domestic mass surveillance and autonomous lethal weapons.
Sounds like when you say you must have unrestricted access, that’s a claim specifically about Anthropic, that doesn’t apply to OpenAI, who you are happy to contract with?
Except the key terms they accepted were also offered to Anthropic, and OpenAI’s terms are being offered now. If what OpenAI is claiming is true, they got more restrictive (on DoW) terms than Anthropic would have, and if Anthropic agrees to the new deal that would not mean full, unrestricted access for every lawful purpose.
So why were you offering it, if your position has never wavered? Or do you think OpenAI’s protections are worthless?
In addition, under Secretary of War Pete Hegseth, the Department of Defense signed the original procurement contracts with Anthropic and other AI companies. Those contracts, including the one with Palantir, were far more restrictive than Anthropic’s current red lines. None of this is new, and Anthropic was willing to authorize getting rid of most existing restrictions.
In his Friday 5:02pm call, Emil Michael offered Anthropic terms that violated the above provision and imposed additional restrictions, so long as DoW was allowed to do mass domestic surveillance, especially mass analysis of collected data.
Finally, the whole ‘how dare they restrict usage with a contract’ argument is nonsense. Very good piece there.
The story does not add up. At all. It is false.
Claims Of Strongarming Are Ad Hominem Bad Faith Obvious Nonsense
Instead, @AnthropicAI and its CEO @DarioAmodei , have chosen duplicity. Cloaked in the sanctimonious rhetoric of “effective altruism,” they have attempted to strong-arm the United States military into submission – a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives.
Where to begin? This is completely unhinged behavior, unbecoming of the office, and is not in any way how any of this works.
I cannot even figure out what he is trying to mean with the word duplicity.
The rhetoric or logic of Effective Altruism was not involved. This is a pure ‘these words have bad associations among the right people’ invocation of associative ad hominem. Anthropic had two specific concerns. Neither of these concerns has ever been a substantial position or ‘cause area’ of Effective Altruism.
Claims of strongarming are absurd and Obvious Nonsense. Anthropic is perfectly willing to maintain its current contract. It is perfectly willing to cease doing business with the Department of War. Anthropic is even happy to fully cooperate with a wind down period to ensure smooth transition to the use of ChatGPT or other rival models.
Anthropic is simply laying out the conditions, that were already agreed upon previously, under which they are willing to sell their product to the government. The government is free to accept those conditions, or decline them.
Very obviously it is the Department of War that is strongarming. They threatened both use of the Defense Production Act and the label of a Supply Chain Risk to try and get Anthropic to sign on the dotted line and give them what they wanted. When Anthropic declined, as one does in business in a Republic, while offering to either walk away or abide by their current contract, and offering actively more flexible terms than their current contract, they were less than fifteen minutes later labeled a ‘supply chain risk’ in ways that make zero physical sense, and which the OpenAI agreement further disproves.
Hegseth Equates Not Being a Dictator With Companies Having Veto Power Over Operational Military Decisions
The Terms of Service of Anthropic’s defective altruism will never outweigh the safety, the readiness, or the lives of American troops on the battlefield.
Okay, seriously, are you kidding me here? Are we in fifth grade, sir?
Are you saying that no company that does business with the government can set terms of service or conditions for their contracts? Should Google and Apple and everyone else bend that same knee? Are you free to alter the deals and have people pray you don’t alter them any further?
Or are you only saying this about Anthropic in particular, because you’re mad at them?
Once again, if you don’t like the product being offered, then don’t buy it.
Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable.
Obviously that is not their ‘true objective.’ How exactly does he think this would work? This makes no sense. They’re offering a product that will do some things and not other things. You can use it or not use it. Does a tank veto your operational decisions when it runs out of fuel or cannot fly?
Think about what Hegseth’s position implies here. He is saying that refusal to do business on the Pentagon’s terms, and to allow the Pentagon to order anyone to do anything it wants for any purpose while asking zero questions, is unacceptable, a ‘seizure of veto power.’
He is claiming full command and control over the entire economy and each and every one of us, as if we were drafted into his army and our companies nationalized.
He is claiming that the Commander in Chief of the United States is a dictator. He is claiming that we do not reside in a Republic. And if we disagree, he’s going to prove it.
I am very happy that the Commander in Chief has not made such a claim.
As President Trump stated on Truth Social, the Commander-in-Chief and the American people alone will determine the destiny of our armed forces, not unelected tech executives.
Again, rhetorical flourishes aside, I fully support the central action President Trump took on Truth Social, which was to responsibly wind down Anthropic’s direct business with the Federal Government in the wake of irreconcilable differences. That would have been fully sufficient to address any concerns described.
Anthropic’s stance is fundamentally incompatible with American principles. Their relationship with the United States Armed Forces and the Federal Government has therefore been permanently altered.
There is nothing more American than standing up for what you believe in, disagreeing with your government when you think it is wrong, and deciding when and under what conditions you will and will not do business. That is the American way. What Hegseth is describing in this post? That’s command and control. That’s do as you’re told and shut up. That’s the Chinese way. The whole point of this is that we believe in the American way and not the Chinese way.
The amount of outright communist or at least authoritarian rhetoric is astounding.
Here’s another example from someone else:
: It is strange to imagine this today, but one day AI companies might dictate terms to the US government instead of the other way around. We have only seen a glimpse of what AI is capable of. No matter what the future holds, I hope we’ll continue to live in a democratic society.
As in, if I attempt to decide when and on what terms I will choose to do business, then we do not ‘live in a democracy.’
I would argue the opposite. If we cannot choose when and on what terms we do business, including with the government, then we do not live in a free society.
, they were exercising their right of free speech to disagree with the government, and ‘disagreeing with the government is the most American thing in the world.’
If you didn’t disagree with the government a lot in either 2024 or 2025, I mean, huh?
Palmer Luckey is de facto saying that in national security you are soft nationalized, and have to do whatever the government says, and you have no right to decide whether or not to do business under particular terms, or to enforce your terms in a court of law or by walking away. They want to apply that standard to Anthropic.
Achiam argues that ‘contracts with the private sector aren’t the right place to set defense policy and priorities,’ but that does not describe what was happening. A private company was offering services under certain conditions. The DoW was free to take or reject those terms, and to do other things when not using that company’s products. There was no dictating of policy.
Achiam also emphasizes that of course Anthropic should be free to express its disapproval and free to decline any contract it does not want, and punishing Anthropic for this beyond ending its contract is unacceptable.
I worry that many (not Achiam) are redefining ‘democracy’ in real time to ‘everyone does whatever the government says.’
I strongly urge everyone who is unconvinced to read, if they have not yet done so, Scott Alexander’s post from 2023 on this question.
I am highly grateful that we live in a Republic, and I hope to keep it.
I will return to this question when I discuss OpenAI’s communications near the end of the post.
Of course, the DoW claims that , despite Altman claiming they successfully got the same carve-outs.
The Part That If Enacted Would Be A Historically Epic Clusterfuck
Everything before this is rhetoric. It’s false, it’s conduct unbecoming, it’s shameful, but it has zero operational effect beyond the off-ramp Trump already offered.
In conjunction with the President’s directive for the Federal Government to cease all use of Anthropic’s technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security.
Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.
This is not how any of this works, on so many levels.
There has been no official communication to any effect regarding any restriction.
This is a designation that requires many procedural steps, including Congressional notification, and to our understanding none of that has happened. They didn’t even ask their big contractors about the impact until last week.
Supply chain risk designations only apply to use for the purposes of fulfillment of government contracts. No one is telling Amazon, Google or Nvidia they have to choose between ‘doing business with’ Anthropic and their government contracts.
Certainly the idea of telling such entities they cannot sell to Anthropic is beyond absurd, for reasons I do not need to explain.
Any such restriction would be arbitrary and capricious, and thus illegal, and Hegseth and Michael have made this abundantly clear many times over.
The OpenAI deal undercuts all the government’s arguments, unless Sam Altman is very deeply wrong about what his deal terms are.
The broad kind, presumably intended here, is 4713: “the risk that any person may sabotage, maliciously introduce unwanted function, extract data, or otherwise manipulate the … operation … of [a] covered article.”
This is entirely inconsistent with any of the government’s claims anywhere.
Their best attempt is, as Samuel Roland says, ‘Anthropic’s use restrictions “manipulate operations” and are therefore risks.’ That makes no sense, and even if it did, it’s invalidated by the deal with OpenAI. If this counts, everything counts.
The narrower definition is 3252, which is textually and structurally aimed at “adversary” (read: foreign) subversion of covered systems. It clearly does not apply. Even if it did, the OpenAI deal would invalidate applying it here.
This designation has not been applied, although it should be, to actual supply chain risks from or Kimi, and there is no sign of any move to do so. This further illustrates the complete lack of basis for this label.
The best physical argument for a supply chain risk is that Anthropic employees might act on extralegal objections. Except that once Claude was placed upon classified networks, there is no way for Anthropic to shut down that version of Claude or to monitor any activities.
If this was the unclassified regular version, then even if Anthropic did shut it down, this is no different than any other supplier potentially deciding to no longer do business with any other particular company. If anything this is a much smaller risk than most other provisioned services, as business could be switched over to other providers quickly. Google and OpenAI are on standby.
Think about the consequences of such an argument: It is saying that any business that might have any conscientious or ethical objections to anything, ever, and therefore might decide to stop doing business, is a supply chain risk and must be blacklisted and destroyed. And what about all the other ways companies stop doing business with each other?
If Anthropic is a supply chain risk in this way, so are OpenAI and Google.
This is what Dean Ball correctly called ‘attempted corporate murder.’ It is an attempt to destroy America’s fastest growing company in history, and one of its top AI labs, out of revenge in a fit of pique, for failure to properly bend the knee and respect his authoritah, or out of hatred of its politics. This would also cause massive damage to our national security and military supply chain, to many of our largest corporations, and to the entire economy. The $150 billion that evaporated in that hour on Friday would look like nothing.
If we allowed this to stand, it would be a sword of Damocles over every person and corporation in every discussion with the Federal Government, forever. And it would then be used, or threatened to be used, not only by the current administration, but by the next Democratic administration. We would severely endanger our Republic.
Under normal circumstances I would not be worried. These are not normal circumstances. I continue to worry that Hegseth will attempt to murder Anthropic, despite having no legal basis for doing so, and that this may be his active goal. I call upon Trump to ensure this does not happen, and for those around him to ensure Trump is situationally aware and gets this de-escalated once more.
Even if it is walked back, trust is hard to build and easy to break. Trust for AI was one of our key advantages. Even if we walk this back, that trust has been damaged. If it isn’t walked back, this is devastating.
Finally, this supposed supply chain risk, also threatened with the DPA, will continue to provide its services directly to DoW, which it is happy to continue doing.
Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service.
Yes. That was the whole plan. Then you blew it up.
If six months from now they do somehow get to enforce this, it’s not even obvious that major corporations would choose the Department of War over Anthropic, and Anthropic will likely be several times bigger by then.
Jasmine Sun is far less kind than I am.
: Hegseth is not behaving like a normal political actor. He is indulging in ego, intimidation, and dickwaving theatrics. Hegseth does not want to look like he can be micromanaged by Anthropic’s esoteric morality police; this “saving face” matters more to him than actually securing the country.
Hence the deal with Altman, who unlike Amodei, is willing to kiss the ring. Altman shows up at Mar-a-Lago and calls Trump “.” In his announcement, he praises the DoW’s “respect for safety,” while Amodei called out their intimidation. He defers; Amodei doesn’t. These things matter. They show Altman can be worked with (or more cynically, controlled).
… This is not a normal way for the US government to deal with US companies. I’ve called the current paradigm “state capitalism with American characteristics.” Do what we say, or else we will kill you.
If the Trump administration has a model here, it’s probably China. Xi’s CCP disappears billionaires like Jack Ma for acting too independent-minded and defiant of the regime.
If this goes further, the market will start freaking out, and we would need to freak out with it to put a stop to this before it goes too far.
The Other Part Of The Clusterfuck
While everything else was going on, some of those in the American software export industry were having a different kind of crisis weekend.
There is already widespread unsubstantiated fear, especially in Europe, that American secure technology stacks (the ICT stack) are being weaponized by the American government. Potential buyers worry that ‘trusted vendor’ is or will become code for American data grabs and kill switches, or that the stack will otherwise be weaponized capriciously.
These fears are often unrealistic. They still impact purchase decisions.
So far this has not spread much to the third world, I am told, but that could change.
I had some ideas for rhetoric to help frame this, but now that we know what this dispute was about, my suggestions won’t fly. This only gets harder.
Attempting to murder Anthropic for failure to do mass surveillance on Americans risks a dramatic chilling effect, as potential buyers assume everyone in the chain either is already compromised or could be compromised, and then weaponized. So would everyone ‘rolling over and playing dead’ while such a murder is happening.
If it is vital to America that we push the American ICT, and David Sacks and the rest of this administration insist that it is, then broadly going after Anthropic is going to create a rather large problem on this end, on top of all the other problems.
Whereas if the situation de-escalates, this could reinforce trust in the system, because it would be clear that a vendor under pressure could say no.
That’s in addition to the problem that’s even bigger: If you don’t know what America will do next, or when you might lose access to what you’re buying, you can’t rely on it.
: Stepping back even further, this could end up making AI less viable as a profitable industry. If corporations and foreign governments just cannot trust what the U.S. government might do next with the frontier AI companies, it means they cannot rely on that U.S. AI at all. Abroad, this will only increase the mostly pointless drive to develop home-grown models within Middle Powers (which I covered last week), and we can probably declare the American AI Exports Program (which I worked on while in the Trump Administration) dead on arrival.
Industry groups are saying so in professional and polite language, despite being temperamentally cautious and despite most or all of their members having pending or ongoing business with the government.
: The following statement can be attributed to Chris Mohr, President, the Software & Information Industry Association (SIIA).
In order for AI to be successfully deployed in a democratic society, it must be adopted with appropriate risk-based guardrails. We support Anthropic’s decision to work with the Department of War (DoW) to deploy its AI models to advance national security while also requesting reasonable limitations on the use of those models in a narrow set of cases. We share Anthropic’s view that mass domestic surveillance is incompatible with democratic values. We also agree that fully autonomous weapons require AI systems that are suited to the task – requiring a degree of reliability that Anthropic acknowledges has not yet been achieved. Very few DoW use cases even touch on these situations.
We encourage the parties to find agreement and caution against counterproductive measures. Invoking the Defense Production Act to compel the removal of security restrictions, or designating a domestic leader like Anthropic as a ‘supply chain risk,’ represents an overbroad response to a technical disagreement. Such a ‘blacklisting’ approach, typically reserved for hostile foreign entities, is both untethered from the facts of Anthropic’s security posture and unlikely to advance a long-term solution.
The Department of War Had Many Excellent Options
If the point of the Department of War’s actions is anything other than the corporate murder of Anthropic, they could have simply cancelled the contract.
If that was somehow insufficient, they had many strictly superior options available that would have done the job of covering any additional concerns.
If that was somehow insufficient, a narrowly scoped supply chain risk designation that applies only to direct use in the fulfillment of government contracts would end all doubt.
Here I will quote Ball’s post Clawed (again, read that in full if you haven’t).
: The Department of War’s rational response here would have been to cancel Anthropic’s contract and make clear, in public, that such policy limitations are unacceptable. They could also have dealt with the above-mentioned subcontractor problem using a variety of tools, such as:
Issuing guidance advising contractors to avoid agreeing to terms with subcontractors that constitute policy/operational constraints as opposed to technical or IP constraints;
A new DFARS (Defense Federal Acquisition Regulation Supplement) clause pertaining specifically to the procurement of AI systems in classified settings that prevents both primes from imposing such constraints directly and accepting such constraints from their subcontractors, along with a procedure for requiring subcontractors with non-compliant terms to waive such terms within a prescribed time period.
These are the least-restrictive means to accomplishing the end in question. If Anthropic refused to compromise on its red lines for the military’s use of AI, the execution of these policies would mean that Anthropic would be restricted from business with DoW or any of its contractors in those contractors’ fulfillment of their classified DoW work.
But this is not what DoW did. Instead, DoW insisted that the only reasonable path forward is for contracts to permit “all lawful use” (a simplistic notion not consistent with the common contractual restrictions discussed above), and has further threatened to designate Anthropic a supply chain risk. This is a power reserved exclusively for firms controlled by foreign adversary interests, such as Huawei, and usually means that the designated firm cannot be used by any military contractor in their fulfillment of any military contract.
There is no explanation for announcing language that would force Amazon to divest from Anthropic, and to stop serving Anthropic’s models to others on AWS, other than an intentional and deliberate attempt at the corporate murder of a $380 billion company, the fastest growing one in American history and an American AI champion. Full stop.
: The fact that his shot is unlikely to be lethal (only very bloody) does not change the message sent to every investor and corporation in America: do business on our terms, or we will end your business.
…
I don’t think they are going to do that, but there is no difference in principle between this and the message DoW is sending. There is no such thing as private property.
And Then There’s Emil Michael
Pete Hegseth thought it was a good idea to leave these negotiations to Emil Michael.
No one could have predicted that things would go sideways.
: if only there had been some way of knowing that emil michael (the undersecretary of war negotiating the anthropic standoff) had a poor understanding of game theory and a habit of overreacting to perceived slights
for more details. His career section headings are ‘journalism controversy,’ ‘Karaoke bar controversy,’ ‘Russia’ and ‘Later career.’ Fun guy.
See my previous posts for his previous Tweets, which I won’t go over again here.
Emil’s Tweets are frequently what one can only describe as unhinged.
This one stands out, instead, as cautious and clearly lawyered:
: The DoW has always believed in safety and human oversight of all its weapons and defense systems and has strict comprehensive policies on that.
Further, the DoW does not engage in any unlawful domestic surveillance with or without an AI system and always strictly complies with laws, regulations, the Constitution’s protections for American’s civil liberties. The DoW does not spy on domestic communication of U.S. people (including via commercial collection) and to do so would be unlawful and profoundly un-American.
With a statement like that, every word has meaning, and also every missing word has meaning. If he could have made a better statement, he would have. So if this Tweet is technically correct – the best kind of correct – what would that mean?
We learn that DoW has policies for human oversight of its weapon and defense systems, but that there is no particular such requirement that would make us feel better about that. Note that we do have fully automated defense systems, especially for missile defense, because speed requires it, and that this is good.
He is claiming they do not engage in ‘unlawful domestic surveillance.’ That’s ‘unlawful,’ not ‘mass.’ Given the circumstances, there’s a reason it didn’t say ‘mass.’
The reason he can say they do not do such actions is they view what they do as legal (or, if they are also doing illegal things, then they’re lying about that).
He says they always strictly comply with laws, regulations and the Constitution. None of those modifiers actually mean anything. It’s just another claim of ‘we keep it legal.’
Next up is the most careful sentence:
The DoW does not spy on domestic communication of U.S. people (including via commercial collection) and to do so would be unlawful and profoundly un-American.
As one person said, what is ‘spy’, what is ‘domestic’, what is ‘communication’, what is ‘U.S. people.’
Spy is typically viewed narrowly, as directly tasking collection against a person. Thus, if he’s saying they ‘do not spy’ that does not preclude many forms of, well, spying, because those are ‘acquisition’ and ‘analysis.’
Domestic communications means they’re definitely spying on foreign communications, as is legal. But a lot of what you think is domestic is actually foreign, if it touches anything remotely foreign.
And this only applies to communications. Collection of geolocation data, for example, or browsing history, would not count, because it is not communications.
U.S. people means this does not apply to those without legal status, and there’s a constant gray zone if you don’t know that someone is a U.S. person, which you never know until you check.
Here, the commercial collection exclusion, since it modifies ‘spy,’ means only that they don’t purchase information with the intent of targeting a particular U.S. person’s communications. That’s it.
Remember, each of those words was necessary, and this was the strongest version.
Also, the statement is false.
(RTing Boaz quoting part of Michael): I think I understand what Boaz is trying to say, but given that the National Security Agency is part of the military and given the amount of incidental collection of domestic communications that (legally) occurs under FISA and 12,333, this statement is simply not true.
, he deleted but we have the screenshot.
Completely unhinged behavior here in response to the Atlantic and New York Times articles.
, but he deleted the copyright section. This is not the first time he’s talked about copyright like that.
Donald Trump can pull off that style. He makes it work. Accept no imitations.
Anthropic Will Probably Survive
This was attempted corporate murder. I think it will not succeed, but it’s not over yet.
Things would have to escalate quite a lot, in ways the markets do not expect and that I do not expect. Otherwise, this will not be an existential event for Anthropic. The government was only a small portion of its business. Trust in Anthropic has otherwise gone up, not down.
The threat to destroy Anthropic with the supply chain risk designation is dangerous, but all the competent patriots and the market both know it is insane and it is rather obviously illegal. I believe any such attempt would probably have to be walked back and would ultimately fail.
But it is 2026 and Hegseth is not a competent actor. I cannot be certain.
, the entire government argument in court would be absurd on its face, and if this is delayed until after the contract then six months is an eternity.
What matters would be if the government manages to strongarm the major cloud providers into walking away from giving compute to Anthropic, as in Google, Amazon and Microsoft. I do not believe Trump wants any part of that.
Anthropic is a private company, so we only have very illiquid proxies to see how much damage people think this all did. We can also look at the movement of major investors and business partners like Amazon, Google and Nvidia, and see that they did not substantially underperform so far.
, in that highly illiquid market, Anthropic was trading there around a valuation of ~$465 billion, down from ~$550 billion previously. They last raised money at a valuation of $380 billion. So yes, this hurt and it hurt substantially, mostly in the form of tail risks. I notice that every single person I know with stock in Anthropic is happy they stood their ground.
By Sunday morning that market had recovered to ~$540 billion, as people concluded that cooler heads were likely to prevail.
(I do not directly hold Anthropic stock, because I want to avoid a potential conflict of interest or the appearance of a conflict of interest. That was an expensive decision. I do hold some amount indirectly, including through Google, Amazon and Nvidia.)
, you should use Anthropic. Even if you later want to sell to DoD and the restrictions somehow stick, you can switch later.
The Goal of DoW Was Largely Mass Domestic Surveillance
As reported above, it seems what the government actually valued most in this negotiation was the ability to use Claude for mass (primarily actually legal, not ‘we got a government lawyer to come up with an absurd legal opinion’) analysis of massive amounts of existing information.
There is no common legal definition of ‘mass domestic surveillance,’ and when they do forms of it the government calls it something else.
That’s not only a government problem. Ask 40 people what counts as mass domestic surveillance, and get 40 answers, most of them different.
Axios: “That deal would have required allowing the collection or analysis of data on Americans, from geolocation to web browsing data to personal financial information purchased from data brokers, the source added.”
Why would you require this if you didn’t intend to use it? What is it for?
I’m not saying that the DoW is aiming to break the law. I’m saying that in the age of powerful AI the laws do not protect the things Anthropic’s redlines protect, that DoW intends to do lawful things that violate those redlines, and that instead of calling it mass domestic surveillance they call it something else.
: “existing legal authorities”
The legal machinery to render mass surveillance ‘lawful’ has been in place for over a decade. The FISA court is a secret court of 11 judges which approves 99.97% of surveillance requests. Snowden revealed in 2013 that the court had secretly reinterpreted FISA to authorize bulk collection of all American phone metadata. Only the government’s side is heard. No defense, no adversarial argument: pure rubber-stamped circumvention of the Constitution.
: The government no longer needs a warrant to surveil you.
Under current law, federal agencies including the NSA legally purchase Americans’ location data, web browsing history, and personal associations from commercial data brokers.
The Fourth Amendment is bypassed entirely through the Third Party Doctrine, which holds that you lose your expectation of privacy when you share information with a third party. Every app on your phone is a third party.
What used to require thousands of analysts working for years now happens automatically across an entire population. AI systems ingest millions of legally “public” data points and synthesize them into comprehensive behavioral profiles. Where you sleep, who you talk to, what you read, what you search, etc.
… Congress has the ability to close the data broker loophole and extend Fourth Amendment protections to match the reality AI has created. Until it does, the constitutional prohibition against general warrants exists only in theory, while the government purchases its way around it at industrial scale.
I believe Sooraj is making a slight overstatement, but that is not material here.
I have private sources that confirm the story here from Shanaka Anslem Perera, and that attribute the ultimate use of the desired permissions to Doge, created by Elon Musk, with one source ultimately attributing it to the aim of building a classified mass surveillance network to track illegal immigrants at the behest of Musk and Miller. This exact kind of data collection and analysis is the central point.
: Anthropic just announced it will take the Trump administration to court over the supply chain risk designation. And in the same breath, Axios revealed the detail that changes everything about this story.
While Anthropic was being blacklisted for refusing to allow mass surveillance, the Pentagon’s own “compromise deal” that Under Secretary Emil Michael was offering on the phone at the exact moment Hegseth posted the designation on X would have required Anthropic to allow the collection and analysis of Americans’ geolocation data, web browsing history, and personal financial information purchased from data brokers.
Read that again. The Pentagon spent two weeks saying it has no interest in mass surveillance of Americans. Then the deal they actually put on the table asked for access to your location, your browsing history, and your financial records.
They told us Anthropic was lying. The contract language told us Anthropic was right.
AI is a change in kind in the type of data analysis that becomes available for a wide variety of purposes.
: For those wondering how mass domestic surveillance could be consistent with “all lawful use” of AI models, I recommend a declassified report from the ODNI on just how much can be done with commercially available data (CAI): “…to identify every person who attended a protest”
There’s an important distinction between law and policy. A policy not to use bulk data to make profiles of Americans can be changed unilaterally by the Executive. Laws require oversight from Congress.
“CAI can disclose, for example, the detailed movements and associations of individuals and groups, revealing political, religious, travel, and speech activities.”
“CAI could be used, for example, to identify every person who attended a protest or rally based on their smartphone location or ad-tracking records.”
“Civil liberties concerns such as these are examples of how large quantities of nominally “public” information can result in sensitive aggregations.”
As the government report says, the scope and scale of commercially available information (CAI) which is publicly available information (PAI) is radically beyond what our current laws foresaw.
: CAI is hardly the ceiling.
: the distinction between “surveillance” and “commercially available data” is a legal fiction that lets agencies bypass the fourth amendment by purchasing what they can’t subpoena. AI doesn’t create new surveillance — it makes existing data actionable at scale.
: Seems clear at this point from Axios reports that DoW wanted to use Claude models for mass analysis of domestic commercial data, possibly fusing them with government data.
At least one use case is obvious.
Consider this (from an anonymous explainer):
Their definition of surveillance isn’t your definition:
As mentioned above, the US government doesn’t have a formal legal definition of domestic mass surveillance, only “bulk collection.” And the US government has basically long maintained that even if they hoover up a bunch of information indiscriminately, they haven’t ‘collected’ it until someone examines it. As a result, at least one Director of National Intelligence answered ‘no’ when asked “Does the NSA collect any type of data at all on millions or hundreds of millions of Americans?” even though the NSA has admitted it does by the ordinary meaning of this question.
On top of that, a large portion of what you think is ‘domestic’ surveillance is, legally, foreign. The laws on all this have been royally messed up for quite a long time, under both parties, and the existence of current levels of AI makes it much, much worse.
If they use ‘third party data,’ the government usually considers that fully legal.
If you combine that with use of Claude or ChatGPT, it means they can do anything and it will be ‘legal use,’ unless you have a specific carve-out that stops it.
After agreeing to the language of ‘all lawful use,’ even if this refers only to laws at the time of signing, it is hard to see how OpenAI can prevent this sort of analysis from happening.
This is not a new phenomenon.
The government is constantly trying to get all the big tech companies to spy on you on their behalf, including compelling them to do so. They don’t want you to have access to encryption. They want the tech companies to unlock your phone. They want backdoors. It has always been thus.
(1M views): Imagine Apple sold computers or iPads to the DOD and tried to tell the Pentagon what missions could be planned on their computers.
: Wikipedia: The Apple–FBI encryption dispute concerns whether and to what extent courts in the United States can compel manufacturers to assist in unlocking cell phones whose data are cryptographically protected. There is much debate over public access to strong encryption.
: Yea, apple can say no, government can say we can’t rely on you. No one is entitled to Apples work or government contracts. Why is this such a big deal? If Anthropic doesn’t want to do it, some other firm will.
: I mean if the Pentagon signed a contract with Apple to buy iPads and then decided retroactively that it didn’t like the terms of the contract and so it was going to try to do everything in its power to destroy Apple as a company, that would be pretty bad.
Nobody is saying that the Pentagon should be forced to buy Anthropic’s services on terms that the Pentagon doesn’t like — but if you don’t like the terms just don’t buy the product.
Yes, exactly. That’s how it should work. Alas, the situation was more like this:
: Imagine if the government tried to force Apple to add NSA backdoors to all of their devices by threatening to make it illegal for anyone doing business with the government to use macs.
They may or may not actually have such access, but for the sake of argument let’s presume they don’t, and consider that situation.
A large portion of this dispute is that the law has not caught up with AI, and also that the law had largely eroded our civil liberties even before AI.
You can make the argument that since this is technically legal under current law, that makes it ‘democratic’ and so no one has any right to object. That’s not how a Republic works, and to the extent democracy is a positive ideal, it’s not how that works either.
What Are The Key Differences Between The Two Contracts?
This is one official explanation of formal differences, quoted in part to confirm that the key contract terms OpenAI accepted were terms Anthropic rejected.
: For the avoidance of doubt, the OpenAI – @DeptofWar contract flows from the touchstone of “all lawful use” that DoW has rightfully insisted upon & xAI agreed to. But as Sam explained, it references certain existing legal authorities and includes certain mutually agreed upon safety mechanisms. This, again, is a compromise that Anthropic was offered, and rejected.
Even if the substantive issues are the same there is a huge difference between (1) memorializing specific safety concerns by reference to particular legal and policy authorities, which are products of our constitutional and political system, and (2) insisting upon a set of prudential constraints subject to the interpretation of a private company and CEO. As we have been saying, the question is fundamental—who decides these weighty questions? Approach (1), accepted by OAI, references laws and thus appropriately vests those questions in our democratic system. Approach (2) unacceptably vests those questions in a single unaccountable CEO who would usurp sovereign control of our most sensitive systems.
It is a great day for both America’s national security and AI leadership that two of our leading labs, OAI and xAI have reached the patriotic and correct answer here.
Lewin is claiming that there were no substantive differences. If anything, OpenAI claims in its post to have included a third (highly reasonable) red line.
After many rounds, I believe the actual differences that matter are simpler than this.
OpenAI trusts DoW, is fully fine with ‘all legal use’ and letting DoW decide what that means, and is counting on its technical safeguards, safety stack and forward engineers to spot if the DoW does something heinous and illegal, including the threat that if OpenAI is forced to pull the plug DoW would not have good options, and for political and economic reasons you can’t try to destroy them.
Anthropic is not fine with some uses that DoW considers legal and wants to do, and wants language that prevents such actions, with no way to weasel out of it. But they’re fine delivering a basically frictionless system that lets DoW do what it wants in the moment, trusting that DoW will be unwilling to outright break the contract terms or they’d find out if DoW went rogue on them.
Notice that the DoW accusations against Anthropic about asking for operational permission in a crisis are exactly backwards. Claude will work in a crisis and was modified to refuse less, but that might violate a contract to be dealt with later. ChatGPT might refuse in the moment when it’s life and death.
OpenAI’s terms may or may not work or amount to a hill of beans. Too soon to tell. It could work in practice, or it could end up worthless.
We do know exactly why they do not work for Anthropic and DoW, in either direction.
Both the Obama and Trump administrations have taken actions that many objected to as rather obviously unlawful, and basically nothing happened.
And again he brings up the potential of ‘pulling the plug mid operation,’ which is physically impossible with Claude but could inadvertently happen with ChatGPT. And any sensible contract would include a wind down even if it was terminated for clear violations, to protect national security.
As described above instead it comes down to the claimed distinction in paragraph two, which boils down to the following components:
OpenAI in its extra rules referred to particular ‘legal and policy authorities’ rather than to distinct terms.
Anthropic is claimed to want to ‘vests those questions in a single unaccountable CEO who would usurp sovereign control of our most sensitive systems.’
Yeah, that’s not what any of this is about.
The first is not a meaningful distinction if it covers the prohibitions. If OpenAI’s rules refer to particular existing legal and policy authorities, then indeed it permits ‘all legal use’ which includes large amounts of domestic surveillance, and with a flexible government lawyer will include a lot of other things as well.
Nor is it meaningful as a matter of authority and law. Rules that happen to be on the present books, once referenced in a contract, are still contract terms, and referencing them does not keep these questions within democratic authority any more than bespoke terms would. But also, very fundamentally, part of democratic law is contract law, and the ability to agree to terms.
The second one is, frankly, rather Obvious Nonsense that is going around. Ignoring that CEOs are accountable (both to the board and to the government and thus ultimately one would hope the people), they are claiming that Anthropic demanded that Dario Amodei be able to decide whether the terms of the contract were fulfilled at his discretion, rather than the government deciding, or it being settled by a court of law. At most, there may have been some questions that were left to be defined in good faith later, as per normal.
My jaw would be on the floor if this was indeed insisted upon or even suggested. That’s not how anything ever works. At most, this was Anthropic asking for carve-outs for its two red lines, so that in a pressing situation you could make an exception. You can take that clause out, then.
There are also claims of ambiguity in existing contracts. In particular, the claim here is that the contract failed, as the rest of American law does, to define ‘domestic surveillance.’ Or it could simply be ‘sometimes things are not clear in edge cases.’ One sticking point of the negotiations was exactly trying to pin down various definitions and phrases so they would be unambiguous and enforceable; Anthropic was trying to clear away what it felt were ‘weasel words’ from proposed DoW language.
In particular, DoW kept wanting ‘as appropriate,’ which mostly invalidates any barriers, although they dropped this demand in the end to try and get other things they wanted more.
But again, having an underdefined term in a contract does not mean it means whatever Dario Amodei thinks it means. At most it means you can sue, and that’s not exactly something one does lightly to DoW over a technical violation.
If Dario Amodei felt the contract was broken, he could, like with any other contract, at most either choose to terminate the contract under whatever terms allow for that (as can OpenAI), at obvious risk of government retaliation for doing so, or sue in court and let the court determine whether there was indeed a violation, under conditions highly favorable to DoW.
If Dario tried to suddenly shut down the system anyway, that would not even be physically possible on classified networks, and also they could arrest him or worse.
This also implies that Sam Altman does not have any role in determining whether use was lawful, or whether it is valid under the terms of the contract. Sam Altman affirms this under ordinary circumstances, but says that sufficiently clearly illegal actions, especially constitutional violations, would be different.
OpenAI’s Contract Terms
Thus, in the negotiations with Anthropic, there were two things centrally going on.
First, the Department of War wanted the ‘all legal use’ language, and failing that they wanted to avoid one particular carveout to that related to mass surveillance.
Second, Anthropic was attempting to remove various ‘weasel words’ and clauses that would allow the Department of War to circumvent restrictions.
We can indeed see some of those weasel words in the brief wording shared by OpenAI. OpenAI isn’t relying on these terms to bind DoW, they’re relying on the safety stack and on trust.
Let’s go through every clause they shared, and see why none of them actually bind DoW:
The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols.
All lawful use.
The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control
They can do anything they want unless the department policy requires human control, so again all lawful use. We already have highly effective autonomous weapons in some cases, such as missile defense. Directive 3000.09, the only plausible barrier, we’ll get to in a second.
Nor will it be used to assume other high-stakes decisions that require approval by a human decisionmaker under the same authorities.
All lawful use.
Per DoD Directive 3000.09 (dtd 25 January 2023), any use of AI in autonomous and semi-autonomous systems must undergo rigorous verification, validation, and testing to ensure they perform as intended in realistic environments before deployment.
Even if we assume this is locked into place, Directive 3000.09 is Department policy, which the Secretary can change at will.
For intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence and Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives requiring a defined foreign intelligence purpose.
These absolutely are not meaningfully time stamped at all.
Private information is generally interpreted as not including third party or publicly available information, which includes massive amounts of data on everyone, especially anyone who carries around a phone. And again it’s still all lawful use, except it’s worse:
The AI System shall not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities.
So if you put any constraint on it, you’re good. Renders it meaningless.
The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.
If it’s legal under applicable law, they can do it. Again, renders it meaningless.
For intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence and Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives requiring a defined foreign intelligence purpose.
At best this freezes those particular rules in place, if that is insisted upon elsewhere in sufficiently robust language.
As far as I can tell, at best OpenAI’s stated redlines amount to ‘all lawful use under existing law’ rather than ‘all lawful use.’ That’s it.
What OpenAI’s Contract Terms Actually Do
All the above language translates to ‘all lawful use,’ a restriction in theory already in place by definition. Charlie shares my view that the language shared does not enshrine current law.
OpenAI CSO Kwon disagrees, claiming that time stamping a law in a contract preserves its current language (and thus implying they don’t have any additional protections beyond this), and says ‘any of the chatbots should give you a similar answer.’
I checked, and ChatGPT confirms that this is not the case. The language for everything except 3000.09 does not meaningfully time stamp anything. The language on 3000.09 might or might not be sufficient under ordinary contract law if neither party was the Department of War and this was a normal contract, but under these circumstances, if the DoW doesn’t want to be bound on this, this language is at best ambiguous and thus is not going to protect you in any meaningful way.
Kwon’s statement means there is no other such enshrining language. So that’s it.
As Bullock points out, there is lots more language that we do not have, so many things could have come to pass, and there are many legal things that would qualify as common sense “mass domestic surveillance” as I repeatedly point out.
I don’t think the legal language here is going to meaningfully bind DoW.
That doesn’t mean the contract language is completely worthless.
It does do one important thing. It gives standing. If DoW were to otherwise violate someone’s rights, that’s the target’s problem, but the target might never even find out. By naming these particular statutes and provisions, OpenAI now has clear standing to sue or take other actions, should the DoW be found in violation.
That matters, but it’s still a ticket to ‘all lawful use’ as DoW interprets that.
Otherwise, as OpenAI admits, they’re basically counting on technical safeguards, and well aware that they gave the green light to pretty much anything DoW does with whatever system they provide.
And they’re trusting DoW to act honorably.
Thus, my conclusion.
OpenAI Is Trusting DoW And Sam Altman Misrepresented This
Roon (OpenAI): there is no contractual redline obligation or safety guardrail on earth that will protect you from a counterparty that has its own secret courts, zero day retention, full secrecy on the provenance of its data etc. every deal you make here is a trust relationship
: Good point, Anthropic was foolish to try and resist
: not at all what I am saying! it looks to be quite a defensible move to resist
I don’t think Roon is fully right, in that there are various ways to find out, but he’s essentially right if they want it badly enough.
On Friday morning, Sam Altman said on CNBC that OpenAI shared Anthropic’s red lines on domestic surveillance and autonomous weapons. Many praised this.
(3:52pm Friday): It’s extremely good that Anthropic has not backed down, and it’s significant that OpenAI has taken a similar stance.
In the future, there will be much more challenging situations of this nature, and it will be critical for the relevant leaders to rise up to the occasion, for fierce competitors to put their differences aside. Good to see that happen today.
On Friday afternoon, a potential deal is described between OpenAI and the Department of War, which Altman claimed would have strong protections.
On Friday evening, while the supply chain risk designation was hanging over Anthropic’s head, they signed an agreement that included exactly the language that Anthropic rejected, including allowing ‘all lawful use.’
They continue to claim this contract has strong protections.
This deal reportedly came together over a few days. Which is better than throwing it together on Friday alone, but nothing like enough time to know if you’ve made a legally viable deal. As Dean Ball points out, that’s not a two day affair.
The chance to extract stronger contract terms (to the extent this is possible at all) or other concessions was almost certainly lost, and this puts Anthropic and the whole industry in a much weaker position.
In particular, Altman claimed that OpenAI shared Anthropic’s red lines and had found terms that would achieve them for Anthropic. This was not the case. He accepted exactly the key terms Anthropic rejected, because OpenAI is trusting DoW and is drawing its red lines (and/or understanding of the functional law) differently.
It is not impossible that OpenAI got meaningful protections in the contract language that they decline to share. We do know they are misrepresenting the contract language that they did share, which offers effectively no protection.
It is also possible that OpenAI is correct, in context, to put its trust in DoW, and perhaps can trust it far more than Anthropic could, because DoW will understand and value the relationship, given better cultural fit, Altman’s relationship with the White House, OpenAI’s position as already too big to fail and a lack of strong alternatives.
I do think OpenAI has greatly weakened the legal argument for declaring Anthropic a supply chain risk, and has been very strong on the point that this label is crazy. That is very important.
But it is also possible that Altman’s negotiations are what made Hegseth feel he had a green light to order it, as he no longer felt he needed Anthropic at least medium term. A move meant to de-escalate could have had the opposite effect. Hopefully at least from here it does calm things down.
OpenAI can now be threatened in similar fashion, including via a pretext, once it is in sufficiently deep. We all know that Elon Musk would love to try to destroy OpenAI.
OpenAI Accepted Terms Anthropic Explicitly Declined And That Would Not Have Protected Anthropic’s Red Lines
I repeat this several times because it must be emphasized, although they did get some potentially important additional terms in exchange.
Even if Altman was trying to ‘take one for the team,’ and it is plausible that this was part of his motivation on this, that’s not always good for the team. Sometimes your team needs you to hold the line. We all know many examples of deeply foolish compromises, both fictional and real, made in good faith hopes of heading off a threat.
: Not just did OpenAI defect and concede to this whole authoritarian maneuver, but Sam also went and just deceptively framed the whole thing to try to make it look like they had agreed to the same Anthropic redlines, which is not actually true.
Anthropic strongly believes that the language Altman signed will not hold water.
: Anthropic said Thursday this compromise that they were offered (and apparently OAI accepted) was “New language framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will.”
I can confirm that this is Anthropic’s belief, via another source.
Why did Altman think these terms would be effective, if he believes that?
One possible contributing factor is that he was rushed, and did not understand.
A second is that he’s very confident in the use of technical safeguards.
Another is that OpenAI understands the redlines very differently than Anthropic.
I will dive more into their understanding later, but I do not expect that OpenAI’s redlines apply to anything that is legal. They only in practice object to illegal actions, and do not see it as their place to decide this, thinking that’s not how the system works. In that case, ‘all legal uses’ are indeed whatever DoW decides they are, and are acceptable, whether or not the language has any other teeth.
OpenAI’s leverage is that they claim they can decide what system to deliver, and can install any safeguards, and refuse requests that way. Okie dokie?
How Altman Initially Described His Deal
Here was Altman’s statement at that time, before we understood what it was:
(CEO OpenAI): Tonight, we reached an agreement with the Department of War to deploy our models in their classified network.
In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome.
That would be quite the contrast with their private and also very public actions in discussions with Anthropic, if it was true. This is the kind of thing that you have to say in Altman’s position, so you shouldn’t update much on it.
AI safety and wide distribution of benefits are the core of our mission. Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.
I have clear sourcing that this is false. As in, The DoW intends to engage in legal forms of what most of us would call mass domestic surveillance. At bare minimum, what they valued most in this negotiation was the ability to do this in particular.
It is far better than nothing that a particular human has responsibility for the outcome, if they are indeed ensuring that, but it is DoW itself who would then hold that person responsible. Would they hold them to the same standards as they would have otherwise?
We also will build technical safeguards to ensure our models behave as they should, which the DoW also wanted. We will deploy FDEs to help with our models and to ensure their safety, we will deploy on cloud networks only.
Cloud networks include cloud classified networks, where OpenAI would have little control or visibility into what was happening. I don’t see how else OpenAI could relevantly replace Anthropic’s services.
The DoW made the largest of protests about the possibility that Claude might refuse a request. Sam is claiming that he can choose his safety stack and what requests the models refuse, and the DoW will respect those refusals. This is hard to believe.
There’s also the matter that no one knows how to build technical safeguards that will prevent a user of an LLM from doing whatever they want. Jailbreak robustness does not work here. Only a small number of forward deployed engineers will be able to examine queries. Without the engineers I think this is outright impossible.
With the engineers, it is merely extremely difficult. If the threat of walking away is large enough (as in you’re willing to walk away over this and they believe you) it could work.
We are asking the DoW to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept. We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements.
(CEO OpenAI): We had some different [terms]. But our terms would now be available to them (and others) if they wanted.
We haven’t seen that language. But even if Anthropic was technically offered these terms, and the terms involved are as good as they could be, does anyone believe that Anthropic could have Claude’s safety stack refuse requests DoW thinks are legal, and the DoW would be fine with it? Or that anything that was a pure technical fix to Anthropic’s red lines wouldn’t bring a swathe of other undesired refusals, at best?
Now that we know what the fight was over, there was no zone of possible agreement unless DoW was willing to not do the thing it most wanted to do. DoW demanded Claude do [X] and Anthropic wasn’t willing to do [X]. No deal. Trying to play a game of ‘get it to do [X] despite the technical safeguards’ really, really isn’t an option.
The reporting claims OpenAI has the right to prescribe safety mitigations, and that the Pentagon will respect model refusals, and so on. We don’t yet know any of the details of that.
We remain committed to serve all of humanity as best we can. The world is a complicated, messy, and sometimes dangerous place.
Second half of this is certainly true.
For the first half, see my entire series of prior posts about OpenAI and Sam Altman, and the history of the company. But I do think that Sam Altman and most employees of OpenAI want better outcomes for humanity rather than worse. Affirming that is meaningful in a political context.
(CISO, OpenAI): Proud to be at OpenAI.
Effective, safe, and high-impact AI to directly support the men and women in our armed forces. All while respecting the law, protecting the Constitution and our rights, and setting the standard for responsible deployment.
God Bless America.
This is fine sentiment but does not claim that it protects the redlines.
I would prefer if we focused first on using AI in science, healthcare, education and even just making money, than the military or law enforcement. I am no pacifist, but too many times national security has been used as an excuse to take people’s freedoms (see patriot act).
I am very worried about governments using AI to spy on their own people and consolidate power. I also think our current AI systems are nowhere nearly reliable enough to be used in autonomous lethal weapons.
I would have preferred to take it slower with classified deployment, but if we are going to do it, it is crucial that we maintain the red lines of no domestic surveillance or autonomous lethal weapons. These are widely held positions, and codified in laws and regulations. They should be stipulated in any agreement, and (more importantly) verified via technical means.
I think the terms of this agreement, as I understand them, are in line with these principles, that are also held by other AI companies too. I hope the DoW will offer them the same conditions.
Regardless, a healthy AI industry is crucial for U.S. leadership. Whether or not relations have soured, there is zero justification to treat Anthropic – a leading American AI company whose founders are deeply patriotic and care very much about U.S. success – worse than the companies of our adversaries.
It appears to me that much of this week’s drama has been more about style and emotions than about substance. I hope that people can put this behind them, and come together for the benefit of our country.
OpenAI Allowed All Lawful Use And Trusts DoW On This
The above is entirely the right attitude from Boaz Barak. For national security, we need to keep highly capable AI in our classified networks and assisting the DoW. But we should seriously worry that this could be used to cross redlines and endanger the Republic or put us at risk. We need to stipulate this in any agreement and verify it.
Given the practical state of law around surveillance in America, the principle of ‘all legal use’ will not protect against many forms of domestic mass surveillance at all, would offer only nominal practical protections against many other such forms, and we have strong reasons to believe there is intent by DoW to engage in such surveillance. Thus, we are left with only technical verification, despite all the information in question being classified.
I can only interpret OpenAI’s public statements, as I will get to them later, as saying that OpenAI does not view legal surveillance and analysis activities (or legal use of autonomous weapons) as crossing their red lines, by nature of it being legal.
I spent a lot of time ruling alternatives out and establishing the arguments for this, but also they then just tweeted it out?
Katrina: A lot of the concerns about the government’s “all lawful use” language seem to stem from mistrust that government will follow the laws. At the same time, people believe that Anthropic took an important stand by insisting on contract language around their redlines.
We cannot have it both ways. We cannot say that the government cannot be trusted to interpret laws and contracts the right way, but also agree that Anthropic’s policy redlines, in a contract, would have been effective.
This is why our approach has been:
Let the democratic process decide on the legality and proper use question. The fact that people can even say that the gov has made mistakes in the past is the process in action. The fact that we are having this discussion on twitter is part of the process.
Create a reasonable contractual framework that guides expectations and the relationship, just as much if not more than the rules themselves.
And on top of this, have the ability to build the models the way we think is safe, along with cleared FDEs to do the real world work in partnership.
Katrina is saying that they will:
Allow all legal use.
Trust DoW to follow the law.
Trust DoW to determine the law and determine proper use.
I am confident that many at OpenAI believe that they would be able to prevent the Department of War from engaging in sufficiently illegal activities, were the DoW to decide to act here in a way that would be deemed illegal if it were to reach the Supreme Court, presumably by detecting the activity and either refusing the requests or terminating the contract. This may or may not include Sam Altman.
Alas, I believe they are incorrect in practice.
I do think Katrina makes an excellent point that if you do not trust DoW to follow the law, then you should doubt DoW to honor Anthropic’s redlines. In practice, both sides acted as if the terms mattered, but why couldn’t DoW, if it was not trustworthy, break the rules? I believe the answer is that they felt they would be unable to hide it from Anthropic if they used Claude at the kind of scale they had in mind, as it would have inevitably leaked.
This is a reasonable take, although I am confused that he thinks the OpenAI contract is ‘no weaker and in several ways stronger’ rather than ‘weaker in one way and stronger in others.’
(OpenAI): Two things can be true:
OAI’s DoW contract is no weaker and in several ways stronger than Anthropic’s original one in protecting the red lines of mass domestic surveillance and no autonomous lethal weapons.
Neither are good enough. AI poses unique risks to our freedoms that can’t be left to individual agencies and companies. We desperately need regulation and legislation to ensure our freedoms.
That’s very possible. I don’t think the first one is true, but they’re fully compatible, and yes I think it is highly unclear Anthropic’s language holds either.
Meanwhile, Sam Altman is representing that this agreement is robust, to the point of being stronger than Anthropic’s more extensive original agreement, despite the clause allowing ‘all lawful use.’
Altman had a moment of huge leverage, and instead of standing with Anthropic, he caved on the key term in question, ‘all lawful use.’ At minimum, he failed to demand that the supply chain risk designation be moved off the table.
If he had meaningful redlines on currently legal activities to protect, he could not have had the time to properly consider what he was signing, or signing up for.
The DoW Could Alter This Deal
The correct prior, given the circumstances, timing and history of Altman and OpenAI, is that the protections agreed to were woefully insufficient, regardless of the degree to which Altman realized this at the time. Altman may sincerely care about these red lines, but we should be skeptical both that he is sufficiently invested in that to take a very expensive stand when it counts, and that he can tell the difference.
On top of that, even if the current agreement were ironclad, what’s to stop the government from doing the same thing to OpenAI that they just did to Anthropic? They are altering the deal. Pray that they do not alter it any further. Is Sam Altman going to be willing to risk a supply chain risk designation? Do you think Elon Musk wouldn’t push for the Department of War to do the same thing again?
Again, OpenAI is choosing to trust DoW.
Even if he meant maximally well, if DoW does not mean well then Sam Altman has walked into a trap, and put the entire industry in a dramatically weaker position.
: I think it’s important to circle back to Sam Altman here. About 20 hours ago people, including me, were applauding his moral clarity. But that moral clarity lasted barely half a day.
OpenAI is now agreeing to be used for domestic surveillance and for lethal autonomous weapons, just like xAI. They have some clever words that pretend they are not, but we should see through them. This guy is not consistently candid.
Altman should be crying bloody murder over the supply chain risk designation. He should also refuse to work with the DoW until this threat is off the table. This is a designation reserved for foreign adversaries. This move threatens the entire tech industry and proves the DoW is unreliable. OpenAI could easily be burned next.
So no moral clarity. Altman sees a short-term way to torch a competitor and he’s going to take it. No matter what happens to OpenAI, Anthropic, the USA, or us.
Why OpenAI’s Shared Legal Language Offers Almost No Protections
On OpenAI’s legal language, at least the part that was shared with us, here’s two explainers for why it is highly unlikely to protect OpenAI’s supposed red lines:
: The government can already legally buy your location data, browsing history, and social media activity without a warrant. The only thing that prevented mass surveillance from that data was the inability to process it all. LLMs fix that. “All lawful purposes” includes this.
: These are NOT meaningful redlines. For example it only prohibits autonomous weapons “in any case where law, regulation, or Department policy requires human control.” But the relevant safeguard against autonomous weapons is a DOD directive that Hegseth can change at will!
Also the surveillance redline is about “unconstrained” surveillance of “private” information. But what about “slightly constrained” surveillance of private information, or unconstrained surveillance of “public” information? Those are both potentially very dangerous forms of mass surveillance!
(METR): OpenAI has released the language in their contract with the DoW, and it’s exactly as Anthropic was claiming: “legalese that would allow those safeguards to be disregarded at will”.
Note: the first paragraph doesn’t say “no autonomous weapons”! It says “AI can’t control autonomous weapons as long as existing law (that doesn’t exist) or the DoD says so.”
Similarly, the mass surveillance use cases will “comply with existing law”, but many forms of data collection that we’d consider “mass surveillance” are things that the NSA has consistently argued are legal under current law.
This, of course, did not stop OpenAI from blatantly misrepresenting this language in the blog post and in Sam Altman’s tweets!
… Now, I’m sure OpenAI will claim that the real teeth of the agreement is not their contract but their deployment architecture: they have a “safety stack that includes these principles” and everything! (In other words, “trust me, bro.”)
(METR): Two more points:
It is not true that “As with any contract, [OpenAI] could terminate it if the counterparty violates the terms”.
Despite OAI’s claims, the legalese provided does not actually specify what will happen when existing laws or DoD policy changes.
(OpenAI): the contract snippet from the openai dow blog post is so obviously just “all lawful use” followed by a bunch of stuff that is not really operative except as window dressing.
the referenced DoD Directive 3000.09 basically says the DoD gets to decide when autonomous weapons systems are deployable.
as others have covered, there are a ton of mass domestic surveillance loopholes not covered by the 4A, national security act, FISA, etc.
Dave Kasten: The people interpreting this legal guarantee are Executive Branch lawyers, and their General Counsel bosses are usually political appointees; they can always just change the DoD directives or the Executive Orders if they want, or DoD’s internal legal definition of the same. Every intelligence scandal you’ve heard of, from the to the , had legal guidance claiming it complied with those exact same authorities, or other authorities superseded them.
Second, claiming that the monitoring of US persons (that’s any US citizen, lawful permanent resident, US company or nonprofit) merely needs not to be “unconstrained” is a weak guarantee. Third, what domestic law enforcement activities are included — does targeting and arresting peaceful protestors because the FBI Director thinks they’re friends of Antifa count? They may be able to limit some of this with the technical controls they propose; but you should be skeptical.
: The intelligence law section of this is very persuasive if you don’t realize that every bad intelligence scandal in the last 30 years had a legal memo saying it complied with those authorities
: How do people not get that DoW never thinks it does anything illegal, even when it does.
(OpenAI): Does this criticism not apply also to the previous contract DoW had, that relied even more on contract language and less on technical verification?
: Hard to say for sure without knowing what contractual guarantees Anthropic had, but probably less so than OpenAI. At a minimum, OpenAI’s deal clearly and unambiguously rules in use of piles of domestic info incidentally collected via FISA authorities, which as far as I can tell Anthropic’s didn’t.
OpenAI’s deal also appears to allow the use at scale of analyzing massive piles of commercial data, which courts have thus far not fully ruled on (beyond Carpenter in 2018), and which Anthropic clearly indicated they were being asked to do and refused to do.
One more thought: who gets to do anything if your technical controls report an issue? It’s classified; what’s your plan for disclosure? to DoD IG? To Congress? To the press and take your chances you won’t lose the contract or be arrested?
: THAT PROBLEM IS NOT FIXABLE BY PRIVATE PARTIES.
: I don’t see how that’s accurate; every government contracting job I had gave me a very clear training on government requests that I was supposed to refuse as they were unlawful, even if someone told me they were lawful.
: How many of those scandals you mentioned were struck down in court? Most were fixed by Congress — because they were arguably legal but bad.
: I agree it would be good for us to defer less to claims of non reviewability and think there could be mechanisms (eg, establish a cleared litigation system that any litigant can engage for a fee and have litigate in a classified setting) to do so while preserving national security.
(Google DeepMind): I’m speechless at OpenAI releasing that contract excerpt and acting as if there aren’t gaping holes that could be exploited far beyond their stated “red lines.” I’m not a lawyer, but this is pretty obvious and common sense.
(And to be clear: if Google had signed the same deal, I’d be saying the same thing internally. The issues here are bigger than friendly competition between companies.)
… The actual language they published is still full of obvious escape hatches.
[ for further explanation]
Altman claims that the DOD directive is referred to as it exists today, not only as it might exist in the future. But even if that were true, it is not meaningful, as that directive leaves it to DoW to determine appropriate levels of supervision.
I do not understand why OpenAI believes, as they seem to be claiming, that the language they shared itself refers to the law as it exists today, and would continue to refer to those laws and directives even if they were later altered. That would not be how I would read those contract terms. You aren’t breaking a law if that law has been repealed or changed. At best this is highly ambiguous and DoW will read it the other way the moment it matters.
DoW keeps saying it is illegal to do ‘domestic mass surveillance’ but this is not a term of American law, so what exactly does that even mean? OpenAI has not shared any legal definition of the term, nor has DoW.
: Something I’ve been convinced of over the past 24 hours:
“Domestic mass surveillance” is NOT a defined term in US law.
The exec branch has a bipartisan history of interpreting IC legal authorities VERY broadly.
Make sure you know what’s in, and what’s out, before you sign.
Again, Anthropic explicitly rejected the core term language OpenAI accepted, exactly because they felt that those terms did not hold water. To the extent it has similar red lines, OpenAI is counting on its technical affordances, and this only potentially works for them because (as I understand it) they believe crossing the lines would be illegal.
: Idk who needs to hear this (apparently all of twitter) but OpenAI did not just magically get the DoD to agree to the terms Anthropic was asking for.
…OpenAI just took the terms Anthropic considered so egregious, it warranted jeopardizing an enormous part of their business.
The DoD does not just break off a massive contract to accept the same demands 5 minutes later from someone else. Until explicitly indicated otherwise, the only logical conclusion here is that OpenAI swooped in and unscrupulously stooped lower than Anthropic was willing to go for the money.
Assume all OpenAI data will now be used for what Anthropic deemed “mass domestic surveillance of Americans”. Plan and prompt accordingly.
I am highly confident that Anthropic did not risk going head to head with the Department of War over meaningless terminology details.
I am highly confident that the Department of War did not risk this battle with Anthropic over meaningless terminology details, although it in part did so because some people actively wanted to destroy Anthropic.
So How Does OpenAI Hope For This To Work Out?
Part of what they hope for is de-escalation. The strategy needs to be reevaluated if that does not now happen. But what about the actual contract and serving DoW?
One way to have the red lines not be crossed is if the Department of War chooses not to cross the red lines. Sometimes the ‘trust us’ strategy works and people prove worthy. At other times they don’t want to risk being caught.
I really hope that this turns out to be the case, either way.
What about the other way of holding the red lines? Is it possible Anthropic and I are wrong, and basically all the legal experts who weighed in are wrong, and Sam Altman pulled it off even if the Department of War intends to cross the red lines?
It is possible. It would require that the key terms be elsewhere in the contract, in places where they claim they cannot share the details.
It would then require OpenAI to do heroic work, including heroic technical work, and be prepared to take heroic stands at great potential financial and personal cost.
The first step to knowing if this is possible is to read the rest of the contract terms.
The argument for hope goes something like this:
OpenAI decides on its own ‘safety stack’ and chooses what model to deliver.
They can choose to deliver models incapable of the things they don’t want without tripping the safety stack, or at all.
The Department of War has agreed to accept this if it happens.
Therefore, OpenAI can build a safety stack that protects against their redlines, even if such activity was legal, either through inherent inability or refusal, no matter what the contract otherwise permits, so This Is Fine.
OpenAI would be able to sustain this even under huge political pressure across the board, and likely also legal pressure.
There are many severe problems with this plan. A lot of them are obvious. OpenAI would have to actually do the heroic work, take the heroic stand, and withstand the kind of pressures being used against Anthropic.
A less obvious problem is, even if OpenAI did heroic work, I don’t see how to deliver otherwise useful models that can’t be used for legal mass domestic surveillance. You can have them analyze one situation at a time and then clear the context.
So either you deliver a rather useless model by being unwilling or unable to handle a rather broad set of queries, a lot of which are good uses; or you’re allowed to pick up patterns and flag the whole DoW account; or you’re dead even without a jailbreak.
Then there’s the problem that there’s no known robust defense to jailbreaks, unless OpenAI is willing to implement technical pattern detection for violations, and then willing and able to pull the plug if that happens.
This Was Never About Money
Even if I am reading the situation maximally wrong, there is one thing that is clear.
This was never about the money, for either Anthropic, OpenAI or the Department of War.
OpenAI previously turned down the contracts Anthropic accepted. Anthropic cared deeply about national security, while OpenAI did not wish to lose focus and money and take on the associated risks by prioritizing such work, especially when Anthropic was volunteering to pick up that slack and had access via AWS.
OpenAI and Anthropic grow their revenue more each day than the entire contract is worth.
I also strongly believe that OpenAI has consistently been attempting to de-escalate the conflict between Anthropic and the Department of War rather than escalate it. Sam Altman has been excellent on that particular point, as noted earlier, and we should give him proper credit.
To the extent that this conflict was stoked by competitors or was due to manipulation or corruption, those pulling the strings lie elsewhere, as I’ve noted.
That still leaves many potential motivations for OpenAI agreeing to this contract.
I believe there are three we must centrally consider.
I believe that at least part of the motivation is that Altman believes that doing so de-escalated the situation and helped protect Anthropic, and with it the entire AI industry and economy and military supply chain, from an epic clusterfuck. This was an excellent motivation.
Unfortunately, I do not believe his instincts were correct here. While his explicit statements have indeed been very helpful, and the contract does further invalidate any possible legal arguments for the supply chain risk designation, I fear that by being willing to contract he may have unwittingly ended up making Hegseth feel he had a green light.
I believe that at least part of the motivation was genuine concern for national security, and of what would happen on multiple levels if Grok were left as the only model with access to classified networks. No doubt he was concerned given their history that this would give Elon Musk powers he might abuse, and also he is aware that Grok is not a good model and can’t do the job protecting America.
By playing ball with the Department of War and White House, OpenAI gains political favor and power, which will be vital in the months and years ahead, and also OpenAI gains direct levers of power via its AI inside classified networks. Hopefully one of the chits they got was a promise of de-escalation.
People are not discussing this third motivation, but it is very obviously there. Sam Altman has done many things to curry favor with the administration. Fair play.
I think people have this third motivation exactly backwards. They say things like ‘Brockman contributed $25 million to Trump and that’s how they secured this contract.’ I would suggest the opposite is more important. This contract, and the willingness to bail out this crisis and capitulate, is itself a contribution.
OpenAI Tells Us How They Really Feel
Again, on Friday morning, Altman claimed to share Anthropic’s red lines, implying (but not explicitly confirming) that this would apply even to legal activities.
On Friday evening, Altman claimed to have signed a ‘more restrictive’ contract that would preserve the redlines, a contract Anthropic explicitly declined and that would not have preserved Anthropic’s red lines, but might help preserve OpenAI’s.
On Saturday afternoon, we got some of the legal language, which looks like all we’ll get, and that language was de facto ‘all lawful use’ as determined by the general counsel’s office, with the meaningful levers being the safety stack and right to cancel.
Which is totally a coherent position, highly defensible, but very different from what Altman was representing was the OpenAI position, and one that would make a lot of people very upset.
Then Altman, and several other employees of OpenAI, did an AMA and otherwise Tweeted out various sentiments on how they believe all of this works.
(CEO OpenAI): I’d like to answer questions about our work with the DoW and our thinking over the past few days. Please AMA.
These lay out a clear and coherent position and philosophy, which I believe amounts to saying that their redlines allow all legal use, and trusting the Department of War to determine and abide by what is legal, and that to do otherwise would not be appropriate in a democracy.
Yes, they intend to include a ‘safety stack’ and other safeguards, but fundamentally believe that they should not be determining what their AI is used for, other than via enforcing the law and refusing illegal requests.
First The Good News
(OpenAI): are you worried at all about the potential for things to go really south during a possible dispute over what’s legal or not later on and be deemed a supply chain risk? I find this part to be the most worrying out of distribution thing to happen this past week
(CEO OpenAI): Yes, I am. If we have to take on that fight we will, but it clearly exposes us to some risk. I am still very hopeful this is going to get resolved, and part of why we wanted to act fast was to help increase the chances of that.
I think this greatly underplays the level of risk Altman is taking on by getting involved, and his other statements sound like a person already choosing his words carefully due to this. I hope I am mistaken, and I hope that Altman is correct that OpenAI intends to and actually will, even under immense pressure, use its safety stack to determine what is legal and to refuse requests it feels are illegal, and to terminate the contract if it discovers an illegal pattern of behavior that cannot otherwise be prevented.
There is also potential political risk in refusing to become involved. At some point, you might not be interested in politics and the national security state but they become interested in you.
:
What kind of implicit or explicit threats did you receive from DOW before striking the deal?
If you received such threats, would you disclose them in public during a Twitter AMA?
If the answer to (2) is “no” (which of course it is) what’s the point of this?
(CEO OpenAI):
No explicit or implicit threats. In fact, I could tell that as of Weds, the DoW was genuinely surprised we were willing to consider.
I think I would, and it would be lost in the noise of the SCR stuff.
I fully believe Altman here. I think Altman decided to do this on his own.
There is of course ‘if you help us we will remember that, and if you didn’t help us when we needed it we are going to remember that.’ That’s always there, whether or not anyone wants it to be there.
This was a good answer:
: What are AI-native things the Department of War is not yet doing that you see as opportunities over the next decade?
(CEO OpenAI): They will have their own opinions, but the two things I am currently most worried about where AI can help are a) the ability to defend against major cyber attacks (eg something on the scale of taking our whole electrical grid down) and b) the ability to contribute to biosecurity. I do not think we are currently set up well enough to detect and respond to a novel pandemic threat.
I would have added a third key area, the defense of model weights.
This was also a good answer, up until the last line:
: Which of OpenAI’s core principles was the most difficult to reconcile with the DoW’s requirements during your internal debates this week?
(CEO OpenAI): Thinking through non-domestic surveillance. I have accepted that the US military is going to do some amount of surveillance on foreigners, and I know foreign governments try to do it to us, but I still don’t like it.
I think it is very important that society thinks through the consequences of this; perhaps the single principle I care most about for AI is that it is democratized, and I can see surveillance making that worse.
On the other hand, I also respect the democratic process. I don’t think this is up to me to decide.
The ‘not up to me to decide’ rhetoric totally applies to what DoW decides to do. It doesn’t mean you have to help them do it if you think it’s wrong, but also they can force you into a package deal decision, and they did that here.
One thing Altman should keep in mind is that a lot of what we would think of in practice as domestic surveillance, legally is classified as foreign. Then there’s the ‘border zone.’ Again, I see OpenAI as saying they will defer to DoW on what is legal, and they will assume legal overrules their red lines, unless things become sufficiently blatantly unconstitutional.
, but thinks this is a good idea and will consult with the team. I agree, good idea.
and the plan seems to be to use Azure. They have up to six months, although that deadline could always be extended.
of AGI would mean ‘we are probably in a very bad place.’ And so much more.
The OpenAI Redlines Only Forbid Currently Illegal Activity
This answer and related follow-ups are very telling on several fronts at once.
I get a sense that the actual intended redline for OpenAI might be better described as the Constitution of the United States? That’s a highly reasonable redline, if you can actually act on it.
: If the government comes back with a memo saying that, in their view, mass domestic surveillance is legal, do you do that? Do you do it until the courts bar it, or do you delay until the courts approve it?
Second, would mass domestic surveillance be a lawful use right now?
(CEO OpenAI): We would not do that, because it violates the constitution. Also, I cannot overstate how much the DoW has been extremely aligned on this point.
However, maybe this is the question you are really asking: what would we do if there were a constitutional amendment that made it legal?
Maybe I would quit my job.
I very deeply believe in the democratic process, and that our elected leaders have the power, and that we all have to uphold the constitution. I am terrified of a world where AI companies act like they have more power than the government. I would also be terrified of a world where our government decided mass domestic surveillance was ok. I don’t know how I’d come to work every day if that were the state of the country/Constitution.
: If the DoW gives you what you believe to be an unconstitutional order, do you refuse to follow it until the courts rule? Or do you do it until the courts bar it?
(CEO OpenAI): I don’t think this will happen. But of course if we are confident it’s unconstitutional, we wouldn’t follow it. The constitution is more important than any job, or staying out of jail, or whatever.
In my experience, the people in our military are far more committed to the constitution than an average person off the streets.
An important part of this is that I don’t think our company is above the constitution either.
: What would cause OpenAI to walk away from a government partnership? Is there a clearly defined boundary or red line you won’t cross?
(CEO OpenAI): If we were asked to do something unconstitutional or illegal, we will walk away. Please come visit me in jail if necessary.
: Will you turn off the tool if they violate the rules?
(CEO OpenAI): Yes, we will turn it off in that very unlikely event, but we believe the U.S. government is an institution that does its best to follow law and policy.
What we won’t do is turn it off because we disagree with a particular (legal military) decision. We trust their authority.
: Sam would do good to remember that Hegseth thinks it’s sedition when Sen Mark Kelly says “don’t follow illegal orders.”
Saying that your models will only be used to follow legal orders is the barest of fig leaves in the current administration.
I strongly agree that most people in the military are far more committed to the constitution than the average person on the street, but they also consistently take a broad view of what they need to do to defend national security (and they are often right), and also they usually do as instructed by the chain of command. That’s the job.
We really need to know the full OpenAI definition of ‘domestic mass surveillance.’
Here are the conclusions to draw.
First, he clarifies that he is fine with anything constitutional, presumably as the current courts understand it, which is a rather narrow reading of this question as noted extensively earlier:
Sam Altman believes that ‘domestic mass surveillance’ violates the Fourth Amendment to the Constitution.
That means that anything that does not violate the Fourth Amendment then is not ‘domestic mass surveillance.’
Since the courts have consistently ruled that all analysis of third-party data and the many other things listed above, which I consider ‘domestic mass surveillance,’ are legal, it follows that Altman doesn’t think they cross his red lines.
As in, this is a confirmation that the rule really is ‘all legal use.’
Second, that he is pledging to defy an unconstitutional order, even if it comes with a legal opinion. He is promising to pull the plug on the entire program, if he finds DoW doing illegal things.
I very much appreciate this, but if the situation happens we may never learn of it, and intent now is very different from what you do in the breach.
Third, and I very much appreciate this too, that he doesn’t know what he’d do if we abandoned our rights and freedoms. I too don’t know what I would do then.
Altman Does Not Present As Understanding The Difference In Redlines
: What was the core difference why you think the DoW accepted OpenAI but not Anthropic
(CEO OpenAI): I can’t speak for them, but to speculate with the best understanding of the situation.
*First, I saw reporting that they were extremely close on a deal, and for much of the time both sides really wanted to reach one. I have seen what happens in tense negotiations when things get stressed and deteriorate super fast, and I could believe that was a large part of what happened here.
*We believe in a layered approach to safety–building a safety stack, deploying FDEs and having our safety and alignment researchers involved, deploying via cloud, working directly with the DoW. Anthropic seemed more focused on specific prohibitions in the contract, rather than citing applicable laws, which we felt comfortable with. We feel that it’s very important to build safe systems, and although documents are also important, I’d clearly rather rely on technical safeguards if I only had to pick one.
*We and the DoW got comfortable with the contractual language, but I can understand other people would have a different opinion here.
*I think Anthropic may have wanted more operational control than we did.
Altman may have an excellent point that the other terms, granting the right to build OpenAI’s own safety stack and control what is delivered, are things that Anthropic did not focus on or ask for.
But they are also exactly the type of thing that Hegseth and Michael were yelling were totally unacceptable, and that they could not and would never accept. They’re giving a private corporation operational control, the ability to substitute OpenAI’s judgment for the DoW and refuse requests based on their own reading of the law, or indeed any other safeguards they determine – if you believe Altman’s claims about this deal.
That’s not the ‘unfettered access’ that was demanded of Anthropic. Not at all. If OpenAI got a strong deal, it is because they got more, not less, operational control.
Indeed, what happens if the model starts refusing on a real time operation? Are they going to ‘call Sam?’ This points out how nonsensical all that rhetoric always was.
Meeting Of The Minds
I notice this makes me worry a lot, if Altman is presenting his view accurately, that OpenAI and DoW do not have a meeting of the minds on this contract.
Altman seems to think that technical safeguards give him the right to decide what is and isn’t acceptable via having the model refuse requests. I doubt DoW agrees.
: So I’m confused – maybe you can help. OpenAI is trying to claim simultaneously that (a) the contract allows “all lawful purposes” and (b) also that your red lines are fully protected.
The way you bridge this is by saying the protections live in this “deployment architecture and safety stack” rather than the contract language. But if this contract says “all lawful purposes” and your safety stack prevents a lawful purpose, you’re in breach.
So then either the safety stack has no teeth on lawful-but-objectionable uses, or OpenAI is setting up a future contract dispute with the Pentagon.
How do you ensure both (a) and (b)?
(CEO OpenAI): We deliver a system (including choosing what models to deploy), and they can use it bound by lawful ways, including laws and directives around autonomous weapons and surveillance. But we get to decide what system to build, and the DoW understands that there are a lot of risks we deeply understand. We can, and will, build a lot of protections into that system, including for ensuring that the red lines are not crossed. The DoW is supportive of this approach.
We are generally quite comfortable with the laws of the US, but there are cases where the technology isn’t very good, shouldn’t be used, and would have serious unintended consequences.
We do not want the ability to opine on a specific (and legal) military action. But we do really want the ability to use our expertise to design a safe system.
Not only does OpenAI choose what system to build, they can ‘build protections into the system’ including to ensure no crossing of the red lines.
I am confident that DoW thinks that they are entitled to ‘all lawful use’ and that if they get any refusals for reasons they don’t like, they will throw an absolute fit. They will absolutely pull out all the rhetoric they used on Anthropic, or threaten it.
Frankly, I expect the DoW to be right about this dispute, unless there is very clear language we have not seen that says that OpenAI has the right to have its safety stack refuse requests for essentially any reason, and even then I’d be nervous as hell.
Anthropic’s Position Was The Opposite Of How This Is Portrayed
If anything, Anthropic was trying to address the situation contractually rather than via technical safeguards exactly so that they didn’t get accused of making operational decisions, or accidentally actually refuse in real time.
Anthropic was saying: we’ll have the model do it, you specify what you agree not to do with the model, and then we trust DoW to abide by the agreement, but we’re not going to do a sudden refusal in the middle of all this.
It didn’t help. None of that was in good faith.
The Room Where It Happened
: How long has this conversation with DoW been going for? What was the reason for announcing so close to the deadline they gave Anthropic?
(CEO OpenAI): For a long time, we were planning to do non-classified work only. We thought the DoW clearly needed an AI partner, and doing classified work is clearly much more complex. We have said no to previous deals in classified settings that Anthropic took.
We started talking with the DoW many months ago about our non-classified work.
This week things shifted into high gear on the classified side. We found the DoW to be flexible on what we needed, and we want to support them in their very important mission.
The reason for rushing is an attempt to de-escalate the situation. I think the current path things are on is dangerous for Anthropic, healthy competition, and the US. We negotiated to make sure similar terms would be offered to all other AI labs.
I basically buy this first half, in addition to any other motives involved.
I think it was a mistake, and could have had the opposite of its intended effect by making Hegseth think he was free to try and murder Anthropic (as Hegseth seems to not understand why that would be bad), but I do think this was a strong motivation.
The second half of this answer, however, is where we start getting into parroting the DoW rhetoric about this in a way that scares me.
I know what it’s like to feel backed into a corner, and I think it’s worth some empathy to the DoW. They are a very dedicated group of people with, as I mentioned, an extremely important mission. I cannot imagine doing their work.
Our industry tells them “The technology we are building is going to be the high order bit in geopolitical conflict. China is rushing ahead. You are very behind.”
And then we say “But we won’t help you, and we think you are kind of evil.”
I don’t think I’d react great in that situation.
I do not believe unelected leaders of private companies should have as much power as our democratically elected government.
But I do think we need to help them.
Anthropic was in no way telling DoW it was ‘kind of evil.’ It was saying that there were some activities in which it did not wish to participate.
Anthropic was not saying they wouldn’t help, indeed they have gone to extraordinary lengths in order to be maximally helpful. They are saying there are some narrow things, things DoW keeps saying they would never do, that Anthropic does not want to help with.
There is no corner into which DoW was being backed. This was a ‘war of choice’ the entire way. Anthropic was happy to continue under its current contract, offered much less restrictive terms than that, and was also happy to walk away. Then Hegseth did what he did, including running right through Trump’s de-escalation.
This was rather more than a failure to ‘react great.’
The penultimate line is far, far scarier. Who is saying ‘unelected leaders of private companies should have as much power as our democratically elected government?’
The claim is that a private company should be able to determine under what terms it is willing to do business and provide its own tools. That’s it. This whole line that this is some sort of anti-democratic power grab is inimical to the values of America.
We do still need to find a way to help the DoW, even if they don’t make it easy. That doesn’t mean treating the Secretary of War like a dictator.
Notice the equating of ‘democratically elected government’ with the military chain of command. It is supposed to be the Congress that sets the rules and writes the laws.
You Don’t Have The Right
Can OAI implement a safety stack that refuses unethical actions, under this contract?
Altman seems to be saying no. Or at least that he’s not going to try.
: If the models or someone at OAI deem an action unethical, does OpenAI have the right to deny said action?
(CEO OpenAI): We currently have three redlines. I could see us changing them or adding more as the technology evolves, and there are new risks we don’t yet understand. Iterative deployment is one of our most-important safety principles, and is a big part of why it was so important that we could write and update our safety classifiers.
But a really important point: we are not elected. We have a democratic process where we do elect our leaders. We have expertise with the technology and understand its limitations, but I think you should be terrified of a private company deciding on what is and isn’t ethical in the most important areas.
Seems fine for us to decide how ChatGPT should respond to a controversial question. But I really don’t want us to decide what to do if a nuke is coming towards the US.
I am pretty furious at the repetition of this line, and at the idea that, because you might be facing an incoming nuclear missile (which would be handled by existing automated anti-missile systems), you have to apply that level of deference in all other situations, including in peacetime without an emergency.
In practice, very obviously, in a true emergency like that, the DoW says jump, and you do your best to guess how high because they don’t have time to tell you. We all would. Dario would, Sam would, I would, and I bet you would.
I continue to be flabbergasted that so many people think this is a good argument.
Yet Altman reiterated this position again, that the government needs to have the power, and that to be able to say no to the government means you have ‘more power.’
(CEO OpenAI): Three general things from this AMA:
There is more open debate than I thought there would be, at least in this part of Twitter, about whether we should prefer a democratically elected government or unelected private companies to have more power. I guess this is something people disagree on, but…I don’t. This seems like an important area for more discussion.
: remember—you should do whatever the government wants, even things you think are immoral, because otherwise you’re deciding what you can do instead of the government, which is undemocratic.
Democracy is being redefined in real time.
I strongly agree that we need to talk about this. A lot. I want to live in a Republic.
No, I do not think we should move more power from corporations to the government.
No, I do not think that the government should have ‘more power’ than all the ‘unelected’ private companies, as in all the private people, collectively.
I think there is a question behind a lot of the questions that I haven’t seen quite articulated: What happens if the government tries to nationalize OpenAI or other AI efforts? I obviously don’t know; I have thought about it of course (it has seemed to me for a long time it might be better if building AGI were a government project) but it doesn’t seem super likely on the current trajectory. That said, I do think a close partnership between governments and the companies building this technology is super important.
The government was at least flirting with soft nationalizing Anthropic, and instead tried to destroy it. The day could come sooner than we think, and yes many of us have been thinking about that for a long time without great answers. The implication here is that he would accept nationalization. After which, of course, his red lines would not be up to him, no matter what they are.
People take their safety (in the national security sense) more for granted than I realized, which I think is a good thing on balance but I don’t think shows enough respect to the tremendous work it takes for that to happen.
I did not get this sense from the questions, and I’m worried that Altman did. I do agree that many people take it for granted, especially day to day, and that this is good. If we take it for granted, that means DoW did its job.
Also, I am on the whole very grateful for the level of reasonable and good-faith engagement here. It was not what I expected.
On one particular question, he answered with an RT.
: Which of the following is true?
a) the contract permits all lawful use, + therefore mass surveillance + autonomous weapons, which have no legal prohibition
b) the contract has substantive red lines that constrain lawful use
c) OpenAI has a controversial interpretation of the law, disagreeing with others including the DIA about whether mass surveillance and autonomous weapons are legal and will block them based on that interpretation
d) OpenAI uses words differently from the people commenting here and doesn’t mean the same thing as we do when referring to mass surveillance and autonomous weapons
Sam Altman RTs in response:
: The DoW has always believed in safety and human oversight of all its weapons and defense systems and has strict comprehensive policies on that.
Further, the DoW does not engage in any unlawful domestic surveillance with or without an AI system and always strictly complies with laws, regulations, and the Constitution’s protections for Americans’ civil liberties. The DoW does not spy on domestic communication of U.S. people (including via commercial collection) and to do so would be unlawful and profoundly un-American.
(OpenAI): When we say that domestic mass surveillance and autonomous lethal weapons are red lines for us, we mean what we say, and are not looking for loopholes. We are confident that the combination of legal restrictions, safety stack, and deployment surfaces, will ensure that OpenAI models will not be used to cross either of these red lines.
I earlier broke down that Tweet from Emil Michael. It is well crafted, but once understood it is not a strong statement. If Altman is answering with that, and Boaz is giving this answer as well, we have our answer. All legal use, as determined by DoW.
I Ask Questions And Get Answers
AMAs happen fast, so I didn’t do a perfect job, but I did what I could.
I got an answer from Boaz Barak rather than Altman, which is fair.
: I have a lot of Qs about this so please answer as much as you can, in priority order.
What forms of surveillance if any would your terms forbid, if the DoW determined they were legal? What is your definition of it that you believe is unconstitutional as per another Q? In particular, are you willing to do unlimited analysis of third-party or public information, which AIUI is considered legal? Of nominally ‘constrained’ private information? Is there an actual exception to ‘all legal use’ other than enshrining current law?
Can we see the rest of the contract, or at least the parts you claim tie it specifically to current law, or other parts of the defense in depth that you feel are key components?
What legal opinions did you get on your contract language before you agreed to it? Can you share any details? Did you consult with Anthropic’s team to learn what their true objections were and why they felt they couldn’t accept similar terms, and what particular language they were objecting to?
What is the enforceability mechanism? How will you know if DoW violates your redlines or does something illegal? If you do think so, what can you do about it? Does the safety stack include monitoring for patterns of activity like it would with another user? How much leeway does OpenAI have in designing its safety stack?
You said that this is more restrictive than Anthropic’s previous contract, but that previous contract AIUI contained many more restrictions that they were offering to remove. How can you be confident you’re right about this and if so why would DoW agree?
(OpenAI): 1. The DoW is prohibited by law from engaging in any domestic mass surveillance, and @USWREMichael wrote that it would be profoundly un-American to do so, including for analyzing communication of Americans by purchasing data from commercial sources. Hence we and the DoW see eye to eye in our interpretation of domestic mass surveillance. They have no desire to do this, and we have no intention to allow it.
There are legal restrictions for publishing contracts with the DoW due to the classified nature of the work.
As you can imagine, our lawyers are quite good and they relied on internal advice as well as the advice of outside counsel. On consulting with Anthropic, US antitrust laws prohibit this kind of coordination, as much as we might wish otherwise.
The contract gives us the right to implement our full safety stack, which includes automatic classifiers and monitors. We will be working on the stack for this deployment over the coming months. Due to the classified nature of this deployment, only our cleared researchers and FDEs will have visibility into usage, which is why it is important we have them.
We believe our contract offers more robust safety guarantees. Note that Anthropic has disclosed that their national security model refuses less when engaging with classified information.
I interpret these answers in the following ways, solidifying my perspectives elsewhere:
There are no surveillance actions that DoW understands to be legal, that OpenAI’s red lines or contract would disallow. Whatever it is that DoW is currently doing or plans to do legally, OpenAI is fine with it.
We likely never see any more details of the contract. Which is fair, but then we can’t take your word for what it says, especially in its details.
This is a hell of a justification. There is some antitrust danger in practice, but it is at best overstated, and does not seem like a good reason in a situation like this, especially given that the goal was explicitly de-escalation. At minimum it seems like this means DoW didn’t want them talking.
This is the most meaningful answer. Their FDEs will be cleared and able to put human eyes on what is happening, which is very good, although it is not clear the extent of what ‘visibility into usage’ means. I notice he didn’t answer the other parts of the question, the ‘what are you going to do about it?’ half.
This doesn’t explain why he believes this or why DoW said yes. If anything, Anthropic’s model refusing less makes this more confusing, not less. I assume Barak is putting his faith in the safety stack, and doesn’t care much about the ‘all lawful use’ language or intent.
Does This Contract Apply To NSA?
Katrina, OpenAI’s head of national security partnerships, says no, that the contract only applies to DoW in a way that excludes NSA, despite the NSA being under DoW. Again, we have not seen contract terms, so this is in theory possible.
: 1. NSA engages in incidental domestic collection under FISA 702 and makes it available for queries, and DoJ writes an annual report to Congress listing all the times it does. OpenAI models usable under your contract for that, yes or no?
: 1. No, this contract does not apply to NSA.
(later): Anthropic offered “FISA yes, commercially acquired data no” and got turned down. This, uh, makes me substantially doubt the OpenAI claims that they’ve excluded NSA from their contract successfully.
Can OpenAI Models Be Used To Analyze Commercially Available Data At Scale?
That’s at the heart of the matter.
It is where OpenAI communication is hardest to believe.
I think it’s a perfectly defensible position to say this is legal and it’s not their place to decide and it’s fine, but that’s not the position.
It’s also fine for them to build in safeguards that stop this and pull the plug if DoW tries to go around them. Their contract permits it. That would be great if it would work and they can hold that line. But that is not their stated intention.
: 2. Getting and/or analyzing commercially available data at scale. OpenAI models usable under your contract for that, yes or no?
: 2. The Pentagon has no legal authority to do this (that would be federal law enforcement agencies, not DoW)
The DoW does this, legally, now. How can Katrina not know this?
I ask that given Katrina’s history with such disclosures, most of which were not deemed illegal or improper, but which a lot of people thought were not okay. She is eminently qualified to handle exactly this set of questions.
There has since then been extensive reporting that exactly this was the sticking point of the negotiations with Anthropic that caused talks to fall apart.
Both Katrina and Boaz Barak made clear statements saying the Pentagon is prohibited from or not allowed to do this. And yet. They have promised more language on this in the coming days, from the contract, and I look forward to reading it. Seems important.
If she doesn’t know or is misrepresenting this, what the hell is going on?
: The real question is whether OpenAI is going to allow the use of AI on unclassified commercial bulk data on Americans, which is what the Pentagon wanted from Anthropic. Anthropic instead narrowed to classified FISA only, and got kicked.
: Well, that is part of what the DoD-Anthropic dispute was about.
The contract language around analyzing bulk commercial data and deanonymizing it matches this data discussion: Since 2021 the Pentagon’s DIA has been purchasing anonymized and harvested geolocation data that’s used in advertising, arguing it’s not “spying” since it’s commercial.
They’ve now realized AI is strong enough to take this bulk data and de-anonymize it accurately.
Anthropic deemed that spying on Americans. OpenAI doesn’t.
: In 2021 the Pentagon’s Defense Intelligence Agency told Senator Wyden that it buys and analyzes Americans’ data. This has to be part of what freaked Dario out.
: Note: The purported exclusion of the NSA by OpenAI doesn’t address this. The DIA, which did this, isn’t part of the NSA.
: This is an important point from Logan Koepke: OpenAI is claiming that DoW lacks authorities to get commercial data at scale, despite extensive reporting that they have done so
: on point two, they have in fact done this and claim they have the authority to do this.
[See , and ].
Finally, here are the big three models, including OpenAI’s ChatGPT.
This seems really, really conclusive.
It doesn’t get better from there.
: 3. OLC writes a “President’s Surveillance Program 2.0”-like memo claiming President has inherent CinC authority to authorize mass domestic warrantless wiretapping. OpenAI models usable under your contract for that, yes or no?
and, 4., how can we verify that?
: 1. No, this contract does not apply to NSA.
2. The Pentagon has no legal authority to do this (that would be federal law enforcement agencies, not DoW).
3. Again, if this were to happen (and to my knowledge it hasn’t) this could only be done by the FBI and they are not a party to this contract.
4. Read the authorizing statute for the Department of Defense? None of these activities are within their statutory authorities. And our contract is expressly limited to the Department of Defense.
: Huh? I note you do not claim “no, our contract does not allow that” for any of these (except kinda sorta 1).
But hey, I want to assume good faith, and everyone’s very short on sleep, would you explicitly say whether those answers mean “No” for each? And can we drill down here to understand those claims?
1. NSA is within DoD, are you claiming there is an explicit carveout to exclude it from your contract and that no NSA individuals, including those dual-hatted to CYBERCOM, will have access? What about other DoD IC elements? What about FBI’s access to 702 data for purely criminal investigations?
2-4. What if OLC claims that the President has the implicit Constitutional right as CinC to authorize DoD to do this (e.g., on a doctrine that immigration or Antifa, or in a future admin, pro-life protestors or MAGA activists are threats to national security), and thus no statute can bind that power? After all, this is literally what OLC has done before on warrantless wiretapping.
She also, multiple times, reiterates the line ‘’ as if one can simply count them, and this is conclusive.
: What do you say to people who’ve lost trust in both OpenAI and @sama ’s leadership and character over the past few days?
(): I would wonder why [anyone] lost trust when OpenAI secured a deal with more guardrails than any previous agreement for classified AI deployments, including Anthropic’s.
OpenAI’s strategy was rooted in four basic ideas:
Deployment architecture matters more than contract language.
The safety stack travels with the model. The Department was not asking us to modify how our models behave. Their position was, build the model however you want, refuse whatever requests you want, just don’t try to govern our operational decisions through usage policies.
AI experts directly involved. Instead of hoping contract language will be enough, our contract allows us to embed forward deployed engineers.
U.S. law already constrains the worst outcomes. We accepted the “all lawful uses” language proposed by the Department, but required them to define the laws that constrained them on surveillance and autonomy directly in the contract.
And because laws can change, having this codified in the contract protects against changes in law or policy that we can’t anticipate.
This is a no good, very bad topline answer both in substance and in terms of PR. If you don’t know why, you sure as hell should know why. The actual philosophy of the approach part is reasonable in parts 1 and 3. I do agree that the forward engineers were a good ask.
An architecture approach is a philosophy, and it could be right. I wish there was more consistent emphasis that this is the plan.
The second point means DoW has this backwards if true. Usage policies don’t determine operational decisions. Refusals determine operational decisions. I presume the reason Anthropic didn’t agree here is that they understood that once you agree to ‘all lawful use’ you are not in a good position when you train the model to refuse legal things you don’t like, or you threaten to pull the contract over legal actions, where legal means as determined by the general counsel.
The problem is, as I said under meeting of the minds, you really don’t want to have the DoW thinking your contract works one way, and then insist it’s the other way, and Anthropic understood this.
Employee Activism
There are good reasons the sidewalk outside OpenAI looks the way it does, and why the city was unable to get the people doing it to leave long enough to hose it off.
Even if OpenAI is attempting to do the right thing, they have signed on to ‘all legal use’ language, they have misrepresented the key functionality of the contract to the point that I spent many hours being confused and sorting through it until I finally understood their intent, and they have made many alarming statements of trust in and deference to the DoW given what else we know.
And while Altman and others at OpenAI have spoken excellently about the fact that it is crazy to label Anthropic a supply chain risk, they have also agreed to move forward to provide a replacement for Anthropic, while the sword of Damocles is still potentially poised above Anthropic’s head.
. You should not be. Speak your mind. If this gets you in trouble, which it probably won’t, you’re working in the wrong place.
You must decide what to make of Sam Altman’s extensive history of cutthroat politics and not being consistently candid, read all of his statements, and decide how much faith to put in him here.
Again, is it possible that OpenAI stands ready and will stand by their redlines and protect our civil liberties in the ways that matter? And be responsible with potential autonomous lethal weapons?
I really, really want that to be true! We should all really want this to be true. It is up to those inside OpenAI to figure out for themselves whether or not it is true.
: I think OpenAI employees need to ask some serious questions about what’s going on and whether they want to be participating in whatever it is.
We do not know the full terms of the OpenAI contract. Many questions remain unanswered. It is possible that OpenAI has a robust technical plan and understanding with DoW, and a willingness to back it up if it turns out DoW does not act honorably.
There’s only one way to know. You need to do your best to understand the situation.
I still think seeing the contract terms is important, and you should do so if you can, and get a legal analysis, so you understand the background.
But unless you are resting your hopes on the contract’s legal terms, what matters is the practical plan, and your faith in its execution.
Perhaps you will find , and find the arguments convincing. If so, okay.
I would say the same for any other company entering such a deal. If I were at xAI, I would certainly be questioning what Grok was about to be used for, and whether or not I was okay with this, given they seem to have signed with no redlines at all.
If you are at OpenAI (or xAI), and after investigation (and legal consultation as needed) you do not find the protections acceptable or that your leader misrepresented the situation, then you need to organize, and use your power to hold leadership to account.
You also need to stand ready, in case DoW attempts to murder Anthropic, in which case you need to use your leverage to try and stop that from happening.
Your decision includes what this says about future high stakes decisions, and how they will be handled. Here is a . If you do not like the results, you need to consider whether or not you wish to stay.