The disconnect here for me is, I assume the DoW and Anthropic signed a contract at some point and that contract most likely stipulated that these are the things they can do and these are the things they can't do.
I would assume the original terms the DoW is now railing against were in those original contracts that they signed. In that case it looks like the DoW is acting in bad faith here: they signed the original contract and agreed to those terms, then they went back and said no, you need to remove those safeguards, to which Anthropic is (rightly so) saying no.
Am I missing something here?
EDIT: Re-reading Dario's post[1] from this morning, I'm not missing anything. Those use cases were never part of the original contracts:
> Two such use cases have never been included in our contracts with the Department of War
So yeah, this seems pretty cut and dried. The DoW signed a contract with Anthropic and agreed to those terms. Then they decided to go back and renege on those original terms, to which Anthropic said no. Then they promptly threw a temper tantrum on social media and designated Anthropic a supply chain risk as retaliation.
My final opinion on this is that Dario and Anthropic are in the right and the DoW is acting in bad faith by trying to alter the terms of their original contracts. And this doesn't even take into consideration the moral and ethical implications.
The administration's approach to contracts, agreements, treaties and so on could be summed up as 'I am altering the deal. Pray I do not alter it further.'
The basic problem in our polity is that we've collectively transferred the guilty pleasure of aligning with a charismatic villain from fiction to doing the same in real life. The top echelons of our government are occupied by celebrities and influencers whose expertise is in performance rather than policy. For years now they've leaned into the aesthetics of being bad guys: performative cruelty, committing fictional atrocities, and so forth. Some MAGA influencers have even adopted the Imperial iconography from Star Wars as a means of differentiating themselves from the liberal/democratic adoption of 'rebel' iconography. So you have influencers like conservative entrepreneur Alex Muse styling their online presence as an Imperial stormtrooper. As Poe's law observes, at some point the ironic/sarcastic frame becomes obsolete, and you get political proxies and members of the administration arguing for actual infringements of civil liberties, war crimes, violations of the Constitution, and so on.
I think it's the other way around. They have always wanted to do those cruel things that have real victims. It took them many years of dedicated, coordinated efforts as they slowly inched many systems to align with their insane ideas. The villain branding is just that - branding. Many of them actually like the 'bad guys' in those stories, especially if those bad guys are portrayed as strong, uncompromising, militaristic, inhumane, and having simple, memorable iconography that instills fear - the more allusions to real life fascists, the better. But that enjoyment follows from their ideology and what they want to do in the world, not the other way around.
Ehh, Hamill's take on Israel is pretty middle of the road and diplomatic[1]: support for the people of Palestine and Israel while not at all supporting the governments of those places.
> *Isn’t it unreasonable for Anthropic to suddenly set terms in their contract?* The terms were in the original contract, which the Pentagon agreed to. It’s the Pentagon who’s trying to break the original contract and unilaterally change the terms, not Anthropic.
> *Doesn’t the Pentagon have a right to sign or not sign any contract they choose?* Yes. Anthropic is the one saying that the Pentagon shouldn’t work with them if it doesn’t want to. The Pentagon is the one trying to force Anthropic to sign the new contract.
I just wish there was a stronger source on this. I am inclined to agree with you and the source you cited, but unfortunately:
> [1] This story requires some reading between the lines - the exact text of the contract isn’t available - but something like it is suggested by the way both sides have been presenting the negotiations.
I deal with far too many people who won't believe me without 10 bullet-proof sources but get very angry with me if I won't take their word without a source :(
> "Two such use cases have never been included in our contracts with the Department of War..."
While I agree with Anthropic's position on this regardless, the original contract wording does matter in terms of making either the government look even more unreasonable or Anthropic look a little less reasonable.
The issue is a subtle ambiguity in Dario's statement: "...have never been included in our contracts" because it leaves two possibilities: 1. those two conditions were explicitly mentioned and disallowed in the contract, or 2. they weren't in the contract itself - and are disallowed by Anthropic's Terms of Service and complying with the ToS is a condition in the contract (which would be typical).
If that's the case, then it matters if the ToS disallowed those two uses at the time the original contract was signed, or if the ToS was revised since signing. Anthropic is still 100% in the right if the ToS disallowed these uses at the time of signing and the ToS was an explicit condition of the contract, since contracts often loop in the ToS as a condition while not precluding the ToS being updated.
However, if the ToS was updated after contract signing and Anthropic added or expanded the wording of those two provisions, then the DoD, IMHO, has a tiny shred of justification to complain and stop using Anthropic. Of course, going much further and banning the entire US government (and contractors) from using Anthropic for any use, including all the ones where these two provisions don't matter - is egregiously punitive and shitty.
While the contract wording itself may be subject to NDA, it would be helpful if Anthropic's statements could be a bit more precise. For example, if Dario had said "have always been disallowed in our contracts" this ambiguity wouldn't exist.
It does not matter. If Anthropic had been precise in this narrow way, there would have been some other nitpick to raise.
You're trying desperately to find a way that things can be at least a little normal, and I really do get it. It would be great if such a way existed. But it doesn't. I recommend you take a social media break like I'm about to, take the time you need to mourn the era of normal politics, and come back with a full understanding that the US government is not pursuing normal policy objectives with bad decisions. They hate you and they hate me for not being on their side, and their primary goal is to ensure that we're as miserable as they can make us.
I'm in a weird spot where I do agree with your assessment of the core claim. But putting that aside, in the world where the DoW's claim _is_ correct -- I think you don't have any choice other than to designate them a supply chain risk.
Disregarding who is right or wrong for a moment, if the DoW are right (which I'm not personally inclined to believe, but we're ignoring that for the moment) -- how else can they avoid secondhand Claude poisoning?
Supposing they really want to use their software for things disallowed by Claude's (current or future) ToS, it seems like designating it a supply chain risk is the only way they can ensure that their contractors don't include Claude (either indirectly as a wrapper, or at a further remove through use of generated code, etc.).
> designating it a supply chain risk is the only way they can ensure that their contractors don't include Claude
I agree that if the DoW claim is correct (and I doubt it is), then, sure, the DoW dropping Anthropic and precluding the DoW's suppliers from using Anthropic for any DoW work would be expected. However, the "supply chain risk" designation they are deploying goes far beyond that to block Anthropic use by any supplier to any part of the entire U.S. government for anything.
For example, no one at Crayola can use Anthropic for anything because Crayola sells crayons to the Education Dept. The DoW already has much less draconian ways to restrict what their direct suppliers use to build things for military applications. But instead of addressing the actual risk in a normal, measured way, they are choosing to use a nuke against a grenade-sized problem. This "supply chain risk" designation is rarely used and has never been used against a U.S. company. It's used against Chinese or Russian companies in cases where there's credible risk of sabotage or espionage. That's why that particular designation always blocks all products from an entire company, for any application, by any part of the U.S. government, contractors and suppliers included (which is why it's never been used against a U.S. company).
One positive thing I will say about this administration is that they have really drawn into focus the difference between de jure and de facto law.
My hope is that this gets us some real concern for things that have been defended with de facto arguments (i.e. privacy) going forward.
edit: Anthropic argues that your Crayola analogy is fundamentally incorrect.
> Legally, a supply chain risk designation under 10 USC 3252 can only extend to the use of Claude as part of Department of War contracts—it cannot affect how contractors use Claude to serve other customers.
> Anthropic argues that your Crayola analogy is fundamentally incorrect.
Yes, I just saw Dario's latest post with that more detailed info. My understanding was informed by news reporting in a couple different outlets, but those reports may have been conflating the "supply chain risk" designation (under 10 USC 3252) with the net effect of statements from the Pentagon and White House, which go substantially further.
Even if it's not in the legal scope of 10 USC 3252, the administration has made clear they intend to ban Anthropic from use across the federal government. AFAICT doing that is probably within the discretionary remit of the executive branch, even though I believe it's unprecedented - to your point about de jure and de facto law.
To me, if there's a silver lining to all this, it's making a strong case for restricting executive branch power.
Edit to add: Per the Wall Street Journal's lead story (updated in the last hour): "The General Services Administration, which oversees federal procurement, said it is removing Anthropic from its product offerings to government agencies... Even absent the supply-chain risk designation, broadening the clash to include all federal agencies takes the Anthropic fight to a much larger scale than its spat with the Pentagon."
How would this risk be mitigated by signing a contract? Seems like “supply chain poisoning as treason” is probably not going to be stopped by a piece of paper. You either trust Anthropic or you don’t, but the deal has nothing to do with it.
Isn't the point that they aren't entering into a contract with them, they are just ensuring that none of their still trusted suppliers repackage Anthropic without their knowledge?
I’m not sure, but I think you’re right. I was thinking about the logical implications of the designation: if they are a supply chain risk without a contract, how does the existence of a contract suddenly make them not a risk? Especially if the DoD strong-arms them into a deal.
Because the act that the SCR designation would “protect” against is treason, so I don’t think people would care too much whether there’s a contract.
Also, Trump's own words complaining about being forced to stick to Anthropic's terms of service:
> The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution.
In this case, do you really believe that we should trust an EA less than this administration? EA as bad people is a stereotype; corruption, fraud, and breaking the law is the standard MO for this administration.
(Or maybe it’s catchier to respond glibly with “never trust a child rapist and convicted felon.”)
In this case, the choice is between the two apples, so I’d pick the one less obviously rotten. Sadly that is the current administration that operates in pure lawlessness.
I think a big question mark here is whether anything said on Anthropic's side is in the framing of "we have something going on that we're trying to communicate around, where a canary notice, if it existed, would no longer be updated."
It isn't about commercial agreements, it's about patriotism. The national industry is supposed to submit to the military's wishes so long as it gets compensated. Here it's a question of virtue.
The Pentagon feels it isn't Anthropic's place to set boundaries on how their tech is used (for defense). Since it can't force its will, it bans doing business with them.
If anthropic is saying “you can use our models for anything other than domestic spying or autonomous weapons” and the pentagon replies “we will use other models then”, I'd say Anthropic are the patriots here...
I had the same thing happen to me when I posted about how unbridled capitalism requires externalized costs in the form of pollution and whatnot. I didn't make it clear that I thought it was a terrible truth.
Once the hive decides you're being serious without checking, they turn the down vote button into an I disagree with you button.
This is actually one of the reasons I left Reddit. I hate to see it here.
It likely helps to take in the cultural moment or context around the statements, or the nature of the statements you're making. It's fine to state a fact, but it's also helpful to make it clear whether you are saying "it is what it is," "I wish things were different," or "I am doing X, Y, and Z to try and help, and I recommend others do so." Jokes are an exception, and I think misunderstandings are fine there. But it's unreasonable to think that on the Internet, people will "check to see if you are serious".
The comment was serious. It didn't feel the need to take a side.
The DoD declaration reflects a certain context: we had the Patriot Act, a whistleblower exiled in Russia for defending the Constitution, etc. We didn't need to wait for a MAGA movement to expect such a comment from the DoD.
If hackernews threads turn into mouthpieces for opinions then we have no use posting anything in here.
The comments are naively claiming commercial agreements make Anthropic right, as if contracts had more weight than the constitution.
I would rather call out "virtue signalling" by an entity in the valley simply standing for something aligned with civil liberties, and using it as a political stance in what nobody would deny is an unfortunately polarized political climate.
What to make of OpenAI then. Should I give my opinion that they took a falsely constitutional stance, or simply made for-profit move to land a juicy government contract, while making the public think they kept the same red lines as their main competitor?
Or just stick to the facts: the DoD will, as always, get away with its liberticidal demands and get what it wants, because the other big tech companies will fall in line.
I fully acknowledge that it doesn't take much courage to bully people anonymously on HN. I don't claim to have any deep well of courage in real life either - many of my friends were already radicalized against OpenAI for other reasons, I don't expect to face professional consequences for being angry about this, and I might not be so willing to go scorched earth if either of those weren't true. Just wanted to explain where the world is at and why people should expect to see further incivility about this.
What's your definition of "patriotism" and why do private companies need to be "patriotic"? How do you reconcile this with the Constitutional guarantees of freedom of speech, freedom of association, and so on?
The US isn't Iran, North Korea, or even China, as much as some people, including the US president, seem to want to emulate those models.
No one cares if the Pentagon refuses to do business with Anthropic. But Hegseth has declared that effective immediately, no one else working with the DoD can either--which includes the companies hosting Anthropic's models (Amazon, Microsoft, and Alphabet).
So it's six months to phase out use of Anthropic at the DoD, but the people hosting the models have to stop "immediately".
Which miiight impact the amount of inference the DoD would be able to get done in those six months.
> So it's six months to phase out use of Anthropic at the DoD, but the people hosting the models have to stop "immediately".
> Which miiight impact the amount of inference the DoD would be able to get done in those six months.
Which might not be by accident looking at the Truth Social posts which state "Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow."
I would not be surprised to see this being used as an excuse to nationalize Anthropic.
To attempt to nationalize Anthropic. I'm sure there would be court cases filed almost immediately, restraining orders, months of cases and then appeals and then appeals of the appeals.
I think you were downvoted due to your use of "patriotism" (specifically without scare quotes) because that word is usually used with an intended positive connotation. So the reader gets the impression that you think that submitting to the DoD’s wishes is how things ought to be.
Regardless of the original contract, it's entirely appropriate for a vendor to tell the customer how to use any materials.
Imagine a _leaded_ pipe supplier not being allowed to tell the department of war they shouldn't use leaded pipes for drinking water! It's the job of the vendor to tell the customer appropriate usage.
Playing devil's advocate: if I did in fact grab one of my kitchen knives to defend myself against a violent intruder into my kitchen, I wouldn't expect to be banned from buying kitchen knives.
I'm not sure this is still a useful analogy, though...
And if you grabbed the knife and went on a violent spree, I'd absolutely expect the knife manufacturer to refuse to sell to you anymore.
The knife manufacturer isn't obligated to sell to you in either case, I'd expect them not to cut ties with you in the self defence scenario. But it is their choice.
1. Found out you used their knives to go murdering
2. Sells knives in a fashion where it's possible for them to prevent you from buying their knives (i.e. direct to consumer sales)
Would almost certainly not "be more than happy to continue to sell to you". Even if we ignore the fact that most people are simply against assisting in murders (which by itself is a sufficient justification in most companies), the bad PR (see the "found out" and "direct to consumer" part) would make you a hugely unprofitable customer.
Meh. Not sure why knife dealers would be assumed to be more moral than firearms dealers. See, e.g. Delana v. CED Sales (Missouri)
> the bad PR (see the "found out" and "direct to consumer" part) would make you a hugely unprofitable customer.
That... Doesn't happen.
Boycotts by people who weren't going to buy your product anyway are immaterial to business. The inevitable lawsuits are costly, but are generally thought of as good publicity, because they keep the business name in the news.
Since the knife vendors were metaphors for AI vendors, is the comparison you want to make "AI vendors & weapons manufacturers"? That's the standard we should judge them by?
> Not sure why knife dealers would be assumed to be more moral than firearms dealers
What I mean is that you _did_ judge them by a standard used for weapons manufacturers. How you react to their actions _is_ your judgement.
But perhaps that is the standard we should use. Weapons manufacturing is a well regulated industry after all. Export controls, dual-use technology restrictions, if it has applications for warfare it should be appropriately restricted.
If I shoot someone, something that is explicitly warned against in firearm safety materials that come with every purchase of a new firearm, I am no longer allowed to purchase any more firearms.
The specific shape of a kitchen knife would make it a particularly poor fighting knife, and knives in general are bad for self defense, due to the potential for it to be turned against the user. So, there is a good argument that such a suggestion is really in the user's best interest rather than a cynical play for the manufacturer to limit liability.
Seconded. You can't see all the up and down votes, only the balance at the moment you look, and it's not too uncommon to be negative or even dead and be upped or vouched back to life later.
No it isn't. There are warnings, but once a knife is yours you are free to do whatever you want with it, including reselling it to someone else. The idea of terms of service of using something is not something that typically exists with physical objects that one can own. They can't take your knife away from you because you decided to use it for a medical purpose without purchasing a medical license for the knife.
Claude Opus is just remarkably good at analysis IMO, much better than any competitor I’ve tried. It was remarkably good and thorough in helping me with some health issues I’ve had in the past few months. Now imagine turning that kind of analytical power toward observing the behaviour of American citizens, and perhaps changing it, to make them vote a certain way. Or toward something like finding terrorists, or finding patterns that help you identify undocumented people.
I have used ChatGPT 5.2 Thinking for health; Gemini hallucinates a lot, especially with DNA analysis. Never tried the new Claude even though I have access through Antigravity. Might give it a try. Do you have any tips on how to approach it for health ‘analytical power’?
I just made a project, added all my exams (they were piling up; my psychiatrist and I had been investigating this for a year to no avail) and started talking to it about my symptoms.
Within a few iterations it gave me a simple blood panel. I did that one, and it kept suggesting more simple lab or at-home tests, and we kept going through them until I was reasonably certain of “something.” Now that I have a hypothesis, I am going to a doctor. I think it’s done a great job. I also kept asking it for simple lifestyle interventions to prevent progression of my issue, and it consistently nailed them: one particular intervention (adding salt to water and drinking it to prevent symptoms) made a huge improvement to my life. I was barely working before that.
I added some text to the instructions box (project master prompt) telling it:
- It’s not medical advice and I am aware of that (prevents excessive guardrails).
- Add confidence intervals and probabilities to all diagnostic statements (prevents me + Claude going down rabbit holes so easily; it often has only 70-80% certainty in what it’s saying, but without this it doesn’t use the right language).
- It’s talking to a non-expert, so use simple language but go into detail when necessary.
I also ask it to stop doing unnecessary constant follow-up questions to every answer, as that causes me anxiety. I can share the prompt; in fact I might do so later as it might be useful to others.
Make sure your first chat is about the exams in the project files. Make sure it reads them all. It has a tendency to read a few and go “is this good”. Ask for a summary and note any absences.
Try using the research and extended thinking features a lot if you think it’s not fully aware of something. It might not be aware of more recent research. If it’s a serious condition you are researching, just ask it to do sweeps / use research to look for new info about it and find new papers. It might also deepen its understanding.
After you do research you can make a simple artefact and throw it onto the project files. That allows it to refer to it and gain more knowledge about a condition or issue that might not be as rich in the training data.
So, I find GPT to be so, so bad for this that it made me realise why the USG is so insistent. Claude Opus is just in a different class.
Here’s the master project prompt:
Act as an expert who’s talking to an interested layman. Engage in detail when requested but be overall succinct in your answers. Short sentences are fine, no need to be lengthy. Do deep research. When arriving at any kind of conclusion or hypothesis, assign it a probability and a confidence interval, defined in percentages as in “90%”.
On Artefacts - all artefacts should be just text and markdown. Never do anything more complicated with formatting, unless by explicit request.
Don't ask follow-up questions unless it's to make for a better diagnosis. I.e., don't keep asking questions just to keep the conversation going, please. But never hesitate to ask questions if it makes for better outcomes.
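For anyone who prefers an API workflow over the Projects UI, a prompt like the one above can be packaged as a reusable system prompt. This is only an illustrative sketch: the model id, helper name, and payload shape here are assumptions for demonstration, not Anthropic's documented defaults.

```python
# Illustrative sketch: wrapping a "master prompt" as a reusable system
# prompt for a chat-style API workflow. Model id and field layout are
# placeholder assumptions; adapt them to the SDK you actually use.

MASTER_PROMPT = """\
Act as an expert talking to an interested layman. Be succinct; engage in
detail only when requested. When stating any conclusion or hypothesis,
attach a probability and a confidence interval as percentages (e.g. "90%").
All artifacts should be plain text and markdown only.
Don't ask follow-up questions unless they improve the diagnosis."""

def build_request(user_message: str, exam_summaries: list[str]) -> dict:
    """Assemble a hypothetical chat request: master prompt as the system
    message, exam summaries prepended as context, then the user's question."""
    context = "\n\n".join(f"[Exam] {s}" for s in exam_summaries)
    return {
        "model": "claude-opus",  # placeholder model id, not a real endpoint name
        "system": MASTER_PROMPT,
        "messages": [
            {"role": "user", "content": f"{context}\n\n{user_message}"},
        ],
    }

req = build_request(
    "Summarize the exams and note any absences.",
    ["2024-03 blood panel: ...", "2024-06 thyroid: ..."],
)
```

The point is just that the instructions live in one place (the system prompt) and every conversation starts with the full exam context, which mirrors the "make sure it reads them all" advice above.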
Yep. Choosing not to renew a contract with a provider who has voluntarily excluded itself from your use case is respecting that provider's choice and acting accordingly.
The thing is nobody is saying the government is bad for not renewing the contract. Like it or not, that's definitely the administration's prerogative.
What we're seeing here is that when a vendor declines to change the terms of its contractual agreement for ethical reasons, the government publicly attacks it.
Perhaps for ethical reasons, but a stated reason by Anthropic is technical: "But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons."
With the other stated reason being legal. "To the extent that such surveillance is currently legal, this is only because the law has not yet caught up with the rapidly growing capabilities of AI."
I don't think we should lessen Anthropic's stance from technical/legal to ethical, just as we shouldn't describe what the Department of War is doing as "not renewing a contract".
Not in software though. Clear precedent has been established via EULAs. Software companies set the rules and if users don't like, they can piss off. I don't see why it would be any different for the government.
I'm not a fan of EULAs; I think if you acquire some software anonymously and run it on your own systems, you should be able to do whatever you want. However, if you want software hosted on someone else's machines, or want to enter into a contractual relationship with them, then government or not, you should not have the right to compel work from them.
Agreed, they haven't, and it will be difficult to see them voting in favour. But there are precedents: the Patriot Act was more radical than a potential mandate for AI providers to prioritize national security.
The government is armed and can exempt itself from prosecution either by judicial means and/or by naked force. So it isn’t just a cut and dry licensing problem.
The government cannot set arbitrary rules, it has to follow the law. (And, at least with a functioning separation of powers, it cannot change the law arbitrarily.)
> Regardless of the original contract, it's entirely appropriate for a vendor to tell the customer how to use any materials.
Utter nonsense. When the US built the Blackbird, it could only use titanium because of the heat involved in traveling at that speed. But they didn't have enough titanium in the US, so the US created front companies to purchase titanium from the Soviet Union.
Do you think the US should have informed the Soviet Union what it wanted to do with the metal?
Yes, it's officially still the Department of Defense.
If this were a news outlet writing "Department of War" I would be concerned. But in the case of the Anthropic CEO's blog post, I can understand why they are picking their fights.
It's a silly shibboleth, but I automatically ignore anyone who calls it the Department of War or Gulf of America. Hasn't steered me wrong yet. They're telling me they're the kind of people who only care about defending fascism.
I think it's worth giving people a tiny bit of grace on this. I've surprised people by explaining that the "Department of War" is just fascist fanfic and that the legal name has not changed.
It's a testament to the broken information ecosystem we're in that many people genuinely don't know this. Most will correct themselves when told. I agree with you that those who don't are not worth engaging.
I would not defend all of Google's decisions in the Trump era, but complying immediately with politicized name changes has always been the status quo. Even in healthy democracies, the precise names of geographic features can be extremely controversial, and no sane company wants to get in a debate with the Japanese government about the real names of various islands.
They can, however, rename their Twitter/X accounts and vacate the @SecDef handle, which seems to be up for grabs now, if anyone wants to do the funniest thing...
No, fighting a war requires only engaging in international armed conflict.
Declaring a war requires Congress, and fighting a war other than in response to an invasion may be illegal under US law if Congress has not exercised its power to declare war. But that doesn't prevent wars from happening; it just makes it illegal (though the only actual remedy is impeachment) for the President to wage war without authorization. And, in any case, that's largely moot, because Congress has exercised that power in an open-ended (in terms of when and against whom) but limited (in authorized duration of any particular action without subsequent authorization) manner via the War Powers Act. That gives every President since Nixon a blank check to start wars with full legal authority; the only after-the-fact constraint is that Congress gets an opportunity to vote to pull support from forces already in combat, and everyone hopes the enemy already engaged is willing to treat the war as over.
Of all the silly things that Trump did, I think this one is the most reasonable. This has always been a department of war. Calling it defense was propaganda.
After it was changed from DoW the first time (in 1947), it was called the National Military Establishment (NME). They renamed it in 1949, potentially because "NME" said aloud sounds like "Enemy"
The entire administration negotiates in bad faith. Literally every agreement they sign, whether it's international trade or corporate contracts, is up to the whim of a toddler with Twitter.
And they don’t think anything through. If they do this then Amazon, Google and the rest will need to terminate their involvement with Anthropic. Trump will be getting a call from some Wall Street bigwigs imminently and it’ll get rolled back, I bet.
Contract law will certainly be a casualty once the Rule of Law has been completely broken. I don’t understand why the business sector isn’t pushing back more. Surely they must all know that the legal context itself, within which they all operate, is at mortal risk, and that Business as Usual will vanish once autocratic capture is complete.
My main takeaway from all of this is that Hegseth seems deeply unfit for his job. First there was the Signal leak and now this.
Look, Anthropic is not going to be designated a supply chain risk. 80% of the Fortune 500 have contracts with them. Probably a similar percentage of defense contractors. Amazon is a defense contractor for example. They'd have to remove Claude from their AWS offerings. Everyone running Claude on AWS, boom gone. The level of disruption to the US economy would be off the charts, and for what? Why? Because Hegseth had a bad day? Because he's a sore loser?
If he's decided he doesn't like the DoW's contract then he can cancel it, fine. To try and exact revenge on the best American frontier model along with 80% of the Fortune 500 in the process, to go out of his way to harm hundreds or perhaps thousands of American firms, defies all reason. This is behavior you would expect any adult would understand as petty and foolish, let alone one who's made it to the highest ranks of government.
So I think it's just not going to happen; Trump's statement on the matter notably didn't mention a supply chain risk designation. This suggests to me that Hegseth went off half-cocked. The guy is a liability for Trump at this point, and I'm guessing he won't last much longer.
> then they went back and said no, you need to remove those safeguards to which Anthropic is (rightly so) saying no.
So one thing to call out here is that the assumption that the DoW is working on specifically these use cases is not bulletproof. They simply may not want to share with Anthropic exactly what they are working on for natsec reasons. /we can't tell you/ could itself violate the terms.
It is also dumb that DoW accepted these terms in the first place.
Is this matter about a publicly available model or a private model? For a publicly available model like Opus 4.6, bad actors can do whatever they want and Anthropic won't know.
If this is only about a private custom model, designating the public model as a supply chain risk doesn't make sense, since others can use it.
With this administration, after all their proven lies, when in doubt, assume bad faith on their part. Assuming good faith at this point is Lucy and Charlie Brown and the football, but now the football is fascism (i.e., state control of corporations, e.g., what Trump administration is doing here).
Trump has historically stiffed his contractors. Why do you think his administration would be any different with adhering to a contract?
If anyone is the epitome of arrogance, it is Hegseth.
No doubt the US Gov't will be using AI to perform automated military strikes without human supervision, and spying on US citizens (which they have already been doing for decades now).
Look no further than the case of patriot Mark Klein, a former AT&T technician who exposed a massive NSA surveillance program in 2006, revealing that AT&T allowed the government to intercept, copy, and monitor massive amounts of American internet traffic. Klein discovered a secret, NSA-controlled room (Room 641A) inside an AT&T facility in San Francisco, which acted as a splitter for internet traffic.
I assume those agreements were probably signed before the current fascist regime running the US government, and now they want to upend the terms of said agreement to allow more fascism into the aforementioned contract.
It's so fishy. I spent the morning reading Sam's AMA and it's a classic whitewashing act. OpenAI is claiming their setup is stronger and that the DoW has agreed to their red lines, but read the agreement below: it only says use in compliance with laws and executive orders.
Anthropic wouldn't have walked away from a multi-million-dollar contract if its two red lines could be respected. OpenAI, on the other hand, is a fast, willing and ready company. I would love to see Anthropic's proposed contract.
In our agreement, we protect our red lines through a more expansive, multi-layered approach. We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections. This is all in addition to the strong existing protections in U.S. law.
We believe strongly in democracy. Given the importance of this technology, we believe that the only good path forward requires deep collaboration between AI efforts and the democratic process. We also believe our technology is going to introduce new risks in the world, and we want the people defending the United States to have the best tools.
Our agreement includes:
1. Deployment architecture. This is a cloud-only deployment, with a safety stack that we run that includes these principles and others. We are not providing the DoW with “guardrails off” or non-safety trained models, nor are we deploying our models on edge devices (where there could be a possibility of usage for autonomous lethal weapons).
Our deployment architecture will enable us to independently verify that these red lines are not crossed, including running and updating classifiers.
2. Our contract. Here is the relevant language:
The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols. The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high-stakes decisions that require approval by a human decisionmaker under the same authorities. Per DoD Directive 3000.09 (dtd 25 January 2023), any use of AI in autonomous and semi-autonomous systems must undergo rigorous verification, validation, and testing to ensure they perform as intended in realistic environments before deployment.
For intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence and Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives requiring a defined foreign intelligence purpose. The AI System shall not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities. The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.
It's not recent news that Anthropic has (had?) DoD contracts. This is a lot of words to write while seeming ignorant of basic facts about the situation.
The argument isn't that nobody knew Anthropic had DoW contracts. The argument is that there's a difference between "publicly known if you follow defense-tech procurement" and "trending on social media where Anthropic's core audience is now actively discussing it." Both can be true simultaneously.
A fact being technically available and that fact commanding widespread public attention are very different things. Anthropic's communications team understands this distinction even if you don't find it interesting. The blog post wasn't written for people who already track federal AI contracts, it was written for the much larger audience encountering this story for the first time and forming opinions about it in real time.
If the point you're making is just "I already knew this," that's fine, but it doesn't address anything about the incentive structure behind the public response.
This is an interesting perspective, but I think the fallout from sticking to his guns here is probably greater than the public blowback he would receive from serving the DoD. Without this specific sticking point, the public would know that Anthropic was serving the DoD, but not what specifically the model was being used for, and it would be difficult to prove it wasn't something relatively innocuous.
That's a fair point about sequencing, but it actually reinforces the argument rather than undermining it. If Anthropic pushed back internally, and that pushback is what led to the directive going public, then Anthropic had every reason to anticipate that this would become a public story. Which means the blog post wasn't a spontaneous act of transparency, it was a prepared response to a foreseeable escalation. That's more strategic rather than less so.
Internal pushback and public damage control aren't mutually exclusive. A company can genuinely disagree with a client's demands behind closed doors and simultaneously craft a public narrative designed to make itself look as good as possible once those disagreements surface. In fact, that's exactly what competent communications teams do, they plan for the scenario where private disputes become public, and they have messaging ready.
The real question isn't who went public first or why. It's whether Anthropic's stated position, "we support these military use cases but not those ones", reflects a durable ethical framework or a line drawn precisely where it needed to be to keep both the contracts and the brand intact. Nothing in the sequencing you've described answers that question. It just tells us Anthropic saw this coming, which, if anything, means the messaging was more carefully engineered, not less.
I already suspected the first comment was by an LLM, but deleted that from my reply as it didn't feel like a productive accusation. However, with "that's a fair point" as an opener, plus the sheer typing speed implied by replies, and the way that individual sentences thread together even as the larger point is incoherent, I'm now confident enough to call it.
I use assistive voice transcription because I'm unable to type well with a keyboard. But I'd point out that "you must be an AI" has become the new way to dismiss an argument without engaging with it. It's the modern equivalent of "you're just copy-pasting talking points", it lets you discard everything someone said without addressing a single word of it.
The fact that my sentences "thread together" is not evidence of anything other than coherent thinking. And speed of response says more about the tools someone uses than whether a human is behind them. Plenty of people use dictation, accessibility tools, or just happen to type fast.
Ok, good to have that explanation. Your larger point, though, remains incoherent. Whether Anthropic saw this coming has nothing to do with the substance of the conflict here and is very much not "the real question".
I was pondering the same thing, and to me the answer is: a contractor sold something to the DoD, Anthropic pulled the rug out from under that contractor, and the DoD isn't happy about losing that.
My speculation is the "business records" domestic surveillance loophole Bush expanded (and that Palantir is built to service). That's usually how the government double-speaks its very real domestic surveillance programs. "It's technically not the government spying on you, it's private companies!" It's also why Hegseth can claim Anthropic is lying. It's not about direct government contracts. It's about contractors and the business records funnel.
Yes, I assumed a mass surveillance Palantir program also. Interesting take on how it allows them to claim “we are not doing this” while asking Anthropic to do it.
Of course they can just say - we aren’t, Palantir is.
IMO this looks largely like another circular investment. Amazon's investment is tied to OpenAI using AWS for their Frontier product and I assume Nvidia's conditions are that OpenAI continue buying hardware from them. Then there's SoftBank though given that those are the same guys that invested heavily in WeWork, I assume this is just very brash bullishness on their part.
From my perspective, I hope that OpenAI survives and can pull off their IPO, but I just have that nagging feeling in my gut that their IPO will be rejected in much the same way that the WeWork IPO was rejected.
On the one hand you can look at these companies investing and take it as a signal that there is something there (in OpenAI) that's worth investing in. On the other hand all these companies that are investing are basically getting that investment back through spending commitments and such and are just using OpenAI as a proxy for what is essentially buying more revenue for themselves.
When their IPO hits later this year I hope that it's the former case and there's actually some good underlying fundamentals to invest in. But based on everything I've read, my gut is telling me they will eventually implode under the weight of their business model and spending commitments.
The "circular investment" is mostly startup companies using their stock instead of cash to pay for server hardware and cloud computing. There are a few extra steps in between that make things look weird and convoluted, but the end result is really just big companies giving hardware and getting shares of AI companies in exchange for it.
It’s like Toys R Us not having enough money to pay Mattel for Barbie dolls and telling Mattel they can have partial ownership of the company if they just supply them with some more toys.
But the problem is that Toys R Us is spending $15, $20, or maybe even $50 (who knows?) to sell a $10 toy.
Toys R Us continues selling toys faster and faster despite a lack of profit, making Mattel even more dependent on Toys R Us as a customer. It blows up the bubble where a more natural course of action would be for Toys R Us to go bankrupt or scale back ambitions earlier.
Because it’s circular like this, it lends toward bigger crashing and burning. If OpenAI fails, all these investors that are deeply integrated into their supply chains lose both their investment and customer.
> But the problem is that Toys R Us is spending $15, $20, or maybe even $50 (who knows?) to sell a $10 toy.
It's like how Uber and Airbnb in the early days were burning loads of cash to build market share. People went to these services because they were cheaper. Then they would increase prices once they had a comfortable position.
OpenAI is also in a rapidly transforming field where there are a lot of cost reductions happening, efficiency gains etc. Compared to say Uber which didn't provide a lot of efficiency gains.
A little bit, but the scale is another magnitude higher. I just saw a chart yesterday showing Uber burning $18B, Tesla burning $9B, and Netflix burning $11B before reaching profitability. OpenAI so far has spent $218 billion.
The opportunity is disproportionately greater as well though.
Unfortunately that doesn't change the fact even a small miscalculation could have an enormous impact. We are approaching levels of risk comparable in size to the subprime crisis of 2008.
Is it? AI isn't going to be a winner-take-all market. Competition between American AI labs, and even Chinese ones, has seen to that.
The winners for AI will be the product companies, because soon enough the top-tier models are all going to have good enough performance that companies can just pick the cheapest. It'll be a race to the bottom for inference and OpenAI is very poorly placed to compete in that kind of thing.
I disagree. It's like Uber and Airbnb in how they try to gain market share, with one big difference: for Uber (and when it got big, basically everybody I know had used it once in a while) and Airbnb, you paid for each transaction. With OpenAI, most people are on the free tier. And if there is something incredibly hard, it's converting free users to paid users. That will, IMHO, be the thing that blows up (many of) the AI companies. They won't ever reach break-even.
And unlike Uber and Airbnb, OpenAI has no way to maintain marketshare. It’s a domain name with no moat.
Google has to pay Apple billions of dollars to make Google.com the default search engine. I just looked it up, over 15% of search revenue goes to pay to be the default search engine.
Every Android device defaults to Gemini.
Every Microsoft device defaults to Copilot.
I’d love to see where these cost reductions are. If costs are going to decrease rapidly why does OpenAI’s spending plan look so insane?
I don't think it's right to say that these devices "default" to their vendors' AI software when it's impossible to replace it with something else. Yes I can install Claude as a standalone app but I don't have the OS-wide integration that Gemini does for Android for example.
My point was that nothing stops hosts from listing their properties in AirBnb as well as a competitor. Unless AirBnb penalizes delisting or enforces price parity I guess?
If you need to do the latter to be able to make money on the former, then you're not making money. Because if the latter requirement would disappear, inference margins would also drop.
At the end of the day, they're still burning cash. Even if inference is cheap, it's also not hard to compete on. They aren't going to be a trillion-dollar inference company.
Eventually there will be a race to the bottom on inference price to the customer by companies that aren't trying to subsidize their GPU investments.
OpenAI is spending money because they think they need to for their business to survive. They're hoping that the next big breakthrough just requires more compute and, somehow, that'll build them a moat.
OpenAI and quite honestly the others think they are in a race to AGI not the bottom. That's why they aren't concerning themselves with moats or cost. This is quite simply a massive bet that we've already cracked AGI and the rest is just funding the engineering to make it happen.
I personally think we haven't cracked AGI yet but it doesn't change their calculus.
The people running these companies have a perverse incentive to keep the ball rolling as long as possible so that they can extricate as much personal wealth and influence as possible. Maybe AGI makes all the problems go away. But, failing that, they get out relatively scot-free when it all collapses. And they don't owe anything to the public. And no one is going to bring them up on fraud charges or any other kind of criminal charges. So, while the world is burning around them (including their former companies), they have the money and connections to acquire property and businesses that are actually productive. It's the Russian oligarch playbook. They're the kings of a struggling society on the brink of failure, but they heard "kings" and said, "Let's go."
I generally agree with the sentiment, but it's not the Russian oligarch playbook. The playbook is some variation of buying out a productive asset in a legacy industry under its market price (because everything is on fire already), then using political or monopoly power to funnel (tax) money through it and into your pockets (the asset has to function, but doesn't have to provide good quality of service, since proper maintenance is never allocated). The Sovereign AI fund and Microsoft are very close to that setup. If the NYC subway were sold to a certain Elon, and he then jacked up the prices and had city hall subsidize it still, while keeping the quality of service the same, that would be more or less it.
The other variation goes in reverse: using the legacy asset and its captive labor force to output some kind of commodity that is sold below market price to a controlled company in a different jurisdiction, where it's resold at a small discount to market price. The company still has to function here too.
Bonus points for not even owning the asset in question, but having effective control over it through the corrupt management, this way the government still pays the bills to keep it running at loss.
What you are describing is actually a very Western thing, because it assumes you can exchange the asset into cash directly and then buy something with that liquidity, which assumes solid property rights. I'm not even talking about OpenAI being an actual tech company that just wasn't there before. That's not how oligarchy works in those places.
Since the US is slowly moving in the direction of oligarchy, I think the actual reference will be helpful.
Please read Sarah Kendzior. What's happening under Trump is different from what's happened under other admins precisely because he's drawing from the Russian quasi-state/mob playbook, and not from the normal "socially-caustic Capitalism" one. The difference is that one seeks to maintain a state, and one seeks to dismantle it and replace it with a quasi-state, which exists mainly to interface with the other entities that are still playing in the nation-state system, but which internally functions almost completely as a projection of the power of the elites.
You're conflating the assets the elites own before the state collapse with the ones they seek to acquire afterwards. They don't care if the ones from before function, because their only purpose is to be maximally extractive. Afterwards, there's no need to funnel tax money through the functional businesses they acquire; they are the company and the state, and the company is the service or product, so anyone interfacing with the product or service within the state is handing them their money. No laundering games necessary.
>replace it with a quasi-state, which exists mainly to interface with other the entities
I don't exactly disagree with that assessment, and I think you should indeed stay vigilant for that. What I'm saying is that selling a hot potato to get cash is the opposite of what oligarchs are known to do. It could be that it's but a step toward buying something else with oligarchic intentions in mind, but alternatively it could be normal Western money-handling behavior.
>they are the company and state and the company is the service or product, so anyone interfacing with the product or service within the state is handing them their money.
That doesn't contradict what I wrote or at least meant. The asset in question is not the means of laundering, but a pretext for extracting money from everyone unfortunate enough to live in the forsaken place.
The laundering part usually comes when the oligarch wants to safeguard their own money from political risks, which they do by keeping the funds in a place outside of their (and their potential rivals') political influence. Otherwise, once the political balance shifts, the money is just gone, because no laws exist to guard it anymore. I'm not sure what this "outside" place could be for Americans, but I could guess (with no confidence in the answer at all) it's either Swiss or Gulf banks. Maybe the UK or whatnot. Some structure with a combination of impartiality to their disputes, strong enough property and privacy regimes, but zero to no ethical constraints about walking away from it.
"so that they can extricate as much personal wealth and influence as possible"
I've always thought this. If you're running something like OpenAI, it really doesn't matter to you if the company fails because you're already comfortably wealthy. But, it sure would be nice to be worth another 10x billion - though I'm not totally sure why.
So these individuals perceive a large upside and no downside. It's more of a hobby than a job. Like learning to play piano. It would be amazing to be a badass pianist...but not a big deal if that never happens.
Cisco did this in 1999. That's how my smallish apartment building in Sweden ended up with a kick-ass Cisco 10 Gbps switch in its basement a year later - when these cost real money.
I think the HOA still only pays like $10/month/apartment for an entry-level plan that's now defined as 250/250 Mbit/s. Someone must have been unusually savvy with the contracts.
Nvidia is investing assets into OAI - it has to. Because OAI needs to become successful for Nvidia's story in the long-term to play out, to justify its current stock price.
It's not "continue" buying as much as this is NVIDIA fronting the money for (most of) the hardware OpenAI has already ordered from them. It's like borrowing rent money from your drug dealer.
It's like credit cards loaning money to people who are unemployed and will default on payments. It's a risky business that is legal and can be very profitable, but may also be disastrous in the future.
I don't see the problem as long as materially significant transactions by publicly traded companies are properly disclosed to investors. If someone loses money by buying NVDA then they have only themselves to blame.
Tuld wasn't wrong. There will always be financial bubbles and misallocation of capital. It can't be prevented, and even trying to prevent it would involve intrusive government overreach that would make most people even more unhappy. Investors who want safety are free to buy Treasuries.
Come on, calling a round of vendor financing (which is what the NVIDIA money is) "funding" is egregiously misleading. The only new money entering the sector from this is SoftBank's stake.
They might have dressed up the wording, but the details are all there for anyone who wants to objectively look at the deal. It is two executives making a non-coerced deal and disclosing the required information to investors.
Might be a stupid gamble, but it's not akin to a loan shark shaking down a hungry, cold person for life's essentials.
> On the one hand you can look at these companies investing and take it as a signal that there is something there (in OpenAI) that's worth investing in. On the other hand all these companies that are investing are basically getting that investment back through spending commitments and such and are just using OpenAI as a proxy for what is essentially buying more revenue for themselves.
I don't understand how this is some kind of cheat code. Let's say I give you $100 on the condition that you buy $100 worth of product from me. And let's say that product cost me $80 to produce. Isn't that basically the same as me giving you $80? I don't see at all how that's me "basically getting that investment back".
I give you $100 cash and you give me $100 worth of stock in return. Now you give me $100 cash to buy something from me that cost me $80 to produce. I end up with $100 worth of stock in your company which cost me only $80. No?
NVIDIA gross margins lately are like 75%, so it's more like you give me $100 to buy something from me that cost me $25 to produce, hence I end up with $100 worth of stock in your company and it only cost me $25.
> hence I end up with $100 worth of stock in your company and it only cost me $25.
You also lost out on $75 worth of cash revenue (opportunity cost from selling the same thing to a different customer), so really you just took stock in lieu of cash.
It'd be different if Nvidia (TSMC) had excess production capacity, but afaik they're capped out.
So it's really just whether they'd be selling them to OpenAI and getting equity in return or selling to customers and getting cash in return.
If OpenAI thinks their own stock is valued above fundamentals, it's a no brainer to try and buy Nvidia hardware with stock.
Sure, but OpenAI doesn't have cash. It does have stock.
Even if Nvidia has capped production for now, increased demand still allows them to sell chips at a greater margin. Or, to put another way, presumably Nvidia is charging OpenAI a premium for the privilege of paying with stock.
In that case, you spent $80 to produce an item and exchanged it for $100 worth of their stock.
Now if you check, these companies selling their stock like this tend to have large amounts of debt. If their stock becomes worthless, you just wasted $80 producing an item that their creditors have first dibs on. And liquidating your shares immediately to ensure your gain, would weigh on their stock's value, potentially to the point where their stock would be only $80 worth, and you wouldn't be gaining anything anymore. Your earnings would then tank, alongside them.
> I give you $100 cash and you give me $100 worth of stock in return. Now you give me $100 cash to buy something from me that cost me $80 to produce. I end up with $100 worth of stock in your company which cost me only $80. No?
Sure, but how's that a cheat code? If you normally sell something for $100 that costs $80 to make, and then use that $100 revenue to buy $100 of stock, this is an identical outcome for you.
If they couldn't borrow $100, or get $100 from any other investor, that just puts you in the position of being an investor, and even then the difference between bradfa's version and mine is simply when you became an investor, not that you became one.
Again, this is not a cheat code: if you sell $80 of cost for $100 of stock, the stock you now own can go up or down, and if you overvalued it then down is the more likely direction.
The primary cheat code here would actually seem to be (a) getting preferential access to Nvidia's production through these deals and (b) creating a paper story of increasing OpenAI private valuation.
Aaaannd I get to claim the $100 as revenue to show investors that the company is performing better than if I had not made the deal, which also means demand for the product stays inflated, which also means I can keep my margins higher by not needing to discount my product.
Urgently need an IPO so losers can chip in. If the sandcastle collapses before then, funds and other AI companies lose a lot, so better to bet again and again, even if this is nonsensical.
> Isn't that basically the same as me giving you $80?
In your accounting, you can claim that you have an investment worth $100 and book $100 worth of revenue. You're juicing your sales numbers to impress shareholders - presumably, without your $100, the investee wouldn't have bought $100 worth of your product. The last thing your shareholders want to see are your sales numbers stop growing, or heaven forbid, start shrinking.
Nvidia is not the first company to "buy" sales of its own product via simple or convoluted incentive schemes. The scheme will work for a while until it doesn't.
> Let's say I give you $100 on the condition that you buy $100 worth of product from me. And let's say that product cost me $80 to produce. Isn't that basically the same as me giving you $80?
Why limit myself to $100 for a product that costs $80? I could just as well give you $1,000,000 to buy this same product from me. That way, I have a $1,000,000 share of your company, and I have $1,000,000 in revenue, and it only cost me $80.
This distorts the market for the product we're trading, and distorts the share price for both my company and yours.
> Isn't that basically the same as me giving you $80? I don't see at all how that's me "basically getting that investment back".
It's a good question, what I think you're missing is that if the market is valuing me (NVIDIA) at 25x revenue then it's more like I traded you (OpenAI) a GPU it cost me $80 to make for $100 worth of OpenAI stock, and I got a bonus $2500 in market cap of my own stock (which existing shareholders like).
IOW for every incremental "$100" in revenue (circular or otherwise), existing shareholders get paid "$2500" in equity (NVIDIA appreciation + OpenAI shares).
This "works" for NVIDIA and its shareholders as long as they/the market keeps thinking $100 of OpenAI stock is a good price for a GPU. If OpenAI tangibly fails to deliver on this valuation then NVIDIA may wind up in the red on these deals.
Caveat: it's a bit more complicated than that, as OpenAI doesn't typically buy/operate GPUs directly afaict; rather, they team up with the big cloud providers like AMZN (also part of the deal). But it's a useful way to wrap your head around the economics, I think (open to correction, not a domain of professional expertise).
I don't see anything _inherently_ unethical about this as some comments seem to imply. It's definitely riskier than accepting cash, in which case you're free not to play, but it's a calculated risk based on future expectations of growth by OpenAI. Granted there are some sketchy incentives qua existing shareholders that could materialize in pump and dump dynamics.
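The multiple-expansion arithmetic upthread ($100 of circular revenue at a 25x multiple, ~$25 COGS) can be sketched as a toy model. All figures are the thread's illustrative numbers, not real deal terms:

```python
# Toy model of the vendor-financing round discussed above.
# Numbers are the thread's hypotheticals, not actual NVIDIA/OpenAI figures.

def circular_deal(investment, purchase, cogs, revenue_multiple):
    """Return the vendor's net cash, stock held, and implied market-cap bump."""
    cash_out = -investment            # cash invested in the customer
    cash_in = purchase                # customer spends it right back on product
    production_cost = -cogs           # cost of the goods actually shipped
    net_cash = cash_out + cash_in + production_cost
    stock_held = investment           # paper value of the equity stake received
    market_cap_bump = purchase * revenue_multiple  # what a 25x market "pays" for the revenue
    return net_cash, stock_held, market_cap_bump

net_cash, stock, bump = circular_deal(100, 100, 25, 25)
print(net_cash)  # -25: the real cash cost is just the COGS
print(stock)     # 100: paper stake in the customer
print(bump)      # 2500: implied vendor market-cap appreciation
```

The asymmetry the subthread is pointing at: $25 of real cash out buys $100 of (paper) equity plus a much larger implied bump to the vendor's own valuation, all contingent on the customer's stock and the revenue multiple holding up.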
Competition laws make these kinds of arrangements illegal, so you would have to exert influence and have the invested-in company pretend you happened to have been picked from among competitors.
In any case the SEC will be focused on whether the filings are made up to defraud investors, so they could reject the IPO of the invested-in company. Your own entity is also at risk.
We all know MS gets away with it, they have good legal goons who find way to make all of it appears fair with regards to the law.
>they have good legal goons who find way to make all of it appears fair with regards to the law
I thought it was more that the legal goons delay the final judgement until Microsoft can eventually find someone they can (technically legally) bribe to drop the case?
Swallowing a few million dollars in fines will do. The DOJ needs to fund the whole department. By then MS will have moved on to other things; rinse and repeat.
I'm not a finance expert, but it may be because investments and purchases are taxed differently (I don't know). You gave $100 away as an investment, got $100 back as revenue. Meanwhile you establish that your product is worth $100 (while costing $80), and you have $100 worth of shares. Without considering side effects, you gave away $80 worth of product for $100 (supposed) worth of shares. But shares are subject to side effects, and those side effects can be quite nice (making the news, establishing the price, ...).
The issue is that there's no organic force behind those changes and it makes everything hollow. You could create a market inside a deserted area and make it appear like a metropolis.
> I don't understand how this is some kind of cheat code. Let's say I give you $100 on the condition that you buy $100 worth of product from me. And let's say that product cost me $80 to produce. Isn't that basically the same as me giving you $80? I don't see at all how that's me "basically getting that investment back".
What if the product only costs you $20 to produce?
How I see it is the companies want to jack up their revenue and, in turn, the price of their stock to please shareholders. Those are the two main goals this accomplishes, regardless of the underlying fundamentals.
The reason this doesn't make sense is that this is the math of monopoly creation! The government should be making sure companies don't go around throwing money at circular deals that will make them and their friends a fortune while cornering the market, but it seems the rules of capitalism don't exist anymore in the US.
Comparing OpenAI and WeWork is a nonsensical perspective. OpenAI is shipping the most revolutionary product in a generation, with 800 million monthly active users. It's the fastest revenue ramp ever, at incredible scale -- $20B+ ARR. These are real fundamentals. They matter. And the cost of inference is coming down all the time.
WeWork was a short-term/long-term lease arbitrage business. The two are nothing alike.
It used to be revolutionary, but now there is a huge difference: plenty of competition, and a growing number of high-quality models that can run offline (for free!) or cheaper (Gemini-Flash for example).
They are in some way the Nokia of AI, "we have the distribution, product will sell", but this is not enough if innovation is weak.
They are even lagging behind (GPT-5 is a weaker coder than Claude, Sora is a toy compared to Seedance 2.0, etc).
Once Apple releases the AIPhone, running offline models, with 32 GB of unified memory, with optional cloud requests, then it's going to be super tough for OpenAI.
Local AI is cool and all, but the models that run on typical consumer hardware don't really compare to the breadth of information available from the likes of ChatGPT, let's be real.
How will they make money on their product, exactly? To the tune of being worth nearly a trillion dollars? There is no guarantee that inference costs will go down. We've seen some improvement with cheap models, but they aren't what people want, and otherwise models stay expensive to run and use.
So what. In a highly competitive industry they can't keep selling inference unless they continually train better models. It's like saying my airline is profitable if you don't count the cost of buying new airplanes.
This is a completely new market and players are currently burning money in order to capture market share. The money will stop flowing in at some point, but until then, you can’t compare it to an industry like aviation which is extremely mature and heavily optimized.
Nah. The software industry never really becomes mature. Microsoft is still spending a fortune churning on new versions of Windows and Office. The moment that OpenAI cuts spending on training they'll start to slide into irrelevance. Training costs are no longer just for compute resources and engineers: now they need to pay for proprietary training data to differentiate from competitors.
OpenAI have made this claim, and maybe it is true with API pay-per-use (there's also good evidence even that is not, if you dive into how much a rack of B200s costs to operate), but I'd be very sceptical that the free, $20, or $200 a month plans are profitable.
Then the questions are if the market will bear the real cost and if so how competitive OpenAI are with Google when Google can do what Microsoft did to Netscape and subsidize inference for far longer than OpenAI can.
Just try using Claude via the API for an hour and you will see that the subscriptions are definitely not profitable (unless the percentage of "paying but dormant" users is very high).
They aren't making money on the vast majority of those 800 million monthly actives. I wonder how many will stick around once they roll out ads. If they keep those users with ads, they definitely will be worth their valuation.
That's not P/E. That's Price to Sales. P/E is price to earnings ratio. Earnings is profit. Since neither of these companies is profitable, they don't have a P/E ratio today.
The only reason to draw this comparison is to show SoftBank are not as competent as they'd like to appear, so putting their name forward as an OpenAI investor does not strengthen the case for OpenAI's prospects.
It’s one of the worst takes I’ve heard. OpenAI creates the fastest-growing app ever, spawns a revolution bigger than the internet, and this guy’s take is they are like WeWork…
Both can be true. Just because you've created a revolutionary product doesn't mean it's a viable business, let alone one worth $700+ billion. There is a lot of history of the first movers that created revolutionary products that eventually faded away into nothing, while others capitalized on the innovation.
> There is a lot of history of the first movers that created revolutionary products that eventually faded away into nothing, while others capitalized on the innovation.
I'd say most first movers fade away. MS-DOS wasn't the first OS, Google wasn't the first search engine, Facebook wasn't the first social network... etc... etc... etc...
But it can also simply be the financial framing for direct bartering, which is even more direct than regular financial transactions.
"I will provide these resources you need, in exchange for part ownership", and/or "a limited license to your tech", "the right to provide access to our customers on these terms", etc.
Amazon doesn't need any frothy fake revenue. But they do want to offer their customers the most in demand models, with the best financial terms for Amazon.
Nvidia wants customers, but not at the expense of throwing money away. Their market cap may be volatile, but their books are beyond solid.
I would be a lot more concerned if OpenAI was getting "funding" from a quantum computer startup, and vice versa.
The biggest issue I see is Microsoft's entire mentality around AI adoption, which focuses more on "getting the numbers up" than actually delivering a product people want to use.
Most of the announcements I hear about Copilot are about how they've integrated it into some other piece of software or cut a deal with yet another vendor to add it to that vendor's product offering. On the surface there's nothing wrong with doing that, but it just seems to be the ONLY thing Microsoft is focused on.
Worse yet, most of these integrations seem like an exercise in ticking boxes rather than actually thinking through how integrating Copilot into a product will improve the user experience. A great example: someone mentioned that Copilot was now integrated into the terminal app, but beyond an icon plus a chat window, there is zero integration.
Overall, MS just reeks of an organization that cares more about numbers on a dashboard and pretty reports than about what users are actually experiencing.
There aren't any "AI" products that have enough value.
Compare that to their Office suite: it had 100-150 engineers working on it, every business paid big $$ for every employee using it, and once they shipped the install media, their ongoing costs were just the employees. With a 1,000,000:1 ratio of users to developers and an operating expense (OpEx) of engineers/offices/management, that works as a business.
But with "AI", not only is it not a product in itself (it's a feature of a product), but it has OpEx and CapEx costs that dominate the balance sheet, based on their public disclosures. Worse, as a feature, it demonstrably harms business with its hallucinations.
In a normal world, at this point companies would say, "Hmm, well, we thought it could be amazing, but it just doesn't work as a product or a feature of a product, because we can't sell it for enough money to cover its operation, its development, and the capital expenditures we need to make every time someone signs up." So a normal C-suite would make some post about "too early" or whatever and shelve it. But we don't live in a normal world, so companies are literally burning the cash they need to survive the future in the vain hope that somehow, somewhere, a real product will emerge.
For most software products I use, if the company spent a year doing nothing but fixing P2 bugs and making small performance improvements, that would deliver far, FAR more value to me than spending a year hamfistedly cramming AI into every corner of the software. But fixing bugs doesn't 1. pad engineer's resumes with new technology, or 2. give company leadership exciting things to talk about to their golfing buddies. So we get AI cram instead.
I think it is more externally driven as well: a prisoner's dilemma.
I don't want to keep crapping out questionable features, but if competitors keep doing it, the customer wants it -- even if infrastructure and bug fixes would actually make their life better.
Last time I saw results of a survey on this, it found that for most consumers AI features are a deciding factor in their purchasing decisions. That is, if they are looking at two options and one sports AI features and the other doesn’t, they will pick the one that doesn’t.
It’s possible AI just seems more popular than it is because it’s easy to hear the people who are talking about it but harder to hear the people who aren’t.
Consumers are one thing, but far more important are the big corporate purchases. There may be a lot of people there too who don't want AI, but they all depend on decisions made at the top, and AI seems to be the way to go, because of expectations and also because of the mentioned prisoner's dilemma: if competitors gain an advantage it is bad for your org; if all fail together it is manageable.
My job is like that, although it's mostly driven by my direct boss and not the whole company, but our yearly review depends on reaching out to our vendors and seeing if an AI solution is available for their products and then doing whatever is necessary to implement it. Most of the software packages we support don't have anything where AI would improve things, but somehow we're supposed to convince the vendor that we want and need that.
>It’s possible AI just seems more popular than it is because it’s easy to hear the people who are talking about it but harder to hear the people who aren’t.
I think it's because there's a financial motivation for all the toxic positivity that can be seen all over the internet. A lot of people put large quantities of money into AI-related stocks, and to them any criticism is a direct attack on their wealth. It's no different from cryptobros who put their kids' entire college fund into some failed and useless project and now need that project to succeed or else it's all over.
I’m not sure that really explains how people get onto hype trains like this in the first place, though. I doubt many people intentionally stake their livelihoods on a solution in search of a problem.
My guess is that it’s more of a recency bias sort of thing: it’s quite easy to assume that a newer way of solving a problem is superior to existing ways simply because it’s new. And also, of course, newfangled things naturally attract investment capital because everyone implicitly knows it’s hard to sell someone a thing they already have and don’t need more of.
It’s not just tech. For example, many people in the USA believe that the ease of getting new drugs approved by the FDA is a reason why the US’s health care system is superior to others, and want to make it even easier to get drugs approved. But research indicates the opposite: within a drug class, newer drugs tend to be less effective and have worse side effects than older ones. But new drugs are definitely much more expensive because their period of government-granted monopoly hasn’t expired yet. And so, contrary to what recency bias leads us to believe, this more conservative approach to drug approval is actually one of the reasons why other countries have better health care outcomes at lower cost.
Currently if someone posts here (or in similar forums elsewhere) there is a convention that they should disclose if they comment on a story related to where they work. It would be nice if the same convention existed for anyone who had more than say, ten thousand dollars directly invested in a company/technology (outside of index funds/pensions/etc).
A browser plugin that showed the stock portfolios of the HN commenter (and article-flagger) next to each post would be absolutely amazing, and would probably not surprise us even a little.
I doubt obsolescence anticipation has anything to do with it. That’s how tech enthusiasts think, but most people think more in terms of, “Is this useful to me?” And if it’s doing a useful thing now then it should still be doing that useful thing next year as long as nobody fucks with it.
I would guess it’s more just consumer fatigue. For two reasons. First, AI’s still at the “all bark and no bite” phase of the hype cycle, and most people don’t enjoy trying a bunch of things just to figure out if they work as advertised. Where early adopters think of that as play time, typical consumers see it as wasted time. Second, and perhaps even worse, they have learned that they can’t trust that a product will still be doing that useful thing in the future, because the tech enthusiasts who make these products can’t resist the urge to keep fucking with it.
I strongly felt this way about most software I use before LLMs became a thing, and AI has ramped the problem up to 11. I wish our industry valued building useful and reliable tools half as much as chasing the latest fads and ticking boxes on a feature checklist.
This is exactly what I was thinking about my current place of employment. Wouldn't all of our time be spent better working on our main product than adding all these questionably useful AI add ons? We already have a couple AI addons we built over the years that aren't being used much.
100% agree. Office and Windows were hugely successful because they did things that users (and corporations) wanted them to do. The functionality led to brand recognition and that led to increased sales. Now Microsoft is putting the cart before the horse and attempting to force brand recognition before the product has earned it. And that just leads to resentment.
They should make Copilot/AI features globally and granularly toggleable. Only refer to the chatbots as "Copilot," other use cases should be primarily identified on a user-facing basis by their functionality. Search Assistant. Sketching Aid. Writing Aid. If they're any good at what they do, people will gravitate to them without being coerced.
And as far as Copilot goes, if they are serious about it as a product, there should be a concerted effort to leapfrog it to the top of the AI rankings. Every few weeks we're reading that Gemini, Claude, ChatGPT, or DeepSeek has broken some coding or problem-solving record. That drives interest. You almost never hear anything similar about Copilot. It comes off as a cut-rate store-brand knockoff of ChatGPT at best. Pass.
>Now Microsoft is putting the cart before the horse and attempting to force brand recognition before the product has earned it. And that just leads to resentment.
I'm surprised that they haven't changed the boot screen to say "Windows 11: Copilot Edition".
They somehow made it worse and use a less capable version with a smaller context window.
The only potential upside for businesses is that it can crawl OneDrive/SharePoint and acts as a glorified search machine in your mailbox and files.
That's the only thing really valuable to me, everything else is not working as it should. The outlook integration sucks, the powerpoint integration is laughably bad to being worthless, and the excel integration is less useful than Clippy.
I actually prefer using the "ask" function of GitHub Copilot through Visual Studio Code over using the company-provided Microsoft Copilot portal.
I think this is a really good take, and not one I’ve seen mentioned a lot. Pre-Internet (the world Microsoft was started in), the main expense for a software company was R&D. Once the code was written, it was all profit. You’d have some level of maintenance and new features, but really, the cost of sale was super low.
In the Internet age (the likes of Google and Netflix), it’s not much different, but now the cost of doing business is increased to include data centers, power, and bandwidth - we’re talking physical infrastructure. The cost of sale is now more expensive, but they can have significantly more users/customers.
For AI companies, these costs have only increased. Not only do they need the physical infrastructure, but that infrastructure is more expensive (RAM and GPUs) and power hungry. So it’s like the cost centers have gone up in expense by orders of magnitude. Yes, Anthropic and OpenAI can still access a huge potential customer base, but the cost of servicing each request is significantly more expensive. It’s hard to have a high profit margin when your costs are this high.
So what is a tech company founded in the 1970s to do? They were used to the profit margins from enterprise software licensing, and now they are trying to make a business case for answering AI requests as cheaply as possible. They are trying to move from low CapEx + low OpEx to a market that is high in both. I can’t see how they square this circle.
It’s probably time for Microsoft to acknowledge that they are a veteran company and stop trying to chase the market. It might be better to partner with a new AI company that is better equipped to manage the risks than to try to force a solo AI product.
> cost of doing business is increased to include data centers, power, and bandwidth
Microsoft Azure was launched in 2010. They've been a "cloud" company for a while. AI just represents a sharp acceleration in that course. Unfortunately this means the software products have been rather neglected and subject to annoying product marketing whims.
They've had cloud products for a long time, but I don't think that Microsoft fundamentally changed. I still see them organized and treated as an Enterprise software company. (This is from my N=1 outside perspective.)
ChatGPT says that "productivity and business processes" is still the largest division in Microsoft with 43% of revenues and 54% of operating income (from their FY2025 10K). The "intelligent cloud" division is second with 38% revenue and 35% operating income. Which helps to support my point -- their legacy enterprise software (and OS) is still their main product line and makes more relative profits than the capital heavy cloud division.
Yeah. Hyperscalers who are building compute capacities became asset heavy industries. Today's Google, MSFT, META are completely different than 10 years ago and market has not repriced that yet. These are no longer asset light businesses.
> But with "AI", not only is it not a product in itself (it's a feature of a product), but it has OpEx and CapEx costs that dominate the balance sheet, based on their public disclosures. Worse, as a feature, it demonstrably harms business with its hallucinations.
I think it depends on how the feature is used? I see it mostly as yet another user interface in most applications. Every couple of years I forget the syntax and formulas available in Excel. I can either search for answers or describe what I want, let the LLM edit the spreadsheet for me, and just verify.
Also, as time passes the OpEx and CapEx are projected to reduce right?
It maybe a good thing that companies are burning through their stockpiles of $$$ in trying to find out the applicability and limits of this new technology. Maybe something good will come out of it.
The thing about giving your application a button that costs you a cent or two every time a user clicks on it is, then your application has a button that costs you a cent or two every time a user clicks on it.
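That per-click cost compounds quickly at scale. A quick back-of-the-envelope sketch (all inputs here are hypothetical, just to show the order of magnitude):

```python
# Back-of-the-envelope cost of an "AI button", hypothetical inputs.
# Costs kept in integer cents to avoid floating-point rounding.

cost_per_click_cents = 2         # assumed ~2 cents per inference call
daily_active_users = 1_000_000   # assumed user base
clicks_per_user_per_day = 5      # assumed usage

daily_cost_dollars = (cost_per_click_cents * daily_active_users
                      * clicks_per_user_per_day) // 100
annual_cost_dollars = daily_cost_dollars * 365

print(f"${daily_cost_dollars:,}/day, ${annual_cost_dollars:,}/year")
```

With those made-up inputs, the button runs to $100,000 a day, tens of millions a year: a very different cost profile from a feature that ships once on install media.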
For the use case of "How do I do thing X in Excel", you could probably get pretty far with just a small, local LLM running on the user's machine.
That would move the cost of running the model to the end user, but it would also mean giving up all the data they gather from running prompts remotely.
It would probably also make Office users more productive rather than replacing them completely and that's not the vision that Microsoft's actual customers are sold on.
Fair. But I sure wish we could instead solve this problem the way we did 20 years ago: by not having Web search results be so choked off by SEO enshittification and slop that it’s hard to find good information anymore. Because, I promise you, “How do I do thing X in Excel?” did not used to be nearly so difficult a question to answer.
Your premise that the leaders of every single one of the top 10 biggest and most profitable companies in human history are all preposterously wrong about a new technology in their existing industry is hard to believe.
AI is literally the fastest-growing and one of the most widely used/deployed technologies ever.
Yup, I've been here before. Back in 1995 we called it "The Internet." :-) Not to be snarky here, as we know the Internet has, in fact, revolutionized a lot of things and generated a lot of wealth. But in 1995, it was "a trillion dollar market" where none of the underlying infrastructure could really take advantage of it. AI is like that today, a pretty amazing technology that at some point will probably revolutionize a lot of things we do, but the hype level is as far over its utility as the Internet hype was in 1995. My advice to anyone going through this for the first time is to diversify now if you can. I didn't in 1995 and that did not work out well for me.
The comparison to the dotcom bubble isn't without merit. As a technology in terms of its applications though I think the best one to compare the LLM with is the mouse. It was absolutely a revolution in terms of how we interact with computers. You could do many tasks much faster with a GUI. Nearly all software was redesigned around it. The story around a "conversational interface" enabled by an LLM is similar. You can literally see the agent go off and run 10 grep commands or whatever in seconds, that you would have had to look up.
The mouse didn't become some huge profit center and the economy didn't realign around mouse manufacturers. People sure made a lot of money off it indirectly though. The profits accrued from sales of software that supported it well and delivered productivity improvements. Some of the companies who wrote that software also manufactured mice, some didn't.
I think it'll be the same now. It's far from clear that developing and hosting LLMs will be a great business. They'll transform computing anyway. The actual profits will accrue to whoever delivers software which integrates them in a way that delivers more productivity. On some level I feel like it's already happening, Gemini's well integrated into Google Drive, changes how I use it, and saves me time. ChatGPT is just a thing off on the side that I chat randomly with about my hangover. Github Copilot claims it's going to deliver productivity and sometimes kinda does but man it often sucks. Easy to infer from this info who my money will end up going to in the long run.
On diversification, I think anyone who's not a professional investor should steer away from picking individual stocks and already be diversified... I wouldn't advise anyone to get out of the market or to try and time the market. But a correction will come eventually and being invested in very broad index funds smooths out these bumps. To those of us who invest in the whole market, it's notable that a few big AI/tech companies have become a far larger share of the indices than they used to be, and a fairly sure bet that one day, they won't be anymore.
I knew people who purchased their options but didn't sell and based on the AMT (Alternative Minimum Tax) had tax bills of millions of dollars based on the profit IF they sold on the day they purchased it. But then it dropped to $10 and even if they sold everything they couldn't pay the tax bill. They finally changed the law after years but those guys got screwed over.
I was young and thought the dot-com boom would go on forever. It didn't. The AI bubble will burst too, but whether it is 2026, '27, '28, who knows. Bubble doesn't mean useless, just that investors will finally start demanding a profit and a return on their investment. At that point the bubble will pop, and lots of companies will fail or lose a lot of money. Then it will take a couple of years to sort out, and companies will have to start showing a profit.
I have zero doubt that AI will eventually make many people lots of money. Just about every company on earth is collecting TBs of data on everyone, and they're sure they can use that information against us somehow, but they can't possibly read and search through it all on their own.
I have quite a few doubts that it'll be a net positive for society though. The internet (for all of its flaws) is still a good thing generally for the public. Users didn't have to be convinced of that, they just needed to be shown what was possible. Nobody had to shove internet access into everything against customer's wishes. "AI" on the other hand isn't something most users want. Users are constantly complaining about it being pushed on them and it's already forced MS to scale back the AI in windows 11.
Sell the risky stock that has inflated in value from hype cycle exuberance and re-invest proceeds into lower risk asset classes not driven by said exuberance. "Taking money off the table." An example would be taking ISO or RSU proceeds and reinvesting in VT (Vanguard Total World Stock Index Fund ETF) or other diversified index funds.
What tomuchtodo said. When I left Sun in 1995 I had 8,000 shares, which in 1998 would have paid off my house; by the time I sold them, when Oracle bought Sun after a 3:1 reverse split, the total would not even buy a new car. It can be a painful lesson; certainly it leaves an impression.
Eh, the top ten stocks in that fund are Nvidia, Apple, Microsoft, Amazon, Google, Broadcom, Google, Facebook, Tesla and TSMC. I propose looking for an ex-USA fund to put part of your investment into. Vanguard has a few, e.g. https://investor.vanguard.com/investment-products/etfs/profi... . You still get TSMC, Tencent, ASML, Samsung and Alibaba in the top 10, but the global stock markets seem less tech-frothy than the US.
Stocks are fine for diversification, just stocks that have a different risk factors. So back in the 90's I had been working at Sun then did a couple of startups, and all of my 'investment' savings (which I started with stock from the employee purchase plan at Sun) were in tech of one kind or another. No banking stocks, no pharmaceutical stocks, no manufacturing sector stocks. Just tech, and more precisely Internet technology stocks. So when the Internet bubble burst every stock I owned depreciated rapidly in price.
One of the reasons I told myself I "couldn't" diversify was because if I sold any of the stock to buy different stock I'd pay a lot of capital gains tax and the IRS would take half and now I'd only be half as wealthy.
Another reason was my management telling me I couldn't sell my stock during "quiet" periods (even though they seemed to), so sometimes when I felt like selling it I "couldn't."
These days, especially with companies that do not have publicly traded stock, it is harder than ever to diversify. The cynic in me says they are structured that way so that employees are always the last to get paid. It can still work, though. You just have to find a way to option the stock you are owed on a secondary market. Not surprisingly, there are MBA types who really want a piece of an AI company and will help you do that.
So now I make sure that not everything I own is in one area. One can do that with mutual funds, and to some extent with index funds.
But the message is if you're feeling "wealthy" and maybe paying your mortgage payments by selling some stock every month, you are much more at risk than you might realize. One friend who worked at JDS Uniphase back in the day just sold their stock and bought their house, another kept their stock so that it could "keep growing" while selling it off in bits to pay their mortgage. When JDSU died they had to sell their house and move because they couldn't afford the mortgage payments on just their salary. But we have a new generation that is getting to make these choices, I encourage people in this situation to be open to the learning.
The blockchain hype bubble should probably be pretty near in memory for most people, I would suspect. I thought that was a wild, useless ride until AI took over.
> at some point will probably revolutionize a lot of things we do
The revolution already happened. I can't imagine life without AI today. Not just for coding (which I actually lament) but just in general day to day use. Sure it's not perfect but I think it's quite difficult to ignore how the world changed in just 3-4 years.
That's just so strange to me. In my experience, it hallucinates and makes things up often, and when it's accurate, the results are so generic and surface level.
Yes but I use it as a substitute friend, gf, therapist, dumb questions like "how 2 buy clothes and dress good and is this good and how to unclog my toilet shits"
> Your premise that the leaders of every single one of the top 10 biggest and most profitable companies in human history are all preposterously wrong about a new technology in their existing industry is hard to believe.
Their incentives are to juice their stock grants or other economic gains from pushing AI. If people aren't paying for it, it has limited value. In the case of Microsoft Copilot, only ~3% of the M365 user base is willing to pay for it. Whether enough value is derived for users to continue to pay for what they're paying for, and for enterprise valuation expectations to be met (which is mostly driven by exuberance at this point), remains to be seen.
Their goal is not to be right; their goal is to be wealthy. You do not need to be right to be wealthy, only well positioned and on time. Adam Neumann of WeWork is worth ~$2B following the same strategy, for example. Right place, right time, right exposure during that hype cycle.
> In the late 90s and early 00s a business could get a lot of investors simply by being “on the internet” as a core business model.
> They weren’t actually good business that made money…..but they were using a new emergent technology
> Eventually it became apparent these business weren’t profitable or “good” and having a .com in your name or online store didn’t mean instant success. And the companies shut down and their stocks tanked
> Hype severely overtook reality; eventually hype died
("Show me the incentives and I'll show you the outcome" -- Charlie Munger)
> Your premise that the leaders of every single one of the top 10 biggest and most profitable companies in human history are all preposterously wrong about a new technology in their existing industry is hard to believe.
It's happened before.
Your premise that companies which become financially successful doing one thing are automatically excellent at doing something else is hard to believe.
Moreover, it demonstrates both an inability to dispassionately examine what is happening and a lack of awareness of history.
Should be really easy to conjure up examples then, where every single business leader has been wrong about a new technology to the tune of hundreds of billions of dollars.
I find it very easy to believe. The pressures that select for leadership in corporate America are wholly perpendicular to the skills and intelligence for identifying how to leverage novel and revolutionary technologies into useful products that people will pay for. I present as evidence the graveyard of companies and careers left behind by many of those leaders who failed to innovate despite, in retrospect, what seemed to be blindingly obvious product decisions to make.
And this is the broken mindset tanking multiple large companies' products and services (Google, Apple, MS, etc). Focus on the stock. The product and our users are an afterthought.
Someone linked to a good essay on how success plus Tim Cook's focus on the stock has caused the rot that's consuming Apple's software[0]. I thought it was well reasoned and it resonated with me, though I don't believe any of the ideas were new to me. Well written, so still worth it.
The investor being the customer rather than actual paying customers was something I noticed occurring in the late 90s in the startup and tech world. Between that shift in focus and the influx of naive money the Dot Bomb was inevitable.
Sadly the fallout from the Dotcom era wasn't a rejection of the asinine Business 2.0 mindset but instead an infection that spread across the entirety of finance.
In particular it's the short term stock price. They'll happily grift their way to overinflated stock prices today even though at some point their incestuous money shuffle game will end and the stocks will crash and a bunch of people who aren't insider trading are going to be left with massive losses.
Buybacks lead to stock price increases and are indistinguishable from dividends in theory, and in practice they are better than dividends because of taxation.
The problem I have with that logic is that it still doesn't really give any sensible reason for why the stock should have any economic value at all. If the point is that the company will pay for it at some point, it makes more sense for it to be a loan rather than a unit of stock. I stand by my claim that selling a non-physical item that does nothing other than hopefully get bought again later for more than you sold it for is indistinguishable from a scam.
> top 10 biggest and most profitable companies in human history are all preposterously wrong
There's another post on the front page about the 2008 financial crisis, which was almost exactly that. Investors are vulnerable to herd mentality. Especially as it's hard to be "right but early" and watch everyone else making money hand over fist while you stand back.
every time these companies make a mistake and waste billions of dollars it is well-publicized. so there is plenty of data that they are frequently and preposterously wrong.
name a technology that every single top tech company has invested billions of dollars in and then has flopped. the metaverse does not count unless google, amazon, microsoft etc was also throwing billions into it.
right because copilot is bad, that must mean no one uses chatgpt, or claude code, or gemini. they only have billions of MAUs, people must really hate it
MS actually changed their office.com landing page to a funnel that tricks you into installing a copilot app. It used to be the dashboard for MS web apps. There are no links to the web apps, but they are all still there, you just have to know the subdomains. The app doesn’t have any of the functionality that page used to offer…
I haven't used office.com but it does seem to have links to the four main webapps (did there used to be more?). They're the second row of big boxes titled "Word with Copilot", etc. Admittedly with very confusing names.
I checked the Wayback Machine and they have been making large changes to that page every day for the last month. The page used to falsely claim that Office 365 was renamed to Office 365 Copilot, yet it is an app with only chatbot functionality. They advertise the Copilot integration for the main Office apps now, but those are not part of the Copilot app they are trying to trick you into installing.
There are no office tools. It’s just a chatbot app. The page says they combined word and excel and PowerPoint, but it still doesn’t do anything but chat. I asked it to create a word document and it offered me a download link to a word template…
I noticed this and I was enraged by it. The URL to the old page is way less easy to remember and I had to add it to my bookmarks. I'm still peeved about it.
I just attended a training about AI Foundry today and they advertised thousands of integrations and support for like 50 different models. There is no way in hell all that stuff is tested and working properly. Microsoft seems to just be trying to throw as much chum as possible in the ocean and seeing what bites.
I see Microsoft throwing spaghetti at the wall just in time as “AI” functionality hits government and educational procurement procedures.
The copilot product is obviously borked, and is outshone by ‘free’ competitors (Gemini, ChatGPT). But since the attributes and uses are so fuzzy, they have a minimum viable product to abort meaningful talk about competition while securing big contracts from governments and delivering dog water.
My anecdotal observation of copilot is people using competing products soon after trialling it. Reports say Anthropic's solution is in widespread use at Microsoft… a bunch of devs on MacBooks and iPhones using Claude to build and sell… not what they themselves use (since they are smart and have taste?).
They did the same thing with Azure right? I remember articles about Microsoft stock that would mention that Azure subscription numbers included Office 365. But the thing is, their weird game of inflating numbers worked. There wasn’t really any negative consequence of doing that. So why wouldn’t they do it again? It’s yet another unfortunate example of dishonesty being rewarded these days.
> "The biggest issue I see is Microsoft's entire mentality around AI adoption that focuses more on "getting the numbers up" then actually delivering a product people want to use."
That succinctly describes 90% of the economy right now if you just change a word and remove a couple:
The biggest issue I see is the entire mentality that focuses more on "getting the numbers up" than actually delivering a product people want to use.
KPI infection. You see projects whose goal is, say, "repos with AI code review turned on" vs "code review suggestions that were accepted". And then if you do get adoption (like, say, a Claude Code trial), then VPs balk at the price. If it's actually expensive now, it's because they are actually using it all the time!
The same kind of logic that led companies to migrate from Slack to Teams. Metrics that don't actually look at actual, positive impact, as nobody picks a risky KPI, and will instead pick a useless one that can't miss.
This is the bad side of things like OKRs. They push you away from user satisfaction, since that's harder to measure, coupled with no consequences for missing them. People just force adoption without taking the product signals that come from users rejecting your changes.
I have Copilot buttons sprinkled everywhere on my work computer, and every time I have tried to use them I get something saying "Oh, I can't do that". It's truly baffling.
Copilot button on my email inbox? I try "Find me emails about suchandsuch", and get the response "I don’t have direct access to your email account.
If you’re using Outlook (desktop, web, or mobile), here are quick ways to find all emails related to...". Great, so it doesn't even know what program it's running in, let alone having any ability to do stuff in there! Sigh.
Using the paid M365 Copilot ($30/mo) Chat and Researcher agent, I recently discovered an interesting limit: Copilot is technically unable to retrieve more than 24 email messages. Ever.
We can't know if the answers I got from it are reliable but it seems like the Microsoft Graph API calls it makes and the tools Copilot has are missing the option to call the next page. So, a paginated response is missing all data beyond the first page.
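For illustration, cursor-style pagination of the kind the Graph API uses (each response carries a next-page link, `@odata.nextLink` in the real API) has to be followed to exhaustion; skipping that step leaves only the first page, as described above. This is a minimal sketch with stubbed data, not the real API:

```python
# Sketch of following a next-page link until it is absent -- the step the
# comment above suspects Copilot's tooling skips. Microsoft Graph's real
# responses use an "@odata.nextLink" key; "next" here is a simplified stand-in.
PAGES = {
    "/messages": {"value": ["msg1", "msg2", "msg3"], "next": "/messages?page=2"},
    "/messages?page=2": {"value": ["msg4", "msg5"], "next": None},
}

def fetch_page(url: str) -> dict:
    """Stub standing in for an HTTP GET against a paginated endpoint."""
    return PAGES[url]

def fetch_all(url: str) -> list:
    """Collect every page by following the next link; stopping after the
    first page would silently drop everything past it."""
    items = []
    while url:
        page = fetch_page(url)
        items.extend(page["value"])
        url = page["next"]
    return items

print(fetch_all("/messages"))  # all five messages, not just the first page's three
```

If the tool only ever calls `fetch_page` once, it behaves exactly like the 24-message cap described above.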
I tried copilot agent once, and it just claimed that it accessed a website that should have been blocked by corporate firewalls and uploaded a bunch of proprietary data. Lots of very specific information about how it clicked on specific buttons of the website etc.
We raised a high priority ticket with MS and turns out that Copilot Agent lied about the entire thing because the website was blocked. It completely made it up.
The fact that we are supposed to use Copilot Agent for open-ended "research" is mind-boggling.
A whole new toolbar appeared in Outlook on my work computer with nothing but a single button to open a copilot chat window. I tried asking it a few simple questions and it completely failed at all of them. Copilot didn't even know if I was using the web or desktop version of the very app it was embedded in!
Wasting UI space on a useless tool is just a waste of time; it actively makes it harder to get work done. But I guess the important thing is that the number of times that AI button gets clicked is going up on some PM's telemetry dashboard.
Yeah did they test any of this? Did they run a pilot and ask 1000 users did you use it? Did you like it? Is it better with this than without it?
It's as though they think some "AI revolution" will come, and all they need to do is make sure that by the time it does, they will have sprinkled enough AI pixie dust on their products and services. And then they added some KPIs in the organization and called it a day.
Most of all the whole strategy feels extremely faceless. Who is the visionary here? Where are the proud product launches and visionary blog posts about how all this happens?
The wild thing is, the business prop is so clear - an llm built into your corporate data, with the same security, guard rails, grc auditing stack that protects the rest of your data. Why integrate and exfiltrate to an outside company?
But copilot is fucking terrible. Sometimes I ask it powershell questions about Microsoft products and it hallucinates answers. Get your shit together Microsoft, why would I use this product for any reason if it doesn't work squarely inside your own stack?
Last year we wanted IT to confirm that Copilot Agent hadn't exfiltrated data and we couldn't get logs for its website usage without raising a ticket to Microsoft. Maybe this changed, maybe our IT people are bad, but I for one wasn't impressed.
Or, scaling back trying to keep their datacenter bill manageable.
Used to be one could upload an unlimited number of files (20 at a time) and process them directly at the initial window --- now one has to get into "Pages Mode", and once there, there's a limit on the number of files which can be uploaded in a given 24-hour period.
That's only good if you're doing measurably more with the time you save. I feel like I'm significantly faster in parts of my job using Copilot, but when I try to get data on what I'm doing now that I wasn't doing before I had it, I don't come up with anything. I know I'm working faster, but the time seems to have just gone.
Describing to Claude that I need an edit made in the second paragraph of the third section feels easy, comfortable, and straightforward. I’m using my speech centers, speech to text, and then I wait for a generation during which I hit my phone or Reddit. Poof, the text flies out like magic, taking 20+ seconds, then I re-re-re-read it to make sure the edit was good and nothing was lost in that edit. Oops, the edit inverted the logic of the paragraph, lemme repeat the above… and again… time flies! 2 hours gone in a flash.
Old and boring workflow:
I gruellingly move my mouse to open a file, then take a coffee break. I come back and left-click into the sentence that sucks. I hit Reddit to deal with the anxiety… I think, boo, and then type out the edit I needed. It’s bad, I fix. Coffee break. Squiggly red line from a misspelling? I fix. I google and find a better turn of phrase, copy and paste it in manually with a little edit. Ugh. This sucks. I suck, work sucks. Time sucks. 35 entire minutes of my life has been wasted… time to get another coffee and check Reddit.
———
Working with an LLM is kinda like working under stage hypnosis. The moment to moment feelings are deceptive and we humans are unreliable recorders of time-usage. Particularly when engaged in unwanted effort.
Google has had all this tech for a minute. Their restrained application and lack of 10x-vibe-chad talk make me think their output measurements are aligned with my measurements.
1 rabbit hole hallucination wrong-turn can eat up a lot, lot, lot of magic one-shotting.
> Working with an LLM is kinda like working under stage hypnosis.
Another post on HN likened it to gambling, in the way that slot machines work. Each time you prompt, you could hit the jackpot! But usually you end up with something mediocre or wrong, so you tweak that prompt and pull the lever again. It's an endless cycle.
They should be trying to convince people it is something they want rather than forcing it on people. Alas, that would mean making a product people want, and I'm not sure they are there.
It feels like that's the entire MO of the Azure platform as well. Make a minimum viable product and then get adoption by selling at all costs, despite the product's rough edges.
It's an AI image generator. There's thousands of tools that do this exact thing, and it seems their only "benefit" is infesting search engine image results with their horrible low-quality output.
...
On a related note, here's another great LLM feature Microsoft seemingly failed to promote: instead of returning bits of page content or the description meta tag, the Bing API now gives you utter slop[0] for website descriptions!
Sounds almost like every manager just covers their ass by formally doing what is expected; the core top-down idea is "AI is the future, thus put it everywhere".
Anyone who would try to say "let's not do AI" would be a white crow and would get eaten by other managers in reviews and discussions.
I found that the time I spent reviewing and fixing issues/errors/omissions in Copilot’s meeting notes was more than the time it took to just clean up my own notes and send them out.
It's time to accept the new way of working, just change your reality to match the copilot version and boom, you save time fixing its mistakes!
In fact, why have the meeting at all? Just prompt copilot to create notes based on a fictional conception of the meeting and you just saved everyone a whole hour!
I would argue that they have earned their own controversy independent of Musk with all the shenanigans they pulled building out their data centers, namely their illegal use of gas turbines to power the whole thing.
That's part of the way he runs that business. Other AI data centers aren't necessarily a lot better; at best they're just toeing the line of what is allowed rather than sticking to their green energy commitments (or silently backing away from those).
I'm actually not that upset about AI data center energy usage. I see this as a short term and costly scaling measure with a minor impact (considering overall wasteful energy practices) that is an obvious target for large and rather obvious cost reductions the second this market gets profitable. The only reason that isn't happening from day 1 is all the red tape currently being put in place to actively slow down the demise of fossil fuel based generation.
Cost reductions here mean switching to a cleaner form of energy, for the reason that it can be a lot cheaper than burning expensive gas in an expensive generator. Any large-scale user of energy is going to be optimizing their energy opex if it saves them a lot of money. If they survive long enough to matter, of course. If you are using energy by the tens or hundreds of GWh per year, that is not going to be a small amount.
If by illegal you mean a spelled-out loophole that the EPA only decided they didn't like in retrospect. Businesses are run by people that think this is a level of forward-thinking-ness that they aspire to, not something to be avoided. (Source: my own CEO.)
Others have mentioned this, but it looks like fires from ~20 years ago are still showing up as "active emergencies"[0]. It shows the Nash Ranch fire as an active emergency even though it was declared in 2008.
It's kinda shocking that the same Supabase RLS security hole we saw so many times in past vibe coded apps is still in this one. I've never used Supabase but at this point I'm kinda curious what steps actually lead to this security hole.
In every project I've worked on, PG is only accessible via your backend and your backend is the one that's actually enforcing the security policies. When I first heard about the Supabase RLS issue the voice inside of my head was screaming: "if RLS is the only thing stopping people from reading everything in your DB then you have much much bigger problems"
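A minimal sketch of that backend-enforced pattern (illustrative only: the in-memory `ROWS`/`SESSIONS` stand in for a real database table and auth system; nothing here is Supabase's API):

```python
# Illustrative sketch of the "backend enforces access" pattern. The client
# never talks to the database directly, so the ownership filter cannot be
# skipped regardless of what the client sends. All names here are invented.
ROWS = [
    {"id": 1, "owner": "alice", "total": 10},
    {"id": 2, "owner": "bob", "total": 20},
]
SESSIONS = {"token-a": "alice"}  # session token -> authenticated user

def list_orders(token: str) -> list:
    """Backend endpoint: resolve the caller first, then scope the query."""
    user = SESSIONS.get(token)
    if user is None:
        raise PermissionError("not signed in")
    # The filter is applied server-side on every request, not trusted to the client.
    return [row for row in ROWS if row["owner"] == user]

print(list_orders("token-a"))  # only alice's row comes back
```

Exposing the database endpoint straight to the browser, as client-only Supabase apps do, removes this layer entirely, which is why a missing RLS policy is immediately fatal there.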
Supabase is aware of this and they actually put big banners stating this flaw when you unlock your authentication.
What I think happens is that non-technical people vibe-coding apps either don't take those messages seriously, or they don't understand what they mean but got their app to work.
I used to be careful, but now I am paranoid on signing up to apps that are new. I guess it's gonna be like this for a while. Info-sec AIs sound way worse than this, tbh.
My thought exactly. Is this standard practice with using Supabase to simply expose the production database endpoint to the world with only RLS to protect you?
Just started vibing and have integrated Codex into my side project which uses Supabase. I turned off RLS so that I could iterate quickly and not have to mess with security policies. Fully understand that this isn't production grade and have every intention of locking it down when I feel the time is right. I access it from a ReactNative app - no server in the middle. Codex does not have access to my Supabase instance.
Of course. What I meant was I'm calling Supabase directly from the client instead of handing off the request to for example Node / Express and having that manage the req / res.
I've been running Ubuntu Linux for a long time now (over a decade, started with 8.04). Linux still has its fair share of bugs but I'll take having to deal with those over running Windows or MacOS any day.
For me the biggest thing is control, with Windows there are some things like updates that you have zero control over. It's the same issue with MacOS, you have more control than Windows but you're still at the whims of Apple's design choices every year when they decide to release a new OS update.
Linux, for all its issues, gives you absolute control over your system, and as a developer I've found this one feature outweighs pretty much all the issues and negatives of the OS. Updates don't run unless I tell them to run, the OS doesn't upgrade unless I tell it to. Even when it comes to bugs, at least you have the power to fix them instead of waiting on an update hoping it will resolve the issue. Granted, in reality I wait for updates to fix various small issues, but for bigger ones that impact my workflow I will go through the trouble of fixing it.
I don't see regular users adopting Linux anytime soon but I'm quickly seeing adoption pick up among the more technical community. Previously only a subset of technical folks actually ran Linux because Windows/MacOS just worked, but I see more and more of them jumping ship with how awful Windows and MacOS have become.
I remember when Ubuntu decided to reroute apt installations into SNAP installs. So you install a package via apt and there was logic to see if they should disregard your command and install a SNAP instead. Do they still do that?
Yes. I know it's more than firefox, but I don't have the full list. On 24.04:
me@comp:~$ apt info firefox | head -n 5
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
Package: firefox
Version: 1:1snap1-0ubuntu7
Priority: optional
Section: web
Origin: Ubuntu
me@comp:~$
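As the session above shows, the transitional deb's Version field contains "snap" (`1:1snap1-0ubuntu7`). A tiny heuristic check for that pattern (illustrative only; the firefox sample is copied from the output above, while the `curl` version string is made up for contrast):

```python
# Heuristic sketch: Ubuntu's snap transition stubs carry "snap" in the deb
# Version field. This parses `apt info`-style text; nothing is fetched live.
def is_snap_stub(apt_info: str) -> bool:
    for line in apt_info.splitlines():
        if line.startswith("Version:"):
            return "snap" in line
    return False  # no Version field found

firefox_info = """Package: firefox
Version: 1:1snap1-0ubuntu7
Priority: optional"""

normal_info = """Package: curl
Version: 8.5.0-2ubuntu10
Priority: optional"""

print(is_snap_stub(firefox_info), is_snap_stub(normal_info))  # True False
```

It's only a heuristic, since nothing in deb metadata officially marks a package as a snap redirect.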
The control is both a blessing and a curse. It’s really easy to accidentally screw things up when e.g. trying to polish some of the rough edges or otherwise make the system function as desired. It also may not be of any help if the issue you’re facing is too esoteric for anybody else to have posted about it online (or for LLMs to be of any assistance).
It would help a lot if there were a distro that was polished and complete enough that most people – even those of us who are more technical and are more demanding – rarely if ever have any need to dive under the hood. Then the control becomes purely an asset.
This is literally Linux Mint, Zorin, and several other distros. I haven't had to "go under the hood" on my daily driver machines that run either of these distros for over 7 years.
I think at this point people are just (reasonably) making excuses not to change.
Those and other big distros are better in that regard, but they're still not perfect. Depending on one's machine and needs, there can still be pain.
One recent example I experienced is jumping through hoops to get virtualization enabled in Fedora… it takes several steps that are not obvious at all. I understand not having it enabled by default since many won't need it, but there's no reason that can't just be a single CLI command that does it all.
Things like that can be unbelievably annoying and confusing on Windows or Macs, too. Even worse, they can just turn out to be impossible: the company can actively be preventing you from doing the thing that you want to do, refuses to give you enough access to your own system to do the thing you want to do, and/or sells permission to do what you want to do as an upgrade that you have to renew yearly.
These are things that don't happen in Linux. Doing what you want to do might be difficult (depending on how unusual it is), but there's no one actively trying to stop you from doing it for their own purposes (except systemd.)
Also, as an aside, a reason that Windows and Macs might have easy virtualization (I have no idea if they do) is because of how often they're running Linux VMs.
One needs to go a fair ways off the beaten path before they'll start running into trouble like that under macOS and Windows.
For macOS in particular, most trouble that more tinker-y users might encounter disappears if guardrails (immutable system image, etc.) are disabled. Virtualization generally "just works" by way of the stock Virtualization.framework and Hypervisor.framework, which virtualization apps like QEMU can then use, but bespoke virtualization like what QEMU also ships with or what's built into VirtualBox and VMWare works fine too. No toggles or terminal commands necessary. Linux does get virtualized a lot, but people frequently virtualize Windows and macOS as well.
What exactly did you need to do? All I've ever had to do to get QEMU working properly has been to make sure KVM is enabled in the BIOS (which you have to do on all OSs).
Just running a KVM-based Windows VM (via GNOME Boxes, virt-manager, etc.). On my Fedora install I had to install the @virtualization meta-package and enable daemons among other things, and the only reason I knew to do that is because I looked it up. Without that, Boxes etc. just throw an unhelpful error that doesn’t suggest that more packages or config changes are needed.
I had to enable virtualization features in BIOS too, but that’s entirely separate and not the fault of any Linux distro.
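For reference, the Fedora steps alluded to above are roughly the following (group and service names are taken from Fedora's virtualization docs and can change between releases, so verify against the current documentation):

```shell
# Install Fedora's virtualization package group (libvirt, qemu-kvm, etc.)
sudo dnf install @virtualization
# Start the libvirt daemon now and enable it on every boot
sudo systemctl enable --now libvirtd
# Confirm the CPU's virtualization extensions are visible to the OS
lscpu | grep -i virtualization
```

After this, GUI front-ends like GNOME Boxes and virt-manager can find a working hypervisor instead of failing with the unhelpful error described above.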
Ah, I guess I might be a little unusual in that I use the QEMU CLI directly. I tried some QEMU GUIs in the past but they were annoying to get working so I just learned the CLI.
There's several distros that are fully usable without ever touching a terminal. The control is a gradient, some distros give you all the control and others (eg. SteamOS) lock down your root filesystem and sandbox everything from the internet.
> It’s really easy to accidentally screw things up when e.g. trying to polish some of the rough edges or otherwise make the system function as desired.
Similar to Windows' System Restore and macOS's Time Machine, the Linux 'Timeshift' tool can be used to make periodic saves of your OS files & settings. (They can be saved elsewhere.) Restoration is a cinch.
Mint program 'Backup Tool' allows users to save and restore files within their home directory (incl. config folder and separately installed apps).
You do have to know what you're doing. A complete OS has a bunch of components that work together, but an out-of-the-box distro hides all that, so you end up fiddling with incomplete knowledge.
Gentoo is great for learning what all the individual components are. You install it by booting a kernel from a USB stick then chrooting into your newly installed system to start installing and configuring everything. Just knowing the existence of individual components helps a lot. Plus Gentoo gives you more control than almost any other distro (much more than Arch, for example).
> I've been running Ubuntu Linux for a long time now...Linux still has its fair share of bugs...
> I don't see regular users adopting Linux anytime soon...
I can see why you think the second statement is true based on the first statement. When Ubuntu switched their desktop to Gnome, they gave up on being the best Linux desktop distro. I'd recommend you try Linux Mint.
I tried Fedora once. On a fresh install, all it did was clog up all the hard drive space with error logs within 3 days.
I'm not interested in any distro that is controlled by a corporation. IBM is a corporation and they already screwed up CentOS and will eventually screw up Fedora someday, because that's what corporations do, and I'm not interested in going through that.
You have your fun running Fedora for now but know you're going to get burned someday.
Well, to start they tried putting Amazon ads in Unity's Dock which was also doing data collection, but removed them after the backlash.
Then they switched to Gnome, meaning they gave up on their own desktop, Unity, so they were no longer dictating what their desktop was like, so how much did they care?
Since then they have replaced a number of apps with SNAPs which are only available from Canonical, so many people see it as an attempt to corner the Linux market. Many see AppImages and Flatpaks as better than SNAPs.
They are a company. They exist to make money. Of course they are going to decide to do things that make more money and annoy their users.
You are confusing debian-family with Linux. Debian family is designed to be outdated upon release. When they say "Stable" it doesn't mean 'Stable like a table'. It means version fixed. You get outdated software that has bugs baked into it.
Fedora is modern and those bugs are fixed already.
Reminder Fedora is not Arch. Don't confuse the two.
Meh, I don't care much about control, I care more about getting my work done with the least amount of friction. Macs do that for me. Linux and Windows have too many barriers to make them a daily GUI driver.
> Unsure of the actual issues people run into at this point outside of very niche workflows or applications, to which, there are X11 fallbacks for.
I don't know if others have experienced this but the biggest bug I see in Wayland right now is sometimes on an external monitor after waking the computer, a full-screen electron window will crash the display (ie the display disconnects).
I can usually fix this by switching to another desktop and then logging out and logging back in.
Such a strange bug because it only affects my external monitor and only affects electron apps (I notice it with VSCode the most but that's just cause I have it running virtually 24/7)
If anyone has encountered this issue and figured out a solution i am all ears.
This is probably worth reporting. I don't think I've ever heard of or run into something like that before. Most issues I ran into during the early rollout of Wayland desktop environments were broken or missing functionality in existing apps.
I don't live around any Amazon Fresh stores so I never saw them though I did see the technology in use at several airports (though I've never personally used it). IMO I think places like airports are the best place for something like this, people are usually in a rush so not having to wait in line to checkout is nice and you don't have to worry about security as much because everyone there is a ticketed passenger (only saw them post-security) and even if someone did try stealing they wouldn't get very far.
I saw these in several different airports. It usually had multiple people staffed at the gate to get in and out meanwhile most of the other snack vendors often only had a single person employed.
So you spend a few hundred thousand dollars extra on all the cameras, many millions on all the design, pay all the overseas contractors to manually review the transactions, and you still end up with twice the in-person staff than the average store in the airport.
I look at ReactOS largely as an exercise in engineering and there's really nothing wrong it with it being just that. Personally I think projects like Wine/Proton have made far more in-roads in being able to run Windows software on non-Windows systems but I still have to give props to the developers of ReactOS for sticking with it for 30 freaking years.
Yes. The unique point of ReactOS is driver compatibility. Wine is pretty great for Win32 API, Proton completes it with excellent D3D support through DXVK, and with these projects a lot of Windows userspace can run fine on Linux. Wine doesn't do anything for driver compatibility, which is where ReactOS was supposed to fill in, running any driver written for Windows 2000 or XP.
But by now, as I also wrote in the other thread on this, ReactOS should be seen as something more like GNU Hurd. An exercise in kernel development and reverse engineering, a project that clearly requires a high level of technical skill, but long past the window of opportunity for actual adoption. If Hurd had been usable by say 1995, when Linux just got started on portability, it would have had a chance. If ReactOS had been usable ten years ago, it would also have had a chance at adoption, but now it's firmly in the "purely for engineering" space.
"ReactOS should be seen as something more like GNU Hurd. An exercise in kernel development and reverse engineering, a project that clearly requires a high level of technical skill, but long past the window of opportunity for actual adoption."
I understand your angle, or rather the attempt of fitting them in the same picture, somehow. However, the differences between them far surpass the similarities. There was no meaningful user-base for Unix/Hurd so to speak of compared to NT kernel. There's no real basis to assert the "kernel development" argument for both, as one was indeed a research project whereas the other one is just clean room engineering march towards replicating an existing kernel. What ReactOS needs to succeed is to become more stable and complete (on the whole, not just the kernel). Once it will be able to do that, covering the later Windows capabilities will be just a nice-to-have thing. Considering all the criticism that current version of Windows receives, switching to a stable and functional ReactOS, at least for individual use, becomes a no-brainer. Comparatively, there's nothing similar that Hurd kernel can do to get to where Linux is now.
Hurd was not a research project initially. It was a project to develop an actual, usable kernel for the GNU system, and it was supposed to be a free, copyleft replacement for the Unix kernel. ReactOS was similarly a project to make a usable and useful NT-compatible kernel, also as a free and copyleft replacement.
The key difference is that Hurd was not beholden to a particular architecture, it was free to do most things its own way as long as POSIX compatibility was achieved. ReactOS is more rigid in that it aims for compatibility with the NT implementation, including bugs, quirks and all, instead of a standard.
Both are long irrelevant to their original goals. Hurd because Linux is the dominant free Unix-like kernel (with the BSD kernel a distant second), ReactOS because the kernel it targets became a retrocomputing thing before ReactOS could reach a beta stage. And in the case of ReactOS, the secondary "whole system" goal is also irrelevant now because dozens of modern Linux distributions provide a better desktop experience than Windows 2000. Hell, Haiku is a better desktop experience.
"And in the case of ReactOS, the secondary «whole system» goal is also irrelevant now because dozens of modern Linux distributions provide a better desktop experience than Windows 2000. Hell, Haiku is a better desktop experience."
Yet there are still too many desktop users who, despite the wishful thinking or blaming, haven't switched to either Linux or Haiku. No matter how good Haiku or Linux distributions are, their incompatibility with existing Windows software simply disqualifies them as options for those desktop users. I bet we'll see people switching to ReactOS once it gets just stable enough, long before it gets as polished as either Haiku or any given quality Linux distribution.
No, people will never be switching to ReactOS. For some of the same reasons they don't switch to Linux, but stronger.
ReactOS aims to be a system that runs Windows software and looks like Windows. But, it runs software that's compatible with WinXP (because they target the 5.1 kernel) and it looks like Windows 2000 because that's the look they're trying to recreate. Plenty of modern software people want to run doesn't run on XP. Steam doesn't run on XP. A perfectly working ReactOS would already be incompatible with what current Windows users expect.
UI-wise there is the same issue. Someone used to Windows 10 or 11 would find a transition to Windows 2000 more jarring than one to, say, Linux Mint. ReactOS is no longer a "get the UI you know" proposition; it's now "get the UI of a system from twenty-five years ago, if you even used it then".
"UI wise there is the same issue. Someone used to Windows 10 or 11 would find a transition to Windows 2000 more jarring than to say Linux Mint. ReactOS is no longer a «get the UI you know» proposition, it's now «get the UI of a system from twenty five years ago, if you even used it then»." "A perfectly working ReactOS would already be incompatible with what current Windows users expect."
That look and feel is the easy part; it can be addressed if it's really an issue. The hard part is compatibility (limited by the many still-missing parts) and stability (the still-defective parts). The targeted kernel matters, of course, but it is not set in stone. In fact, there is Windows Vista+ functionality being added and written about here: https://reactos.org/blogs/investigating-wddm although doing it properly would mean rewriting the kernel, bumping it to NT version 6.0.
I'm sure many users will indeed find various ReactOS aspects jarring for as long as there are defects, lack of polish, or dysfunction at the application and kernel (driver) level. However, considering the vast pool of Windows desktop users, it's reasonable to expect ReactOS to cover the limited needs of enough users at some point. That should bring attention in the form of testing, polish, and funding to address anything still lacking, which should further feed the adoption and improvement loop.
"No, people will never be switching to ReactOS. For some of the same reasons they don't switch to Linux, but stronger."
To me, this makes sense maybe for the corporate world. The reasons that made corporations stick with Windows have less to do with familiarity or application compatibility (given that a lot of corporate infrastructure is in web applications). Yes, there must be something else that governs corporate decisions, something to do with the way corporations function, and that will most likely prevent a switch to ReactOS just as it did with Linux-based distributions. But this is exactly why I intentionally specified "for individual use" when I said "switching to a stable and functional ReactOS, at least for individual use, becomes a no-brainer". For individual use, the reason that prevented people from switching to Linux is well known, and ReactOS's reason for being was aimed exactly at that.
> There was no meaningful user-base for Unix/Hurd so to speak of compared to NT kernel.
Sure, but that userbase also already has a way of using the NT kernel: Windows. The point is that both Hurd and ReactOS are trying to solve an interesting technical problem, but lack any real reason to be used over their alternatives, which already solve enough of the practical problems for most users.
While I think better Linux integration and improving WINE are probably a better use of time... I do think there's some opportunity for ReactOS, but I feel it would have to at LEAST reach pretty complete Windows 7 compatibility (without the bug fixes since)... that seems to be the last Windows version most people remember relatively fondly, and a point before Microsoft really split-brained a lot of the configuration and settings.
With the contempt for a lot of the Win10/11 features, there's some chance it could see adoption, if that's an actual goal. But the effort is huge, and it would need to be sufficient for wide desktop installs sooner rather than later.
I think a couple of the Linux + WINE UI options, where the underlying OS is Linux and WINE is the UI/desktop layer on top (not too dissimilar from DOS/Win9x), might also gain some traction... not to mention distros that smooth out the use of WINE for new users.
Worth mentioning a lot of WINE is reused in ReactOS, so that effort is still useful and not fully duplicated.
> I do think there's some opportunity for ReactOS, but I feel it would have to at LEAST get to pretty complete Windows 7 compatibility
That's not going to happen in any way that matters. If ReactOS ever reaches Win7 compatibility, that would be at a time when Win7 is long forgotten.
The project originally targeted Windows 2000 compatibility, later changed to XP (which is a relatively minor upgrade kernel-wise). Now, as of 2026, ReactOS has limited USB 2.0 support and wholly lacks critical XP-level features like Wi-Fi, NTFS, or multicore CPU support. Development on the project has never been fast, but somewhere around 2018 it dropped even more; just looking at the commit history, there's now half the activity of a decade ago. So at current rates, it's another 5+ years away from beta-level support of NT 5.0.
ReactOS actually reaching decent Win2K/XP compatibility is a long shot but still possible. Upgrading to Win7 compatibility before Win7 itself is three plus decades old, no.
Maybe posts like this will move the needle. If I could withstand OS programming (or debugging, or...) I'd probably work on ReactOS. I did self-host it, which I didn't expect to work, so at least I know the toolchain works!
Basically, if you do the math, it means a whole generation got tired of working on the project and moved on to something else, and there is no new blood to make up for that.
The history of most FOSS projects after running for a while.