Microsoft is surviving precisely because of stickiness, as you put it. But their users have to use them, and have to pay for it. Very few people who use OpenAI today have to pay for it; those forced to use it are typically doing so via free avenues like Windows Copilot.
OpenAI has the stickiness of MSN News or MS Teams. Your wife uses ChatGPT on a daily basis, but is she paying for it? If they charge her $0.99/mo, will she not look at alternatives? If she gets two or three bad responses from ChatGPT in a row, will she not explore alternatives to see if there is something better? Does she not use Google? If she does, she is already interacting with Gemini every day via their AI Overviews.
OpenAI has a first-to-market advantage, not a moat as you think. They can absolutely dominate the market if they stay on top of their game. eBay was the main online shopping network, they had that advantage, they were even the ones that made PayPal a thing! But they're relatively little used now; better alternatives crushed them.
Amazon was first to market with cloud services, and they didn't get worse in any significant way, but their market share is not as great as it used to be; Azure has gained decent ground on them. Ten years ago the market share breakdown was 31/7/4; now it is 28/21/14 for AWS/Azure/GCP respectively.
For OpenAI to survive it needs most of the market share; if it gets only a third, for example, the AI industry on its own needs to be a $1T+ industry. Over the past 10 years AWS's total revenue alone (not profit) has been $620B, and it just made $128B in revenue (its highest) last year. OpenAI needs to make in profit (not revenue) what AWS made last year in revenue by 2029 just to break even. And if it only manages to break even by then, it needs more in profits than the revenue AWS has accumulated over its entire lifetime to date. It's far easier to switch LLM models than cloud providers, too!
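A quick back-of-the-envelope sketch of the math above, using the comment's own rough figures. The ~30% operating margin is my own hypothetical assumption for illustration, not a reported number:

```python
# Rough break-even arithmetic from the figures cited above (not audited financials).
aws_revenue_last_year = 128e9  # AWS revenue last year, per the comment

# What OpenAI would need in *profit* by 2029 to break even, per the argument:
required_profit = aws_revenue_last_year

# If OpenAI captures only a third of the AI market, and (hypothetically)
# keeps an AWS-like ~30% operating margin, the whole market must be roughly:
market_share = 1 / 3
assumed_margin = 0.30  # hypothetical margin, assumption for illustration only

implied_market_size = required_profit / (market_share * assumed_margin)
print(f"Implied AI market size: ${implied_market_size / 1e12:.1f}T")
```

Under those assumptions the implied market size lands just north of $1T, which is where the "$1T+ industry" claim comes from.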
Their only remote chance of survival, I hate to say it, is going the way of Palantir and doing dirty things for governments and militaries. They need a cash-cow client like that, one that can't go anywhere else. And even then, being US-based, I don't think any military outside the US is insane enough to use OpenAI at all, due to geopolitics. Even in sectors like education, Google (via Chromebooks) is more likely to form dependence than Microsoft via OpenAI, since somehow they're more open to arbitrary apps due to historical antitrust suits.
I can see a somewhat far-fetched argument being made for their survival, but it hangs on thin threads and excellent execution. I can't see how they actually survive competition. They're using the Azure strategy for market share: banking on AI being so ubiquitous that the existing vendor-lock-in mindset will serve as a moat. They'll need to be much more profitable than AWS in about 1/5th of the time. Their product is comparable to (and in Azure, literally is) one of many cloud service offerings, as opposed to an entire cloud provider, and their costs are huge, cloud-provider huge, needing-their-own-data-centers huge. They need to overcome those costs, and on top of that pull in $125B+ in revenue in about two years!!
I have started using ChatGPT for everything from financial planning to holiday planning to product purchases. Whenever I think I've hit on something useful, I add it to memory. I'm a Go plan user because they had a promotional offer that gave me free access to the plan for a year. Will I continue after the year? The truth is, nothing I have in ChatGPT can't be recreated elsewhere. But if I care about keeping those memories, I might. The real challenge for me now is finding old conversations again; their history search seems quite bad.
Controversial opinion: city roads shouldn't be for fast driving; they should be cobblestone. Safer for pedestrians, easier to maintain, not rough on cars. It encourages pedestrian-friendly city planning.
You can somewhat walk in LA, but it's a damn shame how unfriendly it can be for pedestrians at times. I say shame because of the pristine weather, and people roll around sealed in aluminum boxes instead. Trams and bicycles are ideal for it instead of cars. The bus system is OK by US standards too.
Cut the LAPD's and the LA Sheriff's Department's budgets, demilitarize them, and fund infrastructure with the savings instead.
Here is their budget chart; there are about 15 countries in the world with a smaller budget and GDP. The LASD has a budget of about $4B, more than the budgets of 30 countries. Even setting that comparison aside, look at how they keep increasing their budget. In the LAPD chart above, it is "field forces" (+$200M over 10 years) that keeps growing, so it isn't "just inflation", and the crime rates don't justify it either. That said, I get it, LA has bigger problems.
This is one of the core conceits behind why Strong Towns / Not Just Bikes / urbanism discourse in general makes the distinction between a "street", which is meant for people to walk along and go to stores/restaurants/etc., and a "road", which is meant to efficiently move traffic from one part of the city to another.
Combining them degrades the ability to address either point: efficiently moving traffic is inherently in conflict with being able to access businesses or having pedestrians nearby.
Public transport is more efficient at moving people than cars, and it makes roads more walkable. If you have subways, trams, and buses, you can have narrower, slower roads that are friendly to pedestrians.
But with HN, I'd like to ask @dang and HN leadership to support deleting messages, or making them private (requiring an HN account to see your posts).
At first I thought of how this would impact employment. But then I thought about how ICE has been tapping Reddit, Facebook, and other services to monitor dissenters. The whole Orwellian concern is no longer theoretical. I personally fear physical violence from my government as a result. But I will continue to criticize them; I just wish it wasn't so easy for them to retaliate.
Notepad is supposed to be like 'nano' for Windows, and it's already bloated.
But this is just following a pattern; they enshittified even calc.exe and mspaint. Previewing pictures in Windows is shamefully slow because the previewer is also bloated.
My diagnosis is that Microsoft doesn't have good technical leadership. It has spread the risk of bad decisions across too many decision makers, and those people aren't always technically apt, or their aptitude is limited to their specific domain of expertise. Why is the Start menu in React Native, for example?
They also have a crippling illness in the form of the sunk-cost fallacy. Even when no one especially depends on it, they go all-or-nothing on tech stacks and design patterns. But marketing and branding, I think, is ultimately their biggest problem. You know how they name everything terribly? That's trying to capitalize on existing branding. This is fundamentally the mindset of salespeople. They could be spinning up a new app, or making a vscode-lite that ships with Windows, but brand familiarity is why they're messing with Notepad.
It is truly dumbfounding. They're being run like HP and IBM, but because of how much the world relies on them, and because of Azure, they're still making so much profit.
Why aren't the shareholders even more enraged? To have such a vast market share and fail to capitalize on it is terrible. They could be doing better than Apple. Even Apple sees the writing on the wall and fundamentally adapts its strategy, like starting to make its own silicon. It's like having a barn full of chickens that lay golden eggs, but the farmer is slaughtering them for their meat, and the farmer's employer doesn't care because chicken meat still turns a good enough profit.
> You know how they name everything terribly? that's trying to capitalize on existing branding.
It's funny because they are actively destroying existing branding these days, like how they renamed Office after their failed AI assistant, rather than the other way around.
Everything is copilot, so much so that I don't even know what copilot is.
From the security side, everything is Microsoft Defender. When talking to people I have to say things like "Defender, but the AV thing that's on by default, not the paid cloud thingy, and by that I don't mean the cloud protection one but the thing that protects endpoints using cloud stuff". They can't come up with good names, and they confuse the crap out of their users. I hate to say it's just MBAs, since I don't really know, but that'd be my guess. Someone at an Ivy League school somewhere is perpetuating this, perhaps?
Great for them. But are they just going to mooch off of open source software then? Nothing wrong with that, so long as they fund developers and projects.
The dark side of MAD is that it isn't really practical in the real world. The LLM is right: nuking is strategically ideal in a war with powerful enemies. Not only that, it is the most humane option if all you look at is body count. To be clear, I'm not advocating nuking anyone.
But the assumption is that in a war, when you get nuked, you'll launch nukes back. Even the first retaliatory step might not make sense, because you know it will only lead to counter-retaliatory strikes. In practical terms, you just lost half a city; retaliating in kind means you're potentially sacrificing large numbers of your own civilians in the hope of achieving retribution.
But let's say war planners think risking more of their own civilians is worth it, because maybe the other side will stop nuking when they see their own cities being wiped out. Fine, you launch retaliatory strikes. What happens when the other side doesn't let up? At some point you have to give up and surrender first, because even if the other side wants to kill all of your people, they gain nothing by irradiating valuable real estate. The natural response to a nuclear strike, even when you can continue retaliating, is an unconditional surrender.

My argument is that nuclear weapons are inherently first-strike weapons; they're not that useful for retaliation unless there is a disparity in delivery capabilities. If China nuked the US, for example, the US has a clear advantage in delivery capability, so it makes sense for the US to retaliate until China is wiped out. But if the US struck China first, I'm confident they'd retaliate, but they're so densely populated that it would be a huge sacrifice on their end without a similar impact on the US. Keep in mind that in this scenario the US war planners might not pull punches if they've gone as far as actually using a nuke; if every major city in China is hit in the first strike, what does China gain by retaliating? Even if they managed to wipe out the continental US, the submarine fleet is big enough and sneaky enough to finish off what is left of China. Even when they can retaliate, it doesn't make much sense; a surrender makes more sense.
In short, I'm not saying that MAD isn't a thing at all. I'm saying that MAD is not about nukes, but about nuke delivery capability. Even then it is a weak principle; it only works well if the first wave of strikes was not enough to convince the target country that it should surrender immediately. If one side is committed to risking its own destruction by inviting your retaliation, then it doesn't make sense to also commit to your own people's destruction.
Countries like India and Pakistan are better candidates for MAD, because they don't have huge disparities in delivery capability. But if the US decided to nuke just about any country except Russia, it is a viable and practical way of not only achieving victory but doing so while minimizing body count (again, I don't advocate for this; I'm just saying the numbers work out that way). If China decided to nuke its way into any country that's not in NATO, possibly including Russia, it might be a practical option because of its proximity to Russia.
Delivery capabilities, and post-war objectives are what make or break MAD in my opinion.
My solution is for every country to pursue nuclear capability, not to use it but to increase the cost of war. If North Korea and Pakistan can have nukes, why can't others? Not just nukes either, but nuclear capability in general; it would solve lots of climate- and energy-related problems. Ukraine would not have had four years of war if it hadn't given up its nukes. Even with nukes, Ukraine couldn't have wiped out Russia, so MAD wouldn't have worked for Ukraine. But it could have retaliated by hitting major Russian cities; Russia would not be destroyed, but the cost of invasion would be too high.
Given the current state of geopolitics, I'm betting many countries regret their stance on non-proliferation decades ago. If even the US is bullying countries, kidnapping heads of state, and (about to be) invading disagreeable regimes, then Iran and NK were right, from their own perspective, to pursue nuclear capability. Nuclear capability makes it very hard to use military force to achieve geopolitical objectives, leaving diplomacy and economic means.
So TL;DR: I'm not sure the AI is wrong at a macro level. Nukes will result in fewer civilian deaths in many situations, but you're also explicitly targeting and murdering large numbers of innocent civilians. Strategically correct does not mean morally acceptable. LLMs don't get morality; you have to define morality and moral constraints in your prompts.
They're tools; you don't ascribe trust to them. You trust or distrust the user of the tool. It's like saying you trust your terminal emulator. And in my experience, they ask for permission over a directory before running. I would love to know how this is happening to people. If you tell it it can make changes to a directory, you've given it every right to destroy anything in that directory. I haven't heard of people claiming it exceeded those boundaries and started messing with things it wasn't permitted to touch in the first place.
OK, but we learned decades ago about putting safety guards on dangerous machinery, as part of the machinery. Sure, you can run LLMs in a sandbox, but that's a separate step, rather than part of the machinery.
What we need is for the LLM to do the sandboxing... if we could trust it to always do it.
Again, the trust is for the human. It's auto-complete; it hallucinates and commits errors. That's the nature of the tool, and it's for the tool's users to put appropriate safeguards around it. Fire burns you, but if you contain it, it can do amazing things. The fire isn't being untrustworthy for failing to contain itself when it burns your clothes as you expose your arm to it. You're expecting a dumb tool to be smart and know better. I suspect that's because of the "AI" marketing term and the whole supposition that it is some sort of pseudo-intelligence. It's just auto-complete. When you have it run code in an environment, it could auto-complete 'rm -rf /'.
> Fire burns you, but if you contain it, it can do amazing things. It isn't the fire being untrustworthy for failing to contain itself and start burning your cloth when you expose your arm to it.
True. But I expect my furnace to be trustworthy to not burn my house down. I expect my circular saw to come with a blade guard. I expect my chainsaw to come with an auto-stop.
But you are correct that in the AI area, that's not the kind of tool we have today. We have dangerous tools, non-OSHA-approved tools, tools that will hurt you if you aren't very careful with them. There's been all this development in making AI more powerful, and not nearly enough in ergonomics (for want of a better word).
We need tools that actually work the way the users expect. We don't have that. (And, as you say, marketing is a big part of the problem. People might expect closer to what the tool actually does, if marketing didn't try so hard to present it as something it is not.)
I think I'm in agreement with you. But regardless of expectations, the tool works a certain way. It's just a map of its training data, which is deeply flawed but immensely useful at the same time.
Also, in that analogy the LLM is the fire, not the furnace. If you use Codex, for example, that would be the furnace, and it does have good guardrails; no one seems to be complaining about those.
You didn't use git with a remote repo? Or did it somehow delete the repos? Or perhaps you didn't commit and check out a feature branch before it ran?
If only a time-traveling robot and his human companions would pay a visit to the decision makers at Claude (aka Cyberdyne? :)).
What are they using it for, though? Target selection for precision strikes? I'm guessing their argument will be that fewer lives will be lost if Claude helped make the attacks surgically precise?
> Cold War computers were primarily driven by military necessity, focusing on nuclear weapon simulation, ballistic missile trajectory calculation, and cryptography to support Mutually Assured Destruction (MAD). Key uses included modeling hydrogen bomb design using Monte Carlo methods (e.g., on MANIAC), air defense systems like the Navy's NTDS, and early AI for strategic planning.