
The dot-com bubble. The reboot of tech after it (pre-2008), at the dawn of podcasting, Web 2.0, the "open web".

70-hour weeks weren't unheard of. Why? Because the money was stupid and you had skin in the game.

Lots of people got wealthy, very wealthy. Fuck you money wealthy.

I know a lot of people who did that and then kept working. The large majority of them in fact.

If you're here and you're looking at one of these jobs, this is the critical question you need to ask when negotiating: "Can I see a cap table?" If they say anything other than yes, then your response is "Without a cap table the value of the equity being offered is ZERO, so I'm going to need a lot more cash."
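To make that concrete, here's the math a cap table answers, as a toy sketch (all numbers hypothetical):

    # Equity value depends on the fully diluted share count,
    # which only the cap table can tell you.
    def equity_value(your_shares, fully_diluted_shares, company_valuation):
        ownership = your_shares / fully_diluted_shares
        return ownership * company_valuation

    # 10,000 options against 10M shares of a $50M company: $50,000.
    print(equity_value(10_000, 10_000_000, 50_000_000))
    # The same 10,000 options against 100M shares: $5,000.
    print(equity_value(10_000, 100_000_000, 50_000_000))

No cap table means no denominator, and no denominator means the only defensible valuation of the grant is zero.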


> My pet peeve with AI is that it just accelerates whatever has already been automated or can be automated easily ....

> I’m just a frustrated old man I guess.

I think this is a great summary of the failure of vision that a lot of tech people are having right now.

> automate anything that already has an endpoint or whatever

Facebook used to have APIs, Reddit used to have APIs, Amazon used to have APIs.

They are gone.

Enshittification and dark patterns have taken over.

"Hey open claw, cancel service xxx" where XXX is something that is 17 steps and purposely hard to cancel so they keep your money.

What's going to happen when your AI tool can go to a website, strip the ads off, and return you just the text? What happens when it can build a customized news feed that looks less like Facebook and more like HN? Aren't we just gaining back function we lost with the death of RSS?
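A throwaway sketch of that idea, stdlib Python only (a real agent would use a proper readability pipeline, but the point stands):

    # Fetch a page and keep only the human-readable text, dropping
    # scripts, styles, navigation, and other chrome.
    from html.parser import HTMLParser
    from urllib.request import urlopen

    SKIP = {"script", "style", "nav", "aside", "footer", "iframe"}

    class TextOnly(HTMLParser):
        def __init__(self):
            super().__init__()
            self.skipping = 0   # depth inside skipped elements
            self.chunks = []

        def handle_starttag(self, tag, attrs):
            if tag in SKIP:
                self.skipping += 1

        def handle_endtag(self, tag):
            if tag in SKIP and self.skipping:
                self.skipping -= 1

        def handle_data(self, data):
            if not self.skipping and data.strip():
                self.chunks.append(data.strip())

    html = urlopen("https://example.com").read().decode("utf-8", "replace")
    parser = TextOnly()
    parser.feed(html)
    print("\n".join(parser.chunks))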

Consumers are mad about the hype of AI, but the moment it can cut through the bullshit we keep putting in their way it's going to wreck business MODELS, and the choice will be adapt or die. Start asking your "AI" tools to do all the basic, tedious bullshit tasks that are low risk (you have a ton of them) and if it gets 1/4 of them done you're going to free up a ton of your own time.


Dead wrong.

Because the world is still filled with problems that would once have been on the wrong side of the "is it worth the time" matrix (https://xkcd.com/1205/).

There are all sorts of things that I, personally, should have automated long ago that I threw at Claude to do for me. What was the cost to me? A prompt and a code review.
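The matrix is just arithmetic, if you want to plug in your own numbers:

    # xkcd 1205: over five years, how much time can you spend
    # automating a task before you're losing time overall?
    def automation_budget_hours(times_per_day, minutes_saved, years=5):
        return times_per_day * minutes_saved * 365 * years / 60

    print(automation_budget_hours(5, 2))        # ~304 hours for a 2-min, 5x/day chore
    print(automation_budget_hours(1 / 30, 10))  # ~10 hours for a monthly 10-min task

A prompt and a code review now fit under almost any cell of that chart.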

Meanwhile, on larger tasks an LLM deeply integrated into my IDE has been a boon. Having an internal debate on how to solve a problem? Try both, write a test, prove out which is going to be better. Pair program function by function with your LLM; treat it like a junior dev who can type faster than you if you give it clear instructions. I think you will be shocked at how quickly you can massively scale up your productivity.


Yup, I've already revived like 6 of my personal projects, including one for my wife, that I had lost interest in. For a few dollars, these are now actually running and being used by my family. These tools are a great enabler for people like me. lol

I used to complain when my friends and family gave me ideas for something they wanted or needed help with because I was just too tired to do it after a day's work. Now I can sit next to them and we can pair program an entire idea in an evening.


The matrix framing is a very nice way to put it. This morning I asked my assistant to code up a nice debugger for a particular flow in my application. It’s much better than I would have had time/patience to build myself for a nice-to-have.

I sort of have a different view of that time matrix. If AI is only able to help me do tasks that are of low value, where I previously wouldn’t have bothered, is it really saving me anything? Before, I’d simply ignore auxiliary tasks and focus on what matters; now I’m constantly detoured by them, thinking “it’ll only take ten minutes.”

I also primarily write Elixir, and I have found most Agents are only capable of writing small pieces well. More complicated asks tend to produce unnecessarily complicated solutions, ones that may “work” on the surface but don’t hold up in practice. I’ve seen a large increase in small bugs with more AI coding assistance.

When I write code, I want to write it and forget about it. As a result, I’ve written a LOT of code which has gone on to work for years without touching it. The amount of time I spent writing it is inconsequential in every sense. I personally have not found AI capable of producing code like that (yet, as all things, that could change).

Does AI help with some stuff? Sure. I always forget common patterns in Terraform because I don’t often have to use it. Writing some initial resources and asking it to “make it normal” is helpful. That does save time. Asking it to write a GenServer correctly is an act of self-harm, because it fundamentally does not understand concurrency in Erlang/BEAM/OTP. It very much looks like it does, but it 100% does not.

tl;dr: I think the ease of use of AI can cause us to overproduce, and as a result we miss the forest for the trees.


> are only capable of writing small pieces well.

It excels at this, and if you have it deeply integrated into your workflow and IDE/dev env, the loop should feel more like pair programming, like tennis, than like it's doing everything for you.

> I also primarily write Elixir,

I would also venture that it has less to do with the language (it is a factor) and more to do with what you are working on. Domain will matter in terms of sample size (code) and understanding (language to support). There could be 1000s of examples in its training data of what you want, but if no one wrote a comment that accurately describes what that does...

> I think the ease of use of AI can cause us to overproduce, and as a result we miss the forest for the trees.

This is spot on. I stopped thinking of it as "AI" and started thinking of it as "power tools". Useful, and like a power tool you should be cautious because there is danger there... It isn't smart, it's not doing anything that isn't in its training data, but there is a lot there, everything, and it can do some basic synthesis.


If you want to build a house you still need plans. Would you rather cut boards by hand or have a power saw? Would you rather pound nails, pilot holes with a bit and brace, and put in flathead screws... or would you want a nail gun and an impact driver?

And you still need plans.

Can you write a plan for a sturdy house, verify that the build meets the plan, that your nails went all the way in and in the right places?

You sure can.

Your product person, your directors, your clients might be able to do the same thing. It might look like a house, but it's a fire hazard, or in the case of most LLM-generated code, a security one.

The problem is that we moved to scrum and agile, where your requirements are pantomime and Post-it notes if you're lucky, interpretive dance if you aren't. Your job is figuring out how to turn that into something... and a big part of what YOU as an engineer do is tell other people "no, that's dumb" without hurting their feelings.

IF AI coding is going to be successful then some things need to change: Requirements need to make a comeback. GOOD UI needs to make a comeback (your dark pattern around cancellation is now going to be at odds with an agent). Your hide-the-content-behind-a-login-or-paywall trick won't work anymore because, again, end users have access too... the open web is back, and by force. If a person can get in, we have code that can get in now.

There is a LOT of work that needs to get done, more than ever. Stop looking back and start looking forward, because once you get past the hate and the hype there is a ton of potential to right some of the ills of the last 20 years of tech.


> Typing 'Find me reservations at X restaurant' and getting unformatted text back is way worse than just going to OpenTable and seeing a UI that has been honed for decades.

You're conflating the example with the opportunity:

"Cancel Service XXX" where the service is riddled with dark patterns. Giving every one an "assistant" that can do this is a game changer. This is why a lot of people who aren't that deep in tech think open claw is interesting.

> We all learned the lesson that mass-market IT tools almost always outperform in-house

Do they? Because I know a lot of people who have (as an example) terrible setups with Salesforce that they have to use.


Admiral Grace Hopper is famous for using a length of wire to explain to others what a nanosecond was.

https://www.pbs.org/newshour/world/pentagon-embraces-musks-g...

Data centers in space make absolute sense when you want analysis of all sorts of information as close to real time as you can get. Would you rather have it make the round trip, via satellite, to the States? Or are you going to build these things on the ground near a battlefield?
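Back of the envelope, with assumed numbers (a ~550 km LEO orbit, ~10,000 km of fiber to a stateside data center, signals in glass at roughly 2/3 c):

    C = 299_792  # km/s, speed of light in vacuum

    leo_rtt = 2 * 550 / C * 1000                  # ms, up to an overhead sat and back
    fiber_rtt = 2 * 10_000 / (C * 2 / 3) * 1000   # ms, there and back in fiber

    print(f"LEO round trip:   {leo_rtt:.1f} ms")    # ~3.7 ms
    print(f"Fiber round trip: {fiber_rtt:.1f} ms")  # ~100 ms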

Musk is selling a vision for a MASSIVE government contract to provide a service that no one else could hope to achieve. This is one of those projects where he can run up the budget and operating costs like Boeing, Northrop, etc., because it has massive military applications.


> Consumer can eat all the GPUs they have and more if we stop trying to force B2B

You should really crunch the numbers on buying and then running enough compute to run a leading-edge model. The economics of buying it (never mind running it) just don't add up.
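Here's the shape of that math, with every number loudly assumed:

    # What does one 8-GPU node cost per hour just to own and power?
    node_price = 300_000   # USD, assumed for an 8x H100-class server
    power_kw = 10          # kW under load, assumed for the whole node
    kwh_price = 0.12       # USD/kWh, assumed
    years = 3              # amortization window

    hours = years * 365 * 24
    per_hour = node_price / hours + power_kw * kwh_price
    print(f"${per_hour:.2f}/hour, 24/7")  # ~$12.62/hour

...and that's before cooling, networking, staff, or the model itself, and a leading-edge model wants many such nodes.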

You still haven't factored in "training", the major problem right now that everyone remains head-in-sand about.

I don't need a model that knows who Tom Cruise is or how to write SQL if I am asking it to "set up my Amazon refund" or "cancel XYZ service". The moment someone figures out how to build targeted and small, it will take off.

And as for training: well, having to make an ongoing investment in re-training is what killed expert systems; it's what killed all past AI efforts. Just because it's much more "automated" doesn't mean it isn't the same "problem". Until a model learns (and can become a useful digital twin), the consumer market is going to remain "out of reach".

That doesn't mean we don't have an amazing tool at hand, because we do. But the way it's being sold is only going to lead to confusion and disappointment.


Consumer, as in B2C, not consumers buying directly. B2C companies will happily buy (or rent from people who are buying today) GPUs, because a huge part of the game is managing margins to a degree B2B typically doesn't need to concern itself with.

> I don't need a model that knows who Tom Cruise is or how to write SQL if I am asking it to "set up my Amazon refund" or "cancel XYZ service". The moment someone figures out how to build targeted and small, it will take off.

I think people got a lot of ideas when dense models were in vogue that don't hold up today. Kimi K2.5 may be a "1T parameter model" but it only has 32B active parameters and still easily trounces any prior dense model, including Llama 405B...

Small models need to make sense in terms of actual UX since beating these higher sparsity MoEs on raw efficiency is harder than people realize.
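Roughly why, using the 2-FLOPs-per-parameter-per-token rule of thumb and the figures above:

    moe_active = 32e9   # active params per token, the sparse "1T" model
    dense = 405e9       # params, the dense model

    # Per-token inference compute scales with active parameters.
    print(f"{dense / moe_active:.1f}x")  # ~12.7x less math per token for the MoE

That's the efficiency bar a "targeted and small" model has to clear.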


> We used tech without letting it own us.

This was, and is, a personal choice.

You can call me a Judas goat if you want for spending my whole life building the things I would not use, but that is the nature of the game.


For some context:

Nvidia: GAAP and non-GAAP gross margins are expected to be 74.8% and 75.0%, respectively.

Micron: gross margin as of January 28, 2026 is 56.04%.


"Abominable Intelligence"

I can't wait till the church starts tithing us mere flesh bags for forgiveness in the face of Roko's Basilisk.

