Hacker News | nemo1618's comments

IMO this is one of the best use cases for AI today. Each function is like a separate mini problem with an explicit, easy-to-verify solution, and the goal is (essentially) to output text that resembles what humans write -- specifically, C code, which the models have obviously seen a lot of. And no one is harmed by this use of AI; no one's job is being taken. It's just automating an enormous amount of grunt work that was previously impossible to automate.

I'm part of the effort to decompile Super Smash Bros. Melee, and a fellow contributor recently wrote about how we're doing agent-based decompilation: https://stephenjayakar.com/posts/magic-decomp/
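The "easy to verify" property is the whole trick: a candidate C function either recompiles to the exact bytes of the target function or it doesn't, and the first mismatch offset is a concrete error signal an agent can iterate on. A minimal, illustrative sketch of that verify step (function names are mine; the compiler is passed in as a stub, where a real setup would wrap an invocation of the original toolchain):

```python
# Sketch of the "verify" step in matching decompilation. A candidate C
# function counts as solved when, compiled with the original toolchain,
# it reproduces the target function's bytes exactly. The compiler is
# passed in as a function so the loop itself stays toolchain-agnostic.

def first_mismatch(candidate: bytes, target: bytes):
    """Offset of the first differing byte, or None on an exact match."""
    for i, (a, b) in enumerate(zip(candidate, target)):
        if a != b:
            return i
    if len(candidate) != len(target):
        return min(len(candidate), len(target))
    return None

def attempt_match(compile_fn, source: str, target: bytes):
    """compile_fn: C source -> machine-code bytes (a wrapped compiler call)."""
    code = compile_fn(source)
    off = first_mismatch(code, target)
    if off is None:
        return True, "exact match"
    return False, f"first mismatch at byte offset {off}"
```

An agent loop then just feeds the mismatch offset (plus, typically, an assembly diff around it) back into the prompt and retries.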


And the renaming of all the variables from the auto-gen ones into something human readable was always a thankless task which LLMs are really good for.

> And no one is harmed by this use of AI; no one's job is being taken

what about: see cool app, decompile it, launch competing app.

(repeat)


Decompiling seems like the hard way to go here. Lots of clones pop up for popular games and apps all the time. I don't think you need to go down the decompile route to achieve that.

"The steamroller is still many inches away. I'll make a plan once it actually starts crushing my toes."

You are in danger. Unless you estimate the odds of a breakthrough at <5%, or you already have enough money to retire, or you expect that AI will usher in enough prosperity that your job will be irrelevant, it is straight-up irresponsible to forgo making a contingency plan.


What contingency plan is there exactly? At best you're just going from an automated-already job to a soon-to-be-automated job. Yay?

I'm baffled that so many people think that only developers are going to be hit and that we especially deserve it. If AI gets so good that you don't need people to understand code anymore, I don't know why you'd need a project manager anymore either, or a CFO, or a graphic designer, etc etc. Even the people that seem to think they're irreplaceable because they have some soft power probably aren't. Like, do VC funds really need humans making decisions in that context..?

Anyway, the practical reason why I'm not screaming in terror right now is because I think the hype machine is entirely off the rails and these things can't be trusted with real jobs. And honestly, I'm starting to wonder how much of tech and social media is just being spammed by bots and sock puppets at this point, because otherwise I don't understand why people are so excited about this hypothetical future. Yay, bots are going to do your job for you while a small handful of business owners profit. And I guess you can use moltbot to manage your not-particularly-busy life of unemployment. Well, until you stop being able to afford the frontier models anyway, which is probably going to dash your dream of vibe coding a startup. Maybe there's a handful of winners, until there's not, because nobody can afford to buy services on a wage of zero dollars. And anyone claiming that the abundance will go to everyone needs to get their head checked.


My contingency plan is that if AI leaves me unable to get a job, we are all fucked and society as a whole will have to fix the situation and if it doesn’t, there is nothing I could have done about it anyway.

As a fellow chad I concur. Though I am improving my poker skills - games of chance will still be around

You likely already know, but the "Pluribus" poker bot was beating humans back in 2019. Games of chance will be around if people are around, but you'll have to be careful to ensure you're playing against people, unassisted people.

https://en.wikipedia.org/wiki/Pluribus_(poker_bot)


Yeah, thanks, I only play live games. I'm in australia so online poker is illegal here. I was thinking of getting a vpn and having a play online, then I saw this recently https://www.reddit.com/r/Damnthatsinteresting/comments/1qi69...

So many of these degenerate online gambling / "investment" platforms are illegal here for good reason. If you are just a normal person playing fairly, you are being scammed. Same for things like Polymarket; the only winners are the people with insider knowledge.

Even horse racing, it's a solved problem, and if you start winning they'll just cancel your a/c (happened to a friend of mine)

this has been me ever since my philosophy undergrad.

This is a sensible plan, given your username.

Yeah, seriously. Don't people understand that society is not good at mopping up messes like this? There has been a K-shaped economy for several decades now and most Americans have something like $400 in their bank accounts. The bottom has already fallen out for them, and help still hasn't arrived. I think it's more likely that what really happens is that white collar workers, especially the ones on the margin, join this pool, and there is a lot of suffering for a long time.

Personally, rather than devolving into nihilism, I'd try to hedge against that fate. Now is the time to invest and save money (or yesterday).


If white collar workers as a whole suffer severe economic setback over a short term timespan, your savings and investments won’t help you.

Unless you’re investing in guns, ammo, food, and a bunker. We’re talking worse unemployment than depression-era Germany. And structurally more significant unemployment, because the people losing their jobs were formerly very high earners.


That’s the cataclysmic outcome, though. Although I’d deem that certainly possible, and would put a double-digit percentage probability on it, another very likely outcome is a severe recession where a lot of, but not all, white collar work is wiped out, with maybe a significant restructuring of the economy. In a scenario like that, which also seems to be in the realm of possibility, I think having resources still matters.

It’s definitely possible that there’s an impact that is bad but not cataclysmic. I figure in that case though my regular savings is enough to switch to something else. I could retire now if I was willing to move somewhere cheap and live on $60k a year. There’s a lot of things that could cause that level of recession though without the need for AI.

I do also think the mid-level bad outcome isn’t super likely, because if AI is good enough to replace a lot of white collar jobs, I think it could replace almost all of them.


> You are in danger. Unless you estimate the odds of a breakthrough at <5%

It's not the odds of the breakthrough, but the timeline. A factory worker could have correctly seen that one day automation would replace him, and yet worked his entire career in that role.

There have been a ton of predictions about software engineers, radiologists, and some other roles getting replaced in months. Those predictions have clearly been not so great.

At this point the greater risk to my career seems to be the economy tanking, as that seems to be happening and ongoing. Unfortunately, switching careers can't save you from that.


We are the French artisans being replaced by English factories. OpenAI and its employees are the factory.

Checking the scoreboard a bit later on: the French economy is currently about the same size as the UK's.

That has little to do with what I wrote, and isn't addressing the central issue.

I'm not worried about the danger of losing my job to an AI capable of performing it. I'm worried about the danger of losing my job because an executive wanted to be able to claim that AI has enhanced productivity to such a degree that they were able to eliminate redundancies with no regard for whether there was any truth to that statement or not.

> Unless you estimate the odds of a breakthrough at <5%

I do. Show me any evidence that it is imminent.

> or you expect that AI will usher in enough prosperity that your job will be irrelevant

Not in my lifetime.

> it is straight-up irresponsible to forgo making a contingency plan.

No, I'm actually measuring the risk, you're acting as if the sky is falling. What's your contingency plan? Buy a subscription to the revolution?


> What's your contingency plan? Buy a subscription to the revolution?

I’ve been working on my contingency plan for a year-and-a-half now. I won’t get into what it is (nothing earth shattering) but if you haven’t been preparing, I think you’re either not paying enough attention or you’re seriously misreading where this is all going.


This ^. Been a SWE for 20 years and the market is the worst I have seen it: many good devs have been looking for 1-2 years and not even getting a response, whereas 3-4 years ago they would have had multiple offers. I'm still working and am secure in terms of money, so I will be ok not working (financially at least). But I expect a tsunami of layoffs this year and next, and then you are competing with 1000x other devs and Indians who will work for 30% of your salary.

That's called an economic crisis; it has nothing to do with AI. My friends also have trouble finding 100% manual jobs which were easily available 2 years ago.

Yes, I said the word that none of these companies want to say in their press conferences.


That's because there are more tech/service workers competing for the manual jobs now.

Tech workers aren't numerous enough to have that effect.

Besides that, why aren't we seeing any metrics change on Github? With a supposed increase in productivity so large that a good chunk of the workforce is fired, we would see it somewhere.


A lot of non-AI things have happened though.

So AI is going to steamroll all feasible jobs, all at once, with no alternatives developing over time? That's just a fantasy.

It'd probably be a cold day in Hell before AI replaces veterinary services, for example. Perhaps for mild conditions, but I cannot imagine an AI robot trying to restrain an animal.

All these so-called safe jobs still depend on someone being able to afford those services. If I don't have a job, I can't go see the vet; the fact that no one else can do the vet's job is irrelevant at that point.

I would like to know if there's some kind of inflection point, like the so-called Laffer curve for taxes, where once an economy has X% unemployment, it effectively collapses. I'd imagine it goes: recession -> depression -> systemic crisis and appears to be somewhere between 30-40% unemployment based on history.


Every job deemed "safe" will be flooded by desperate applicants from unsafe jobs.

> it is straight-up irresponsible to forgo making a contingency plan.

What contingencies can you really make?

Start training a physical trade, maybe.

If this is the end of SWE jobs, you better ride the wave. Odds are your estimate of when AI takes over is off by half a career, anyway.


Working in the trades won’t help you at 40-50% unemployment. Who’s going to pay for your services? And even the meager work that remains would be fought over by the hundred million unemployed who are all suddenly fighting tooth and nail for any work they can get.

Isn’t it a bit silly to say AI is going to eat the entire economy, but you have a contingency plan?

It seems kind of like saying “I’m smarter than all the AIs in this one particular way.” If someone posted that, you would probably jump in to say they’re fooling themselves.


Unless I misunderstand your metaphor, there is nothing you can do about the steamroller, it is going to roll, no matter what.

I think it's a combination of a) reflexive dislike of any hyped-up tech, mainly due to the crypto era, and b) subconscious ego protection ("this can't be legit, otherwise everything I've built my identity around will be thrown into question").

The best models already produce better code than a significant fraction of human programmers, while also being orders of magnitude faster and cheaper. And the trendlines are stark. Sure, maybe AI can't replace you today. Maybe it will hit that "wall" people are always forecasting, just before it gets good enough to threaten your job. But that's a rather uncomfortable proposition to bet a career on.


> humans still need to have their hands firmly on the wheel if they won’t want to risk their businesses well being

What happens when businesses run by AIs outperform businesses run by humans?


The humans will still own the business (unless you are proposing some alternative version of AI ownership), so in effect there will always be a human who is concerned about their business’s well-being.

I doubt that we would get into a world where a company would be allowed to run without human involvement (AI directors and AI management) as you will have nobody to hold accountable.


Well, wasn't this what all these blockchain DAO entities were supposed to be for? :D

Yes, I was just about to bring this up as well. One could argue that they were simply too early. It will be interesting to watch things like ERC-8004.

Let's be real. The sky is blue because God thought it was a pretty color, simple as. All this stuff about wavelengths and resonant frequencies and human color perception got retconned into the physics engine at some point in the past millennium, that's why all these epicycles are needed.

Our lord Zeus always thinks of everything

His noodly appendage touches all.

> thought it was a pretty color

So was blue intrinsically pretty and thus made into the sky, or considered pretty and thus imprinted in the minds of humans that way?


Centaurs are a transient phenomenon. In chess, the era of centaur supremacy lasted only about a decade before computers alone eclipsed human+computer. The same will be true in every other discipline.

You can surf the wave, but sooner or later, the wave will come crashing down.


They are transient only in those rare domains that can be fully formalized/specified. Like chess. Anything that depends on the messy world of human - world interactions will require humans in the loop for translation and verification purposes.


>Anything that depends on the messy world of human - world interactions will require humans in the loop for translation and verification purposes.

I really don't see why that would necessarily be true. Any task that can be done by a human with a keyboard and a telephone is at risk of being done by an AI - and that includes the task of "translation and verification".


Sure, but at the risk of running into completely unforeseen and potentially catastrophic misunderstandings. We humans are wired to use human language to interact with other humans, who share our human experience, which AIs can only imperfectly model.


I have to say I don't feel this huge shared experience with many service industry workers. Especially over the phone. We barely speak the same language!


> Any task that can be done by a human with a keyboard and a telephone

The power doesn’t stay on solely from people with keyboards and phones.


From a human, to a centaur, to a pegasus, as it were.


Sure, but in pure mathematics there are a lot of well-specified problems which no one can solve.


Mathematics is indeed one of those rare fields where intimate knowledge of human nature is not paramount. But even there, I don't expect LLMs to replace top-level researchers. The same evolutionary "baggage" which makes simulating and automating humans away impossible is also what enables (some of) us to have the deep insight into the most abstract regions of maths. In the end it all relies on the same skills developed through millions of years of tuning into the subtleties of 3D geometry, physics, psychology and so on.


How is chess not fully specified?


They said chess was an example of something that is fully specified.


I'm guessing that they were referring to the depth of the decision tree able to be computed in a given amount of time?

In essence, it used to be (I have not stayed current) that the "AI" was limited in how many moves into the future it could search to determine which move was most optimal.

That limit means that it is impossible to determine all the possible moves and which is guaranteed to lead to a win. The "best" that can be done is to have a machine learning algorithm choose the most likely set of moves that a human would take from the current state, and which of that set would most likely lead to a win.
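The depth limit described above is the classic minimax horizon: search a fixed number of plies ahead, then score the frontier positions with a heuristic instead of searching to a decided game. A toy sketch of the idea (abstract game passed in as functions, not a chess library; real engines add alpha-beta pruning and much more):

```python
# Depth-limited minimax: explore `depth` plies, then fall back to a
# heuristic evaluation. moves_fn lists legal moves, apply_fn plays one,
# eval_fn scores a position from the maximizing player's perspective.

def minimax(state, depth, maximizing, moves_fn, apply_fn, eval_fn):
    moves = moves_fn(state)
    if depth == 0 or not moves:
        return eval_fn(state)  # horizon reached or terminal position
    scores = (
        minimax(apply_fn(state, m), depth - 1, not maximizing,
                moves_fn, apply_fn, eval_fn)
        for m in moves
    )
    return max(scores) if maximizing else min(scores)
```

With a nested-list game tree (leaves are scores), `minimax([[3, 5], [2, 9]], 2, True, ...)` picks the branch whose worst case is best: max(min(3, 5), min(2, 9)) = 3.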


How transient depends on the problem space. In chess, centaurs were transient. In architecture or CAD, they have been the norm for decades.


Agreed. But I don't think the time scale will be similar.

Chess is relatively simple in comparison, as complex as it is.


On the other hand, chess is not very financially rewarding. IBM put some money into it for marketing briefly, but that’s probably equal to about five minutes of spend from the current crop of LLM companies.


Last I heard, which was last year, human + computer still beat either by themselves. You got a link about what's changed?


I'm curious what you heard exactly. As far as I can tell, centaur chess looks completely dead.

Nobody ever wins anymore in the ICCF championships (which I believe is the most prestigious centaur chess venue, but am not sure).

This is not an exaggeration. See my comment from several months ago: https://news.ycombinator.com/item?id=45768948

As far as I can tell based on scanning forums, to the extent humans contribute anything to the centaur setup, it is entirely in hardware provisioning and allocating enough server time before matches for chess engines to do precomputation, rather than anything actually chess related, but I am unsure on this point.

I have heard anecdotally from non-serious players (and therefore I cannot be certain that this reflects sentiment at the highest levels, although the ICCF results seem to back this up) that the only way to lose in centaur chess at this point is to deviate from what the computer tells you to do, either intentionally or unintentionally (by accidentally submitting the wrong move), or simply to be at a compute disadvantage.

I've got several previous comments on this because this is a topic that interests me a lot, but the two most topical here are the previous one and https://news.ycombinator.com/item?id=33022581.


The last public ranking of chess centaurs was 2014, after which it is generally held to be meaningless as the ranking of a centaur is just the same as the ranking of the engine. Magnus Carlsen’s peak elo of 2884 is by far the highest any human has ever achieved. Stockfish 18 is estimated to be in excess of 4000 elo. Which is to say the difference between it and the strongest human player ever is about the same as the difference between a strong club player and a grandmaster. It’s not going to benefit meaningfully from anything a human player might bring to the partnership.
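To put that gap in concrete terms, the standard Elo model maps a rating difference to an expected score via a logistic curve; a quick sketch:

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Expected score for player A (win probability plus half the draw
    probability) under the standard Elo logistic model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
```

At a gap of roughly 1100 points, the weaker side's expected score is a fraction of a percent, which is why the human half of a modern centaur has essentially nothing to contribute over the engine alone.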

Magnus himself in 2015 said we’ve known for a long time that engines are much stronger than humans so the engine is not an opponent.

https://stockfishchess.org/blog/2026/stockfish-18/

https://www.dw.com/en/world-chess-champion-magnus-carlsen-th...


You're the one claiming "Last I heard" so you're the one who owes a link.


Why do you nitpick his illustrative example and entirely ignore his substantive one about finance?


I'm highly worried that you are right. But what gives me hope is that people still play chess, I'd argue more than ever. People still buy paper books and vinyl records. People still appreciate handwritten greeting cards over printed ones, and pay extra to listen to live music when recordings are free and will likely sound much better. People are willing to pay an order of magnitude more for a seat in a theater at a live play, or pay a premium for handmade products over their almost-impossible-to-distinguish knockoffs.


Just wait. In a few years we'll have computer-use agents that are good enough that people will stop making APIs. Why bother duplicating that effort, when people can just direct their agent to click around inside the app? Trillions of matmuls to accomplish the same result as one HTTP request.


This strikes me as a very agent-friendly problem. Given a harness that enforces sufficiently-rigorous tests, I'm sure you could spin up an agent loop that methodically churns through these functions one by one, finishing in a few days.


hallucinations in a libc implementation would be especially bad


Have you ever used an LLM with Zig? It will generate syntactically invalid code. Zig breaks so often and LLMs have such an eternally old knowledge cutoff that they only know old ass broken versions.

The same goes for TLA+ and all the other obscure things people think would be great to use with LLMs, and they would, if there was as much training data as there was for JavaScript and Python.


i find claude does quite well with zig. this project is like > 95% claude, and it's an incredibly complicated codebase [0] (which is why i am not doing it by hand):

https://github.com/ityonemo/clr

[0] generates a dynamically loaded library which does sketchy shit to access the binary representation of datastructures in the zig compiler, and then transpiles the IR to zig code which has to be rerun to do the analysis.


To be fair, this was true of early public LLMs with rust code too. As more public zig repositories (and blogs / docs / videos) come online, they will improve. I agree it's a mess currently.


You must have not tried this with an LLM agent in the past few months.


i tested sonnet 4.5 just last week on a zig codebase and it has to be instructed in the std.ArrayList syntax every time.


I made a Zig agent skill yesterday if interested: https://github.com/rudedogg/zig-skills/

Claude getting the ArrayList API wrong every time was a major reason why

It’s AI generated but should help. I need to test and review it more (noticed it mentions async which isn’t in 0.15.x :| )


The linked blog post about making this is an excellent read.


Thanks! I think I spent as much time writing the post as I did making the skill, so I’m happy someone got some value out of it.


Fighting fire with fire


A little bit! I wrote a long blog post about how I made it. I think the strategy of having an LLM look at individual std modules one by one makes it actually pretty accurate. Not perfect, but better than I expected.


Try it again. This time do something different with CLAUDE.md. By the way it's happy to edit its own CLAUDE.md files (don't have an agent edit another agent's CLAUDE.md files though [0])

0: https://news.ycombinator.com/item?id=46723384


Are you using an agent? It can quickly notice the issue and fix it. Obviously if it's trained on an older version it won't know the new APIs.


For those curious about division, I wrote a popular uint128 package in Go that uses one of the standard approaches: https://github.com/lukechampine/uint128/blob/3d2701e8e909405...
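For the curious, one standard schoolbook scheme for wide division (not necessarily the exact one in the linked package) is limb-by-limb long division; sketched here in Python for clarity, with 32-bit limbs so every partial quotient fits in a machine word:

```python
def divmod_u128_u64(hi: int, lo: int, d: int):
    """Divide the 128-bit value (hi << 64 | lo) by a 64-bit divisor d,
    via schoolbook long division over 32-bit limbs. Each partial quotient
    fits in 32 bits because the running remainder r stays below d, so
    (r << 32 | limb) // d < 2**32."""
    assert 0 < d < 1 << 64
    q, r = 0, 0
    for limb in (hi >> 32, hi & 0xFFFFFFFF, lo >> 32, lo & 0xFFFFFFFF):
        cur = (r << 32) | limb
        q = (q << 32) | (cur // d)
        r = cur % d
    return q, r
```

In Go you'd lean on hardware 64-bit division (math/bits.Div64) instead of 32-bit limbs, but the shape of the algorithm is the same.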


Here's one attempt: https://x.com/sigilante/status/2013743578950873105

My take: Any gains from an "LLM-oriented language" will be swamped by the massive training set advantage held by existing mainstream languages. In order to compete, you would need to very rapidly build up a massive corpus of code examples in your new language, and the only way to do that is with... LLMs. Maybe it's feasible, but I suspect that it simply won't be worth the effort; existing languages are already good enough for LLMs to recursively self-improve.


If only we had some programs that were good at generating vast amounts of code examples.

