Hacker News | nrclark's comments

Not the grandparent, but I've used most of the OpenAI models that have been released in the last year. Out of all of them, o3 was the best at the programming tasks I do. I liked it a lot more than I like GPT 5.2 Thinking/Pro. Overall, I'm not at all convinced that models are making forward progress in general.

I get your point about camera vs lidar. Humans do have other senses in play while driving though. We have touch/vibration (feeling the road surface texture), hearing, proprioception / acceleration sense, etc. These are all involved for me when I drive a car.


Kind of reads like 25 people's stories about "How I became a parasite". Why not create new things, instead of making a career out of leeching the wealth created by others?


This isn’t really a good angle to critique finance IMO, because it is indeed a necessary part of the modern economy.

A better angle is how finance tends to acquire a ton of smart young people that could/would otherwise be doing work that has more benefits to society. It’s hard to blame the individual here, because the salaries are orders of magnitude larger in finance vs. say, aerospace engineering. Would I turn down $700k at a hedge fund to earn $90k at a science lab? Probably not, unless I was already independently wealthy.


"it's all just numbers really. Just changing what you're adding up. And, to speak freely, the money here is considerably more attractive." - Peter Sullivan in the movie Margin Call


I ended up rewatching that movie more than ten times a few months ago after I got stuck with a capped internet connection and not much to do online. It's one of those films where there isn't a single fucking scene wasted: everything plays out a little over a day, and the character dynamics and dialogue feel genuinely tight. Lots of great characters overall, but Jeremy Irons's John Tuld is just stellar in terms of presence and delivery.


It really is a perfect movie, in the sense of having precisely the right parts and nothing else.


How is that not a restatement of what OP said?


Everyone benefits with more efficient markets.

It is easy to fall into the trap of thinking HFT/low frequency quant firms "leech wealth".

You can get out of the trap by learning about what they do and the essential role they play in the proper functioning of our markets.


It's an intentionally naive position to say that places don't leech off of others. Even large places like Fidelity and Schwab that respect customers aren't just keeping people's money in vaults. They literally take your checking, savings, retirement accounts, etc. and make money off of them while they "sit".

Firms specialize in intercepting trades and then placing trades faster than 99.9% of others.

These institutions hide behind "we provide liquidity" like it's a selfless act of kindness, whereas that's just a mere side effect, and just one of many.

The entire modern financial system is layers and layers of unneeded complexity that almost solely rose out of people trying to leech money from the system. These financial institutions have built the entire system around them so that now they can say "look at how essential we are!".


> aren't just keeping people's money in vaults. They literally take your checking, savings, retirement accounts, etc. and make money off of them while they "sit".

Depending on jurisdiction and TOS, this may be legal, but it needs to be announced to the customer somehow. A capital management firm running an ETF needs to buy the included shares, for example; those have no money just "sitting around"?


Sure, it's announced and by contract. But where else do you have to put your accounts?


It can both be true they provide a necessary service and that unfair financialization exists.

We are not in disagreement.

But it is ignorance to say the system would work better without any involvement of HFTs.


the leeches are the publicly traded companies that are listed on the stock market. Any market should be happy to have more participants, unless you like price-uncertainty.


Why not practice compassion instead of critiquing others?


Did you actually read them?

I didn't go to NYC, but money is fungible, so it's a simple math problem.

How much non-parasite good can you do making $50k/year for 10 years? Even if we ignore taxes and you donated your entire salary, that tops out at $500k worth. If instead you make, say, $500k/year for 10 years, you can quit, fund your own non-profit with $2,000,000, and do 4x as much good.


I get your point, but that was the exact logic of Effective Altruism and Sam Altman is now jailed for the Largest Fraud of All Time. It's a slippery slope.


Sam Bankman-Fried. And I think he’s a bit of a special case, others do not need to be worried they’ll succumb to multi billion dollar fraud schemes if they try to earn-to-give.


I'm not defending SBF, but I think you may not be completely taking into account how strong the pressures are on someone like that. I'm pretty sure he didn't set out to commit a multi-billion dollar fraud, he was sucked into it as a consequence of the expectations on him and so forth. My point here is just that this is a symptom of a societal problem, and SBF is just a well-positioned scapegoat.


SBF was really unusual in that he claimed to be a pure expected-utility maximiser. He admitted that he would take 51% coin-flips forever on Conversations with Tyler in March 2022, long before everything blew up:

> COWEN: Then you keep on playing the game. So, what's the chance we're left with anything? Don't I just St. Petersburg paradox you into nonexistence?

> BANKMAN-FRIED: Well, not necessarily. Maybe you St. Petersburg paradox into an enormously valuable existence. That's the other option.

I'm not saying the pressures are absent, but they are hopefully vastly less compelling for any normal person with a more standard view of risk and utility. ("Sure, I'll just cover up this little bit of fraud, because that's got a better than 50% chance of success" is a course of action SBF all but said he would take, months in advance!)


If you have a billion dollars to give me, I'm pretty sure I can manage not to use them for outright crypto fraud. You'd have to give me a billion dollars to be sure, but I promise really hard.


Freudian slip?


If you're only making, on average, $500k/year after 10 years, you're not really in the game at all.


there are a lot of quants who aren't at T1 firms or in the hottest seats. plenty make less than this. obviously if you are top of class and getting headhunted by XTX, your offers will be much bigger than $500k ... but it's kind of obnoxious to claim that QRs making less than this aren't even players


The term "player" tends to imply something more than just someone employed in a field. I don't support all this insane inequality, but the other commenter is not wrong on the relative value assessment.


Feel free to do more accurate math! I don't think you'd be doing $500k right out of college either, so it was intended to be a rough average. The person I know in finance is doing well over $1,000,000/yr, but I have no idea how average that is.


Said on a website hosted by a major VC that specializes in Silicon Valley types of venture capital, bringing the next data collecting mobile app to your doorstep.


I've disputed fraud a couple of times on my Chase cards. It was... fine? Uneventful and simple.


Their problem is with false positives they find, not true positives you find. My application for a credit card was somehow flagged as fraudulent. Chase repeatedly asked for additional forms of ID, then told me the scans I sent were illegible. (The scans were fine; I think they just needed an excuse.) I went to a branch with the physical documents, and they said they couldn't look at them. The branch put me in an office and called the same telephone support, with the same result. I eventually gave up.

I guess I'm lucky they rejected me before any money changed hands. I've heard horror stories from people with significant assets at their bank, locked out until an actual lawsuit (the letter from a lawyer didn't work) finally got their attention. I think it's like Google support, usually fine but catastrophic when it's not.


> The branch put me in an office and called the same telephone support, with the same result.

As far as I can tell, going to a branch of a big bank to address a problem nowadays is similar to going to a cellphone store for tech support. All they can really do is call the same hotline or fill out the same webform you’d have access to at home.


Interesting, I wonder if this will spike VPN traffic into Vietnam.


What's the subset of users with a VPN but no ublock?


NordVPN users sold by the "anti-hacker" ads?


Yeah, probably not. A large number of posts and videos on social media are blocked in Vietnam; it's still a communist country with a very low level of free speech and press freedom, albeit still better than China.

Source: I used to live there.


Agree. Even asking it can anchor your thinking.


This was a really interesting read. I'd highly recommend it for anybody who's setting up (or currently maintains) a pre-commit workflow for their developers.

I want to add one other note: in any large organization, some developers will use tools in ways nobody can predict. This includes Git. Don't try to force any particular workflow, including mandatory or automatically-enabled hooks.

Instead, put what you want in an optional pre-push hook and also put it into an early CI/CD step for your pull request checker. You'll get the same end result but your fussiest developers will be happier.
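For instance, the opt-in setup might look something like this. It's only a sketch: the `.githooks` path and the hook's contents are illustrative, not something from the comment.

```shell
# Version the hook alongside the code; each developer opts in explicitly.
git init -q hookdemo
mkdir -p hookdemo/.githooks
cat > hookdemo/.githooks/pre-push <<'EOF'
#!/bin/sh
# Run the same checks the early CI step runs, before anything leaves the machine.
echo "running pre-push checks"
EOF
chmod +x hookdemo/.githooks/pre-push
# The one-time, voluntary opt-in (no hook is forced on anyone):
git -C hookdemo config core.hooksPath .githooks
```

Developers who never run the `git config` line simply fall through to the CI check, which is the real gate anyway.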


> This includes Git. Don't try to force any particular workflow, including mandatory or automatically-enabled hooks.

And with git, you can even make anything that happens on the dev machines mandatory.

Anything you want to be mandatory needs to go into your CI. Pre-commit and pre-push hooks are just there to lower CI churn, not to guarantee anything.

(With the exception of people accidentally pushing secrets. The CI is too late for that, and a pre-push hook is a good idea.)
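The secret check itself can be as simple as a grep over the outgoing content. The patterns below are illustrative only; real setups use dedicated scanners such as gitleaks, and a real pre-push hook would scan the range of commits being pushed rather than a single file.

```shell
# Stand-in for content about to be pushed (the key is fake):
printf 'aws_key = "AKIAABCDEFGHIJKLMNOP"\n' > staged.txt
# Flag obvious AWS-style keys and private-key headers:
if grep -qE 'AKIA[0-9A-Z]{16}|BEGIN (RSA|EC|OPENSSH) PRIVATE KEY' staged.txt; then
  echo "refusing push: possible secret"
fi
```

Even a crude check like this is worthwhile here, because by the time CI sees the secret it has already left the developer's machine.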


A good analogy is: git hooks are client-side validation; CI is server-side validation, aka the only validation you can trust.


> with git, you can even make anything that happens on the dev machines mandatory

s/can/can't?


You can in an enterprise environment where following SOPs is mandatory due to cybersecurity and infrastructure requirements.


Yes, indeed.


You can run git commit with a --no-verify flag to skip these hooks


I can second that. If there are multiple commits, https://github.com/tummychow/git-absorb is handy for adding formatting changes to the right commit after the commits have already happened.
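For anyone unfamiliar, what git-absorb automates can be reproduced by hand with plain git's fixup/autosquash machinery. A runnable sketch (the repo and file names are made up); git-absorb's value is that it figures out the fixup targets for you across many commits:

```shell
git init -q absdemo
git -C absdemo config user.email dev@example.com
git -C absdemo config user.name dev
printf 'fn main() {}\n' > absdemo/lib.rs
git -C absdemo add lib.rs
git -C absdemo commit -qm "add lib"
# A formatting fix that logically belongs in the commit above:
printf 'fn main() {}\n// formatted\n' > absdemo/lib.rs
git -C absdemo add lib.rs
git -C absdemo commit -q --fixup HEAD
# Fold the fixup into its target (non-interactively, for demo purposes):
GIT_SEQUENCE_EDITOR=: git -C absdemo rebase -i --autosquash --root
```

Afterwards the history contains a single commit with both changes, as if the formatting had been right the first time.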


It looks like git absorb rewrites history. Doesn’t that break your previously pushed branch?


That's a controversy I'm not sure you necessarily realise you've stepped into :).

It's fairly common to consider working and PR branches to be "unpublished" from a mutability point of view: if I base my work on someone else's PR, I'm going to have to rebase when they rebase. Merging to `main` publishes the commit, at which point it's immutable.

Working with JJ, its default behaviour is to consider parents of a branch that's not owned by you to be immutable.


My branch is mine. Don't tell me what I can or can't do. I push WIP stuff all the time: to share code with others for discussion, to get the build to run in parallel while I keep working, or just at the end of the day. I freely amend and will squash before merging (we only allow a single commit per branch to go to master).

If I or someone else bases something off anything but master, it's on them to rebase and keep up to date.


My philosophy is that once a PR is open, that's the point at which people should no longer feel free to treat their branch as their own. Even in groups that squash commits, the squash should still preserve the aggregate commit messages.

But until that PR is open? Totally with you. There is no obligation to "preserve history" up until that point.


Strong disagree: until the branch is merged, it's mine.

I'm in a camp that prefers single rebased commits as units of change, "stacked diffs" style.

GitHub in particular was annoying with this style but is definitely getting better. It's still not great at dealing with actual stacks of diffs, but I can (and do) work around that by keeping the stack locally and only pushing commits that apply directly to the main branch.


Not to disagree, but this is so GitHub-centric. What is up with "diffs", "patches", and "submissions"? :D


Not to disagree, but calling it Github-centric is a bit over specific :)

I regularly work with Github, Bitbucket, and Gitlab. Everything I said applies except for the fact that I said "PR" instead of "MR". But yes, you're right. I'm highlighting a specific, albeit extremely popular, workflow.


I know, I know, I was going to edit it to "Git{Hub,Lab}" in the beginning but oh well.

In any case, my comment just reflects on the fact that you had a series of patches that you could not squash or rebase. It stuck.

And the fact that I see many people use the abbreviation "PR" for something that is merely a patch or diff. For example you might send a diff to the tech@ mailing list, but you should not refer to it as a PR.


Git{Hu,La}b


GitPub


There's a weird thing happening on my current project. Sometimes I merge main into my branch and it fails. What fails is the pre-commit hook on the merge commit. Changes in main fail the linting checks in the pre-commit hook. But they still ended up in main, somehow. So the checks on the PR are apparently not as strict as the checks on the pre-commit hook. As a result, many developers have gotten used to committing with `--no-verify`, at which point, what is even the point of a pre-commit hook?

And sometimes I just want to commit work in progress so I can more easily backtrack my changes. These checks are better on pre-push, and definitely should be on the PR pipeline, otherwise they can and will be skipped.

Anyway, thanks for giving me some ammo to make this case.


For the sake of argument, let's say you have a check that caps the number of lines per file and that both you and main added lines in the same file. It's not too weird if that check fails only after merge, right?

That's one benign example of something that can break after merge even if each branch individually passes pre-merge. In less benign cases, it will be your branch merged to main with actual bugs in the code.

It's one reason not to allow "unclean merges", and to enforce that incoming branches be rebased up to date before they're mergeable to the main branch.

You probably want to run the checks on each commit to main in CI and not rely on them being consistently run by contributors.

You do you, but I find rebasing my branch on main instead of merging makes me scratch my head way less.


Your hook really shouldn't be running on the merge commit unless you have conflicts in your merge.


Never had conflicts on a merge? We've got a lot of people on the same codebase. Merge conflicts are a fact of life. And they wouldn't be a problem without the stupid commit hook. It's the commit hook that makes them a problem.


If you have conflicts then you can fix them and run your linter or formatter. If you have a no conflict merge it doesn't matter.


Thanks, but that's not the issue here.


Why not take the best of both worlds? Use pre-commit hooks for client-side validation, and run the same checks in CI as well. I’ve been using this setup for years without any issues.

One key requirement in my setup is that every hook is hermetic and idempotent. I don’t use Rust in production, so I can’t comment on it in depth, but for most other languages—from clang-format to swift-format—I always download precompiled binaries from trusted sources (for example, the team’s S3 storage). This ensures that the tools run in a controlled environment and consistently produce the same results.
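The pinning idea in miniature, with a local file standing in for the downloaded binary (real setups fetch from a trusted bucket and commit the expected digest next to the hook config). The file name and contents are placeholders:

```shell
# Stand-in for the precompiled linter binary:
printf 'fake-linter-binary' > lint-tool
# Normally this digest is hardcoded in the repo; here we compute it for the demo.
pinned="$(sha256sum lint-tool | cut -d' ' -f1)"
# Fail closed unless the tool matches its pinned digest:
if echo "$pinned  lint-tool" | sha256sum -c --quiet -; then
  echo "tool verified"
fi
```

Pinning by digest is what makes the hook hermetic: two developers (and CI) are guaranteed to run bit-identical tools, so the checks can't disagree.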



> I want to add one other note: in any large organization, some developers will use tools in ways nobody can predict. This includes Git. Don't try to force any particular workflow, including mandatory or automatically-enabled hooks.

you will save your org a lot of pain if you do force it, same as when you do force a formatting style rather than letting anyone do what they please.

You can discuss to change it if some parts don't work but consistency lowers the failures, every time.


Enforcement should live in CI. Into people's dev environments, you put opt-in "enablement" that makes work easier in most cases, and gets out of the way otherwise.


Agreed, my company has some helper hooks they want folks to use which break certain workflows.

We’re a game studio with less technical staff using git (art and design) so we use hooks to break some commands that folks usually mess up.

Surprisingly most developers don’t know git well either and this saves them some pain too.

The few power users who know what they’re doing just disable these hooks.


It's a good thing you can't force it, because `git commit -n` exists. (And besides, management of the `.git/hooks` directory is done locally. You can always just wipe that directory of any noxious hooks.)

I can accept (but still often skip, with `git push --no-verify`) a time-consuming pre-push hook, but a time-consuming and flaky pre-commit hook is totally unacceptable to my workflows and I will always find a way to work around it. Like everyone else is saying, if you want to enforce some rule on the codebase then do it in CI and block merges on it.
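A minimal demonstration of that escape hatch, in a throwaway repo with a hook that always rejects (repo name and hook are illustrative):

```shell
git init -q skipdemo
git -C skipdemo config user.email dev@example.com
git -C skipdemo config user.name dev
# A local pre-commit hook that rejects everything:
printf '#!/bin/sh\nexit 1\n' > skipdemo/.git/hooks/pre-commit
chmod +x skipdemo/.git/hooks/pre-commit
# Normal commit is blocked by the hook...
git -C skipdemo commit --allow-empty -qm "blocked" || echo "hook blocked the commit"
# ...but -n / --no-verify sails right past it:
git -C skipdemo commit --allow-empty -n -qm "wip"
```

Which is the whole point: anything living only in `.git/hooks` is advisory by design, so enforcement has to live server-side.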


I'm the type of developer who always has a completely different way of working. I hate pre-commit hooks, and agree that pre-push plus an early CI step is the right thing to do.


You don't have to install hooks. It's that simple.

Be prepared to have your PR blocked, though.


fwiw I'm the CEO of htmx, and I am a huge fan of these types of hyperbolic articles.


I am starting to understand htmx.


As the CEO of HTMX I can assure you that you’ve only scratched the surface of the HATEOAS doctrines


Game theory is a model that's sometimes accurate. Game theorists often forget that humans are bags of thinking meat, and that our thinking is accomplished by goopy electrochemical processes.

Brains can and do make straight-up mistakes all the time. Like "there was a transmission error"-type mistakes. They can't be modeled or predicted, and so humans can never truly be rational actors.

Humans also make irrational decisions all the time based on gut feeling and instinct. Sometimes with reasons that a brain backfills, sometimes not.

People can and do act against their own self-interest all the time, and not for "oh, but they actually thought X" reasons. Brains make unexplainable mistakes. Have you ever walked into a room and forgotten what you went in there to do? That state isn't modelable with game theory, and it generalizes to every aspect of human behavior.


It's a bummer that the library and utilities are GPLv3 - really limits adoption, because it means that product developers can't build it into the kinds of small embedded Linux systems where it would really shine.


Have you tried reaching out to enquire if you could buy a version that was at least LGPL or proprietary licenced for you to bundle in your closed source application?

Or was this just a statement about being entitled to other peoples work and closing it up?


> product developers can't build it into the kinds of small embedded Linux systems

Are you sure about that? Because if you don't ship binaries, but whole devices, then only AGPL might demand what you think GPL does. Also, I don't see what the issue is with "distributing" software from somebody else. If you designed things modularly, the GPL software can be updated without the user needing to touch any of your proprietary code.


GPL doesn't extend to I/O.

