I'm always curious how the HN submission acceptance algorithm works. I sometimes get redirected to another submission, while at other times (like in this case) there are multiple submissions with the identical URL.
Shouldn't all same-URL submissions be considered as one, at least in a certain time period?
They're considered as one for the first 8 hours. After that, the repost only counts as a dupe if an earlier post has had significant attention in the last year or so. That's because we want good submissions to get multiple shots at getting attention.
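The dedup rule described above could be sketched roughly like this. This is a hypothetical illustration, not HN's actual code: the `SIGNIFICANT_POINTS` threshold and the exact test for "significant attention" are assumptions.

```python
from datetime import datetime, timedelta

DUPE_WINDOW = timedelta(hours=8)          # same-URL posts merge within this window
ATTENTION_LOOKBACK = timedelta(days=365)  # "the last year or so"
SIGNIFICANT_POINTS = 20                   # hypothetical threshold for "significant attention"

def is_dupe(new_time, earlier_posts):
    """earlier_posts: list of (submit_time, points) for submissions of the same URL."""
    for submit_time, points in earlier_posts:
        age = new_time - submit_time
        if age < DUPE_WINDOW:
            return True   # merged into the recent earlier submission
        if age < ATTENTION_LOOKBACK and points >= SIGNIFICANT_POINTS:
            return True   # an earlier post already got significant attention
    return False          # otherwise the story gets another shot
```

Under these assumptions, a repost nine hours after a submission that got no traction would go through, while a repost of last month's front-page story would not.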
It's worth noting that the title is a pun on the name of the popular social media channel "I ****ing Love Science". The "science" that they ****ing love is mostly "science as a bundle of useful, entertaining, and authoritative facts", rather than "science as a human process of discovery and error-correction".
Aside from the substantive points about the cost of bugs, the subtext of the article is that real science is frequently difficult, painful, frustrating and ambiguous, in contrast to the neat stories that make up the "science as a bundle of facts" world-view.
I disagree: the meat of the article isn't about software bugs, it's about the current state of academia, and specifically academic research on software.
The problem, as stated, is that instead of being able to find widely cited or good papers, any amount of research involves trudging through hundreds of bad, off-topic, outdated, unrelated, or otherwise useless papers just to get an idea of what is being talked about in the field. And even once you get there, what you find may turn out to have no practical relevance at all.
You say you hate science, but then you go on to state the opposite: what you actually hate is things being unscientific, claims going unverified, facts going unchecked, and so on.
There are two definitions of “science”: The abstract ideal (roughly, doubt everything until you have reproducible proof), and the way it's put into practice (or not) by institutions, companies, and individuals. The failure modes are numerous, including perverse incentives to do the wrong thing, culminating in outright fraud. See the other discussion here on medical research [1]. It's easy to hate the latter if you value the former.
The title is quite clearly a reference to fluff websites and social media pages called "I F___ing Love Science". Those pages post low-quality blog articles summarizing scientific developments, but don't link to the original research papers.
The coverage is significantly juiced up, and readers will take the articles as fact, as they have no reason to believe otherwise.
The author's example of a highly-cited IBM chart that can't be properly sourced plays on the same principle as the IFLS sites.
You're missing the point: his point is that a lot of science is unscientific. He explains that the main problem is that either the study is full of gaps, i.e. unanswered questions, or it's a good study whose authors then go on to draw conclusions about an entire subject while their data only represents a part of it. And this happens all the time, over and over again. It happens at Google, IBM, or any other research company. Science is hard! And the people working at these companies are just people like any other. People like to rush things or claim they have proven something because of fame, money, recognition, etc.
Lack of empirical results is a recognized problem. I do, however, know of one effort to dig out and bring together data sets: http://www.knosof.co.uk/ESEUR/
Even in experiments where it's very easy to make a study replicable by simply sharing the code, many people do not. I take this as evidence that the goal of research is rarely good science.
Essentially we are stuck in ancient times, where rhetoric and personality dominate the software field. Until we start to do serious science, we will continue to see the constant churn of practices and paradigms, without even knowing if they have more effect than a placebo.
TL;DR: (Without providing any hard data to back it up)
- Design bugs are (more) expensive to fix after implementation.
(I really don't want to be an asshole here. The piece is well written. Particularly the part where it criticizes how a lot of CS research is done nowadays. But so many words for such a remarkably obvious platitude!)
I found out recently that the author of this article only really cares about science in his own field. In other fields, specifically medicine, he parrots the establishment line without thought to the actual evidence.
I read this article before I found that out, and afterward, it just seemed hypocritical to me.
Software engineering research is a train wreck - https://news.ycombinator.com/item?id=27892615 - July 2021 (90 comments)