The Deafening Silence of the Royal Society Open Science Journal (iqoqi-vienna.at)
85 points by mathgenius on Sept 25, 2020 | 18 comments


The publishers' peer review process doesn't work.

They pull in half a dozen unpaid "volunteers" (who mostly don't have time and don't care) to skim a paper and give a quick thumbs up or down. That's not real peer review, that's just holding up an arbitrary, meaningless bar. There are tons of papers that pass this bar and are either complete bullshit or so poorly written that there was nothing meaningful for "peer reviewers" to actually assess.

You shouldn't be able to pass peer review without a methods section that allows a third party to fully recreate your experiment, but it turns out it's easier to pass peer review by leaving out sketchy details so that there's less for a reviewer to flag as sketchy! That's so obviously broken!

We spend a lot of time talking about the outrageous cost structure of the traditional publishing industry, but the other problem is that they never actually gave a shit about real review.

The preferred model for open access journals should make the reviewing open as well. Anybody should be able to publish preprints, like they can today on arXiv and similar. Then the entire community can choose to accept, reject, or ignore your work. The best regarded works can then be selected for publication.


I can make a number of counter-arguments.

1. The volunteer nature of peer review is not a direct cause of bad peer review. In some sub-fields, peer review works very well at providing feedback or stopping bad papers from being published. In others, not so much. The difference is the norms of the community.

2. There are plenty of licensing bodies in other areas (medicine/plumbing/etc) where the reviewing body lets some of the chaff through with the wheat. Again, paying people can help but is not sufficient. What matters are the norms of the community - which take a lot of hard work to change/build.

3. Please note that the peer review process will always have errors (it's a noisy classification algorithm; see the toy simulation at the end of this comment). The debate should be about the magnitude of acceptable Type I and Type II errors.

4. Asking papers to include 100% of the experimental details is impossible. It takes 20% of the effort to capture 80% of the details, and roughly 4x that effort to capture the last 20%. A lot of details live in the psychomotor skills of the grad student and can't always be written down. I agree that experimentalists should include more details, and release their data, but 100% is too high a bar to reach.

> Then the entire community can choose to accept, reject, or ignore your work.

5. The Facebook architecture fails at separating true news from fake news, and a similar architecture for science would likewise be worse than the current system. Popularity contests rarely work at reliably selecting the best.
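
To make the noisy-classifier framing in point 3 concrete, here is a toy simulation in Python (my own sketch; the quality model, noise level, and thresholds are all made-up assumptions for illustration). Each paper gets a latent quality, each reviewer observes quality plus noise, and the journal accepts when the mean review score clears a threshold:

    # Toy model of peer review as a noisy classifier (illustrative only).
    import random

    random.seed(0)

    def simulate(threshold, n_papers=100_000, n_reviewers=3, noise=1.0):
        false_rejects = false_accepts = good = bad = 0
        for _ in range(n_papers):
            quality = random.gauss(0, 1)   # latent paper quality
            scores = [quality + random.gauss(0, noise) for _ in range(n_reviewers)]
            accept = sum(scores) / n_reviewers > threshold
            if quality > 0:                  # call these the "good" papers
                good += 1
                false_rejects += not accept  # Type I: good paper rejected
            else:                            # and these the "bad" papers
                bad += 1
                false_accepts += accept      # Type II: bad paper accepted
        return false_rejects / good, false_accepts / bad

    for t in (-0.5, 0.0, 0.5):
        r1, r2 = simulate(t)
        print(f"threshold {t:+.1f}: {r1:.1%} good rejected, {r2:.1%} bad accepted")

Raising the threshold cuts the rate of bad papers accepted but rejects more good papers, and vice versa; less noisy reviewers shrink both rates. You can never drive both to zero, only argue about the acceptable mix, which is exactly the debate I'm suggesting we have.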


4 is a strawman. The comment you are replying to did not say 100% details, it said "a methods section that allows a third party to fully recreate your experiment."


I took their meaning to be that the paper should have enough detail that, say, a junior grad student could set up the same equipment, follow the exact process described in the paper, and get pretty much the same results. I don't think this is possible.

What is possible is the following: it's sufficient that the paper has the crucial details, and whoever tries to replicate will still have to apply some creativity, change some details, and learn some skills, and will ultimately get slightly different results.

In other words, enough detail to "fully recreate [any sufficiently complicated] experiment" is not possible to write down. Obviously, I am not providing any evidence for my position beyond my personal observations.


I published a paper that was effectively not reproducible by third parties, simply because nobody else in the world had the computing power to reproduce it. In fact, several of my papers have this problem. It's entirely unclear what to put in a methods section. Note that in at least one of the papers, the results are now used in nearly all protein design projects, so hopefully they were right.


I agree with Moldoveanu that the incentive structure at many open access journals is backwards (even though I empathize with the intent of the journals). But at the same time I don't think this has anything to do with this particular crack (assuming it is a crack) in peer review, which could have happened at any number of journals. I agree with you that it might be better to just publish to archives and use academic-organization fees or something similar to cover the costs of some kind of peer review system.

On the other hand, I've seen both sides of this coin. Assuming Moldoveanu is correct, every once in a while this happens, and it's frustrating as a reviewer. I was once in a very similar position as a reviewer at a statistics journal, where someone submitted, and eventually published, something that flew in the face of well-established findings and proofs, due to a simple but easily overlooked and consequential error in one of the derivations. When I reached out to the journal about a commentary or letter to the editor, I was stonewalled in a similar way and got the sense the journal was trying to save face over the error. I actually respected the author and think they made a simple mistake, but I still think it was worth a correction or at least a discussion.

At the same time, I've seen cases that look and sound just like this to an outsider, where some contingent of researchers gangs up on someone with a sense of self-righteousness to assert their authority over something, but they're actually completely, one hundred percent wrong about it: a smear campaign that looks just like these published refutations of Christian's claims, that looks very intelligent and rigorous (and in some sense is), and that is nonetheless completely dead wrong. Someone on HN would see a blog post just like this one, by the equivalent of Moldoveanu, complaining about the incompetence of the journal, the incompetence of Christian, and so on, and would just assume that because Moldoveanu asserts some claim to rigor, he must be correct, and the person not doing so must be incorrect.

This case is different because of course we're talking about Bell's theorem, so our prior is skewed strongly against the claimed refutation. But in many cases, most cases in academics, which side is right or wrong isn't at all clear. There's just one party who claims the other party is incorrect, and others just kind of adopt one side or the other. Do we assume the majority is correct?

I'm not really sure what my point is, other than that indignation doesn't really persuade me much anymore. I can empathize with it, I've been there -- but really, in the end, all that can be done is to make your case and make it again whenever you're asked or the topic comes up.

Really the broader issue is the Gish gallop and Brandolini's law, which seem to be the norm now in many facets of life, and what, if anything, to do about that.


This Joy Christian guy is really a stain on the science ecosystem. He makes wild claims over and over again, in slightly different ways, about things that have been mathematically proven impossible in general. Whenever one claim gets analyzed carefully, someone finds a trivial flaw. Yet somehow he still gets attention.


Reminds me a bit of certain lawmakers in the US who keep pushing variants of the same laws over and over again. When all it takes is one crack in the gates, it makes sense to keep prodding.


The difference is that in politics you can make your own reality, but in the real world you can't, so politicians actually have a point when they won't quit. When science crackpots do the same, we ignore them and move on.


Another recent outburst of creativity from him is discussed here:

https://golem.ph.utexas.edu/category/2020/09/new_normed_divi...

I wish I was half as confident as this guy!


I cannot understand why people publish bogus results and stretch the truth, which I have personally witnessed (though not from my previous mentors). Academia doesn't pay as well as industry, so why remain in academic research unless you care enough to raise the bar? Publishing a research paper is months of work; it doesn't make sense to spend that effort on bad results.


Publish or perish is, in my experience, by far the most serious pressure in an academic career.

Aside from that, the pay is "not great, not terrible", but your job likely won't be outsourced to China anytime soon, your employer won't go bankrupt, your work is unlikely to kill anyone, burnout is rare, and scientific conferences in attractive venues are common (or were, until the covid thingy).

I sometimes visit my teachers and former colleagues at the university I graduated from. They do not earn much, and there is an obvious bias: the people who stay there and teach come from more well-off families, often inheriting their homes, etc. But they seem to be happy enough, have 8 weeks of paid leave a year, etc.

Moreover, at least here in the Czech Republic, teaching at a university is highly prestigious.


The simple answer is that people at regional universities need tenure. In the past it was enough to just teach and perhaps write up some things for the local undergraduate journal; nowadays you have to fulfill some metric, typically publish n papers (it's left unspecified where) and bring in a small grant. The pay-to-publish journals exist to fill that niche. It's all the result of well-intentioned incentives, not a decay of morals.


Academia doesn't pay as well as industry in terms of money but it does pay well in terms of status and ego supply. Couple that with a bit of self-delusion and it's a recipe for crackpots who can't believe their beautiful ideas could be wrong.


Because some people don't want to work in industry, or can't fit in well there, but still want to be respected and get paid? Researchers will often obtain "bad results", but they still have families to feed. Given the current perverse incentives, it's totally understandable why there is a plethora of low-"quality" papers.


Because industry requires verifiable results, and academia doesn't (as you point out)? It's easier to be a hack in an academic environment.


I'm just getting started learning Geometric Algebra. Out of curiosity, is that specific error written up anywhere?

I’d like to look at it.


This PDF goes into detail on Joy Christian's algebra error.

http://www.math.leidenuniv.nl/~gill/christian.pdf
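
Since you're learning GA anyway: the even subalgebra of Cl(3,0) that Christian works in is isomorphic to the quaternions, so the bivector algebra at issue can be checked numerically in a few lines. Below is a small Python sketch of my own (it is not from Gill's paper; representing elements as (scalar, bivector-coefficient) pairs is just one convenient encoding). It verifies the standard product identity beta(a) beta(b) = -a.b - beta(a x b), and shows that flipping the handedness of the basis (lambda = -1, i.e. beta -> -beta) leaves the product unchanged, since lambda^2 = 1. If I'm reading Gill's note correctly, that is the heart of the error: Christian's calculation needs the cross-product term to flip sign with lambda so that it averages away, and it doesn't.

    # Even subalgebra of Cl(3,0): elements are (scalar, [b1, b2, b3]) with
    # multiplication induced by beta_j beta_k = -delta_jk - eps_jkl beta_l.
    import random

    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))

    def cross(u, v):
        return [u[1] * v[2] - u[2] * v[1],
                u[2] * v[0] - u[0] * v[2],
                u[0] * v[1] - u[1] * v[0]]

    def mul(p, q):
        # (s1 + beta(v1)) (s2 + beta(v2))
        #   = s1 s2 - v1.v2 + beta(s1 v2 + s2 v1 - v1 x v2)
        (s1, v1), (s2, v2) = p, q
        c = cross(v1, v2)
        return (s1 * s2 - dot(v1, v2),
                [s1 * y + s2 * x - z for x, y, z in zip(v1, v2, c)])

    random.seed(1)
    a = [random.uniform(-1, 1) for _ in range(3)]
    b = [random.uniform(-1, 1) for _ in range(3)]

    lhs = mul((0.0, a), (0.0, b))                  # beta(a) beta(b)
    rhs = (-dot(a, b), [-z for z in cross(a, b)])  # -a.b - beta(a x b)
    neg = mul((0.0, [-x for x in a]), (0.0, [-x for x in b]))  # lambda = -1
    print(lhs)
    print(rhs)   # same as lhs
    print(neg)   # also the same: the cross term does not flip with lambda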



