One of the best classes I ever had was a stats class where the prof assigned a paper a week for us to review and mark up for any and all faults. In the beginning, the papers were from journals like The Canadian Journal of Forest Research (which came back drenched in red ink); by the end we were punching holes in The Lancet.
Granted, the papers and progression were hand-picked by the prof. However, the final effect was that the students became fairly competent at reviewing papers in general.
I'd like to see some sort of channel or forum dedicated to similar critiques.
Sadly, much of the academic publishing world seems to be geared around "scratch my back" and not disrupting funding agencies, publishers, and/or senior (academic) department officials.
There may be criticism, yes, but it seems far more defensive than objective --- protecting turf.
Defence against the Dark Arts^W^WCertified Crackpots does remain a concern, of course.
I like his points on active reading of research papers. In my seminar we always had to ask critical questions while we were reading.
I also believe there are two ways to read a paper. One way is to be critical (the normal case). This is necessary if we want to learn something.
Another way is to pretend the author is correct and try to implement things as quickly as possible (the applied-research approach). The point is to establish a small "framework" or baseline from which to start our own research. After that we can be critical about the author's work.
For active reading we were taught to go through the abstract, results, and conclusion. In the case of a graphics paper that uses ML, we can immediately notice artifacts, inconsistencies, and minor disturbances. Then we can be critical of their method and their claims, and reason about why things are that way!
Fantastic article from Gwern, as usual.
He lays out some approaches for distinguishing real research from fake, AI-generated papers.
However, I think he may be too optimistic about the effort needed to correct course on junk science and about humans' ability to discriminate between real and fake.
There was a game posted on HN where you were shown a real title/abstract and a fake one and had to guess which was real, and I actually performed worse than a coin toss.
Every heuristic (e.g., "this is such an unreadable string of jargon that no human would have tried to publish it") was violated by some real articles.
Of course, it was mostly papers outside of my expertise, like geology or biology.
One would think that someone who has read more than a hundred papers would perform better.
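For what it's worth, "worse than a coin toss" only means much if the sample is large enough. With small, purely hypothetical numbers (say 7 correct out of 20 pairs; the game's actual counts aren't stated here), scoring below 50% is still well within what blind guessing produces. A quick sketch of that arithmetic:

```python
from math import comb

def p_at_most(k: int, n: int, p: float = 0.5) -> float:
    """Probability of getting k or fewer correct out of n pure coin-toss guesses."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k + 1))

# Hypothetical numbers: 7 correct out of 20 real-vs-fake pairs.
n_pairs, n_correct = 20, 7
print(f"P(<= {n_correct} correct by chance) = {p_at_most(n_correct, n_pairs):.3f}")
# ~0.132, so a below-chance score on a small sample is weak evidence
# that the guesser is genuinely worse than chance.
```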
This matters especially as very technical points in biology or technology increasingly drive policy with direct impact.
One mistake has always been to present "the science" as something well established.
An extraordinary number of debunked claims have percolated to the general public, bad science has driven policy, and the limits of knowledge and scientific reductionism can be felt in many fundamental domains (e.g., gardening, dietetics).
Science would be more accurately described as faith that the process of this multi-player publishing game eventually converges to the truth.
In any case, evolution does not wait on anyone, so it will be interesting to see how people and the system adapt as the generation of fakes improves in quality beyond any reasonable ability to detect them.