These sorts of articles raise so many thoughts and emotions in me. I was trained as a computational biologist with a little lab work and ran gels from time to time. Personally, I hated gels: they're finicky, messy, ugly, and don't really tell you very much. But molecular biology as a field runs on gels; they're the primary source of results for almost everything in molbio. I have seen so many talks and papers that rested entirely on a single image of a gel, which is really just some dark bands.
At the same time, I was a failed scientist: my gels weren't as interesting or convincing as the ones done by the folks who went on to be more successful. At the time (20+ years ago) it didn't occur to me that anybody would intentionally modify images of gels to promote the results they claimed, although I did assume that folks didn't do a good job of organizing their data and occasionally published papers that were wrong simply because they confused two images.
Would I have been more successful if fewer people (and I now believe this is a common occurrence) published fraudulent images of gels? Maybe, maybe not. But the more important thing is that everybody just went along with this. I participated in many journal clubs where folks would just flip to Figure 3, assume the gel was what the authors claimed, and proceed to agree with (or disagree with) the results and conclusions uncritically, whereas I would spend a lot of time trying to understand what experiment was actually run and what the data showed.
Similar - when I was younger, I would never have suspected that a scientist was committing fraud.
As I've gotten older, I understand that Charlie Munger's observation "Show me the incentive and I will show you the outcome" is applicable everywhere - including science.
Academic scientists' careers are driven by publishing, citations and impact. Arguably some have figured out how to game the system to advance their careers. Science be damned.
I think my favorite Simpsons gag is the episode where Lisa enlists a scientist (voiced by Stephen Jay Gould) to run tests to debunk some angel bones that were found at a construction site.
In the middle of the episode, the scientist bicycles up to report, dramatically, that the tests "were inconclusive".
In the end, it's revealed that the bones were a fraud concocted by some mall developers to promote their new mall.
After this is revealed, Lisa asks the scientist about the tests. He shrugs:
"I'm not going to lie to you, Lisa. I never ran the tests."
It's funny on a few levels but what I find most amusing is that his incentive is left a mystery.
Well, the incentive is that he didn't want to run the tests out of laziness (i.e., he lacked an incentive to run them). He rode up to Lisa to give his anticlimactic report not to be deceptive; he just happened to be cycling through that part of town and needed to use the bathroom really badly.
To be honest, it's difficult to tell if the subplot makes sense on purpose, or if the writers just wanted to make a joke and it happened to end up making sense. I don't think I had ever put the three scenes together before now.
One of the first things I learned in film school is _nothing_ in a production at that level is coincidence or serendipity. To get to the final script and storyboard, the writers would have gone through multiple drafts, and a great deal of material gets either cut, or retooled to reinforce thematic elements. To the extent that The Simpsons was a goofy cartoon, its writers’ room carried a great deal of intellectual and academic heft, and I don’t doubt for a moment that there was full intention with both the joke itself, and the choice to leave the character’s motivations ambiguous.
> One of the first things I learned in film school is _nothing_ in a production at that level is coincidence or serendipity.
Perhaps they should have taught you to be less sure of that. So many takes in movies that ended up being the best ones are those where a punch accidentally did land, something was ad-libbed, a line was mixed up, etc.
To take an example from a very critically acclaimed show: in Breaking Bad, the only reason we got Jonathan Banks in the role of Mike is that Bob Odenkirk had a scheduling conflict, and Banks improvised a slap during his audition. Aaron Paul even complained about it, indicating that he would not have agreed to it.
It seems like there is a lot of serendipity in writing and production. But that's not what it was about. The point is how much agonizing and second-guessing it takes, how many alternatives are explored, and how many takes are shot before something, anything, makes it into the final product.
The lucky break is first a result of a lot of planning and work - and it gets analyzed to death before being included - and then probably reinforced here or there elsewhere. (Which is why I do notice when movie or TV dialogue sounds completely natural and said exactly right. It's exceptional.)
This is a cartoon, though: there's significantly less ad-libbing, since everything has already been storyboarded and scripted out.
Pixar's approach to making their movies is a fascinating, highly iterative process: going through many storyboards and internal screenings using simplistic graphics before proceeding to the final stage to produce a polished product. I wonder how The Simpsons does it.
> One of the first things I learned in film school is _nothing_ in a production at that level is coincidence or serendipity. To get to the final script and storyboard, the writers would have gone through multiple drafts, and a great deal of material gets either cut, or retooled to reinforce thematic elements. To the extent that The Simpsons was a goofy cartoon, its writers’ room carried a great deal of intellectual and academic heft, and I don’t doubt for a moment that there was full intention with both the joke itself, and the choice to leave the character’s motivations ambiguous.
Not everything. For example, I read somewhere that the chess "fight" in Twin Peaks was random and didn't adhere to chess rules because no one really paid attention to recording or following the moves.
The entire writing room was Harvard grads and people who went on to accomplish impressive things in the industry (e.g., Conan O’Brien was a writer, and David X. Cohen was a writer who went on to co-create Futurama with Groening). The early writing team was one of the sharpest ever assembled, and dismissing it as a “goofy cartoon” misses the talent behind it, just as if you dismissed Futurama in that way.
More often than not in scientific fraud, I've seen the underlying motives be personal beliefs rather than financial gain. This is why science needs to be much stronger in weeding out the charlatans.
It's actually quite clever on the part of the scientist.
The incentive would be money; maybe the pay for running this test was not good enough.
Or maybe the scientist was motivated by a thirst for discovering something good for humanity, like a cure for cancer, and didn't want to get distracted by other things. Funding is also needed, but angel bones are clearly an impossibility. Why even spend time on disproving that? But if he had engaged in discussion with people who clearly believe in this nonsense, it would have taken too much time. Saying the tests were inconclusive distances him from all of this and gets people to leave him alone, while the groups continue their disputes among themselves.
That's a good one. In my experience, corruption is almost always disguised as neglect and incompetence. Corrupt people meticulously cover their tracks by coming up with excuses to show neglect; some of them only accept bribes that they can explain away as neglect where they have plausible deniability. It doesn't take much brainpower to do well, just malicious intent and knowing the upper limits.
IMO, Hanlon's razor "Never attribute to malice that which can be adequately explained by stupidity" is a narrative which was created to condition the masses into accepting being conned repeatedly.
On the topic, I subscribe to Grey's law "Any sufficiently advanced incompetence is indistinguishable from malice" so I see idiots as malicious. In the very best case, idiots in positions of power are malicious for accepting the position and thus preventing someone more competent from getting it. It really doesn't matter what their intent is. Deep down, stupid people know that they're stupid but they let their emotions get in the way, same emotions which prevent them from getting smarter.
So can "stupidity". If something is possible for a human to do, it's something that's possible for any sufficiently-enabled/supported human to do. I've heard it put that the inability to understand or do something is a matter of not having acquired the necessary prerequisites. So, the incentives to control stupidity are the incentives to acquire and apply the prerequisite skills or knowledge.
Yes, and in addition, malice is at least somewhat predictable, while incompetence is just a quantum void where the probabilities are inverted and your hard-earned intuition doesn't help you...
I wouldn't attribute malice to Hanlon's razor, but yes, even dogs and small children know how to play dumb and the children just keep getting better at it.
Ehh... I think neglect and incompetence are super common. I have a sink full of dishes downstairs to prove it. I think corruption, while not rare, is still far rarer. Horses over zebras still (at least in the US).
‘Sufficiently advanced’ is the key term; e.g., if your sink were located on the premises of a 5-star hotel, then that would probably be indistinguishable from malice.
> On the topic, I subscribe to Grey's law "Any sufficiently advanced incompetence is indistinguishable from malice" so I see idiots as malicious. In the very best case, idiots in positions of power are malicious for accepting the position and thus preventing someone more competent from getting it. It really doesn't matter what their intent is. Deep down, stupid people know that they're stupid but they let their emotions get in the way, same emotions which prevent them from getting smarter.
I think you have things backwards. Being dumb is the default. It takes ability and effort and help to get smarter. Animals and children are dumber than us. Do you think they realize it?
Perversely, many who are dumb are trapped thinking they are not dumb.
A dumb person (like a dumb child or animal) is what they are; one should not attribute malice. Better to try to see things from their point of view and perhaps help them become smarter. This is what I try to do.
Your other remarks are 100% right; just the point above was sticking out, hence my comment.
I feel that stupidity is evil in the same way that a shark might be perceived as evil. You could explain it away as "It's not their fault, it's in their nature, they don't know better," but if it's in their nature to cause people harm, that, if anything, makes the label more applicable from my perspective.
The Dunning-Kruger effect is a cognitive bias in which people with limited competence in a particular domain overestimate their abilities.
That is to say, some of the incompetent are so incompetent they can't distinguish between their own incompetence and actual expertise. This is exhibited very publicly in some contestants on the American Idol genre of shows.
D&K ironically misengineered their tests and inadvertently misconstrued their data due to floor and ceiling effects. If you run the gamut of their tests against random noise, you get similar results.
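To see why, here's a minimal sketch of that argument (my own toy simulation, not D&K's actual data): generate actual scores and self-estimates as completely independent random noise, bin people by actual-score quartile as D&K did, and the classic pattern appears anyway, because the self-estimates regress to the mean in every bin.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    # Pure noise: actual performance and self-assessment are independent.
    actual = rng.uniform(0, 100, n)
    self_est = rng.uniform(0, 100, n)

    # Bin participants into quartiles by actual score, as D&K did.
    quartile = np.digitize(actual, np.percentile(actual, [25, 50, 75]))

    for q in range(4):
        mask = quartile == q
        print(f"Q{q + 1}: mean actual = {actual[mask].mean():5.1f}, "
              f"mean self-estimate = {self_est[mask].mean():5.1f}")

The bottom quartile "overestimates" (actual around 12, self-estimate around 50) and the top quartile "underestimates" (actual around 87, self-estimate around 50), even though there is no psychology in the data at all.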
If humanity is to mature, we should be an open book when it comes to incentives and build a world purposefully with all incentives aligned to the outcomes we collectively agree upon.
(I keep mentioning this but no one seems to be picking up on it.) There is an algorithm that was developed in the late 80's in the context of therapy that could be used to align incentives and collectively agree on outcomes.
The algorithm is a simple recursive procedure where the guide or therapist evokes the client's motivation or incentive for an initial behaviour and then for each motivation in turn until a (so-called) "core state" is reached. In crude pseudo-code it would be something like:
    problem = <some "presenting problem">
    motivation = evoke_motivation(problem)
    while not is_core_state(motivation):
        motivation = evoke_motivation(motivation)
Generalizing, motivations form a DAG that bottoms out in a handful of deep and profound spiritual "states". These states seem to be universal. Walking the DAGs of two or more people simultaneously until both are in core states effectively aligns incentives automatically, at least that's what I suspect would happen.
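If it helps to make that concrete, here's a toy Python sketch of walking one person's chain down to a core state. All of the motivation names and core states below are placeholders I made up for illustration; they're not part of the actual process.

    # Toy model: each motivation points to the deeper motivation behind it,
    # forming a DAG that bottoms out in a handful of "core states".
    CORE_STATES = {"peace", "love", "being"}

    MOTIVATION_BEHIND = {
        "finish the grant proposal": "be seen as competent",
        "be seen as competent": "feel secure",
        "feel secure": "peace",
    }

    def walk_to_core(presenting_problem):
        """Follow the chain of motivations until a core state is reached."""
        state = presenting_problem
        while state not in CORE_STATES:
            state = MOTIVATION_BEHIND[state]
        return state

    print(walk_to_core("finish the grant proposal"))  # -> peace

Running the same walk from two people's presenting problems and comparing where they bottom out is, as I understand it, the intuition behind the alignment claim.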
That's definitely true, and there's lots of craziness around it. However, the best estimates for therapy and its effects suggest that it's mostly a provider effect rather than anything in the theory.
Which is to say, a lot of this stuff works because you expect it to.
In re: this "NLP is pseudoscience" business, I've lost patience with it. First, I'm living proof of NLP's efficacy. Second, I don't go around suggesting homeopathy or astrology or pyramid power, okay? Like Ron Swanson "my recommendation is essentially a guarantee."
In terms of a Venn diagram the region representing people who have experience with NLP and the region representing people who think NLP is pseudoscience are disjoint, they do not overlap. As in I have never found anyone who claims that NLP is pseudoscience who has also admitted to having any experience with it. That is not science, eh? To the extent that mainstream scientists don't take NLP seriously they make themselves ridiculous. So yeah, in this one instance, ignore the scientists and look at the strange thing anyway, please? Humor me?
Now, NLP is not scientific (yet) and it doesn't pretend to be (although many promoters do talk that way, and that's wrong and they shouldn't do that), and in fact there's a video online (I'll link to it if I find it) where the co-founder addresses this point and says "it's not scientific."
However it does work. So it seems imperative to do science to it!?
At the time it was developed there were dozens of schools of psychology on the one hand[1] and academic psychologists on the other and the two groups did not talk to each other. NLP ran afoul of the academic psychologists in the mid 1980's and they closed ranks against it and haven't bothered themselves with it since. Again, I think it would be fantastic if we would do science to it and figure out what these algorithms are actually doing.
In any event the important thing is that the tools and techniques that have been developed are rigorous and repeatable. E.g. this "Core Transformation Process" works. That's primary data on which the science of psychology should operate not ignore.
I don't think that's really going to work. People won't list all their incentives, because some of them are implicit and others are embarrassing or "creepy". Others will absolutely judge you for what incentivizes your actions, therefore hiding them is the status quo.
If you say that your incentive for working out is to look good and be popular with the ladies then people will judge you for it, even if it's exactly the truth. If you say that you work out "for health" everyone will applaud you for what you're doing. And yet the outcome is going to be the same.
I could be wrong, but I took the parent's comment to mean that we should design incentive structures transparently, instead of obscuring them or outright ignoring the whole concept when engineering society.
You got it backwards. It's not about being transparent about what you want to achieve, it's about being transparent about what others expect you to achieve on your current position.
To check my understanding: say your current position is "unemployed." You would think that the expectation for you is to "get a job", but to get a job is extremely difficult. You have to navigate an almost adversarial job market and recruiting process, often for months. It's essentially a massive negative incentive, considering all of the effort and grief involved. So, the incentives aren't aligned with the desired outcome; the skittishness of each individual hiring company to make sure that they don't get screwed by a bad hire has warped the entire dynamic. Is this a good example?
This is a weird take, assuming the average researcher cannot be an average Joe, and also that average people aren't also worried about their livelihood...you might want to revisit your view of the world.
The initial comment makes the 2 mutually exclusive. You reframing it doesn't change what the original comment said. You also blew past the more important of the 2 points: that regular people care about their livelihoods as well.
And it's not at all clear if that education does anything other than magnify their intellectual predispositions. The smart people can make great strides, but the stupid will be stupid louder and harder. And the average may well just be... more average.
There's a problem that if you care, as an average person, it's hard to do much with it. Every few years you can vote left or right, which unless you happen to live in a marginal constituency or swing state, has no effect.
> The Democratic party is left of center on social issues, even compared to Europe.
Actually, the Democratic Party is mostly libertarian (or classically liberal, if you like, which is, inherently right wing) on social issues -- preferring to allow people to make their own choices WRT their bodies rather than seeking government control of reproductive health and other forms of bodily autonomy.
Individual rights and personal agency are not "left wing," except in the eyes of the authoritarian far right (or far left) who seek control over all else.
So no. The Democratic Party has a solidly center-right agenda/ideology -- no collectivism, individual rights not curtailed by the state, freedom of thought and religion, etc.
Despite what some folks may say, there are no Marxists in the US Democratic Party.
That's not to say that the Democratic Party is the ideal. Far from it. But to place them on the absolute "left" is ridiculous on its face.
It's only "left wing" as compared with the far right (read: evangelical christians, white nationalists, xenophobes, etc.) Republican Party who want to limit women's reproductive choices, force the religious doctrines of the Christian church down everyone's throats and spout xenophobic and long debunked genetic tropes related to melanin content.
If humanity is to mature, we must be critical and take responsibility for ourselves, particularly where the alignment of others is concerned. Start by disagreeing with everything and validating it for yourself.
> Arguably some have figured how to game the system to advance their careers
lol arguably? i would bet my generous, non-academia, industry salary for the next 10 years, that there's not a single academic with a citation count over say ... 50k (ostensibly the most successful academic) that isn't gaming the system.
- signed, someone who got their PhD at a "prestigious" uni under a guy with >100k citations
Terence Tao has well over 50K citations. Maybe one can argue that he’s gaming the system because he alone can decide what problems are deemed to be interesting by the broader community, but he can’t help that.
> Academic scientists' careers are driven by publishing, citations and impact. Arguably some have figured how to game the system to advance their careers. Science be damned.
I’ve talked to multiple professors about this, and I think it’s not because they don’t care about science. They just care more about their career. And I don’t blame them. It’s a slippery slope, and once you notice other people who start beating you, it’s very hard to stay on the righteous path [note]. Heck, even I myself during my PhD have written things I don’t agree with. But at some point you have to pick your battles. You cannot fight every point.
In the end I also don’t think they care that much about science. Political parties often push certain ideas more or less depending on their beliefs. And scientists know this, since they will often frame their own ideas so that they sound like they solve a problem for the government. If you think about it, it’s kind of a miracle that sometimes something good is produced from all this mess. There is some beauty to that.
[note] I’m not talking about blatant fraud here but about the smaller things like accepting comments from a reviewer which you know are incorrect, or using a methodology that is the status quo but you know is highly problematic.
The Manhattan project was a government project that was run like a startup.
If such a project happened today, academic scientists would be trying to figure out ways to bend their existing research to match the grants. Then it would take another 30 years before people started to ask why nothing has been delivered yet.
Lots of people doing research find this depressing to the point of quitting. Many of my peers left research as they couldn't stomach all this nonsense. In experimental fields, the current academic system rewards dishonesty so much that ugly things have become really common.
In my relatively short career, I have been asked to manipulate results several times. I refused, but this took an immense toll, especially on two occasions. Some people working with me wanted to support me fighting dishonesty. But guess what, they all had families and careers and were ultimately not willing to do anything as this could jeopardize their position.
I've also witnessed first-hand how people who manage to publish well adopt monopolistic strategies, sabotaging interesting grant proposals from other groups or stalling their article submissions while they copy them. This is a problem that seldom gets discussed. The current review system favors monocultures and winner-takes-all scenarios.
For these reasons, I think industrial labs will be doing much better. Incentives there are not that perverse.
> Academic scientists' careers are driven by publishing, citations and impact.
Publishing and citations can and are gamed, but is impact also gamed on a wide scale? That one seems harder to fake. Either a result is true and useful, or it's not.
That attitude coincides with the current delusion in our society that science is perpetrating a fraud on the level of religions whose leaders are trying to control their flock for financial and sexual gain.
A broken system that incentivizes fraud over knowledge is a real problem.
An assertion that scientists chase the money by nature is a dangerous one that will set us back to the stone age when instead we should be traversing the space as a whole.
> Similar - when I was younger, I would never have suspected that a scientist was committing fraud.
Unfortunately many less bright people seem to interpret this as "never trust science", when in reality science is still the best way to push humanity forward and alleviate human suffering, _despite_ all the fraud and misaligned incentives that may influence it.
At some point, the good scientists leave and the fraudsters start to filter for more fraudsters. If that goes on, it's over: academia is gone. Entirely. It cannot grow back. It's just a building with con men in lab coats.
My suggestion stands: Give true scientists the ability to hunt fraudsters for budgets. If you hunt and nail down a fraudster, you get his funding for your research.
I mean, the replication crisis came and went about 5 years ago. The fraudsters are running the place and have been for at least the last half decade, full stop.
It becomes a survival bias: if people can cheat at a competitive game (or research field) and get away with it, then at the end you'll wind up with only cheaters left (everyone else stops playing).
You could improve the situation by incentivizing people to identify cheaters and prove their cheating. If being a successful cheater-hunter was a good career, the field would become self-policing.
This approach opens its own can of worms (you don't want to overdo it and create a paranoid police-state-like structure), but so far, we have way too little self-policing in science, and the first attempts (like Data Colada) are very controversial among their peers.
As they say: the scum rises to the top, true for academia, politics etc, any organization really.
Quote: "The Only Thing Necessary for the Triumph of Evil is that Good Men Do Nothing"
My own nuanced take on it:
Incompetent people are quick to grab authority and power. On the other hand, principled, competent people are reluctant to take on positions of authority and power even when offered. For these people, positions of power (a) have connotations of tyranny and (b) are uninteresting (i.e., technical problems are more interesting). Also, the reluctance of principled people to form coalitions to keep out the cheaters, because they are a divided bunch themselves, exacerbates the problem, whereas the cheaters can often collude together (temporarily) to achieve their nefarious goals.
And thus we have the Earth. Where all looks like a broken MMO in every direction. Everybody refuses to participate, because it's 100% griefers, yet nobody can leave.
Materials Academica: Doping + Graphene = feces papers (https://pubs.acs.org/doi/pdf/10.1021/acsnano.9b00184) "Will Any Crap We Put into Graphene Increase Its Electrocatalytic Effect?" (Bonus joke! Crap is actually a better dopant material.)
Military: The saga of the Navy, Pacific Fleet, and Fat Leonard (https://en.wikipedia.org/wiki/Fat_Leonard_scandal) "exploited the intelligence for illicit profit, brazenly ordering his moles to redirect aircraft carriers, ships and subs to ports he controlled in Southeast Asia so he could more easily bilk the Navy for fuel, tugboats, barges, food, water and sewage removal."
I used to work with someone up until the point I realized they were so distant from any form of reality that they couldn't distinguish between fact or fiction.
Naturally, they are now the head of AI where they work.
Hacker News is completely flooded with “AI learns just like humans do” and “AI models the human brain” despite neither of these things having any concrete evidence at all.
Unfortunately it isn’t just bosses being fooled by this. Scores of people push this crap.
I am not saying AI has no value. I am saying that these idiots are idiots.
Similar story: computational biologist, my presentations involved statistics so people would come to me for help, and it often ended in the disappointing news of a null result. I noticed that it always got published anyway at whichever stage of analysis showed "promise." The day I saw someone P-hack their way to the front page of Nature was the day I decided to quit biology.
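For the non-statisticians, here's a minimal sketch of that dynamic (pure noise, made-up numbers): test enough true-null hypotheses and a few will clear p < 0.05 by chance; report only those and you have a "result."

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    # 50 comparisons where the null is true: both groups
    # are drawn from the same distribution.
    false_positives = 0
    for _ in range(50):
        a = rng.normal(size=30)
        b = rng.normal(size=30)
        _, p = stats.ttest_ind(a, b)
        if p < 0.05:
            false_positives += 1

    print(f"{false_positives} of 50 null comparisons hit p < 0.05")

With fifty null comparisons you expect two or three false positives at the 0.05 threshold, which is plenty if you only ever show the stage of analysis that looked "promising."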
I still feel that my bio work was far more important than anything I've done since, but over here the work is easier, the wages are much better, and fraud isn't table stakes. Frankly in exchange for those things I'm OK with the work being less important (EDIT: that's not a swipe at software engineering or my niche in it, it's a swipe at a system that is bad at incentives).
Oh, and it turns out that software orgs have exactly the same problem, but they know that the solution is to pay for verification work. Science has to move through a few more stages of grief before it accepts this.
I'm mostly out now, but I would love to return to a more accountable academia. Often in these discussions it's hard to say "we need radical changes to publicly funded research and many PIs should be held accountable for dishonest work" without people hearing "I want to get rid of publicly funded research altogether and destroy the careers of a generation of trainees who were in the wrong place at the wrong time".
Even in my immediate circles, I know many industry scientists who do scientific work beyond the level required by their company, fight to publish it in journals, mentor junior colleagues in a very similar manner to a PhD advisor, and would in every way make excellent professors. There would be a stampede if these people were offered a return to a more accountable academia. Even with lower pay, longer hours, and department duties, MORE than enough highly qualified people would rush in.
A hypothetical transition to this world should be tapered. But even at the limit where academia switched overnight, trainees caught in such a transition could be guaranteed their spots in their program, given direct fellowships to make them independent of their advisor's grants, given the option to switch advisor, and have their graduation requirements relaxed if appropriate.
It's easy to hem and haw about the institutional knowledge and ongoing projects that would invariably be lost in such a transition, even if very carefully executed. But we have to consider the ongoing damage being done when, for example, Biogen spends thousands of scientist-years and billions of dollars failing to make an Alzheimer's drug because the work was dishonest to begin with, or when generations of trainees learn that bending the truth is a little more OK each year.
What's amazing to me is that journals don't require researchers to submit their raw data. At least, as far as I know.
The only option for someone who wants to double check research is to completely replicate a study, which is quite a bit more expensive than double checking the researcher's work.
Journals are incentivized to publish fantastic results. Organizing raw data in a way that the uninitiated can understand presents serious friction in getting results out the door.
The organizations who fund the research are (finally) beginning to require it [0][1], and some journals encourage it, but a massive cultural shift is required and there will be growing pains.
You could also try emailing the corresponding authors. Any good-faith scientist should be happy to share what they have, assuming it's well organized/legible.
It's becoming more common for journals to have policies which require that raw data be made available. Here's some background: https://en.wikipedia.org/wiki/FAIR_data
One of the purposes of a site on which I work (https://fairsharing.org) is to assist researchers in finding places where they might upload their data (usually to comply with publishers' requirements).
Replicating the results from someone's original data is difficult and time consuming, and other researchers aren't getting paid to do that (they're getting paid to do new research). And of course the (unpaid) reviewers don't have time either.
Re: the role of (gel) images as the key aspect of a publication. To me this is very understandable, as they convey the information in the most succinct way and also constitute the main data & evidence. Faking this is so bold that it seemed unlikely.
The good news IMO: more recent MolBio methods produce data that can be checked more rigorously than a gel image. A recent example where the evidence in form of DNA sequencing data is contested: https://doi.org/10.1128/mbio.01607-23
I think this statement is either meaningless or incorrect. At the very least your conclusion is context dependent.
That being said, I ran gels back in the stone ages when you didn't just buy a stack of pre-made gels that slotted into a tank.
I had to clean my glass plates, make the polyacrylamide solution, clamp the plates together with office binder clips and make sure that the rubber gasket was water tight. So many times, the gasket seal was poor and my polyacrylamide leaked all over the bench top.
I hated running them. But when they worked, they were remarkably informative.
Count me in the club of failed scientists. In my case it was the geosciences: I would spend hours trying to make all my analysis reproducible and statistically sound, while many colleagues just published preliminary simulation results, getting much more attention and even academic jobs. On the flip side, the time spent improving my data processing workflows led to good engineering jobs, so the time wasn't entirely wasted.
> raise so many... emotions in me... and I now believe [faking gels] is a common occurrence
On the other hand, shysters always project, and this thread is full of cringe vindications about cheating or faking or whatever. As your "emotions" are probably telling you, that kind of generalization does not feel good, when it is pointed at you, so IMO, you can go and bash your colleagues all you want, but odds are the ones who found results did so legitimately.
Regarding "shysters always project": it rings true to me, but given the topic, I'm primed to wonder how you could show that empirically, and if there's any psychology literature to that effect.
Because the amount of pencil-whipped "peer review" feedback I've received could fit in a shoe box, because many "reviewers" are looking for the CV credit for their role and not so much the actual effort of reviewing.
And there's no way to call them out on their laziness except maybe to not submit to their publication again and warn others against it too.
And, to defend their lack of review, all they need to say to the editor anyway is: "I didn't see it that way."
Many solutions involving posting data in repositories or audits are being discussed in the comments.
But given that many people are saying they noticed the problem and quit academia, how about also creating a more direct 'whistleblower' type of system, where complaints (with detailed descriptions of the fraud, or a general view of what one sees in terms of loose practices) go to some research monitoring team, which can then come in and verify the problems?
> how about also creating a more direct 'whistleblower' type of system
There needs to first be a system of checks and balances for this to work. The people at the top already know and condone the behavior; who are the whistleblowers reporting to?
"We represent the top scientists in our field; these are a group of grad students. Who are you going to believe?"
And of course they can easily shut anyone down with two words: "science denier"
Gels tell you quite a lot; it's the question you are asking that matters more to the usefulness of the results than the technique itself. Of course people lie and cheat in science. Wet lab and dry lab. So many dry-lab papers, for example, are out there where code is supposedly available “by request” and we take the figures on faith.
This is why institutions break down in the long run in any civilization. People like you, people of principle, are drowned out by agents acting exclusively in their own interest, without ethics.
It happens everywhere.
The only solution to this is skin in the game. Without skin in the game the fraudsters fraud, the audience just naively goes along with it, and the institution collapses under the weight of lies.
To me, at the time, successful would have been getting a tenure-track position at a Tier 1 university, discovering something important, and never publishing anything that was intentional fraud (I'm OK with making some level of legitimate errors that could need to be retracted).
Of those three, I certainly didn't achieve #1 or #2, but did achieve #3, mainly because I didn't write very much and obsessed over what was sent to the editor. Merely being a non-fraud is only part of success.
(Note: I've changed my definition of success. I now realize that I never really wanted to be a tenured professor at a Tier 1 university, because that role is far less fulfilling than I thought it would be.)
That is not enough to most people. And if it is enough for others, then it is probably because they were fortunate enough to fall back on something better.
> At the time (20+ years ago) it didn't occur to me that anybody would intentionally modify images of gels to promote the results they claimed
Fraud, I suspect, is only the tip of the iceberg; worse still is the delusion that what is taught is factually correct. A large portion of the mainstream knowledge that we call 'science' is incorrect.
While fraudulent claims are relatively easy to detect, claims that are backed up by ignorance/delusion are harder to detect and challenge because often there is collective ignorance.
Quote 1: "Never ascribe to malice that which is adequately explained by incompetence"
Quote 2:"Science is the belief in the ignorance of experts"
Side note: I will not offer to back up my above statements, since these are things that an individual has to learn it on their own, through healthy skepticism, intellectual integrity and inquiry.
Don't hate the player, hate the game. Governments made it so scientists only survive if they show results, and specifically the results they want to see. Otherwise, no more grants and you are done. Whether the results are fake or true does not matter.
"Science" nowadays is mostly BS, while the scientific method (hardly ever used in "science" nowadays) is still gold.
Do hate the player. People are taught ethics for a reason: no set of rules and laws is sufficient to ensure the integrity of the system. We rely on personal integrity. This is why we teach it to our children.
You have agency. Yes, the system provides incentives. However, you are not some pass-through nothingness that just accepts any incentives. You can choose to not accept the incentives. You can leave the system. You're lucky: it's not a totalitarian system. There will be another area of life and work where the incentives align with your personal morals.
Once you bend your spine and kneel to bad incentives - you can never walk completely upright again. You may think and convince yourself that you can stay in the system with bad incentives, play the game, but still somehow you the player remain platonically unaffected. This is a delusion, and at some level you know it too.
Who knows? If everyone left the system with bad incentives, it may be that the bad system even collapses. It's a problem of collective action. The chances are against a collapse; it will probably continue to go on for some time. So don't count on collapse. And even if one were to happen in your time, it will be scorched earth post-collapse for a while. Think as an individual: it's best to leave if you possibly can.
You are clearly deeply disconnected from the actual practice of research.
The best you can really say is that the statistics chops of most researchers are lacking, and that someone researching, say, caterpillars is likely to not really understand the maths behind the tests they're performing. It's not an ideal solution by any means, but universities are starting to hire stats and CS department grads to handle that part.
I'm the furthest thing from a scientist unless you count 3,000 hours of PBS Space Time, but I love science, and so science/academia fraud feels to me like the worst kind of fraud you can commit. Financial fraud can cause suicides and ruin lives, sure, but I feel like academic fraud sets the whole of humanity back? I also feel that through my life I've (maybe wrongly) placed a great deal of respect and trust in scientists, mostly that they understand that their work is of the utmost importance and so the downstream consequences of mucking around are just too grave. Stuff like this seems to bother me more than it rationally should. Are people who commit this type of science fraud just really evil humans? Am I overthinking this? Do scientists go to jail for academic fraud?
Pick up an old engineering book at some point, something from the mid-1800s or early 1900s, and you'll quickly realize that the trust people put in science isn't what it should be. The scientific method works over a long period of time, but blindly trusting a peer-reviewed study that just came out, any study, takes almost as much faith as religion, especially if you're not a high-level researcher in the same field and haven't spent a good amount of time reading the methodology yourself. If you go to the social sciences, the amount of crock that gets published is incredible.
As a quick example, any book about electricity from the early 1900s will include quite serious sections about the positive effects of electromagnetic radiation (or "EM field therapies"), teaching you about different frequencies and modulations for different illnesses and how doctors are applying them. Today these devices are peddled by scammers of the same ilk as the ones that align your chakras with the right stone on your forehead.
Going to need some citations here since the texts that I'm familiar with from that time period are "A Treatise on Electricity and Magnetism" by Maxwell (mid-late 1800s) and "A History of the Theories of Aether and Electricity" by E. T. Whittaker, neither of which mentions anything of the sort. I suspect you are choosing from texts that at the time likely would not have been considered academic or standard.
The default state of the human brain almost seems to be a form of anti-science, blind faith in what you already believe, especially if you stand to gain personally from what you believe being true.
What is most incredible to me is even knowing and believing the above, I fall prey to this all the time.
the best example is psychology. the entire field needs to be scrapped and started over, nothing you read on any of those papers can be trusted, it's just heaping piles of bad research dressed with a thin veil of statistical respectability.
We use EM radiation for illnesses and doctors apply them. It's one of the most important diagnostic and treatment options we have. I think what you're referring to is invalid therapies ("woo" or snake oil or just plain ignorance/greed) but it's hard to distinguish those from legitimate therapies at times.
I think the error is putting trust in scientists as people, instead of putting trust in science as a methodology. The methodology is designed to rely on trusting a process, not trusting individuals, to arrive at the truth.
I guess it also reinforces the supreme importance of reproducibility. Seems like no research result should be taken seriously until at least one other scientist or group of scientists are able to reproduce the result.
And if the work isn't sufficiently defined to the point of being reproducible, it should be considered a garbage study.
There is no way to do any kind of science without putting trust in people. Science is not the universe as it is presented. Science is the human interpretation of observation. People are who carry out and interpret experiments. There is no set of methodology you can adopt that will ever change that. "Reproducibility" is important, but it is not a silver bullet. You cannot run any experiment exactly in the same way ever.
If you have independent measurements you cannot rule out bias from prior results. Look at the error bars here on published values of the electron charge and tell me that methodology or reproducibility shored up the result. https://hsm.stackexchange.com/questions/264/timeline-of-meas...
The way I sum it up is: science is a method, which is not equivalent to the institution of science, and because that institution is run by humans it will contain and perpetrate all the ills of any human group.
This error really went viral during the pandemic and continues to this day. We're in for an Orwellian future if the public does not cultivate some skeptic impulse.
Science is an anarchic enterprise. There is no "one scientific method", and anyone telling you there is has something to sell to you (likely academic careerism). https://en.wikipedia.org/wiki/Against_Method
How does this work for things like COVID vaccines, where waiting for a reproduction study would leave hundreds of thousands dead? Ultimately there needs to be some level of trust in scientific institutions as well. I do think placing higher value on reproducibility studies might help the issue somewhat, but I think there also needs to be a larger culture shift of accountability and a higher purpose than profit.
You're far from a scientist, so it's easy for you to put scientists/academia on a pedestal.
For most of the people who end up in these scandals, this is just the day job that their various choices and random chance led up to. They're just ordinary humans responding to ordinary incentives in light of whatever consequences and risks they may or may not have considered.
Other careers, like teaching, medicine, and engineering have similar problems.
As a scientist, I agree, although for not quite the reason you gave. Scientists are given tremendous freedom and resources by society (public dollars, but also private dollars like at my industry research lab). I think scientists have a corresponding higher duty for honesty.
Jobs at top institutions are worth much more than their nominal salary, as evidenced by how much those people could be making in the private sector. (They are compensated mostly in freedom and intellectual stimulation.) Unambiguously faking data, which is the sort of thing a bad actor might do to get a top job, should be considered at least as bad a moral transgression as stealing hundreds of thousands or perhaps a few million dollars.
(What is the downside? I have never once heard a researcher express feeling threatened or wary of being falsely/unjustly accused of fraud.)
In my view, prosecuting the bad actors alone will not fix science. Science is by its nature a community, because only a small number of people have the expertise (and university positions) to participate. A healthy scientific discipline and a healthy community are the same thing. Just as "tough on crime" initiatives alone often do not help a problematic community, punishing scientific fraud harshly will not by itself fix the problem. Because the community is small, to catch the bad actors you will either have insiders policing themselves or non-expert outsiders rendering judgements. It's easy for well-intentioned policing efforts to turn into power struggles.
This is why I think the most effective way is to empower good actors. Ensure open debate, limit the power of individuals, and prevent over concentration of power in a small group. These efforts are harder to implement than you think because they run against our desire to have scientific superstars and celebrities, but I think they will go a long way towards building a healthy community.
I agree with you, science fraud is terrible. It pollutes and breaks the scientific method. Enormous resources are wasted, not just by the fraudster but also by all the other well meaning scientists who base their work on that.
In my experience no, most fraudsters are not evil people, they just follow the incentives and almost non-existent disincentives.
Being a scientist has become just a job; you find all kinds of people there.
As far as I know no-one goes to jail, worst thing possible (and very rare) is losing the job, most likely just the reputation.
It's complicated. Historically, scientific fraud could be construed as 'well-intentioned': typically a researcher in a cutting-edge field might think they understood how a system worked and, wanting to be first to publish for reasons of career advancement, would cook up data so they could get their paper into print before anyone else.
Indeed, I believe many academic careers were kicked off in this manner. Where it all goes wrong is when other more diligent researchers fail to reproduce said fraudulent research - this is what brought down famous fraudster Jan Hendrik Schön in the field of plastic-based organic electronics, which involved something like 9 papers in Science and Nature. There are good books and documentaries on that one. This will only be getting worse with AI data generation, as most of those frauds were detected by banal data replication, obvious cuts and pastes, etc.
However, when you add a big financial driver, things really go off the rails. A new pharmaceutical brings investors sniffing for a big payout, and cooking data to make the patentable 'discovery' look better than it is is a strong incentive to commit egregious fraud. Bug-eyed greed makes people do foolish things.
People like us think scientists care about big-money things, but they largely don't care about that stuff as much as they care about prestige in their field. Prominent scientists get huge rewards of power and influence, as well as indirect money from leveraging that influence. When you start to think that way, the incentives for fraud become very "minor" and "petty" compared to what you are thinking of.
> Stuff like this seems to bother me more than it rationally should.
It's bothering you a rational amount, actually. These people have done serious damage to lots of lives and humanity in general. Society as a whole has at least as much interest in punishing them as it does for financial fraudsters. They should burn.
> There was a period of time when science was advanced by the aristocrats who were self funded and self motivated.
From a distance the practice of science in early modern and Enlightenment times might look like the disinterested pursuit of knowledge for its own sake. If you read the detailed history of the times you'll see that the reality was much more messy.
Generally, the fields that have a Nobel in them attract the glory hounds and therefore the fraudsters. The ones that don't, like geology or archeology for example, don't get the glory hounds.
Anytime you see champagne bottles up on a professor's top shelf with little tags for Nature publications (or something like that), then you know they are a glory hound.
When you see beer bottles in the trash, then you know they're in it for more than themselves.
It seems like this could ultimately fall under the category of financial fraud, since the allegations are that he may have favorably misrepresented the results of drug trials where he was credited as an inventor of the drug that's now worth hundreds of millions of dollars.
Evil is a much simpler explanation than recognizing that if you were in the same position with the same incentives, you would do the same thing. It's not just one event, it's a whole career of normalizing deviation from your values. Maybe you think you'd have morals that would have stopped you, maybe those same morals would have ensured you were never in a position to PI research like that.
Scientific fraud can also compound really badly because people will try to replicate it, and the easiest results to fake are usually the most expensive...
I also watched almost all episodes of PBS Space Time, some of them multiple times. I'm so happy that Space Time exists and also that Matt was recruited as a host (in place of Gabe). Highly recommended channel, superb content!
It is the same flavor of fraud as financial fraud. It is about personal gain, and avoiding loss.
This kind of fraud happens because scientists are rewarded greatly for coming up with new, publishable, interesting results. They are punished severely for failing to do that.
You could be the department's best professor in terms of teaching, but if you aren't publishing, your job is at risk at many universities.
Scientists in Academia are incentivized to publish papers. If they can take shortcuts, and get away with it, they will. That's the whole problem, that's human nature.
This is why you don't see nearly as many industry scientists coming out with fraudulent papers. If Shell's scientists publish a paper, they aren't rewarded for that; if they come up with some efficient new way to refine oil, they are rewarded, and they also might publish a paper if they feel like it.
A lot of companies reward employees for publications. Mine certainly does. Also an oil company may not be such a great example since they directly and covertly rewarded scientists for publishing papers undermining climate change research.
As a collective endeavor to seek out higher truth, maybe some amount of fraud is necessary to train the immune system of the collective body, so to speak, so that it's more resilient in the long-term. But too much fraud, I agree, could tip into mistrust of the entire system. My fear is that AI further exacerbates this problem, and only AI itself can handle wading through the resulting volume of junk science output.
This is pretty funny. I usually hear this kind of language when a religious person is so devastated when their priest or pastor does something wrong that it causes them to leave their religion altogether. Are you going to do the same thing for scientism?
I'm not a particularly religious person; I didn't realize what you described is something that happens with any great frequency. Nevertheless, I suppose one is able to leave a particular place of worship and not leave a religion, and as with any way people form their views on something societal like this, it's on a spectrum?
This sort of behavior is only going to worsen in the coming decades as academics become more desperate. It's a prisoner's dilemma: if everyone is exaggerating their results you have to as well or you will be fired. It's even more dire for the thousands of visa students.
The situation is similar to the "Market for lemons" in cars: if the market is polluted with lemons (fake papers), you are disincentivized to publish a plum (real results), since no one can tell it's not faked. You are instead incentivized to take a plum straight to industry and not disseminate it at all. Pharma companies are already known to closely guard their most promising data/results.
Similar to the lemon market in cars, I think the only solution is government regulation. In fact, it would be a lot easier than passing lemon laws since most labs already get their funding from the government! Prior retractions should have significant negative impact on grant scores. This would not only incentivize labs, but would also incentivize institutions to hire clean scientists since they have higher grant earning potential.
My recommendation is for journals to place at least equal importance on publishing replications as on the original studies.
Studies that have not been replicated should be published clearly marked as preliminary results. And then other scientists can pick those up and try to replicate them.
And institutions need to give near equal weight to replications as to original research when deciding on promotions. Should be considered every researchers responsibility to contribute to the overall field.
We can solve this at the grant level. Stipulate that for every new paper a group publishes from a grant, that group must also publish a replication of an existing finding. Publication would happen in pairs, so that every novel thing would be matched with a replication.
Replications could be matched with grants: if you receive $100,000 grant, you'd get the $100,000 you need, plus another $100,000 which you could use to publish a replication of a previous $100,000 grant. Researchers can choose which findings they replicate, but with restrictions, e.g. you can't just choose your group's previous thing.
I think if we did this, researchers would naturally be incentivized to publish experiments that are easier to replicate and of course fraud like this would be caught eventually.
I bet we could throw away half of publications tomorrow and see no effect on the actual pace of progress in science.
Replication is over-emphasised. Attempts to organise mass replications have struggled with basic problems like papers making numerous claims (which one do you replicate?), the question of whether you try to replicate the original methodology exactly or whether you try to answer the same question as the original paper (matters in cases where the methodology was bad), many papers making obvious low value findings (e.g. poor children do worse at school) and so on.
But the biggest problem is actually that large swathes of 'scientists' don't do experiments at all. You can't even replicate such papers because they exist purely in the realm of the theoretical. The theory often isn't even properly written down! They will tell you that the paper is just a summary of the real model, which is (at best) found in a giant pile of C or R on some github repo that contains a single commit. Try to replicate their model from the paper, there isn't enough detail to do so. Try to replicate from the code, all you're doing is pointlessly rewriting code that already exists (proves nothing). Try to re-derive their methodology from the original question and if you can't, they'll just reject your paper as illegitimate criticism and say it wasn't a real replication.
Having reviewed quite a lot of scientific papers in the past six years or so, the ones that were really problematic couldn't have been fixed with incentivized replication.
So then, how on earth does this stuff even get published? What exactly is it that we're all doing here?
If a finding either cannot be communicated enough for someone else to replicate it, or cannot be replicated because the method is shoddy, can we even call that science?
At some level I know that what I'm proposing isn't realistic because the majority of science is sloppy. P-hacking, lack of detail, bad writing, bad methods, code that doesn't compile, fraud. But maybe if we tried some version of this, it would cause a course correction. Reviewers, knowing that someone actually would attempt to replicate a paper at some point down the road, would be far more critical of ambiguity and lack of detail.
Papers that are not fit to be replicated in the future, whose claims cannot be tested independently, are actually not science at all. They are worth less than nothing because they take up air in the room, choking out actual progress.
That's correct. Fundamentally, the problem is that foundations and government science budgets don't care. As long as voters or Bill Gates or whoever believe they're funding science and progress, the money flows like water. There's no way to fix it short of voting in a government that totally defunds the science budget. Until then, everyone benefits from unscientific behaviour.
The amazing thing is that it all works out in the end and science is still making (quite a lot of) progress.
That's also the reason why we shouldn't spend all of our time and money checking and replicating things just to make sure no one publishes fraudulent/shoddy results. (We should probably spend a little more time and money on that, but not as much more as some people here seem to suggest.)
Most research is in retrospect useless nonsense. It's just impossible to tell in advance. There is no point in checking and replicating all of it. Results that are useful or important will be checked and replicated eventually. If they turn out to be wrong (which is still quite rare), a lot of effort is wasted. However, again, that's rare.
If the fraud/quality issues get worse (different from "featuring more frequently and prominently in the news"), eventually additional checks start to make sense and be worth it overall. I think quite a lot of progress is happening here already, with open data, code, pre-registration of studies, better statistical methods, etc, becoming more common.
I think a major issue is the idea that "papers are the incontestable scientific truth". Some people seem to think that's the goal, or that it used to be the case and fraud is changing that now. But this was never the case, and it's not at all the point of publishing research. A major gain would be to separate, in the public perception, the concepts, understanding, and reputations of science vs. scientific publishing.
This stuff happens in Computer Science too. Back around 2018 or so I was working on a problem that required graph matching (a relaxed/fuzzy version of the graph isomorphism problem) and was trying algorithms from many different papers.
Many of the algorithms I tried to implement didn't work at all, despite considerable effort to get them to behave. In one particularly egregious (and highly cited) example, the algorithm in the paper differed from the provided code on GitHub. I emailed the authors trying to figure out what was going wrong, and they tried to solicit funding from me in exchange for support.
My manager wanted me to write a literature review paper skewering all of these bad papers, but I refused since I thought it would hurt my career. Ironically, the algorithm that ended up working best was from one of the more obscure papers, with few citations.
You should be able to build an entire career out of replications: hired at the best universities, published in the top journals, social prestige and respect. To the point where every novel study is replicated and published at least once. Until we get to that point, there will be far fewer replications than needed for a healthy scientific system.
Replications are not very scientifically useful. If there were flaws in the design of the original experiment, replicating the experiment will also replicate the flaws.
What we should aim for is confirmation: a different experiment that tests the underlying phenomenon that was the subject of the first paper.
I'd be careful about that. Faking replications is even easier than faking research, so if you place a lot of importance on them, expect the rate of fraud in replication studies to explode.
The problem with putting the onus on the journals is that there is no incentive for them to reward replications. Journals don't make money on replicated results. Customers don't buy the replication paper; they just read the abstract to see if it worked or not.
I do like the idea of institutions giving tenure to people with results that have stood the test of time, but again, there is no incentive to do so. Institutions want superstar faculty, they care less about whether the results are true.
The only real incentive that I think can be targeted is still grant money, but I would love to be proved wrong.
> And then other scientists can pick those up and try to replicate them.
Unless there are grants specifically for that purpose, it's not going to happen; and it's hard to apply for a grant just to replicate someone else's results verbatim. (Usually you're testing the theory but with a different experiment and set of data, which is much more interesting than simply repeating what they did with their data; in fact, replicating with a different dataset is important in order to see whether the results were cherry-picked to fit the original one.)
I think it’s a great idea. It would also give the army of PhDs an endless stream of real, tangible work and a way to quickly make a name for themselves by disproving results.
It seems surprisingly hard to counter scientific fraud via a system change. The incentives are messed up all the way around.
If the senior author is your advisor and you feel one of their juniors is cutting corners, or the senior author is cutting corners themselves, you had better think twice about which move will help your career. If confirming a recent result counts toward tenure, then presto: you have an incentive for fraudulent replication (what's the chance it's incorrect anyway? The original author is a big shot). Going against a previous acclaimed result takes guts, especially in a small field where it might kill your career if YOU got it wrong somehow, so you need much stronger results than the original research, and good luck with that. We might say "this is perfect work for aspiring student researchers, and done all the time" (reimplementing some legendary science experiment), but no, not when it's a leading-edge, poorly understood experiment, and not when that same grad student is already racing to produce original research themselves.
The big funders might dedicate money to replicating research that everybody is enthusiastic about (before everyone relies on it). But some research takes years to run. Other research is at the edge of what's possible. Other research is led by a big shot nobody dares to take on. Etc etc. So where is the incentive then? The incentive might be to take the money, fully intending to return an inconclusive result.
Some research is taken on now, but only AFTER it's relied on by lots of people. Or much later, when better ideas have had time to emerge on how to test the idea more cleverly, i.e. cheaper and faster. And that's not great, because of all the effort others have wasted building on a fraudulent result, and all the mindshare the bad result now has.
While Akerlof's Market for Lemons did consider cases where government intervention is necessary to preserve a market, like with health insurance markets (Medicare), he describes the "market for lemons" in the used car market as having been solved by warranties.
If someone brings a plum to a market for lemons, they can distinguish the quality of their product by offering a warranty on its purchase, something that sellers of lemons would be unwilling to do, because they want to pass the cost burden of the lemon onto the purchaser.
The full paper is fairly accessible, and worth a read.
Not sure how this could be applied to academia, one of the problems is that there can be significant gaps between perpetrating fraud and having it discovered, so the violators might still have an incentive to cheat.
> if everyone is exaggerating their results you have to as well or you will be fired.
Is this really the case, though? Isn't the whole point of tenure (or a big selling point, at least) insulating academics from capricious firings?
The big question I have is that there are names on these fraudulent papers, so why are these people still employed? If you generate fictitious data to get published, you should lose any research or teaching job you have, and have to work at McDonald's or a warehouse for the rest of your life. There are plenty of people who want to be professors that we can eliminate the ones who will lie while doing it without losing much (perhaps anything). If your job was funded by taxpayer funds there should be criminal charges associated with willfully and knowingly fabricating data, results, or methods. At that point you're literally lying in order to steal taxpayer funds, it's no different than a city manager embezzling or grabbing a stack of $20 bills out of the cash register.
I wonder if there are any studies on whether fraud increased after the Bayh-Dole Act. There's certainly fraud for prestige, that's pretty expected. But mixing in financial benefits increases the reward and brings administrators into play.
The incentive structures in science have been relatively stable since I entered the field in 1980 (neuroscience, developmental biology, genetics). The quality and quantity of science is extraordinary, but peer review is worse than bad. There are almost no incentives to review the work of your colleagues properly. It does not pay the bills, and you can make enemies easily.
But there was no golden era of science to look back on. It has always been a wonderful, productive mess—much like the rest of life. At least it moves forward—and now exceedingly rapidly.
Almost unbelievably, there are far worse crimes than fraud that we completely ignore.
There are crimes associated with social convention in science, of the type discussed by Karl Herrup with respect to 20 years of misguided focus on APP and abeta fragments in Alzheimer’s disease.
This could be called the “misdemeanors of scientific social inertia”. Or the “old boys network”.
There is also an invisible but insidious crime of data evaporation. Almost no funders will fund data preservation. Even genomics struggles but is way ahead in biomedical research. Neuroscience is pathetic in this regard (and I chaired the Society for Neuroscience’s Neuroinformatics Committee).
I have a talk on this socio-political crime of data evaporation.
It could also have a chilling effect on a lot of breakthrough research. If people are no longer willing to put out what they mostly think is right, it might set back progress by decades as well.
BS governmental desperation to show any "result" (even if it is fake) is what brought us here, as scientists have to show ever more fake results to get more grants.
Removing the government from science could help, not the other way around.
People just went through the last five years and will go to their graves defending what they saw first-hand. To admit that maybe those moves and omissions weren’t helpful would be to admit their ideology was wrong. And that cannot be.
If I have learned anything over 40 years, it is that the number of people who actually live by the hypothesis-testing, data-collection, evidence-evaluation framework required to have scientific confidence in future action, or even claims, is effectively zero.
That includes people who consider themselves professional scientists, PhDs, authors, leaders, etc.
The only people I know who live “scientifically” consistently are people considered “neurodivergent”, along the autism-adhd-odd spectrum, which forces them into creating the type of mechanisms that are actually scientific and as required by their conditions.
Nevertheless, we should expect better from people; on average, we need to do better at aligning how they think with science, which, when robustly demonstrated, predicts how the world works with staggering reliability compared to all other methods of understanding the universe.
The fact that the people carrying the torch of science don’t live up to the standard is expected - hence peer review.
This is an indictment of the incentives, and the pace at which bad science is revealed (as in this case) is always too slow, but science is the one place where eventually you’re either going to get exposed as a fraud or never be followed in the first place.
There’s no other philosophy that has a higher bar of having to conform with all versions of reality forever.
I would just like to point out the irony of claiming that people live in a way inconsistent with scientific rigour, based solely on personal experience.
I think you’re suggesting that I’m making a conclusion without sufficient evidence - hence the “irony”
Recall I’m discussing how people live, namely that they don’t live based on their own claims as to how to live. You’d have to evaluate my behaviors to derive if my claim is ironic.
However, I’m happy to provide that epistemological chain if requested.
The reason many people hate children is because children are not satisfied with the level of epistemology that most people can provide them, and have no compunction in saying “that answer is unsatisfactory”
Hence why institutional pedagogy is so often rote and has nothing to do with understanding - when we know the science of learning says that every human craves understanding (Montessori, Piaget, etc…)
In fact, the shortest way to break the majority of people’s brains is to ask them one of the following questions:
- Can you Explain the reasoning behind your behavior?
- How would you test your hypothesis?
- What led you to the conclusion you just stated?
- Can you clarify the assumptions embedded in your claim?
- Have you evaluated the alternatives to your position?
It's a dilemma- do you want to be virtuous or do you want to maximize your money? I get a sense around here that only the law matters (morals be damned) and we do whatever work pays best.
That feels extreme. Zero is a cold, dark, lonely number. Maybe it’s correct—I don't know. I've worked on only a couple of projects in this space, and while the incentives certainly involved publishing, I don't feel that it equated to abandoning the SciMethod. Instead, it was the cost to pay for the ability to continue doing science.
Can you really stand by ZERO? How about 1%? Meet me somewhere above zero, or, if you’d be so kind, make a compelling case for why we're truly at rock bottom.
The American version of the cultural Revolution is about to begin, and everybody recognizes that the labor class is coming
So everybody’s trying to align themselves with a victimized group as closely to reality as possible
To such an extent where people are actively making up victimization reasons such that they can find themselves in an affinity group with other victims so they are safe from prosecution during the troubles
In spite of having a full commit log (with GitHub verified commits!!!) of both the code AND the paper, both arxiv and the journal didn't seem to care or bother at all.
Anyhow, I highly recommend reading the For Better Science blog. It's incredible how rampant fraud truly is. This applies to multiple Nobel Prize winners as well. It's nuts.
Can you speak more to the “not caring at all” bit? I believe you, but how did you engage them? Did you end up publishing your work eventually?
forbetterscience seems like a good idea, but the writing style, the images, and even the about page gave me pause about whether this is a reliable site for trustworthy science commentary.
After almost a year, with an insurmountable amount of open-source evidence, and the thief having had every single paper he has ever written retracted for fraud(!!), the best the journal did was to add a notice: https://www.mdpi.com/2674-113X/2/3/20
Arxiv cared even less. They allowed the thief to DMCA strike me multiple times. He even managed to take down the real version of the paper by claiming that it was his: https://arxiv.org/abs/2308.04214
> Did you end up publishing your work eventually
No. When I tried to do so, I was actually rejected from a conference because their plagiarism-detection system flagged that I was trying to publish something that had already been published (what was stolen).
From the article, it seems the engagement came in the form of DMCA take-down requests from university lawyers... which the publication then largely ignored for a considerable period of time (possibly due to counter-DMCA).
In an unrelated scientific field, EE, I recently witnessed how the DMCA process could be used by an "engineer" to silence criticism of his hybrid vehicle battery "upgrades" [2] — similar to Australian company DCS's snafu/lawsuit [1].
Just disgusting, these vultures that know [how to steal/lie/obfuscate] just well-enough to be dangerous... including how to manipulate our DMCA system to their dishonest advantage.
Huh. Sounds like the research needs to be forked to several different hosting providers, preferably ones not based in the US with its insane DMCA laws.
As a scientist who has published in the neuroscience space, I don't know what to say other than that the incentives in academia are all messed up. Back in the late 90s, NIH made a big push on "translational research"; that is, researchers were strongly encouraged to demonstrate that their research had immediate, real-world benefits or applications. Basic research, and the careful, plodding research needed to nail down and really answer a narrow question, was discouraged as academic navel-gazing.
On one hand, it seems the push for immediate real world relevance is a good thing. We fund research in order that society will benefit, correct? On the other hand, since publications and ultimately funding decisions are based on demonstrating real world relevance, it’s little surprise scientists are now highly incentivized to hype their research, p-hack their results, or in rare cases, commit outright fraud in an attempt to demonstrate this relevance.
Doing research that has immediate translational benefits is a tall order. As a scientist you might accomplish this feat a few times in your career if you’re lucky. The rest of the corpus of your work should consist of the careful, mundane research the actual translational research will be based upon. Unfortunately it’s hard to get that foundational, basic, research published and funded nowadays, hence the messed-up incentives.
There's evidence that the turning point was in the 90s but I suspect the real underlying problem is indirect funds as a revenue stream for universities, combined with the imposition of a for-profit business model expectation from politicians at the state and other levels. The expectation changed from "we fund universities to teach and do research" to "universities should generate their own income", which isn't really possible with research, so federal funding filled the gap. This led to the indirect fund firehose of cash, pyramid scheme labs, and so forth and so on. It sort of became a feedback loop, and now we are where we are today.
Translational research is probably part of it but I think it's part of a broader hype and fad machine tied to medicine, which has its own problems related to rent-seeking, regulatory capture, and monopolies, among other things. It's one giant behemoth of corruption fed by systemic malstructurings, like a biomedical-academic complex of problematic intertwined feedback loops.
I say this as someone whose entire career has very much been part of all of it at some level.
Good points, thanks. As I’m sure you’re aware, the indirect rates at some universities are above 90%. That is, for every dollar that directly supports the research, almost another dollar goes to the university for overhead. Much of this overhead is legitimate: facilities and equipment expenses, safety training, etc… but I suspect a decent portion of it goes to administrative bloat, just as much as the education-only part of the university has greatly increased administrative bloat over the last 30-40 years.
Another commentator made a separate point about how professors don’t always get paid a lot, but they make it up in reputation. Ego is a huge motivator for many people, especially academics in my observation. Hubris plays no small part in the hype machine surrounding too many labs.
The Retraction Watch website does a good job of reporting on various cases of retractions and scientific misconduct [1].
Like many others, I hope that a greater focus on reproducibility in academic journals and conferences will help reduce the spread of scientific misconduct and inaccuracy.
> "UCSD neuroscientist Edward Rockenstein, who worked under Masliah for years, co-authored 91 papers that contain questioned images, including 11 as first author. He died in 2022 at age 57."
They say nothing else about this. But looking at Rockenstein's obituary, indications are that it was suicide. (It was apparently sudden, at quite a young age, and there are many commenters on his memorial page "hoping that his soul finds peace," and expressing similar sentiments.)
I shared this article with an MD/PhD friend who has done research at two of the three most famous science universities in America ... and she said "this [not this guy, this phenomenon] is why I left science."
Maybe it's like elite running - everyone who stays competitive above a certain level is cheating, and if you want to enjoy watching the sport, you just learn to look the other way. Except that the stakes for humanity are much higher in science than in sport.
Blatant fraud is rare in physics, engineering, chemistry. Lying is rare. Quality is high at the highest institutions of physics and chemistry. Exaggerated claims occur, but much less than in day to day life. Top visibility work is quickly reproduced. Reproduction is the essence of science.
Did you google "fraud in physics" or "fraud in chemistry"? (I just did.)
> Exaggerated claims occur, but much less than in day to day life.
"Day-to-day life" does not lay the foundation for millions of dollars in followup research, or set the direction of a grad student's research, i.e. their career.
> It seems like a strange thing to take someone with a long and respected career and subject them to what would essentially be a Western blot and photomicrograph audit before offering them a big position.
This is absolutely something that we should routinely be doing, though.
It's pretty similar to the level of distrust in the software engineering job interview process.
Pick your poison, to some extent. Better would be to not have to do it after-the-fact, but to vet better at every intermediate step, but it's hard. Just a very difficult people problem.
Agreed. There's uproar over coding interviews, which makes no sense to me. We give easy-peasy code reviews to smoke-check claimed skills. 4 of 5 candidates do absolutely terribly on very easy stuff, relative to their claimed skillset. No, our bar isn't high; fraud (resume fraud) is sadly very real.
Maybe we need instruments that sign results cryptographically and use a blockchain mechanism to establish provenance. We should have cameras that can establish that published images have not been modified (or at least provide raw and adjusted pairs--digital radiology has the concept of a "presentation state" that I think could work).
In theory at least research should be auditable to a lab notebook. The problem with photos and such is you can't tell if it was modified before it was pasted to the page and large datasets just can't be put in to paper. And electronic notebooks I've used tend to be even more annoying than paper (too rigidly formatted and not adaptive to workflow optimization but it's difficult to explain).
Anyway, those sorts of provenance mechanisms should also protect against deep-learning fakery. You may be able to create fake data, but can you deep-fake data that was signed by Nikon device #XYZ, with cryptographically confirmed hashes published to a blockchain three years ago, at the time the data was generated?
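To make the idea concrete, here's a minimal sketch of what instrument-side signing could look like, assuming an Ed25519 key embedded in the device. The key handling and record format here are hypothetical; a real design would keep the key in a tamper-resistant secure element and anchor the record to an external timestamping service:

    import hashlib
    import time

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Hypothetical device key; in a real instrument this would live in a
    # secure element, not in application code.
    device_key = Ed25519PrivateKey.generate()

    def sign_capture(image_bytes: bytes) -> dict:
        """Hash the raw capture, bind the hash to a timestamp, sign both."""
        digest = hashlib.sha256(image_bytes).hexdigest()
        record = f"{digest}|{int(time.time())}".encode()
        return {"record": record, "signature": device_key.sign(record)}

    # Years later, anyone holding the device's public key can check that
    # this exact image existed, unmodified, at that time (verify() raises
    # InvalidSignature on any tampering):
    capture = sign_capture(b"...raw sensor bytes...")
    device_key.public_key().verify(capture["signature"], capture["record"])

The raw/adjusted pairing then falls out naturally: sign the raw capture at acquisition, and publish the adjusted figure alongside the signed original.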
Yeah, it sounds a little bit absurd to me. It's just basic due diligence. You don't not run a background check on a potential employee just because their resume looks good and they got a reference. In those cases you still go, "Annoying that we have to wait, because we want this person on board NOW, and it's a fairly shallow investigation that 99% of the time doesn't reveal anything even if there is something, but it's the standard procedure."
I'm not a researcher or academic, but when I think of roughly how long it takes me to do meaningful deep work and produce a project of any significance, I'm struck that his 800 papers aren't a red flag. Even if you allocate ~3 months per paper, that's over 200 years of work. Is it common for academics to produce research papers in a matter of days?
From the article:
Masliah appeared an ideal selection. The physician and neuropathologist conducted research at the University of California San Diego (UCSD) for decades, and his drive, curiosity, and productivity propelled him into the top ranks of scholars on Alzheimer’s and Parkinson’s disease. His roughly 800 research papers, many on how those conditions damage synapses, the junctions between neurons, have made him one of the most cited scientists in his field.
It's kind of like when reporters say a CEO built [insert ridiculously complex product here], e.g. ascribing the success of OpenAI to Sam Altman, or Apple to Steve Jobs. Sure, they were important in setting the direction and allocating resources, but they didn't actually do the work.
Similarly, the heads of famous science labs have lots of talented scientists who want to work with them. The involvement of a lab director varies wildly, but for the hyper productive, famous ones, it's largely the director curating great people, providing scientific advice, and setting a general research direction. The lab director gets named on all these papers that get generated from this process.
So 800 papers isn't necessarily a red flag if the director is great at fundraising and has lots of graduate students/post docs doing the heavy lifting.
More than likely many of those authorships were "honorary", that is Masliah "lent" his (once-famous) name to help others publish their own work. He likely provided little actual contribution to many of these papers.
As such one would normally only give an author "full" credit (and responsibility) if they appear as either first or last in the list of authors. In the biosciences these are the positions indicating substantial contributions to the published work.
His co-authors are now going to be very annoyed as association with this "honorary" author will now cast doubt upon their own work.
Over 20 years, that’s 40 per year on average. He’s emeritus from UCSD and I don’t see his old lab page online, not sure how big it was. But my PI’s lab had 13 last year and has 11 people. If Masliah had around 33 people that would be a pretty normal papers per capita.
Most neuroscience papers of the type Masliah published are the result of at least 2 person-years of hands-on work (and up to 10 or 15 person-years for large papers).
800 papers over 25 years would therefore need a minimum staff of 64 full time researchers for the entirety of those 25 years. Masliah didn't have this.
For most papers on which Masliah is an author, the majority of the work will have been performed in other labs, with Masliah and those under him contributing to a greater or lesser extent. Such collaborative work is not a bad thing (assuming everyone is honest).
Web.archive has a shot of his now-deleted lab page:
Among other things my physics career taught me: anyone who is listed as an author on more than 200 papers is almost definitely a plagiarist, in the sense of a manager who adds his or her name to the papers of the underlings in his or her lab. When I was still bothering to go to conferences I would sometimes have fun with them (the male variety is easy to spot: look for the necktie) by asking detailed questions about the methodology of the research. They never have any idea how the work was actually done.
> Founder, CEO, and chief engineer of SpaceX.
> CEO and product architect of Tesla, Inc.
> Owner, CTO and Executive Chairman of X (formerly Twitter).
> Founder of The Boring Company, X Corp., and xAI.
> Co-founder of Neuralink, OpenAI, Zip2, and X.com (part of PayPal)
Depends on your definition of fraud. Musk is obviously not chief engineer of SpaceX while actively working at Twitter, Tesla, and Neuralink. The founding claims aren’t that unbelievable though, founding 10 companies in 30 years isn’t that hard.
I would call it heavy exaggeration.
Yet he is able to answer most deep technical questions related to his technologies right on the spot. And his answers are well thought out, concise, and factual, i.e. not the handwavy crap you'd expect from a CEO of his scale.
This also seems like a problematic practice. Perhaps we should start expecting a shorter author list, and have separate credit for advisers, small contributions, overseeing, etc.
The amazing part about this to me is that the only reason the authors were caught is image manipulation. The fraud in numbers and text? Not so easy to uncover.
Many journals now require all versions of a gel image that is used in a figure. So, you’d have to fake the full image that is cropped down to the lanes used in the figure. I think there aren’t as many of those raw images around to train AI on… yet.
I predict it will get even worse than that. In the next couple of decades, I expect any document or work that has a substantial reward associated with it (financially, in terms of career advancement, or as a grade for critical coursework in one's major) or penalty (such as indictment or conviction) to be backed by a time-stamped stack of developing documentation, drafts, and revisions, with the time stamps validated against a trusted custodial clock and a seed random string marking the start of work, recorded in some immutable public form.
Accompanying the finished document will be a hash of all of these works along with their associated timestamps, the originals of which can be verified if necessary to prove a custodial chain of development over a plausible period of time and across multiple iterations of the work: a kind of signed time-lapse slideshow of its genesis from blank page to finished product, as if it had a mandatory and global "track changes" flag enabled from the very beginning, by which the entire process can be proved an original human-collaborated work and not an insta-generated AI fiction.
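The mechanics of this are simple enough that a toy version fits in a few lines. A sketch, assuming each draft is hashed and chained to its predecessor and to a seed string (the entry format and timestamp source here are invented for illustration; a real system would use a trusted clock and an immutable log):

    import hashlib
    import json
    import time

    def add_draft(chain: list, draft_bytes: bytes) -> None:
        """Append a draft, binding it to the previous entry and a timestamp."""
        entry = {
            "draft_sha256": hashlib.sha256(draft_bytes).hexdigest(),
            "prev": chain[-1]["entry_hash"] if chain else "seed-random-string",
            "timestamp": int(time.time()),  # would come from a custodial clock
        }
        entry["entry_hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        chain.append(entry)

    chain = []
    for draft in [b"outline", b"first draft", b"revision", b"final"]:
        add_draft(chain, draft)
    # Publishing chain[-1]["entry_hash"] commits to the entire history:
    # changing any draft or timestamp changes every hash after it.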
I actually thought that digital timestamps would have been a great use case for blockchains. They are publicly available and auditable. If you're working from hashes, you don't necessarily need to make the raw data public, just the hash. It's a use case with intrinsic value to both the data generator and the future auditor (so you could charge something for it). I know there was some work done on this, but I think it lost momentum due to the focus on generating crypto as a value-storage medium.
The gold bugs really set back that entire field: the quasi-religious pursuit of “trustless” designs made everything more expensive, but so many problems are far more tractable with trusted third parties both for cost and reduced attack potential because institutional/professional reputations are harder to build than getting n% consensus on a cryptocurrency and don’t have the built-in bug bounty problem.
For example, imagine if university libraries ran storage systems based on Merkle trees with PKI signatures, and researchers used those for their papers, code, and data inventory (maybe not petabytes of data, but the hashes of that data). If there were allegations of misconduct, you'd be able to see the whole history establishing when things were changed and by whom. Someone couldn't fudge the data without multiple compromised/complicit people in a completely different department (a senior figure can pressure a grad student in their field, but they have far less leverage over staff at the library). And since you're not sharing a database with the entire world, you have a much easier time scaling, with periodic cross-checks (e.g. MIT and Caltech could cross-sign each other's indexes periodically, so you could have confidence that nobody had altered the inventory without storing the actual collection).
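For anyone unfamiliar, the Merkle-tree part is the easy bit. A minimal root computation over a set of document hashes might look like the sketch below (file names hypothetical, not a production design); cross-signing then just means each library signing the other's 32-byte root:

    import hashlib

    def merkle_root(leaves: list[bytes]) -> bytes:
        """Fold a non-empty list of documents up to a single root digest."""
        level = [hashlib.sha256(leaf).digest() for leaf in leaves]
        while len(level) > 1:
            if len(level) % 2:              # odd count: duplicate last node
                level.append(level[-1])
            level = [
                hashlib.sha256(level[i] + level[i + 1]).digest()
                for i in range(0, len(level), 2)
            ]
        return level[0]

    root = merkle_root([b"paper.pdf", b"analysis.R", b"data_manifest.txt"])
    # Altering any leaf changes `root`, which breaks every signature
    # (the library's and the cross-signing peer's) made over it.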
Sounds complicated. You could just demand the lab log books. They are supposed to be dated and countersigned. Standard practice is to countersign outside your group.
The YC company that wanted to sell fake survey results (yes, they really had a Launch HN with that idea) will surely be the first to sell fake science results next. YC: disrupting science.
Eventually AI will also be able to reliably audit papers and report on fraud.
There may be newer AI methods of fraud, but they will only buy you time. As both progress, a fraud generated by today's technology and committed to the record will almost certainly be detected by a later technology.
I would guess that we're within 10 years of being able to automatically audit the majority of papers currently published. That thought must give the authors of fraudulent papers the heebie-jeebies.
The problem is that detecting fraud is fundamentally harder than generating plausible fraud. This is because ultimately a very good fraud producer can simply produce output that is identically distributed to non-fraud.
For the same reason, tools that try to detect AI-generated text are ultimately going to lose the arms race.
It's not a race though. Once the fraud is committed to record it can no longer advance in sophistication. Mechanisms for detection will continue to advance.
I think the argument is that if you produce your fraud from an appropriate probability distribution, any "detection" method other than independently verifying the results is snake oil.
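A toy illustration of why this is so (the distributions, sample sizes, and the choice of scipy's ks_2samp are all just for demonstration): if fabricated numbers are drawn from the same distribution as honest measurements, a standard two-sample test has, by construction, nothing to find.

    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)

    real = rng.normal(loc=5.0, scale=1.2, size=200)  # honest measurements
    fake = rng.normal(loc=5.0, scale=1.2, size=200)  # fabricated to match

    stat, p = ks_2samp(real, fake)
    print(f"KS statistic = {stat:.3f}, p = {p:.3f}")  # large p: no signal

Any statistical screen can only catch fraudsters who fake data carelessly; the careful ones are invisible to it, which is the parent's point.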
Is there no liability for the author? There are billions of dollars wasted in drug trials and research that can be tied to this fraud. Surely they can face some legal issues due to this?
Not only are there billions of dollars wasted, there are many, many lives wasted. If the billions had gone in a direction that was actually promising, maybe there would be treatments that would have saved millions of person-years of quality lifetime. This person is basically a mass-murderer.
Like all things in life that have risks of fraud, negligence or potential failure, insurance could be the answer.
Want to publish in a peer reviewed paper? Well then your institution or you should take out a bond or insurance policy that guarantees your work is accurate. The insurance amount would fluctuate based on how big of impact this study could have. Is it a drug that will be consumed by millions? Big insurance policy. Is it a behavioral study without much risk... small insurance policy.
Is a person at an institution caught committing fraud? Well, then all papers from that institution now have higher premiums.
Did you sign off on a peer-reviewed paper that turned out to be fraudulent? Well, now your premiums are going up also.
Insurance costs too high to publish? Well then keep doing research until the underwriters are satisfied that your work isn't fraud and adjust the premiums down.
It adds a direct near-term economic incentive to publish honestly and punishes those that abuse the system.
In other words, you are suggesting more stringent peer review conducted by insurance companies. And because insurance companies are too small to have sufficient in-house expertise on every topic, the reviews will be usually done by external consultants. The costs might be from $10k for simple papers to hundreds of thousands for large complex papers.
The insurance model does not really work when the cost of evaluating the risks far outweighs the expected risks.
That is like saying my insurance company has to follow me around for a week while I drive before they can underwrite a policy. If there is money to be made, and money to be lost, the actuaries will find a way.
The problem could be that it may become impossible to publish certain kinds of papers that are very well supported and valuable, because no institution can afford the insurance.
You are not the first person in the world to own a home or drive a car. Insurance companies can offer you cost-effective insurance, because you are doing effectively the same things as many other people.
Science is largely about doing novel things and often being the first person in the world to try something. In order to understand the risks, you have to understand the actual research, as well as the personalities and personal lives of the people doing it.
Then there is the question of perverse incentives. Research fraud is not a random event but an intentional action by the people who take the insurance. If they manage to convince you to underwrite their research, they know that the consequences of getting caught will be less severe than without the insurance, making fraud more likely. Normally intentional fraud would not be covered by the policy, but here covering it would be the explicit purpose of the insurance.
Insurance companies insure one-off events all the time. You can literally insure anything; it's just a matter of whether the premiums outweigh what you perceive as the risk. "Uninsurable" just means the price is too high to be considered practical.
The research might be novel, but the procedures for research and publication are very similar. So insurance companies would just make sure that you followed a protocol which minimizes their risk.
Perverse incentives are taken into account by insurance. Insuring someone is always an adversarial back-and-forth to determine whether they are being truthful. That's why life insurance companies require a physical. They don't just have you self-report and then accept it as fact.
Industry professionals like lawyers and doctors carry malpractice insurance. A lawyer can still commit fraud. Insurance isn't a black and white thing. It is a sliding scale that ties risk to a monetary value.
It's not rocket science. Just actuarial science. ;)
> The research might be novel, but the procedures for research and publication are very similar.
This is wrong.
Some time ago, I completed the checklists for publishing a paper in a somewhat prestigious multidisciplinary journal. Large parts of the lists were about complying with various best practices and formal requirements in different fields. I often didn't even understand the questions outside my field. And the questions nominally within my field were often category errors. They assumed a mode of doing research that was far from universal. Overall, the process was more frustrating than (let's say) applying for a US visa.
I think you are desperately trying to force this into black and white rather than recognizing that there is a spectrum of research: some of it is similar to other work and can easily have procedures for insuring, and some is more complex and requires more diligence from the insurance company. Just like nearly every single thing an insurance company does.
Yes, there is novel research that has never been done before. So what? That doesn't change whether you can get insurance or not. That's a failed argument from the beginning.
Anyway, you don't seem to be having a discussion in earnest; instead, you seem to be intentionally disregarding large pieces of the above arguments and trying to shoehorn in your idea that if unique research is being done, it's impossible to tell the risk of anything. Kinda silly.
The cases that would require more diligence from the insurance company are the kind of research that should be encouraged. Breakthroughs are more likely to happen when people take risks and try something fundamentally new, instead of adhering to the established forms. Your insurance model would discourage such research by making it more expensive.
Additionally, even if we assume that the insurance model is a good idea, it should be tied to individual researchers, not universities. The entire model of university research is based on loose networks of independent professionals nominally employed by various organizations. Universities don't do research, they don't own or control the projects, and they don't have the expertise to evaluate research. They are just teaching / administrative organizations that provide services in exchange for grant overheads.
> that it may become impossible to publish certain kinds of papers that are very well supported and valuable because no institution can afford the insurance.
What type of research would that be? Just publish it online without insurance and everyone will treat it as unverified and uninsured... separate from other research, that is.
Once the risk of the publishing research has gone down (i.e. reputable peers approve, or the findings were replicated), the cost of the insurance goes down also.
If something is so costly to insure, there would be a reason, and thus the system works.
If it is possible to advance your career by publishing uninsured research then we've just renamed the problem, although I do like the idea of adding this structure. Eventually there could be so much of it that it would become an accepted norm that your research isn't actually published in a journal until five years after you informally publish it. Other scientists in the field have to be abreast of the latest findings, so now these informal publications are the true journals.
I see your point, the success of this would have to align with a change in the broader academia to only cite research from insured researchers.
The "organic" way this would happen is if there was a shift so that journals with insured research are far more valuable than uninsured research. Or perhaps if companies started suing researchers for negligence and fraud and recuperate costs if they used research that was later proved to be fraud.
In the literary world, anyone can publish a book, but a book from O'Reilly carries with it a different level of authority and diligence than a self-published book or blog post.
So the shift would have to be that your career can't advance without publishing a bonded and insured paper.
But that is not how research works in academia. Researchers have to follow the bleeding edge of the field, or their own work may already be irrelevant. They will not wait until a consortium of insurance companies and underwriters has done the actuarial analysis and come up with an underwriting product that the institution has funded (and what is the institution's business model for recovering this cost in a field of pure research, anyway?).
> you are suggesting more stringent peer review conducted by insurance companies
Absolutely not. Underwriters are smart. They use other variables and methods for determining risk. They don't need to directly recreate and peer review the research themselves.
I was thinking about it: If I come across someone seriously injured, try to help them, and accidentally hurt them, I'm protected (in many places) by Good Samaritan laws.
But if a health care professional does the same thing, and does something negligent, then they are usually liable. They are professionals and are held to a different standard. Similarly, that's why lawyers keep writing: this is not legal advice and you are not my client.
Perhaps a professional in science should have higher standards. Obviously they shouldn't be sued for being wrong - that would destroy science, disregard the scientific method's means to address inaccuracy, and go against science's nature as the means to develop new knowledge. But intentionally deceiving people perhaps should be illegal and/or create liability: When you publish something, people depend on its fundamental honesty and will act on it.
The US has the Office for Research Integrity which can prosecute scientific fraud cases, but it only does a handful of cases per year.
To put the scale of this problem in perspective, the ORI was set up in the 1970s after Congress became concerned at widespread reports of scientific fraud. It clearly didn't work, but hangs around regardless.
It's ultimately a culture problem. Until academics are afforded no more deference than ordinary corporate employees, you're going to get judges and juries who let them off scot-free.
The line between outright fraud, bad methods correctly implemented, messy data, and implementation bugs is fuzzy. Trying to criminalize anything not very very clearly #1 quickly turns into a case of “show me the man and I’ll show you the crime”. You think groupthink in academia is bad just wait until professional disputes lead to jail time for the loser.
The fact that some areas are gray shouldn't prevent us from demanding legal consequences when the fraud is gross and deliberate, as appears to be the case here.
There are unfortunately very rarely consequences for academic fraud. It's not just that we only catch a small fraction — mostly the most brazen image manipulation — but these cases of blatant fraud happen again and again, to resounding silence.
Ever so rarely, there may be an opaque, internal investigation. Mostly, it seems that academia has a desire to not make any waves, keep up appearances, and let the problem quiet down on its own.
The people doing the investigation have a vested interest in keeping it quiet.
It's like the old quote... "If you commit fraud as an RA that's your problem. If you commit fraud as the head of department that's the university's problem."
And occasionally a grad student who discovers academic dishonesty, and complains internally (naively trusting administrators to have humility and integrity), has their career ended.
I suppose a silver lining to all the academic fraud exposés of the last few years is that more grad students and faculty now know that this is a thing, and one that many will try to cover up, so trust no one.
Another silver lining might be that fellow faculty are more likely to believe an accusation, and (if they are one of the awful people) less likely to think they can save funding/embarrassment/friend by neutralizing the witness.
(ProTip: If the success of your dishonesty-reporting approach is predicated on an internal administrator having humility and integrity, realize that those qualities are the opposite of what has advanced a lot of academic careers.)
The only fix I can see is making scientific fraud criminal. But it has to be straight fraud and not just bad science.
I can't imagine any other vocation where you can take public and private money, cheat the stakeholders into thinking they got what they paid for, and then just walk away from it all when you are found out. Picture a contractor claiming to have built a high-rise for a developer, doctoring photos of it, and then just going "oops, the money's all gone" with no consequences when the empty lot is discovered years later.
> It seems like a strange thing to take someone with a long and respected career and subject them to what would essentially be a Western blot and photomicrograph audit before offering them a big position
I really feel stupid asking experienced developers to do FizzBuzz. Not one has ever failed. But I have heard tons of anecdotes of utterly incompetent developers being weeded out by it.
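For anyone who hasn't run one of these screens, the entire test is roughly this (a minimal Python version):

    # Print 1..100, substituting Fizz/Buzz/FizzBuzz for multiples of 3/5/15.
    for i in range(1, 101):
        if i % 15 == 0:
            print("FizzBuzz")
        elif i % 3 == 0:
            print("Fizz")
        elif i % 5 == 0:
            print("Buzz")
        else:
            print(i)

That's the whole bar, and it still filters people.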
Everyone seems to acknowledge this is a problem, but refuses to believe it actually affects anything when it comes time to "trust the science". Yes, science is corrupted, but all the results can be trusted, and the correct answer is always reached in the end. So, is it really a problem? Or not?
Another example of the phenomenon where people can realize something when considering it from an abstract perspective but not at a realtime object level is psychological bias and imperfect rationality. If the topic of discussion is an article about bias, rare is the person who will deny the phenomenon, and many enthusiastically admit to suffering from the problem themselves. But if the topic of discussion is something else and one was to suggest the phenomenon may be in play: opposite reaction. During realtime cognition, that knowledge is inaccessible.
I honestly think if some serious attention was paid to this and various other real world paradoxes around us, we could actually make some forward progress on these problems for a change.
A key skill for any scientist is to differentiate quality work from science that can be easily faked.
The Alzheimer's and Parkinson's fields are too easy to fake, and too difficult to replicate. The new ideas are only ~20 years old. Big pharma companies are understandably wary of published papers.
When people say "trust the science", they often refer to things like masks, and antibiotics, and vaccines. That science is hundreds of years old and have been replicated thousands of times.
TL;DR: Some science should absolutely be trusted, some shouldn't. It's not surprising that you can't make blanket statements on a superfield ranging from germ theory to cold fusion.
> When people say "trust the science", they often refer to things like masks, and antibiotics, and vaccines. That science is hundreds of years old and have been replicated thousands of times.
When people say "trust the science" they're usually referring to fairly recent developments. Covid vaccines were in development and testing for just over 18 months before being mandated and were certainly not replicated on a large scale by disinterested 3rd parties before being mandated. The idea that we can have effective scientific policy without trust in scientific institutions is just... not accurate.
Exactly. Nobody needs to be told to "trust the science" on gravity and electricity, nobody asks to consult scientific consensus. The argument only arises for the more suspicious niches.
It's a matter of how established the science actually is.
Questioning novel science is one thing, but questioning whether the Earth is flat, or Germ Theory, is another thing altogether. The problem with skeptics is that they sometimes hang around conspiracists.
It's hard not to discount these people when the person next to them thinks black people are biologically inferior. And when those skeptics don't distance themselves from, or explicitly condemn, those bad actors, it calls into question whether their positions are born of skepticism or of some strange prejudice, with skepticism merely constructed as a cover.
For example, during the Covid pandemic there was a lot of questioning around masks. In hindsight, the answer is obvious: it doesn't really matter if masks were or were not effective, because they're essentially free to wear. Even in the worst case, nobody is actually hurt.
But there were many, maybe millions, of mask deniers who would simply refuse to wear them. They were doing this because of institutional distrust and political motivations, not because they truly believed the masks were dangerous. And this is the trouble: these people are skeptics, but they're skeptics with an end-goal of political destabilization, i.e. they're dangerous.
When you mix it all together, which people often do to themselves, it discredits the very thought process.
> it is only us conspiracy theorists who suffer from delusional cognition
Of course not, but if you, say, think the Earth is flat you are delusional. That's just what it is, and I'm not gonna hand hold crazy people when I tell them they're crazy.
The issue is when crazy people assimilate, or rather try to infiltrate, groups of educated skeptics. Now they all look crazy, and that's a problem.
Or do you mean people who didn't deny Covid? Hmm, I would do demonstrably less harm than you. Because even if masks are almost useless, that's better than nothing, right?
Simply being contrarian for the sake of it isn't impressive, it's kind of sad. Sometimes the big dogs are just right. If you can't articulate their motivation, objective value gain, methods, etc, then you're probably just crazy.
Nobody fucking cares about politics, mate. People can’t breathe with them on hot, crowded city buses, or for 9 hours straight when working. You’re just a cuck who wore his face nappy, happy trying to justify his cowardice now to himself. Nobody else is interested in your shit ideas and theories.
See, this is what I mean. People who take a skeptical approach to masks aren't doing it for scientific reasoning, they're doing it to avoid being a "cuck".
This type of mentality actively discredits skeptics, because nobody wants to be lumped in with that. There're genuinely very smart people who were/are skeptical of many Covid policies, but unfortunately, they have to stand next to you. Which, of course, makes them look very stupid. It's a tough problem.
Yes, if you don't believe in Global Warming, you are just stupid. I'm not gonna hold your hand when I make you aware of your intellectual insufficiencies - you are stupid.
Now that you know you're stupid, you can either choose to reinforce your stupidity by living in a delusion, or you can do a bit of research and catch up to the average human. I don't care either way, but you're past the point of claiming ignorance. Eventually the stupidity is self-enforced, meaning you and others will go out of your way to ensure you remain stupid.
There are, and they've been in practice for many decades.
However, I give people the benefit of the doubt and assume they have a functional brain. Therefore, I conclude if someone "doesn't believe" in climate change, that is a choice. Not a matter of ignorance.
I do not pity you enough to spit in your face with hand-holding and euphemisms. There is a deliberate choice and I'll treat you as such.
Is imagining shortcomings on my behalf and then categorizing them as factual to use as evidence in an argument a part of these superior approaches you mention?
If I was to do the same to you, would you not protest?
I'm not imagining a shortcoming, rather I'm doing the opposite. I'm assuming you've done the proper research around climate change so I'm not going to patronize you with it. Therefore, I conclude you are not ignorant, you're willfully contrarian.
If you interpret that as a worse outcome, here's a thought: stop being willfully contrarian. Sometimes the most popular and most researched opinion is correct. You gain nothing by being contrarian.
Being skeptical is good. Being skeptical means you require a wealth of evidence to believe something. Well, if you don't believe in climate change, you're NOT skeptical - you're just an obnoxious contrarian. Because we have a wealth of evidence and I'm assuming you've reviewed it.
The virus is just going to go into people’s eyes, dumbass. There are millions of cucks and weak men in western societies that didn’t exist 50 years ago. These men would have had deeper voices, excellent eyesight, thick heads of hair, followed logic, been brave ... now we have porn addicted gamer simps with nasally voices pretending to be scared of catching a head cold because they’re only too happy to bow down and be submissive, with the added bonus that they can hide their disgusting eyes and faces in public, essentially enforcing mass cardboard-box-over-head wearing with these “face nappies”.
> There are millions of cucks and weak men in western societies that didn’t exist 50 years ago
Yes, go back 50 years ago then. When we had so much more racism, when homosexuals were treated like dogs, when women were beaten for sport and nobody cared.
Those types of people died off not by some conspiracy. They died off because they were a cancer on society, a tumor on mankind. They died off because nobody liked them, except others of their ilk.
What you call "weak" I consider strong. We have the strength today to solve problems. We don't lynch black people anymore, we don't beat women anymore. Men are no longer scared to be themselves. I mean, people like you shiver in your timbers when you see a slightly feminine man - do you not understand the irony in that? How pathetic that makes you? Are you really so stupid that it's right in front of your eyes and you can't see it?
If it's the past you crave, I have doubts about your character. Go talk to an older gentleman and see what they've seen. We've moved on, either figure it out or die in the past. We're not gonna wait around and hold the hands of the weakest of our kind to catch up - you will be left behind.
I wonder if there's evidence of fraud _increasing_ or if the detection methods are just improving.
In my last workplace, self-evaluation (and, therefore, self-promotion) was mandatory on a semi-annual cycle and heavily tied to compensation. It's not surprising that it became a breeding ground for fraud. Outside of a strong moral conviction (which I would argue is in decline), these sorts of systems will likely always be targets for fraudulent behavior.
You're definitely seeing the consequences of papers written in the past, when large-scale fraud detection wasn't feasible; now you have all this tech that can scoop it all up and look for those "needle in a haystack" instances of fraud.
I'm thinking about all the plagiarism issues uncovered in the publications of the former Harvard president Claudine Gay (and, similarly, of Neri Oxman, the wife of Bill Ackman, who basically was exposed due to Ackman's campaign against Gay). I looked over all the instances of plagiarism in detail, and, while not excusing them, they seemed less like egregious theft of others' ideas and more like laziness/sloppiness. But I could easily imagine that laziness/sloppiness being fostered by an idea of "How could someone really check this word-for-word anyway?"
Well, now we have tech that makes it almost trivially easy to expose this type of misconduct.
That's an interesting perspective. We tend to make judgments based on what's possible today, not twenty years from now (obviously, there are exceptions, like privacy). So it's easy to fall into sloppiness and not expect consequences. Maybe there's a lesson here...
I've said this so many times, but we need to go back to a system where it is possible to make a career in science and get funding for replicating other people's work to verify the results.
This leads to a tragedy of the commons. Suppose a random nation, say Sweden, devotes 100% of its governmental and university research budgets to replication.
70% of the studies they attempt are successfully replicated.
20% are inconclusive or equivocal.
10% are clearly debunked.
Now the world is richer, but Sweden? No return on investment for the Swedes, other than perhaps a little advance notice on which hot new technologies their sovereign funds and investors ought not to invest in.
A bloc of nations, say NAFTA/CAFTA-DR, or the European Union, might be more practical.
That's the carrot. As for the stick, bad lawyers can get disbarred, bad doctors can get "unboarded". Some similar sort of international funding ban/blacklist for bad researchers would be useful.
I applaud that approach. The first year of a Ph.D. program could be reformulated to become 75% replicating the research of others, preferably that of unaffiliated research organizations.
A lot of this research is very involved and esoteric, requiring specialized equipment found only in one place, so some would be very hard to replicate. If what Theranos was doing (or claiming to do) was easy to replicate, it would've imploded years prior to when it did. So not all fraud could be detected, but a lot of the low-hanging fraud, especially in the psychological and pharmacological fields, could be quickly identified. Such a system would be a substantial upgrade and I applaud your suggestion. A smaller country could blaze the trail, because "big boys", like the U.S., are too set in their ways.
I think this suggestion contains the implicit bias that “replication isn’t important or challenging”, hence you leave it to trainees. Actually, replication is incredibly challenging. Put PhD students on it, and they’ll be convinced the original study was fraud for 4 years until they finally have the skill to get it right!
Alternately, look at one recent example of massive waste as a result of accepting fraudulent research as valid.
> Hundreds of millions of dollars and [16] years of research across an entire field may have been wasted due to potentially falsified data that helped lay the foundation for the leading hypothesis of what causes Alzheimer’s disease.
Wouldn’t science in total be impossible to fund if this argument were true? What advantage does Sweden have from doing science and publishing if everyone else gets to use it and they could just wait for someone else to do it? If this was how it worked, wouldn’t every scientist work in secret and never publish anything?
Anecdotally, during my (fairly short-lived) academic career, in which I did research with three different groups, 2/3 of them were engaging in fraudulent research practices. Unfortunately the one solid researcher I worked for was in a field I wasn't all that interested in continuing in, and as a naive young person who believed in the myth of academic freedom and didn't really understand the funding issue, I jumped ship to another field, and found myself in a cesspool of data manipulation, inflated claims, and all manner of dishonest skullduggery.
It all comes down to lab notebooks and data policies. If there is no system for archiving detailed records of experimental work, if data is recorded in pencil so it can later be erased and changed, if the PI isn't in the habit of regularly auditing the work of grad students and postdocs with an eye on rigor and reproducibility, then you should turn around and walk out the door immediately.
As to why this situation has arisen, I think the corporatization of American academics is at fault. If a biomedical researcher can float a false claim for a few years, they can spin their research off to a startup and then sell that startup to a big pharmaceutical conglomerate. If it fails to pan out in further clinical trials, well, that's life. Cooking the data to make it look attractive to an investor - in the almost completely unregulated academic environment - is a game that many bright-eyed eager beavers are currently playing.
As supporting evidence, look at mathematical and astronomical research, the most fraud-free areas of academics. There's no money to be made in studying things like galactic collisions or exoplanets, the data is all in the public domain (eventually), and with mathematics, you can't really cook up fraudulent proofs that will stand the test of time.
> mathematical and astronomical research, the most fraud-free areas of academics. There's no money to be made
So we're systemically safeguarding the quality of astronomy research, by setting up a gradient (at MIT: restaurant catering for business talks, pizza for CS, stale cookies for astronomy) to draw off some flavors of participants and thus concentrate others?
When I was in my doctoral program I had some pretty promising early results applying network analysis to metabolic networks. My lab boss/PI was happy to advertise my work and scheduled a cross-departmental talk to present my research in front of ~100 professors or so. While I was making a last-minute slide for my presentation I realized one chart looked a little off and I started looking into the raw data. I soon realized that I had a bug in my code that invalidated the last 12 months of calculations run on our HPC cluster. My conclusions were flat out wrong and there was nothing to salvage from the data. I went to my lab boss the night before the talk and told him to cancel it and he just told me to lie and present it anyways. I didn't think that was moral or scientifically sound and I refused. It permanently damaged my professional relationship with him.
No one else I talked to seemed particularly concerned about this, and I realized that a lot of people around me were bowing to pressure to fudge results here and there to keep up the cycle of publicity, results, and funding that the entire academic enterprise relied upon. It broke a lot of the faith I had been carrying in science as an institution, at least as far as it is practiced in major American research universities.
Coding errors are a really common source of fraud unfortunately. You did the right thing but the vast majority don't. Given a choice between admitting the grant money was wasted, the exciting finding isn't real, everyone who cited your work should retract their papers or just covering it up, the pressure to do the latter is enormous.
During COVID I talked to a guy who used to do computational epidemiology. He came to me because I'd written about the fraud that's endemic in that field and wanted to get stuff off his chest. He was a research programmer, assisting scientists. One of the stories he told involved checking the code for a model written in FORTRAN. He discovered it was misusing an FFI and using pointer values in equations instead of the dereferenced values. Everything the program had ever calculated was garbage. He checked, and it had been used in hundreds of papers. After emailing the authors with a bug report, he got a reply half an hour later saying the papers had been checked and the results didn't change, so nothing needed to be done.
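(For the curious, here is a minimal sketch of that class of FFI bug, in Python's ctypes rather than the FORTRAN from the story; the names and numbers are invented. An equation consumes the pointer's memory address instead of the value it points to, so the output is garbage that still looks like a number.)

    import ctypes

    def ffi_result():
        # Stand-in for an FFI call that returns a result by reference,
        # the way FORTRAN passes everything.
        rate = ctypes.c_double(0.37)  # e.g. a transmission rate
        return ctypes.pointer(rate)

    beta = ffi_result()

    # BUG: the pointer's memory address leaks into the arithmetic.
    wrong = ctypes.cast(beta, ctypes.c_void_p).value * 1e-9

    # FIX: dereference first, then compute.
    right = beta.contents.value * 1e-9

    print(wrong, right)  # address-derived junk vs. 3.7e-10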
Little known fact: the COVID model that drove lockdowns in the UK and USA was nothing but bugs. None of the numbers it produced can be replicated. When people pointed out that this was a problem academics went on the attack, claimed none of the criticism was legitimate because it didn't come from experts, and of course simp journalists went along with all of it. They got away with it completely and will probably do it again in future. Even in this thread you can see people defending COVID science as the good stuff! It was all riddled with fraud.
Part of the issue is that scientists are mostly self taught coders who aren't interested in coding. They frequently use a standard of: if it looks right, it is right. The whole thing becomes a circular exercise in reinforcing their priors.
>the COVID model that drove lockdowns in the UK and USA was nothing but bugs
I would love a source, but I believe this given my experience with coding standards in the computational biology space, especially since some of my own previous work and teaching touched on those models. I couldn't believe any of the models that were publicized, because they were at odds with what I thought the scientific consensus was about spread and containment (96 hours after patient 0, it's pointless to attempt to restrict movement).
> My conclusions were flat out wrong and there was nothing to salvage from the data.
Wow, that’s pretty crazy. I have to say, many times in my career I’ve been writing a paper and realized “** there’s a bug”, and had to redo everything. But the overall conclusion never changed, because the idea was grounded from several different angles (usually the pieces fit together even better). One bug might invalidate your result, but even if your code was correct, the underlying assumptions behind the code could be wrong! I think the real issue was your boss wasn’t active enough in your work to make it robust to coding mistakes.
>I think the real issue was your boss wasn’t active enough in your work to make it robust to coding mistakes.
That's a major issue and it went beyond coding, I picked a very well known and influential advisor and eventually discovered he didn't really direct any research or write papers, that was all the research assistants and postdocs. I was pretty much left on my own and expected to surface with a paper to put his name on.
It’s time someone started something similar in appearance to GitHub, but for science (datasets, images, calculations, scripts). Then, if journals required it, it just might get traction and make fraudulent science easy to spot.
I’d add another aspect: LaTeX is a bit outdated in 2024 (I know that’s controversial! Sorry), and we can do a lot better for digesting and displaying information than A4 sheets of paper: responsiveness, audit/comment logs, references to individual paragraphs, revision logs, and the ability to click figures and see underlying data or high-resolution copies. This would work well in a web-based editor medium. Also, the ability to “fork” a paper would be fantastic. And to automatically track and generate references, then roll them up as back/forward reference analytics for the authors so they can see impact.
> I’d add another aspect: LaTeX is a bit outdated in 2024 (I know that’s controversial! Sorry)
I met Leslie Lamport seven or eight years ago and asked him what a completely modern LaTeX might look like. He replied
“well, we won’t be using PDFs in twenty years” and so it would need to be something completely different. Something interactive, with depth. Remembering, of course, to focus on quality content first and quality presentation second.
In a world with LLMs, this question becomes ever more interesting - why write a literature review if one can be generated?
I think the important thing is capturing information in basic blocks (text, images, etc) and having the flexibility to reflow it later for any modern presentation mode, be it ingestion by LLM, listening to it, or just rendering it on desktop, mobile.
Translation and a11y is another important consideration here.
I'm surprised that people are surprised by science being done in non-scientific ways.
I got a taste of this in my high school honors biology class. I decided to do a survey of redwing blackbirds in my town. I had a great time, there was a cemetery across the street from my house with a big pond, where 6-8 males hung out. I was excited when later in the season several females also arrived and took up residence.
I eagerly wrote up my results in a paper. I thought I did "A"-level work but was distressed when the teacher gave me a B- or C+. She said, "My husband and I are birdwatchers who have published papers on redwing mating habits in the area, and we haven't seen any females this year. Neither did one of your classmates who watched redwings in her neighborhood." While she did not directly accuse me of fraud in writing, she strongly implied it.
I told her to grab her binoculars and hang out at the cemetery one morning. She declined; as a published authority, she didn't need to actually observe with her own eyes. IIRC I had photos, but they were from far away with a Kodak Instamatic (this was the mid-'80s), so she didn't accept those as evidence.
I often wonder if my life would have gone in a different direction if I had a science teacher who actually followed the scientific method of direct observation! It didn't come easy to me, but I was very interested in science before this showed me clearly that science is just another human endeavor, replete with bias, ego, horseshit, perverse incentives, and gatekeeping.
Scale this experience out to tens of thousands of young people. These kinds of people should not be teaching! A good teacher is capable of fearlessly admitting to a room of children that they were wrong and the students were right, or better yet that they have no idea what the answer is!
We have done a great disservice to human intellect by mistaking the gift empiricism gives us, the ability to predict the world, for knowledge of the world itself, of which we possess almost none.
In the future, those who commit fraud are not likely to leave traces that a Western blot and photomicrograph audit can catch.
When the experiments are significant, double-blind is not enough. You need external auditors when conducting experiments, preferably a team separate from the one that designed them.
My career has been in this space (medical research, not neuroscience) and I honestly cannot fathom how this happened. I don't understand how a researcher can wake up one day, manipulate data, and then show it to others. I feel bad for everyone whose time was wasted building off this research; others' careers were likely charted on the basis of it. What a shame.
What I don't get is this: people claim the incentives are skewed because highly cited papers get you the top jobs. However, assume that a significant subset of the citations come from people who require the fraudulent result; then this increases the chance that it would eventually be exposed... and quickly.
That is, assume a person publishes a result: "factor X seems to lead to outcome Y". Many other scientists will then start trying to establish the low-hanging-fruit result: "something that looks like factor X seems to lead to something that looks like outcome Y". In other words, they will be performing a sort of replication, but in a novel way. If the result is fraudulent, then none of these results will materialize. I don't get how a paper can be fraudulent AND highly cited while escaping scrutiny, unless we are talking about a fraud mafia.
Here I am using the field of pure mathematics as a mental model. Assume a person publishes a mathematical result with a flawed proof that escapes scrutiny. If this result is used by a sufficient number of mathematicians (especially the lemmas used to prove the theorem), then fairly quickly it will end up generating self-contradictory results.
For all the complaints about AI-generated content showing up in scientific journals, I'm excited for the flip side, where an LLM can review massive quantities of scientific publications for inaccuracies/fraud.
Ex: Finding when the exact same image appears in multiple publications, but with different captions/conclusions.
The evidence in this case came from one individual willing to volunteer hundreds of hours producing a side by side of all the reports. But clearly that doesn't scale.
I'm hoping it won't have the same results as AI Detectors for schoolwork, which have marked many legitimate papers as fraud, ruining several students' lives in the process. One even marked the U.S. Constitution as written by AI [1].
It's fraud all the way down, where even the fraud detectors are fraudulent. A similar story to the anti-malware industry, where bugs in security software like CrowdStrike, Sophos, or Norton cause more damage than the threats they protect against.
> For all the complaints about AI-generated content showing up in scientific journals, I'm excited for the flip side, where an LLM can review massive quantities of scientific publications for inaccuracies/fraud.
How would this work? AI can't even detect AI generated content reliably.
Not in a zero-shot approach. But LLMs are more than capable of solving a scenario similar to the one presented:
- Parse all papers you want to audit
- Extract images (non AI)
- Diff images (non AI)
- Pull captions / related text near each image (LLM)
- For each image > 99% similarity, use LLM to classify if conclusions are different (i.e. highly_similar, similar, highly_dissimilar).
Then aggregate the results. It wouldn't prove fraud, but could definitely highlight areas for review. i.e. "This chart was used in 5 different papers with dissimilar conclusions"
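(A rough sketch of what the non-LLM steps could look like, assuming PyMuPDF and imagehash are available; the paths and names are illustrative, and the caption-pulling/classification steps would call an LLM on top of this.)

    import collections
    import io
    import pathlib

    import fitz                  # PyMuPDF: parses PDFs, extracts images
    import imagehash             # perceptual hashing for near-duplicates
    from PIL import Image

    seen = collections.defaultdict(list)   # hash -> [(paper, page), ...]

    for pdf in pathlib.Path("papers").glob("*.pdf"):
        doc = fitz.open(pdf)
        for page_no, page in enumerate(doc, start=1):
            for img in page.get_images(full=True):
                data = doc.extract_image(img[0])["image"]
                h = str(imagehash.phash(Image.open(io.BytesIO(data))))
                seen[h].append((pdf.name, page_no))

    # Any perceptual hash shared across papers is a candidate for the
    # LLM caption-comparison step: a flag for review, not proof of fraud.
    for h, hits in seen.items():
        if len({paper for paper, _ in hits}) > 1:
            print(f"possible figure reuse {h}: {hits}")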
Wouldn’t it be cool if people got credit for reproducing other people’s work instead of only for novel things? It’s like having someone on your team who loves maintaining but not feature building.
LLMs might find some specific indications of possible fraud, but then fraudsters would just learn to avoid those. LLMs won’t be able to detect when a study or experiment isn’t reproducible.
Of course, but increasing the difficulty of committing fraud is still good. Fraudsters learn to bypass captchas as well, but they still block a ton of bad traffic.
Won't the scientist use some relatively secure/private model to fraud-check their own work before submitting? If it catches something, they would just improve the fraud.
This is terribly, terribly frustrating. For every one of these cheats there are hundreds of honest, extremely hard-working, ETHICAL scientists who toil 60 hours a week doing the thing they love. It is also terribly frustrating that, scientists being human after all, smooth talkers with a confident stride, an easy smile, and an eager handshake can and do quickly climb the academic ladder, especially the administrative ladder.
This makes me terribly sad.
And in the behavioral and social “sciences,” the fraud is just off the charts. If psychologists wanted to prove that healing crystals worked — if that was the cause du jour — there’d be journals filled to the brim with “research” “proving” their efficacy.
I spent almost 10 years of my life as a founder of a mental health technology startup and the day we got acquired was a huge relief — I could finally get out of that industry — an industry that is much more about academic politics than actually solving anything. Seeing the maneuverings behind the scenes of the DSM-V, diagnostic codes, etc., was profound enough to destroy any idealism I might have felt towards that industry. (And yes, it’s an industry.)
Luckily in fields such as climate science or virology, there is never fraud. Good thing too since a lot of our governmental policies result from those fields. (And yes, that is sarcasm.)
“Science” feels very much like the Catholic Church — many people with good intentions, but there have been enough people participating in bad things that it poisons the entire institution and degrades whatever little faith people might have had remaining.
On a tangent, this video[0] from Sabine Hossenfelder about academics in general is eye opening. In comments, veritasium[1] agrees:
>After finishing my PhD I went to a university-led session on ‘What Comes Next.’ What I heard sounded a lot like “now, you beg for money.” It was so depressing to think about all the very clever people in that room who had worked so very hard only to find out they had no financial security and would be spending most of their days asking for money. I realised that even what I thought of as the ‘safe path’ was uncertain so I may as well go after what I truly want. That led me here.
I can't manage to be really surprised. We already know many people will cheat when the incentives are right. And when the law of the land is “publish or perish”, then some will publish by any means necessary. Thinking “this subsegment of society is so honorable, they won't cheat” would be incredibly naive.
But if the NIH had done that in 2016, they wouldn't be in the position they're in now, would they? How many people do we need to check? How many figures do we have to scrutinize? What a mess.
This is the core problem with science today. Everyone is trying desperately to publish as much, and as fast, as they can. Quantity over quality. That quantity dictates jobs, fellowships, grants, and careers. Dare I say we have a "doping" problem in science and not enough controls. Especially when it comes to "some" countries' feverish output of papers that have little to no scientific value, cannot be replicated, and are full of errors, but at least they're published and the authors can get a job.
For a long time the numbers have been manipulated and continue to be so, seemingly due to national pride.
Scholars disagree about the best methodology for measuring publications’ impact, however, and other metrics suggest the United States is still ahead—but barely.
> There's also a proposed Alzheimer's therapy called cerebrolysin, a peptide mixture derived from porcine brain tissue. An Austrian company (Ever) has run some small inconclusive trials on it in human patients and distributes it to Russia and other countries (it's not approved in the US or the EU). But the eight Masliah papers that make the case for its therapeutic effects are all full of doctored images, too.
I am wondering if some type of bounty program, paying out on sufficient proof of fraud, would work. Sadly, I don't think anyone will fund it. And those participating likely won't be looked upon well in the relevant circles...
As a scientist, I'm so glad that we're forced to publish all our primary/secondary data along with the publication itself. It's stored in a repository which is "locked" when the DOI (digital object identifier) is generated. Overall, the publishing process is tedious and frustrating, but this extra work is crucial and cases like this makes that very clear. However, in most of the recent cases you didn't even need to look at the data as even the publication itself shows the misconduct.
> But if the NIH had done that in 2016, they wouldn't be in the position they're in now, would they? How many people do we need to check? How many figures do we have to scrutinize?
I personally know two PhDs who faked a large portion of their data in order to complete the dissertation process. The reality is that you can get stuck in the research phase because genuine, large sample-size quantitative data is often extremely difficult if not impossible to obtain, and in the cases I personally know, they simply mocked it in a realistic way. And there’s no way to know since the surveys are often anonymous.
Reminder that these people are only caught because they photoshopped Western blots.
Even more widespread is when PIs just throw out data that don't agree with their hypothesis, and make you do it again until the numbers start making sense.
It's atrocious, but so common that if you're not doing this, you're considered dumb or weak and not going to make it.
Many PIs end up mentally justifying this kind of behavior (need to publish / grant deadline / whatever) — even over the protests of most of the lab members.
Those who refuse to re-roll their results — those who want to be on the right side of science — get fired and blackballed from the field.
And this is at the big, famous universities you've all heard of.
Technical/academic people might hate "influencer" culture for its crassness, but whenever fame/popularity is the primary goal, this is the only social dynamic.
People are not outraged in academia that the primary goal is fame/popularity (rather than knowledge, technical ability), they're outraged that someone is cheating in this game to get ahead.
This is happening across the spectrum, tbh, as the world converges on an increasingly monocultural, winner-takes-all social schema. People talk about the Anthropocene, but look at human social cultures: the millions of ways of living (with dignity, mind you) sustained by a population under 1B as late as 100 years ago are now down to 1 or 2 at best.
In such a vast pool, this kind of stuff is not only bound to happen but is the optimal way forward (okay, maybe not such blatant stuff). Honor codes and the like are BS unenforceable measures that are game-theoretically unstable (and kill off populations that stick to them). See what the finance industry does, for instance.
In every industry right now there appear to be a lot of people running cover. I have a personal belief, with the exception of a few industries, 50% of managers are simply running cover. This is easy to explain:
1/ Nothing follows people
2/ Jobs were easy to get in the last 3 years (this is changing FAST)
3/ Rinse and repeat and stay low until you're caught.
Perhaps the root of all evil is "publish or perish". I am long out of research, working at a teaching college, and yet I am still expected to publish. Idiocy.
Academic fraud is also enabled by lack of replication. No one gets published by replicating someone else's work. If one could incentivize quality replication, that could help.
> "...sleuths began to flag a few papers in which Masliah played a central role, posting to PubPeer, an online forum where research publications are discussed and allegations of misconduct often raised. In a few cases, Masliah or a co-author replied or made corrections. Soon after, Science spotted the posts, and because of Masliah’s position and standing decided to take a deeper look."
I am comforted that there are still real journalists, such as those at Science, doing fantastic work and pulling on a thread wherever it may lead, reputations be damned.
Kudos to the PubPeer scientists for spotting the problem. Hat tip to you.
Last but not least, never forget that the free flow of information allowed this fraud to be uncovered. Truth and "moderation" (of the censorship/disinformation kind) cannot simultaneously exist.
Why would we expect academia to be different from anything else these days? Fraud is how you get ahead. It is how you gain competitive advantage. When everyone is cheating, the only way to win is to cheat smarter. Fraud is the end result of the dreams that motivate people to be better than they are.
With that off my moobs ... for those interested in the broader topic, I highly recommend Science Fictions, by Stuart Ritchie. The audiobook is also excellent.
I'm not a working scientist, and I found it completely engaging. Worth it just for the explanation of p-hacking.
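(If you haven't seen it demonstrated, here is a toy simulation of the multiple-testing engine behind p-hacking; my own sketch, not from the book. Test enough hypotheses where nothing is going on, and a steady fraction clears p < 0.05 anyway, ready to be written up.)

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    hits, trials = 0, 1000

    for _ in range(trials):
        a = rng.normal(size=30)   # both groups drawn from the SAME
        b = rng.normal(size=30)   # distribution: there is no real effect
        if stats.ttest_ind(a, b).pvalue < 0.05:
            hits += 1

    print(f"{hits}/{trials} 'significant' findings despite no effect")
    # roughly 5% expected by chance alone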
Do folks here know how expensive it is to develop a drug? How much work it takes to get it through the pipeline? How much time, heartache, effort, has gone wasted? How many patients given false hope? This is tragic on so many levels
While I agree this is a big problem, science should never be defined by a single article.
I was always taught that science is a tree of knowledge where you build off previous positive results, all of which collapse when an ancestor turns out to be false.
As the funders of almost all of these research studies, we also need to introduce mechanisms that instill a compounding fear in the minds of these criminals as the years pass.
Basically, a wrong study result may, over the years, end up affecting millions (if not billions) of people. Someone (at every level of the chain) should pay a compounding penalty for verified fraud.
At the same time, this shouldn't prevent an upcoming scientist of Nobel caliber from being bold. After all, science is all about pushing the boundaries of understanding and doing.
> But at the same time I have a lot of sympathy for the honest scientists who have worked with Masliah over the years and who now have to deal with this explosion of mud over parts of their records.
Ah, I see how you could misunderstand. In context, that sentence was contrasting those who knew about the fraud with those who didn't. To make my point more clear:
I'm not a scientist, because of fraud and other reasons related to academia, but I thought one of the tenets of an experiment was reproducibility. Were his experiments reproduced independently? Why not?
I think major scandals such as this one are essential, and we need more of them.
Why? The misaligned incentives that drive (in my opinion) otherwise-well-meaning human beings to fraud in the biomedical sciences stem from competition for increasingly-scarce resources, and the deeply and fundamentally-broken culture that develops as a result. The only thing that will propel the needed culture shift is for the people who provide the money to see, from the visibility provided by such scandals, just how bad the problem is, and to basically withdraw funding unless and until the changes happen.
Some of those changes include:
1. Reducing competition for funds by reducing the number of research-focused faculty positions (a.k.a. principal investigators, or PIs) across the board. When people's livelihoods depend on the ridiculous 5% odds of winning an important grant competition, they WILL cheat. As it stands, 20 well-funded scientists are probably more productive than 100 modestly to poorly funded ones, most of whom will do nothing meaningful or useful while trying to show "productivity" until the next funding cycle.
2. Reducing competition for funds by providing reasonably-assured research funding, tied to a diversity of indices of productivity, NOT just publications. As an example, a PI should be hired with the understanding that they'll need `x` dollars over the next 10 years to do their work. If those dollars aren't available, the person shouldn't be hired.
3. Reducing the number of PhD- and post-doctoral trainees across the board. These folks are mostly used as cheap labor by people who are well-aware, and don't care, that there will likely be no jobs for them.
4. Turning those PhD and post-doctoral positions into staff scientist positions, for people who want to do the research, but don't want the hassle of lab management. Staff scientist positions already exist, but in the current environment, when a PI can pay a postdoc $40k a year to work 80 - 100 hours a week, versus a staff scientist $80k a year to work 40 hours a week, guess which they pick.
5. Professionalizing the PhD stream. A person with a PhD in the biomedical sciences should be a broadly-capable individual able to be dropped, after graduation, into an assortment of roles, either academic or industrial. Right now, the incentive to produce publications tends to create people who are highly expert in a tiny, niche area, while having variable to nil competencies in anything else. Professionalization increases the range of post-PhD options for these folks, only one of which is academia. As it stands now, there's the tendency to feel that one has nothing if one doesn't have publications -- which increases the tendency towards fraud.
I don't know why this would be surprising. There's nothing more obvious than the fact that research is riddled with both fraud and laughably shoddy work.
If you're an academic and want to use the fastest publishing stack ever created that also helps guide you to building the most honest, true thing you could create, I have built Scroll and ScrollHub specifically for you.
Once, at 3Com, Bob Metcalfe introduced a talk by one of his MIT professors with the little joke, "The reason academic politics is so vicious is that nothing's at stake."
The guy said, "That depends on whether you consider reputation 'nothing.' "
I guess what that shows is, you can always negotiate and compromise over money, but reputation is more of a binary. An academic can fake some work, and as long as he's never called on it, his reputation is set.
So yeah, a little more fear of having one's reputation ruined would go a long way towards fixing science.
A caveat: "reputation", like competence, is more variegated and localized than is often appreciated. As with someone who is highly competent and well regarded in their own subfield, while simultaneously rather nutty about some nearby subfield where they don't actually work.
One can have a reputation like "good, but beware they have a thing for <mechanism X>". Or "ignore their results using <technique> - they see what they want to see". Subtext that gets passed among professors chatting at conferences, and to some extent to their students, but otherwise isn't highly accessible.
When people speak of creating research AIs using just papers... that's missing these seemingly important backchannels, and correspondence with authors. A research AI built that way would be like a professionally isolated professor in the developing world.
But this is really a societal/political issue: since we decided that economic capital is king and symbolic capital not that much… (This is really the story of the last four decades or so.)
Well, this is about Pierre Bourdieu, and he had a few things to say about academia, as in Homo Academicus.
And I'm not sure what example could illustrate the problem with the lopsided valuation of economic capital and the general devaluation of symbolic capital (as compared to pre-1980s, we have since undergone a social revolution of considerable dimensions, which is also why there isn't an easy fix) better than this one.
Socio-economic issues aren't one-dimensional, in fact they're very complex. Most of our systems and beliefs are socially constructed.
Humans are, by our biology, social creatures. Modern humanity more than ever before. If you're not considering the social effects, then IMO you're not addressing anything of value.
Not many people in academic/technical circles realize this, often for their entire lives. In their naive worldview, they cannot even imagine that people can stoop that low.
(embarrassingly and shamefully I used to be one of those naive people)
The problem being, we have "economized" academia through things like "publish or perish", a citation pseudo-stock-market, and third-party funding, and all incentives are built around this pseudo-economy. Which also imports all the common incentives found in the economy…
I have always said that while professors get paid less money than in industry, they are compensated in reputation to make up for it. Status and reputation are the currency of academia.
Intrinsic to the article is, arguably, a significant cause of fraud in this field: The article talks about fraud as if it's done by the 'other' - by someone else, other than the article's author (or their audience).
The solution starts when you say, 'we committed fraud - our field, our publication, the scientific enterprise. What are we going to do?'
Does the author really have no idea about these things? That they occur?
Does anyone know of an up-to-date or live visualization of the amount of scientific fraud? And perhaps also measuring the second order effects? i.e. poisoning of the well via citations to the fraudulent papers.
It's hard to tell at this point if it's just selection bias or if the scientific fraud problem has outgrown the scope of self-correction.
So things haven't changed in the 30 years since I left academic medicine. Par for the course given how grants and funding are carried out. This will continue to happen as the system design guarantees this outcome.
I would rather die than deliberately cause a humongous speed bump in the history of human understanding of the universe like this guy did. And the choice is never that stark. It's usually "I'd rather work in a less highly paid role".
To selfishly discard the collective attention of scientific experts for undue gain is despicable and should disqualify a person from all professional status indefinitely in addition to any legal charges.
I deeply respect anyone whose desires align with winning the collective game of understanding that science should be. I respect even more those folks who speak up when their colleagues or even friends seek to hack academia like this guy did.
I'm a recovering academic, and have not published since not long after defending my dissertation.
I blame this behavior entirely on "publish or perish". The demands for novel, thoughtful, and statistically significant findings are tremendous in academe, and this is the result: cheating.
I left professional academia because I resented the grind, and the push to publish ANYTHING (even reframing and recombining the same data umpteen times in different publications) in an effort to earn grants or attain tenure.
The academic system is broken, and it cannot be repaired with minor edits, in my opinion. This is a tear-it-out-and-start-over scenario for academic culture, I'm afraid.
I've been saying this for years and have been punished for that. Even here.
I've done Biology and CS for almost 20 years now, I've worked at four of the top ten research institutions in the world. The ratio of honest to bullshit academics is alarmingly low.
Most of these people should be in jail. Not only do they commit academic fraud, many of them commit other types of crimes as well. When I was a PhD student, my 4 year old daughter was kidnapped by staff at KAUST. Mental and physical abuse is quite common and somewhat "accepted" in these institutions. Sexual harassment and sexual abuse is through the roof.
I am very glad that, slowly, these things are starting to vent out. This is one real swamp that needs to be drained.
Some smartass could come up and say "where is your evidence for this?". This is what allows this abhorrent behavior to thrive. Do you think these people are not smart enough to commit these crimes in covert ways? The reason why they do it is because they know no one will find out and they will get away with it.
What's the solution? I've thought about this a lot, a lot. I think a combination of policies and transparency could go a long way.
Because of what they did to me, I am fully committed to destroying and expunging from academia the people who do these things. If you, for whatever reason, would like to help me on this mission, shoot me an email; there are a few ideas already taking shape toward that goal.
"Four of the top ten" research institutions is probably part of the reason for your experiences. I went to an elite private undergrad as a scholarship student and was sexually abused by the son of high powered lawyers, probably awful people themselves, who targeted scholarship students, international students, etc. because we were vulnerable with no recourse. I then went to a highly ranked but not super sexy public school for my PhD and my experience has been significantly better.
Bad actors are attracted to glamor and prestige because they're part of the cloaks and levers they use to escape consequences. Bad actors are far less attracted to, just as an example, living in Wisconsin, Michigan, or Indiana and telling people at conferences that they work at UW rather than Cambridge. UCs are also vastly more welcoming and supportive of working and middle class students than HYPSM even at the graduate level. That doesn't mean that you won't find any assholes at these places, and go too low in the rankings and you'll see ugly competition over scarce resources, but there's a sweet spot where more honorable people who aren't chasing prestige cluster and you'll find more support and recourse. Public schools ranked 5-15 are best for students without significant, significant social savvy and other forms of protection, IMO.
So sorry to know you're one of the victims of these idiots.
>scholarship students, international students, etc. because we were vulnerable with no recourse
That's very accurate, this is a big target group prone to being abused.
>Bad actors are attracted to glamor and prestige because they're part of the cloaks and levers they use to escape consequences.
Yes, it could definitely be that the higher you go the more rotten it becomes, for the reasons you mentioned. The Epsteins of the world hang around those places for a reason.
Shoot me an email (check profile), I'll be very glad to get your feedback on what is being done to fight against this.
We are fine now. That was four years ago. Our embassy intervened and eventually she was released and we were able to fly back home.
I'm not 100% satisfied with how they handled the situation (they took a while to react to the issue) but in the end we were able to leave that place and I'm happy with that.
If there's this much overt, deliberate fraud and dishonesty in all of our research institutions, the quantities of soft lying and fudging are inconceivable.
We need to seriously rethink our approach to stewarding these institutions and ideas; public trust is rightfully plummeting.
He is probably referring to Sylvain Lesné's previously detected Alzheimer's fraud, a hugely influential doctored paper. And now the #1 Alzheimer's researcher, Eliezer Masliah, is also a fraudster.
Science is the best way we have of understanding reality, but sadly it is mediated by humans. Just because a human is a scientist, it doesn't make them infallible.
I think the worst part has been lost in the noise.
There were, and currently are, people suffering from Parkinson's disease who are knowingly being subjected to greater suffering to further this person's career.
This is Nazi and Tuskegee experiment level evil. This person should go to jail. Not US jail, international jail. These are crimes against humanity.
Oh wow, it was not just some guy publishing fraudulent papers in fraudulent journals that nobody reads or cites. He had a giant impact and was cited tens of thousands of times!
I hate the thought that researchers and drug developers may have wasted their effort and dollars developing drugs based on one extremely selfish person's bogus results.
Is it time for periodic AI-driven audits of papers? Some types of audits may be easy—Western blots, for example. But many edge cases will require lots of sleuthing, or preferably open access to all files and data. Obviously, paying for your own audit sets up the incentives the wrong way.
Alzheimer’s research has been a mess for 30 years, as Karl Herrup argues persuasively in How Not to Study a Disease.
Not clear whether it would be a net benefit, adding constraints and complexity to the scientific process which will be skipped whenever possible by underpaid labrats. Also, GIGO.
Peer reviews are very surface-level, often delegated to inexperienced students, and not incentivized well to do any deep analysis except checking for proper references (the incentive here being making the author cite you or your friends). Been that student.
Glad the title here is "Fraud, so much fraud" and not "Research misconduct". I hope that Masliah is charged with federal wire fraud.
In cases like this where the fraud is so blatant and solely done for the purposes of aggrandizing Masliah's reputation (and getting more money), and where it caused real harm, we need to treat these as the serious crimes that they are.
I wonder if a market-driven approach could work here, where hedge funds hire external labs to attempt to reproduce the research underlying new pharmaceutical companies or trials and then short the companies whose results they can’t replicate before results get reported.
And now a whole generation of doctors will probably be “treating patients” using these “findings”. See, e.g., COVID, where it became obvious that the ventilators were killing people, and yet we kept hooking people up to them for several more months.
> A former NIA official who would only speak if granted anonymity says he assumes the agency did not assess Masliah’s work for possible misconduct or data doctoring before he was hired.
> Indeed, NIH told Science it does not routinely conduct such reviews, because of the difficulty of the process. “There is no evidence that such proactive screening would improve, or is necessary to improve, the research integrity environment at NIH,” the agency added.
LOL. Here are your tax dollars at work, Americans.
Aw shucks, better luck next time. I bet each of you hackers possesses exactly the humanist, ethics-focused, inclusive, science-based, data-driven solution "we" need to fix this problem. If only it weren't for those bad people who made this bad system, turning all the good people into bad people!
If you are familiar with academia, you'll realize the academic dishonesty policy is essentially the playbook by which academics behave. The author is surprised that Eliezer Masliah purportedly had instances of fraud spanning 25 years. I bet the author would be even more surprised to find out that most academics are like that for the entire duration of their career. My favorite instance is Shing-Tung Yau, still a Harvard professor, who attempted to steal Grigori Perelman's proof of the Poincaré conjecture (a Millennium Prize problem <https://www.claymath.org/millennium-problems/> that comes with a $1MM prize and $10k/mo for the rest of one's life; Perelman rejected all of it.)
I mean, get this: an extremely gifted mathematician living on a measly salary in Russia nearly had his Millennium Prize stolen by a Harvard professor. What more evidence do you need?
From personal experience, it is all I've seen. Could anyone be in a position to extrapolate to all of academia without speaking from personal experience? I'm not speaking of all academics (hence 'most'). It's a statement similar to "Hollywood has a drug problem" or something of that sort.
My advice to anyone going into Hollywood would be to stay away from drugs; my advice to anyone going into academia is to treat every interaction as if you've just sat at a poker table in Las Vegas.
I work in Hollywood. I am not sure it has more of a drug problem than, say, tech or finance. Maybe it does; I don't know. The point is, when a celebrity is a drug addict, you hear about it. When a banker or a lawyer is, you don't.
Our experience of things has a lot of bias toward what we want to hear. Generalization plays into stereotypes and ideology.
I believe that tech and finance also have a drug problem. Those that sell expensive drugs like cocaine go after rich clients. You work in Hollywood, but have you been attending wild private parties? I've worked in academia and I was in the thick of it, I've experienced first hand the fraud I'm talking about, and it was a large part of my experience, not some side note. Perhaps it's an uncomfortable truth that academia is in the state it is in, but again, it is of utmost importance to warn younger people to its perils. (Act as if you're at a poker table at all times.) In any case, how do you know that it isn't your biases that prevent you from considering what I describe? What is so surprising with the claim that people who are very incentivized to steal and commit fraud do so if they are not punished for it?
Edit: and it's not things I've heard; these are direct experiences, i.e., people stole my work, and things like that. As a graduate student, to watch professors come to you with problem X, take what you've said (an actual solution), and publish a paper without attribution, that sort of thing; to report it and have nothing be done about it, et cetera, and on it goes. It's just instance after instance of such behavior, or the million ways in which they are careful to trick you into working on their problems without giving attribution. One such trick, for example, that again happened to me: after a conference talk I got into an e-mail discussion where I explained my approach. I was told that "they already have these results" (the trick here was to divulge less in the talk than what was currently known, in order to dodge "significant progress by another person" in case someone shared new progress they had already established, and hence avoid having to share attribution). It turned out that our discussion was enough for them to go from n=3,4 to a general formula involving primes, because I pointed out a certain property they had not noticed. This is just a single example of the sorts of tricks, aside from total fraud, that happen, and one of the milder incidents that happened to me.
I extrapolate to all of academia, but not to all academics (persons working in academia). My methodology is based on my intuition and my experiences. Already in this YC article the comments appear to be akin to the first meeting between battered housewives. You don't have to believe me or others, I'm just issuing a warning to anyone thinking of getting into academia: be alarmed and alert, and always careful. It's nothing like the movies portray academia to be, instead it's a thieves den, or a poker table, etc, you get the point.
The damage this person and his accomplices did to science and the reputation of medical research at this moment in time is enormous.
The first thing that comes to mind is that this outing of such blatant fraud will inevitably be quoted by hordes of anti-vaxxers and anti-science cultists for years to come.
"The crises that face science are not limited to jobs and research funds. Those are bad enough, but they are just the beginning. Under stress from those problems, other parts of the scientific enterprise have started showing signs of distress. One of the most essential is the matter of honesty and ethical behavior among scientists.
The public and the scientific community have both been shocked in recent years by an increasing number of cases of fraud committed by scientists. There is little doubt that the perpetrators in these cases felt themselves under intense pressure to compete for scarce resources, even by cheating if necessary. As the pressure increases, this kind of dishonesty is almost sure to become more common.
Other kinds of dishonesty will also become more common. For example, peer review, one of the crucial pillars of the whole edifice, is in critical danger. Peer review is used by scientific journals to decide what papers to publish, and by granting agencies such as the National Science Foundation to decide what research to support. Journals in most cases, and agencies in some cases operate by sending manuscripts or research proposals to referees who are recognized experts on the scientific issues in question, and whose identity will not be revealed to the authors of the papers or proposals. Obviously, good decisions on what research should be supported and what results should be published are crucial to the proper functioning of science.
Peer review is usually quite a good way to identify valid science. Of course, a referee will occasionally fail to appreciate a truly visionary or revolutionary idea, but by and large, peer review works pretty well so long as scientific validity is the only issue at stake. However, it is not at all suited to arbitrate an intense competition for research funds or for editorial space in prestigious journals. There are many reasons for this, not the least being the fact that the referees have an obvious conflict of interest, since they are themselves competitors for the same resources. This point seems to be another one of those relativistic anomalies, obvious to any outside observer, but invisible to those of us who are falling into the black hole. It would take impossibly high ethical standards for referees to avoid taking advantage of their privileged anonymity to advance their own interests, but as time goes on, more and more referees have their ethical standards eroded as a consequence of having themselves been victimized by unfair reviews when they were authors. Peer review is thus one among many examples of practices that were well suited to the time of exponential expansion, but will become increasingly dysfunctional in the difficult future we face.
We must find a radically different social structure to organize research and education in science after The Big Crunch. That is not meant to be an exhortation. It is meant simply to be a statement of a fact known to be true with mathematical certainty, if science is to survive at all. The new structure will come about by evolution rather than design, because, for one thing, neither I nor anyone else has the faintest idea of what it will turn out to be, and for another, even if we did know where we are going to end up, we scientists have never been very good at guiding our own destiny. Only this much is sure: the era of exponential expansion will be replaced by an era of constraint. Because it will be unplanned, the transition is likely to be messy and painful for the participants. In fact, as we have seen, it already is. ..."
Unfortunately, sometimes someone becomes a bad example. That doesn't make them a "scapegoat", the favored defense of people like that.
A scapegoat is something that takes on all the sins of a lot of others who skate free. If Masliah is the only one who ever suffers, then he IS a scapegoat, but if this article serves to uncover a lot of other bad actors, then he's not. And if his example serves to warn a lot of other scientists to clean up their acts, then his suffering is a benefit.
The language of the article is as low as it is loaded. This is just Derek Lowe covering for the fact that “Science” magazine and the like have let this scoundrel (and many more like him) carry on, without hindrance, for an entire career; pointing the finger anywhere and everywhere but at the journals themselves. None of this is an isolated incident. It is widespread! There is a new scapegoat every month.
I had a feeling academia was just run by people letting blatant fraud, exploitation and abuse of PhD students, stealing during peer review, and other forms of plagiarism, fraud, and exploitation slide. They let it slide because correcting these things would lead to massive changes in academia that might put them out of jobs.
Every year that feeling becomes more certain. Glad I quit the track in grad school.
I feel terribly for all the incredibly smart and hard working academics that remain honest and try to make it work. They do what they love, otherwise they wouldn't do such intensive work with so much sacrifice.
It is really disheartening, too, because academia only turns on the "honesty filter" for minor grad students who pissed off the wrong people. But you can commit all this fraud constantly and become president of Harvard if you know the right politics.
Dishonest lot. I hope karma is real so they get what is coming to them for taking advantage of people that just love to increase humanity's knowledge.
You're being downvoted because you're correct—HN is an echo chamber for zealous regurgitation of the opinions of the academy and media—institutions that have decayed. It's been happening slowly for a while, but now things are starting to come apart at the seams.
It is really annoying because a common response is
"We know academia is bad. But this is the best we have and it is hard to improve"
when that is false on two counts.
1. If you had said the same thing before 2016 or COVID, people would not have agreed that academia is rife with fraud or worthy of skepticism.
2. The same people dismissing suggestions for how the system can be improved are the ones who would suffer from disruption, as you say. They have the power to dismiss these arguments in the first place.
When I hear someone say, 'We know academia is flawed, but it's the best we have, and it's hard to improve,' I can't help but feel a deep, seething frustration.
It's profoundly insulting and grotesque—on par with excusing the inexcusable.
Accepting this degree of mediocrity is as repulsive as tolerating the most heinous acts imaginable. I've confronted people directly with this, to their face, because to me, it's inconceivable how anyone can be okay with such a vile acceptance of the status quo.
If society was even slightly capable of rational action . . . (legally, I cannot complete this sentence).
You are being downvoted because you're extrapolating from one fraud case to call all scientists dishonest.
I can do it too. A person named SpaceManNabs made a bad post. Therefore all posts by SpaceManNabs, and probably all posts on HackerNews, are bad. A dishonest lot.
> from one fraud case to call all scientists dishonest
I specifically mention that the majority of scientists are not dishonest. The majority of scientists are not running academia. The majority of scientists are suffering from this system, to differing degrees.
If I were as rude as you, I'd extrapolate on reading ability, especially since it is not just one fraud case.
Regardless, even if I was wrong on that, all my other criticisms of academia still stand, like the exploitation of PhD students. I really hope the grad student unions get what they want.
I appreciate your response though. Makes me feel confident that it is just salty people on HN that hate truth, because otherwise, why would you mischaracterize what I said?
It's wrong to think that because there are reports of fraud or systematic error in science you shouldn't trust it. I'm sure all those things exist. But they also exist in every other institution, with a lot less self-reflection and self-correction.
Nassim Taleb said that people think weathermen are terrible predictors of the future. He says meteorology is among the most accurate sources of predictions in our lives, but we can easily validate it, so we see the mistakes. If we had as much first-hand experience with other types of predictions, we'd appreciate the accuracy of weathermen. My point is: just because you know the flaws in a system, don't assume it isn't better than another.
“MASLIAH, 65, TRAINED in medicine and neuropathology at the National Autonomous University of Mexico (UNAM), earning his medical degree in 1982 and completing a residency in pathology in 1986. He married a U.S. resident who also studied medicine at UNAM. They relocated to San Diego after Masliah’s training.”
Universities became tax-funded and the consequence is warm bodies filling chairs. I have experience with a number of big-name unis in the U.S.; they are all about office and national politics. It's not about the work, and hasn't been for a while now.
Defund universities. No more student loans, make them have to earn their place in the market or we will continue to suffer under the manipulated system that is actually killing students.
> Defund universities. No more student loans, make them have to earn their place in the market or we will continue to suffer under the manipulated system that is actually killing students.
This... it's no longer about value, it's about optics... The problem exists in most industries now. The pendulum needs to swing back the other way before it's too late to stop the decay...
On the plus side, this is the kind of stuff you could screen pretty easily with large-model machine learning. Not that there is a business in identifying scientific fraud; doing that with fraudulent government documents would probably have a better ROI (at least for the taxpayer). But clearly we need a repository of every image/graph that has been published as evidence to start.
It would be something you could offer to journals perhaps as a business. Sort of "peer reviewed and fraud analyzed" kinda service.
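To make that concrete, here is a minimal sketch of what a first pass of such a screen might look like, assuming a local folder of figure images already extracted from papers. Pillow and imagehash are real Python packages; the file layout and distance threshold are assumptions, and this only catches whole-image reuse, not spliced lanes or duplicated regions within one figure:

    # First-pass duplicate-figure screen: perceptual hashes survive
    # re-compression, rescaling, and mild contrast tweaks, the kinds of
    # edits seen in reused images. Paths and threshold are illustrative.
    from itertools import combinations
    from pathlib import Path

    import imagehash
    from PIL import Image

    def hash_figures(figure_dir):
        """Compute a perceptual hash for every PNG in the directory."""
        return {str(p): imagehash.phash(Image.open(p))
                for p in Path(figure_dir).glob("*.png")}

    def near_duplicates(hashes, max_distance=6):
        """Yield pairs of figures whose hashes are suspiciously close."""
        for (a, ha), (b, hb) in combinations(hashes.items(), 2):
            d = ha - hb  # ImageHash subtraction = Hamming distance
            if d <= max_distance:
                yield a, b, d

    for a, b, d in near_duplicates(hash_figures("figures/")):
        print(f"possible reuse: {a} vs {b} (distance {d})")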
What is truly sad for me is the 'wrong paths' many hard working and well meaning scientists get deflected down while someone cheats to get more 'impact' points.
Here's the article: https://www.science.org/content/article/research-misconduct-...
The writers of these episodes were really on another level considering it was a cartoon.
Lisa's first word is still a personal favourite of mine, especially now as a father.
> One of the first things I learned in film school is _nothing_ in a production at that level is coincidence or serendipity.
Perhaps they should have taught you to be less sure of that. So many takes in movies that ended up being the best ones are those where a punch accidentally landed, something was ad-libbed, dialogue got mixed up, etc.
To take an example from a very critically acclaimed show: in Breaking Bad, the only reason we got Jonathan Banks in the role of Mike is that Bob Odenkirk had a scheduling conflict, and Banks improvised a slap during his audition. Paul Aaron even complained about it, indicating that he would not have agreed to it.
It seems like there is a lot of serendipity in writing and production. That's not what it was about, though. The point is how much agonizing and second-guessing it takes, how many alternatives are explored, and how many takes happen before something, anything, makes it into the final product.
The lucky break is first a result of a lot of planning and work, and it gets analyzed to death before being included, and then probably reinforced here or there elsewhere. (So for me, I do notice when I hear movie or TV dialogue that is completely natural and said exactly right. It's exceptional.)
This is a cartoon, though: significantly less ad-libbing, since everything has already been storyboarded and scripted out.
Pixar's approach to making their movies is a fascinating, highly iterative process: going through many storyboards and internal showings using simplistic graphics before proceeding to the final stage to produce a polished product. I wonder how The Simpsons does it.
> One of the first things I learned in film school is _nothing_ in a production at that level is coincidence or serendipity. To get to the final script and storyboard, the writers would have gone through multiple drafts, and a great deal of material gets either cut, or retooled to reinforce thematic elements. To the extent that The Simpsons was a goofy cartoon, its writers’ room carried a great deal of intellectual and academic heft, and I don’t doubt for a moment that there was full intention with both the joke itself, and the choice to leave the character’s motivations ambiguous.
Not everything. For example, I read somewhere that the chess "fight" in Twin Peaks was random and didn't adhere to chess rules, because no one really paid attention to recording or following the moves.
Yes, TV shows especially; they are under a lot of pressure to put episodes out on time, so stuff isn't always thought out fully.
Goofy cartoon but I always thought it was very cleverly done in parts. The laugh followed by "fuck life is actually like that" aftertaste.
The entire writing room was Harvard grads and people who went on to accomplish impressive things in the industry (e.g. Conan O’Brien was a writer; David X Cohen was a writer and then went on to co-create Futurama with Groening). The early writing team was one of the sharpest ever assembled, and dismissing it as a “goofy cartoon” misses the talent behind it, just as it would if you dismissed Futurama that way.
What's her first word?
https://en.wikipedia.org/wiki/Lisa's_First_Word
Apparently it was "Bart". I had to look it up because I was curious as well.
I guess GP is referring to the episode, rather than the actual word . . .
More incentive to watch the 20min episode if you ever get the opportunity haha
I thought his incentive was to defend the idea of miracles/faith/angels/God.
More often than not in scientific fraud, I've seen the underlying motives be personal beliefs rather than financial gain. This is why science needs to be much stronger in weeding out the charlatans.
[citation needed]
---
I conjecture the most common underlying motive is to embellish one's CV and climb the academic ladder.
It's actually quite clever on the part of the scientist.
The incentive would be money; maybe the pay for doing this test was not good enough.
Or maybe the scientist was motivated by a thirst for discovering something good for humanity, like a cure for cancer, and didn't want to get distracted by other things. Funding is also needed, but angel bones are clearly an impossibility. Why even spend time disproving that? But engaging in discussion with people who clearly believe in this nonsense would have taken too much time. Saying the tests are inconclusive lets the scientist stay distanced from all this and gets people to leave him alone, while the groups mostly continue their disputes among themselves.
That's a good one. In my experience, corruption is almost always disguised as neglect and incompetence. Corrupt people meticulously cover their tracks by coming up with excuses to show neglect; some of them only accept bribes that they can explain away as neglect where they have plausible deniability. It doesn't take much brainpower to do well, just malicious intent and knowing the upper limits.
IMO, Hanlon's razor "Never attribute to malice that which can be adequately explained by stupidity" is a narrative which was created to condition the masses into accepting being conned repeatedly.
On the topic, I subscribe to Grey's law "Any sufficiently advanced incompetence is indistinguishable from malice" so I see idiots as malicious. In the very best case, idiots in positions of power are malicious for accepting the position and thus preventing someone more competent from getting it. It really doesn't matter what their intent is. Deep down, stupid people know that they're stupid but they let their emotions get in the way, same emotions which prevent them from getting smarter.
Barry Appelman, for a long time the boss of all the Unix engineers, said malice was preferable to incompetence because malice would take breaks.
However, malice is directed. When it doesn't take breaks, it usually does a lot more damage.
One can argue malice can be controlled with incentives at some level, though.
So can "stupidity". If something is possible for a human to do, it's something that's possible for any sufficiently-enabled/supported human to do. I've heard it put that the inability to understand or do something is a matter of not having acquired the necessary prerequisites. So, the incentives to control stupidity are the incentives to acquire and apply the prerequisite skills or knowledge.
Yes, and in addition malice is often predictable, while incompetence is just a quantum void where the probabilities are inverted and your hard-earned intuition doesn't help you...
I don't seem to be able to edit this anymore, but there is a grievous gap in the writing: "Barry Appelman, for a long time the boss of all the Unix engineers at AOL."
Hmm, sure, but if you want to spot malice, look for the one not taking breaks.
I wouldn't attribute malice to Hanlon's razor, but yes, even dogs and small children know how to play dumb and the children just keep getting better at it.
True story: CEOs, cops, and politicians (and their appointees) are good at it as well.
Ehh... I think neglect and incompetence are super common. I have a sink full of dishes downstairs to prove it. I think corruption, while not rare, is still far rarer. Horses over zebras still (at least in the US).
‘Sufficiently advanced’ is the key term, e.g. if your sink was located on the premises of 5 star hotel then that would probably be indistinguishable from malice.
> On the topic, I subscribe to Grey's law "Any sufficiently advanced incompetence is indistinguishable from malice" so I see idiots as malicious. In the very best case, idiots in positions of power are malicious for accepting the position and thus preventing someone more competent from getting it. It really doesn't matter what their intent is. Deep down, stupid people know that they're stupid but they let their emotions get in the way, same emotions which prevent them from getting smarter.
I think you have things backwards. Being dumb is the default. It takes ability and effort and help to get smarter. Animals and children are dumber than us. Do you think they realize it?
Perversely many who are dumb are trapped thinking they are not dumb:
https://en.m.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effec...
A dumb person (like a dumb child or animal) is what they are; one should not attribute malice. Better to try to see things from their point of view and perhaps help them be smarter. This is what I try to do.
Your other remarks are 100% right; just the point above was sticking out, hence my comment.
Yes this resonates.
I feel that stupidity is evil in the same way that a shark might be perceived as evil. You could explain it away as "It's not their fault, it's in their nature, they don't know better", but if it's in their nature to cause people harm, if anything, that makes the label more applicable from my perspective.
While that may be a kind view, practically it is rarely a useful one. At least for the person holding it.
Especially when power, violence, money, or sex are involved.
Dunning Kruger effect is a cognitive bias in which people with limited competence in a particular domain overestimate their abilities.
That is to say some of the incompetent are so incompetent they can’t distinguish between their incompetence and an actual expert. This is exhibited very publicly in some contestants of the American Idol genre of shows.
https://en.m.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effec...
D&K ironically misengineered their tests and inadvertently misconstrued their data due to floor and ceiling effects. If you ran the gamut of their tests against random noise, you would get similar results.
https://www.mcgill.ca/oss/article/critical-thinking/dunning-...
I posit that anyone who uses DK unironically is actually committing to the DK-paradox, something I'll leave you to define for yourself.
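For what it's worth, the random-noise claim is easy to check yourself. A toy sketch, assuming the classic presentation (mean self-assessed percentile per actual-score quartile); both variables here are drawn independently, so there is no psychology in it at all:

    # Pure noise pushed through the standard Dunning-Kruger quartile plot.
    import random

    N = 10_000
    actual = [random.uniform(0, 100) for _ in range(N)]
    perceived = [random.uniform(0, 100) for _ in range(N)]  # independent of actual

    quartiles = [[] for _ in range(4)]
    for a, p in zip(actual, perceived):
        quartiles[min(int(a // 25), 3)].append(p)

    for i, q in enumerate(quartiles, start=1):
        print(f"actual quartile {i}: mean perceived ~ {sum(q) / len(q):.1f}")
    # Every quartile self-assesses around 50, so the bottom quartile
    # "overestimates" and the top "underestimates": the textbook
    # D-K picture, generated from noise alone.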
Reminds me of this quote:
> "The most erroneous stories are those we think we know best -and therefore never scrutinize or question."
-Stephen Jay Gould
“It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so. “
– Mark Twain
Which he never said, making the not-quote doubly accurate
"Don't believe everything you read on the Internet."
- Abraham Lincoln
Maybe relevant: https://quoteinvestigator.com/2018/11/18/know-trouble/
I think he didn't want to run tests or present results that might be contrary to the mob's dogma, for fear of retribution.
Or it was merely a useful excuse for the narrative about flawed humans and the anti-science vs. science arc.
If humanity is to mature, we should be an open book when it comes to incentives and build a world purposefully with all incentives aligned to the outcomes we collectively agree upon.
https://fs.blog/great-talks/psychology-human-misjudgment/
Charlie Munger's Misjudgment #1: Incentive-caused Bias https://www.youtube.com/watch?v=h-2yIO8cnvw
https://fs.blog/bias-incentives-reinforcement/
(I keep mentioning this but no one seems to be picking up on it.) There is an algorithm that was developed in the late 80's in the context of therapy that could be used to align incentives and collectively agree on outcomes.
The algorithm is a simple recursive procedure where the guide or therapist evokes the client's motivation or incentive for an initial behaviour and then for each motivation in turn until a (so-called) "core state" is reached. In crude pseudo-code it would be something like:
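(All the names below, ask, is_core_state, and the list of core states, are just placeholders, not anything from the published protocol.)

    # Recursive elicitation: ask what each motivation is "for" until a
    # so-called core state is reached. Helpers and the core-state list
    # are illustrative placeholders.
    CORE_STATES = {"peace", "love", "okness", "oneness", "being"}

    def ask(prompt):
        """Stand-in for the guide asking the client a question."""
        return input(prompt + " ").strip().lower()

    def is_core_state(answer):
        return answer in CORE_STATES

    def elicit(behaviour):
        """Return the chain of motivations behind a behaviour, ending
        at the first core state encountered."""
        chain = [behaviour]
        answer = ask(f"What do you want through '{behaviour}'?")
        while not is_core_state(answer):
            chain.append(answer)
            answer = ask(f"And when you have '{answer}', what do you want through that?")
        chain.append(answer)
        return chain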
Generalizing, motivations form a DAG that bottoms out in a handful of deep and profound spiritual "states". These states seem to be universal. Walking the DAGs of two or more people simultaneously until both are in core states effectively aligns incentives automatically; at least that's what I suspect would happen. (I have no affiliation with these folks: Core Transformation Process https://www.coretransformation.org/ )
NLP and related stuff is not taken all that seriously among mainstream scientists, you should probably say.
That's definitely true, and there's lots of craziness around it. However, the best estimates for therapy and its effects suggest that it's mostly a provider effect rather than anything in the theory.
Which is to say, a lot of this stuff works because you expect it to.
In re: this "NLP is pseudoscience" business, I've lost patience with it. First, I'm living proof of NLP's efficacy. Second, I don't go around suggesting homeopathy or astrology or pyramid power, okay? Like Ron Swanson "my recommendation is essentially a guarantee."
In terms of a Venn diagram, the region representing people who have experience with NLP and the region representing people who think NLP is pseudoscience are disjoint; they do not overlap. As in, I have never found anyone who claims that NLP is pseudoscience who has also admitted to having any experience with it. That is not science, eh? To the extent that mainstream scientists don't take NLP seriously, they make themselves ridiculous. So yeah, in this one instance, ignore the scientists and look at the strange thing anyway, please? Humor me?
Now NLP is not scientific (yet) and it doesn't pretend to be (although many promoters do talk that way, and that's wrong and they shouldn't do that), and in fact there's a video online (I'll link to it if I find it) where the co-founder addresses this point and says "it's not scientific".
However it does work. So it seems imperative to do science to it!?
At the time it was developed there were dozens of schools of psychology on the one hand[1] and academic psychologists on the other and the two groups did not talk to each other. NLP ran afoul of the academic psychologists in the mid 1980's and they closed ranks against it and haven't bothered themselves with it since. Again, I think it would be fantastic if we would do science to it and figure out what these algorithms are actually doing.
In any event, the important thing is that the tools and techniques that have been developed are rigorous and repeatable. E.g. this "Core Transformation Process" works. That's primary data on which the science of psychology should operate, not ignore.
[1] E.g. https://en.wikipedia.org/wiki/Esalen_Institute | https://en.wikipedia.org/wiki/Human_Potential_Movement | https://en.wikipedia.org/wiki/Humanistic_psychology
I don't think that's really going to work. People won't list all their incentives, because some of them are implicit and others are embarrassing or "creepy". Others will absolutely judge you for what incentivizes your actions, therefore hiding them is the status quo.
If you say that your incentive for working out is to look good and be popular with the ladies then people will judge you for it, even if it's exactly the truth. If you say that you work out "for health" everyone will applaud you for what you're doing. And yet the outcome is going to be the same.
I could be wrong, but I took the parent's comment to mean that we should design incentive structures transparently, instead of obscuring them or outright ignoring the whole concept when engineering society.
You got it backwards. It's not about being transparent about what you want to achieve; it's about being transparent about what others expect you to achieve in your current position.
To check my understanding: say your current position is "unemployed." You would think that the expectation for you is to "get a job", but to get a job is extremely difficult. You have to navigate an almost adversarial job market and recruiting process, often for months. It's essentially a massive negative incentive, considering all of the effort and grief involved. So, the incentives aren't aligned with the desired outcome; the skittishness of each individual hiring company to make sure that they don't get screwed by a bad hire has warped the entire dynamic. Is this a good example?
At this point I'm so used to this that I mentally translate "for health" into "to look good for the opposite sex".
It’s funny how those incentives are so well aligned.
That's reductive; some of us are doing it to look good for the same sex!
>outcomes we collectively agree upon.
lol, what are the chances?
The average Joe is interested in Dem vs. Rep or in the latest show on Netflix.
The average researcher is worried about his livelihood, tenure etc.
We do achieve some things, though usually not by spending time pondering questions like,
> lol, what are the chances?
This is a weird take, assuming the average researcher cannot be an average Joe, and also that average people aren't also worried about their livelihood...you might want to revisit your view of the world.
No, it's not. A researcher might be an average Joe, but that doesn't mean that the average researcher is the same as the average Joe.
The initial comment makes the 2 mutually exclusive. You reframing it doesn't change what the original comment said. You also blew past the more important of the 2 points: that regular people care about their livelihoods as well.
“Average joe”. Humanity evolves slowly over long periods. The average joe today is far more educated than one from 200 years ago.
Far more educated. But certainly not smarter.
And it's not at all clear if that education does anything other than magnify their intellectual predispositions. The smart people can make great strides, but the stupid will be stupid louder and harder. And the average may well just be... more average.
IQ levels have risen on average because of nutrition, the removal of lead, and education.
An increase in blood lead from 10 to 20 micrograms/dl was associated with a decrease of 2.6 IQ points.
IQ score rises (Flynn effect) are most likely spurious and do not reflect any increase in actual on-the-spot problem-solving ability.
IQ scores were never intended to be used to compare across cohorts. To do so is invalid.
…and to still believe that IQ levels measure something meaningful is pretty average Joe anyway. Could be the rise of videogames for all we know.
Risen by a few points. Statistically significant but not really meaningful.
> IQ levels have risen on average because of...
...nobody really knows why
Average Joe does not care about politics…
If one only cares about Democrats vs. Republicans (or left vs. "far right" in Europe), one also doesn't REALLY care about politics.
Caring = caring to understand how the system works and how the incentives work for participants in it.
If I'm super generous, I would guess maybe 0.1 percent of the population cares by that definition.
I think you're a little too harsh in that judgement. I'd say it's at least 1%. Possibly even 5%.
It still means that they are massively, massively outweighed by loud tribalism.
There's a problem that if you care, as an average person, it's hard to do much with it. Every few years you can vote left or right, which unless you happen to live in a marginal constituency or swing state, has no effect.
>Every few years you can vote left or right,
If you're talking about the US, you can vote center-right (Democratic) or far right (Republican). There is no viable left wing party in the US.
From whose perspective and what are we considering right and left? The Democratic party is left of center on social issues, even compared to Europe.
> The Democratic party is left of center on social issues, even compared to Europe.
Actually, the Democratic Party is mostly libertarian (or classically liberal, if you like, which is inherently right wing) on social issues -- preferring to allow people to make their own choices WRT their bodies rather than seeking government control of reproductive health and other forms of bodily autonomy.
Individual rights and personal agency are not "left wing," except in the eyes of the authoritarian far right (or far left) who seek control over all else.
So no. The Democratic Party has a solidly center-right agenda/ideology -- no collectivism, individual rights not curtailed by the state, freedom of thought and religion, etc.
Despite what some folks may say, there are no Marxists in the US Democratic Party.
That's not to say that the Democratic Party is the ideal. Far from it. But to place them on the absolute "left" is ridiculous on its face.
It's only "left wing" as compared with the far right (read: evangelical christians, white nationalists, xenophobes, etc.) Republican Party who want to limit women's reproductive choices, force the religious doctrines of the Christian church down everyone's throats and spout xenophobic and long debunked genetic tropes related to melanin content.
These posts are tiresome. They all boil down to "my view should be the middle".
You could just as well claim the Democrats are far left and the republicans center left.
The political "spectrum" is not a range of subjective opinions, it's a range of objectively documented ideas.
I don't know how well they can fit in a unidimensional scale though.
This is often done by contrasting the US with Europe, as if Europe is a political gold standard.
>These posts are tiresome. They all boil down to "my view should be the middle".
I don't claim that my views are, or should be, "the middle".
In fact, I didn't share my views at all.
Rather, I contrasted the US Republican Party with the US Democratic Party through the lens of the political spectrum.
Perhaps you think your views are "middle-of-the-road" and maybe they are. I have no idea what you think or believe.
But making the claim you did added absolutely nothing to the discussion, nor did it address anything I wrote. And more's the pity.
This can't be said often enough. We have two right wing parties in the US. That's it.
If humanity is to mature, we must be critical and take responsibility for ourselves, particularly where the alignment of others is concerned. Such as starting by disagreeing with everything and validating it for oneself.
Sounds like regulation to me ;)
I totally agree with you
What about something bottom-up, transparent in rules and reward structure, and able to iterate quickly, instead of something top-down?
Also agree with the GP.
> with all incentives aligned to the outcomes we collectively agree upon
Some things are simply not possible.
> If humanity is to mature
Recognizing and working within the constraints is maturity
> Arguably some have figured how to game the system to advance their careers
lol arguably? i would bet my generous, non-academia, industry salary for the next 10 years, that there's not a single academic with a citation count over say ... 50k (ostensibly the most successful academic) that isn't gaming the system.
- signed, someone who got their PhD at a "prestigious" uni under a guy with >100k citations
Terence Tao has well over 50K citations. Maybe one can argue that he’s gaming the system because he alone can decide what problems are deemed to be interesting by the broader community, but he can’t help that.
In this case, it might matter that mathematics isn't, strictly speaking, a science.
> Academic scientists' careers are driven by publishing, citations and impact. Arguably some have figured how to game the system to advance their careers. Science be damned.
I’ve talked to multiple professors about this, and I think it’s not because they don’t care about science. They just care more about their career. And I don’t blame them. It’s a slippery slope, and once you notice other people starting to beat you, it’s very hard to stay on the righteous path [note]. Heck, even I myself during the PhD have written things I don’t agree with. But at some point you have to pick your battles. You cannot fight every point.
In the end I also don’t think they care that much about science. Political parties often push certain ideas more or less depending on their beliefs. And scientists know this, since they will often frame their own ideas so that they sound like they solve a problem for the government. If you think about it, it’s kind of a miracle that sometimes something good is produced from all this mess. There is some beauty to that.
[note] I’m not talking about blatant fraud here but about the smaller things like accepting comments from a reviewer which you know are incorrect, or using a methodology that is the status quo but you know is highly problematic.
The Manhattan project was a government project that was run like a startup.
If such a project happened today, academic scientists would be trying to figure out ways to bend their existing research to match the grants. Then it would take another 30 years before people started to ask why nothing has been delivered yet.
Lots of people doing research find this depressing to the point of quitting. Many of my peers left research as they couldn't stomach all this nonsense. In experimental fields, the current academic system rewards dishonesty so much that ugly things have become really common.
In my relatively short career, I have been asked to manipulate results several times. I refused, but this took an immense toll, especially on two occasions. Some people working with me wanted to support me fighting dishonesty. But guess what, they all had families and careers and were ultimately not willing to do anything as this could jeopardize their position.
I've also witnessed first-hand how people who manage to publish well adopt monopolistic strategies, sabotaging interesting grant proposals from other groups or stalling their article submissions while they copy them. This is a problem that seldom gets discussed. The current review system favors monocultures and winner-takes-all scenarios.
For these reasons, I think industrial labs will be doing much better. Incentives there are not that perverse.
The fact that it can still be considered science when intentional fraud is involved is a huge problem itself.
>Charlie Munger's observation
The more I've read about finance, the more I've realized it can also be applied to many other things in the world due to its sheer objectivity.
On the other hand, I've also noticed most if not all of it is based on contexts and data from the mid-20th century. Interesting how that turns out.
> Academic scientists' careers are driven by publishing, citations and impact.
Publishing and citations can and are gamed, but is impact also gamed on a wide scale? That one seems harder to fake. Either a result is true and useful, or it's not.
How much do these same incentives apply to climate science, where huge amounts of money are now in play?
That attitude coincides with the current delusion in our society that science is perpetrating a fraud at the level of religions whose leaders are trying to control their flock for financial and sexual gain.
A broken system that incentivizes fraud over knowledge is a real problem.
An assertion that scientists chase the money by nature is a dangerous one that will set us back to the stone age when instead we should be traversing the space as a whole.
> Similar - when I was younger, I would never have suspected that a scientist was committing fraud.
Unfortunately many less bright people seem to interpret this as "never trust science", when in reality science is still the best way to push humanity forward and alleviate human suffering, _despite_ all the fraud and misaligned incentives that may influence it.
At some point, the good scientists leave and the fraudsters start to filter for more fraudsters. If that goes on, it's over: academia is gone. Entirely. It cannot grow back. It's just a building with conmen in lab coats.
My suggestion stands: Give true scientists the ability to hunt fraudsters for budgets. If you hunt and nail down a fraudster, you get his funding for your research.
All the fraudsters will nail the honest ones before they know what hit them.
I mean, the replication crisis has come and gone; it's been about 5 years now. The fraudsters are running the place and have been for at least the last half decade, full stop.
It becomes a survival bias: if people can cheat at a competitive game (or research field) and get away with it, then at the end you'll wind up with only cheaters left (everyone else stops playing).
You could improve the situation by incentivizing people to identify cheaters and prove their cheating. If being a successful cheater-hunter was a good career, the field would become self-policing.
This approach opens its own can of worms (you don't want to overdo it and create a paranoid police-state-like structure), but so far, we have way too little self-policing in science, and the first attempts (like Data Colada) are very controversial among their peers.
As they say: the scum rises to the top, true for academia, politics etc, any organization really.
Quote: "The Only Thing Necessary for the Triumph of Evil is that Good Men Do Nothing"
My own nuanced take on it:
Incompetent people are quick to grab authority and power. On the other hand, principled, competent people are reluctant to take on positions of authority and power even when offered. For these people, positions of power (a) have connotations of tyranny and (b) are uninteresting (i.e., technical problems are more interesting). Also, the reluctance of principled people to form coalitions to keep out the cheaters, because they are a divided bunch themselves, exacerbates the problem, whereas the cheaters can often collude (temporarily) to achieve their nefarious goals.
This is why cheaters should never ever ever be allowed to play again with fair players, only with cheaters
And thus we have the Earth. Where all looks like a broken MMO in every direction. Everybody refuses to participate, because it's 100% griefers, yet nobody can leave.
Business: Can you get a law written to command the economy to give you money, or never suffer punishments? Intel fabs (https://reason.com/2024/03/20/federal-handout-to-intel-will-...), Tesla dealers (https://en.wikipedia.org/wiki/Tesla_US_dealership_disputes), Uber taxis (https://www.theguardian.com/news/2022/jul/10/uber-files-leak...), etc. Are you wealthy enough that there's nothing "normals" can really do? EBay intimidation scandal (https://en.wikipedia.org/wiki/EBay_stalking_scandal).
Economic Academia: Harvard Prof. Gino (https://www.thecrimson.com/article/2024/4/11/harvard-busines...)
Materials Academia: Doping + Graphene = feces papers (https://pubs.acs.org/doi/pdf/10.1021/acsnano.9b00184) "Will Any Crap We Put into Graphene Increase Its Electrocatalytic Effect?" (Bonus joke! Crap is actually a better dopant material.)
Gaming: Roblox double cut on sales (that people mostly just argue about how enormous it is, because the math's purposely confusing) (https://news.ycombinator.com/item?id=28247034)
Politics: Was Santos ever actually punished?
Military: The saga of the Navy, Pacific Fleet, and Fat Leonard (https://en.wikipedia.org/wiki/Fat_Leonard_scandal) "exploited the intelligence for illicit profit, brazenly ordering his moles to redirect aircraft carriers, ships and subs to ports he controlled in Southeast Asia so he could more easily bilk the Navy for fuel, tugboats, barges, food, water and sewage removal."
Work: "Loyal workers are selectively and ironically targeted for exploitation" (https://www.sciencedirect.com/science/article/abs/pii/S00221...)
There are others; that's just already so many...
Anything not Forbidden is Compulsory.
I used to work with someone up until the point I realized they were so distant from any form of reality that they couldn't distinguish between fact and fiction.
Naturally, they are now the head of AI where they work.
Hacker News is completely flooded with “AI learns just like humans do” and “AI models the human brain” despite neither of these things having any concrete evidence at all.
Unfortunately it isn’t just bosses being fooled by this. Scores of people push this crap.
I am not saying AI has no value. I am saying that these idiots are idiots.
Calls to mind Isaac Asimov's "shotgun curve".
https://archive.org/details/Fantasy_Science_Fiction_v056n06_...
That story reminds me of this gem: https://pages.cs.wisc.edu/~kovar/hall.html
Similar story: computational biologist, my presentations involved statistics so people would come to me for help, and it often ended in the disappointing news of a null result. I noticed that it always got published anyway at whichever stage of analysis showed "promise." The day I saw someone P-hack their way to the front page of Nature was the day I decided to quit biology.
I still feel that my bio work was far more important than anything I've done since, but over here the work is easier, the wages are much better, and fraud isn't table stakes. Frankly in exchange for those things I'm OK with the work being less important (EDIT: that's not a swipe at software engineering or my niche in it, it's a swipe at a system that is bad at incentives).
Oh, and it turns out that software orgs have exactly the same problem, but they know that the solution is to pay for verification work. Science has to move through a few more stages of grief before it accepts this.
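For readers who never saw the mechanics up close, here is a toy sketch of the simplest p-hacking move mentioned above: slicing noise into subgroups until one "works". The data, subgroup splits, and threshold are all invented:

    # Run a t-test on many noise-only subgroup splits and watch
    # "p < 0.05" show up by chance. Requires scipy.
    import random
    from scipy.stats import ttest_ind

    treatment = [random.gauss(0, 1) for _ in range(200)]  # no real effect
    control = [random.gauss(0, 1) for _ in range(200)]

    hits = 0
    for split in range(100):  # 100 arbitrary post-hoc subgroups
        t_sub = random.sample(treatment, 50)
        c_sub = random.sample(control, 50)
        p = ttest_ind(t_sub, c_sub).pvalue
        if p < 0.05:
            hits += 1
            print(f"split {split}: p = {p:.3f}  <-- 'publishable'")
    print(f"{hits}/100 'significant' results from pure noise (expect ~5)")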
I'm mostly out now, but I would love to return to a more accountable academia. Often in these discussions it's hard to say "we need radical changes to publicly funded research and many PIs should be held accountable for dishonest work" without people hearing "I want to get rid of publicly funded research altogether and destroy the careers of a generation of trainees who were in the wrong place at the wrong time".
Even in my immediate circles, I know many industry scientists who do scientific work beyond the level required by their company, fight to publish it in journals, mentor junior colleagues in a very similar manner to a PhD advisor, and would in every way make excellent professors. There would be a stampede if these people were offered a return to a more accountable academia. Even with lower pay, longer hours, and department duties, MORE than enough highly qualified people would rush in.
A hypothetical transition to this world should be tapered. But even at the limit where academia switched overnight, trainees caught in such a transition could be guaranteed their spots in their program, given direct fellowships to make them independent of their advisor's grants, given the option to switch advisor, and have their graduation requirements relaxed if appropriate.
It's easy to hem and haw about the institutional knowledge and ongoing projects that would invariably be lost in such a transition, even if very carefully executed. But we have to consider the ongoing damage being done when, for example, Biogen spends thousands of scientist-years and billions of dollars failing to make an Alzheimer's drug because the work was dishonest to begin with, or when generations of trainees learn that bending the truth is a little more OK each year.
What's amazing to me is that journals don't require researchers to submit their raw data. At least, as far as I know.
The only option for someone who wants to double check research is to completely replicate a study, which is quite a bit more expensive than double checking the researcher's work.
Journals are incentivized to publish fantastic results. Organizing raw data in a way that the uninitiated can understand presents serious friction in getting results out the door.
The organizations who fund the research are (finally) beginning to require it [0][1], and some journals encourage it, but a massive cultural shift is required and there will be growing pains.
You could also try emailing the corresponding authors. Any good-faith scientist should be happy to share what they have, assuming it's well organized/legible.
[0] https://new.nsf.gov/public-access [1] https://sharing.nih.gov/
It's becoming more common for journals to have policies which require that raw data be made available. Here's some background: https://en.wikipedia.org/wiki/FAIR_data
One of the purposes of a site on which I work (https://fairsharing.org) is to assist researchers in finding places where they might upload their data (usually to comply with publishers' requirements).
Replicating the results from someone's original data is difficult and time consuming, and other researchers aren't getting paid to do that (they're getting paid to do new research). And of course the (unpaid) reviewers don't have time either.
Re: the role of (gel) images as the key aspect of a publication. To me this is very understandable, as they convey the information in the most succinct way and also constitute the main data & evidence. Faking this is so bold that it seemed unlikely.
The good news IMO: more recent MolBio methods produce data that can be checked more rigorously than a gel image. A recent example where the evidence in form of DNA sequencing data is contested: https://doi.org/10.1128/mbio.01607-23
> don't really tell you very much
???
I think this statement is either meaningless or incorrect. At the very least your conclusion is context dependent.
That being said, I ran gels back in the stone ages when you didn't just buy a stack of pre-made gels that slotted into a tank.
I had to clean my glass plates, make the polyacrylamide solution, clamp the plates together with office binder clips and make sure that the rubber gasket was water tight. So many times, the gasket seal was poor and my polyacrylamide leaked all over the bench top.
I hated running them. But when they worked, they were remarkably informative.
Count me in the club of failed scientists. In my case it was the geosciences: I would spend hours trying to make all my analysis reproducible and statistically sound while many colleagues just published preliminary simulation results, obtaining much more attention and even academic jobs. On the flip side, the time spent improving my data processing workflows led to good engineering jobs, so the time wasn't entirely wasted.
> raise so many... emotions in me... and I now believe [faking gels] is a common occurrence
On the other hand, shysters always project, and this thread is full of cringe vindications about cheating or faking or whatever. As your "emotions" are probably telling you, that kind of generalization does not feel good when it is pointed at you. So IMO, you can go and bash your colleagues all you want, but odds are the ones who found results did so legitimately.
Regarding "shysters always project": it rings true to me, but given the topic, I'm primed to wonder how you could show that empirically, and if there's any psychology literature to that effect.
As long as it's all peer reviewed!!
I'm assuming /s above.
Because the amount of pencil-whipped "peer review" feedback I've received could fit in a shoe box: many "reviewers" are looking for the CV credit for their role and not so much for the actual effort of reviewing.
And there's no way to call them out on their laziness except maybe to not submit to their publication again and warn others against it too.
And, to defend their lack of review, all they need to say to the editor anyway is: "I didn't see it that way."
Many solutions involving posting data in repositories or audits are being discussed in the comments.
But given that many people are saying that they noticed and quit academia, how about also creating a more direct 'whistleblower' type of system, where complaints (with detailed descriptions of the fraud, or a general view of what one sees in terms of loose practices) go to some research monitoring team, which can then come in and verify the problems.
> how about also creating a more direct 'whistleblower' type of system
There needs to first be a system of checks and balances for this to work. The people at the top already know and condone the behavior; who are the whistleblowers reporting to?
"We represent the top scientists in our field; these are a group of grad students. Who are you going to believe?"
And of course they can easily shut anyone down with two words: "science denier"
Gels tell you quite a lot; it's the question you are asking that is more relevant to the results being useful than the technique itself. Of course people lie and cheat in science, wet lab and dry lab. So many dry-lab papers, for example, are out there where the code is supposedly available "by request" and we take the figures on faith.
This is why institutions break down in the long run in any civilization. People like you, people of principle, are drowned out by agents acting exclusively in their own interest, without ethics.
It happens everywhere.
The only solution to this is skin in the game. Without skin in the game the fraudsters fraud, the audience just naively goes along with it, and the institution collapses under the weight of lies.
The iron laws of bureaucracy are:
1) Nothing else matters but the budget
2) Over the long run, the people invested in the bureaucracy always win out over the people invested in the mission/point
Science is just as susceptible to 2) as anything else.
> The only solution to this is skin in the game.
Another solution is the opposite, no skin in the game
Remove concrete incentives and pay salaries.
I feel this way about every flashy startup with billion dollar valuations.
It seems amazing that they are pulling off what seems impossible.
Years later, we learn they really aren’t. They unjustifiably made a name for themselves by burning VC money instead of running a successful business.
Then hiring the one with the uninteresting gels seems preferable.
> Would I have been more successful
What are you talking about? You _are_ successful. You're not a fraud like all those other tossers.
To me, at the time, successful would have been getting a tenure-track position at a Tier 1 university, discovering something important, and never publishing anything that was intentional fraud (I'm OK with making some level of legitimate errors that could need to be retracted).
Of those three, I certainly didn't achieve #1 or #2, but did achieve #3, mainly because I didn't write very much and obsessed over what was sent to the editor. Merely being a non-fraud is only part of success.
(Note: I've changed my definition of success. I now realize that I never really wanted to be a tenured professor at a Tier 1 university, because that role is far less fulfilling than I thought it would be.)
That is not enough for most people. And if it is enough for others, it is probably because they were fortunate enough to have something better to fall back on.
Indeed! You would also have been more "successful" selling drugs to teens or trafficking in human organs. But you did not, and that's a good thing.
> At the time (20+ years ago) it didn't occur to me that anybody would intentionally modify images of gels to promote the results they claimed
Fraud, I suspect, is only the tip of the iceberg; worse still is the delusion that what is taught is factually correct. A large portion of the mainstream knowledge that we call 'science' is incorrect.
While fraudulent claims are relatively easy to detect, claims that are backed up by ignorance/delusion are harder to detect and challenge, because often there is collective ignorance.
Quote 1: "Never ascribe to malice that which is adequately explained by incompetence"
Quote 2:"Science is the belief in the ignorance of experts"
Side note: I will not offer to back up my above statements, since these are things that an individual has to learn it on their own, through healthy skepticism, intellectual integrity and inquiry.
> A large portion of mainstream knowledge that we call 'science' is incorrect.
How do you know that? Can you prove it scientifically?
> claims that are backed up by ignorance/delusion
In that case, they are not "backed up"
> I will not offer to back up my above statements
> an individual has to learn it on their own, through ... inquiry
May I "inquire" about your reasoning?
Science sent us to the moon. “Do your own research” sent millions to their graves.
“Do your own research” is a movement that is fraught with grifting and basically foundationally just fraud to the core.
“Science” definitely has some fraudsters, but remains the best institution we have in the search for truth.
Don't hate the player, hate the game. Governments made it so scientists only survive if they show results, and specifically the results they want to see. Otherwise, no more grants and you are done. Whether the results are fake or true does not matter.
"Science" nowadays is mostly BS, while the scientific method (hardly ever used in "science" nowadays) is still gold.
Do hate the player. People are taught ethics for a reason: no set of rules and laws are sufficient to ensure integrity of the system. We rely on personal integrity. This is why we teach it to our children.
A true scientist never says, "trust me" or even worse, "trust the science."
https://www.youtube.com/watch?v=gnPFL0Dr34c
You have agency. Yes, the system provides incentives. However, you are not some pass-through nothingness that just accepts any incentives. You can choose not to accept the incentives. You can leave the system. You're lucky: it's not a totalitarian system. There will be another area of life and work where the incentives align with your personal morals.
Once you bend your spine and kneel to bad incentives - you can never walk completely upright again. You may think and convince yourself that you can stay in the system with bad incentives, play the game, but still somehow you the player remain platonically unaffected. This is a delusion, and at some level you know it too.
Who knows? If everyone left the system with bad incentives, it may be that the bad system even collapses. It's a problem of collective action. The chances are against a collapse; it will likely continue to go on for some time. So don't count on collapse. And even if one were to happen in your time, it would be scorched earth post-collapse for a while. Think as an individual: it's best to leave if you possibly can.
> Don't hate the player hate the game.
When the game is designed by the most successful players, you absolutely should hate the players for creating a shitty game.
You are clearly deeply disconnected from the actual practice of research.
The best you can really say is that the statistics chops of most researchers are lacking, and that someone researching, say, caterpillars is likely to not really understand the maths behind the tests they're performing. It's not an ideal solution by any means, but universities are starting to hire stats and CS department grads to handle that part.
"Nobody is ever responsible for their own actions. Economics predicting the existence of bad actors makes them not actually bad."
I'm the furthest thing from a scientist unless you count 3,000 hours of PBS Space Time, but I love science, and so science/academia fraud, to me, feels kinda like the worst fraud you can commit. Financial fraud can cause suicides and ruin lives, sure, but I feel like academic fraud sets the whole of humanity back. I also feel that through my life I've (maybe wrongly) placed a great deal of respect and trust in scientists, mostly that they understand that their work is of the utmost importance and so the downstream consequences of mucking around are just too grave. Stuff like this seems to bother me more than it rationally should. Are people who commit this type of science fraud just really evil humans? Am I overthinking this? Do scientists go to jail for academic fraud?
Pick up an old engineering book at some point, something from the mid-1800s or early 1900s, and you'll quickly realize that the trust people put in science isn't what it should be. The scientific method works over a long period of time, but to blindly trust a peer-reviewed study that just came out, any study, is almost as much faith as religion, especially if you're not a high-level researcher in the same field and haven't spent a good amount of time reading the methodology yourself. If you go to the social sciences, the amount of crock that gets published is incredible.
As a quick example, any book about electricity from the early 1900s will include quite serious sections about the positive effects of electromagnetic radiation (or "EM field therapies"), teaching you about different frequencies and modulations for different illnesses and how doctors are applying them. Today these devices are peddled by scammers of the same ilk as the ones that align your chakras with the right stone on your forehead.
Going to need some citations here since the texts that I'm familiar with from that time period are "A Treatise on Electricity and Magnetism" by Maxwell (mid-late 1800s) and "A History of the Theories of Aether and Electricity" by E. T. Whittaker, neither of which mentions anything of the sort. I suspect you are choosing from texts that at the time likely would not have been considered academic or standard.
Science is eventually consistent. Scientists, individual papers, and the dogma at any given time may not be. It just takes a long time.
It absolutely takes as much faith as religion. The basic assumption that the universe didn't come from anything is as baseless as the assumption that it did.
The default state of the human brain almost seems to be a form of anti-science, blind faith in what you already believe, especially if you stand to gain personally from what you believe being true.
What is most incredible to me is that, even knowing and believing the above, I fall prey to this all the time.
The best example is psychology. The entire field needs to be scrapped and started over; nothing you read in any of those papers can be trusted. It's just heaping piles of bad research dressed in a thin veil of statistical respectability.
We use EM radiation for illnesses, and doctors apply it; it's one of the most important diagnostic and treatment options we have. I think what you're referring to is invalid therapies ("woo" or snake oil or just plain ignorance/greed), but it's hard to distinguish those from legitimate therapies at times.
Why do you need to go back 100-170 years, if it's an issue? Aren't there more recent examples?
This proves South Park was right: science is just another form of religion.
I think the error is putting trust in scientists as people, instead of putting trust in science as a methodology. The methodology is designed to rely on trusting a process, not trusting individuals, to arrive at the truth.
I guess it also reinforces the supreme importance of reproducibility. It seems like no research result should be taken seriously until at least one other scientist or group of scientists is able to reproduce it.
And if the work isn't sufficiently defined to the point of being reproducible, it should be considered a garbage study.
There is no way to do any kind of science without putting trust in people. Science is not the universe as it is presented. Science is the human interpretation of observation. People are who carry out and interpret experiments. There is no set of methodology you can adopt that will ever change that. "Reproducibility" is important, but it is not a silver bullet. You cannot run any experiment exactly in the same way ever.
If you have independent measurements you cannot rule out bias from prior results. Look at the error bars here on published values of the electron charge and tell me that methodology or reproducibility shored up the result. https://hsm.stackexchange.com/questions/264/timeline-of-meas...
The way I sum it up is: science is a method, which is not equivalent to the institution of science, and because that institution is run by humans it will contain and perpetrate all the ills of any human group.
This error really went viral during the pandemic and continues to this day. We're in for an Orwellian future if the public does not cultivate some skeptical impulse.
Science is an anarchic enterprise. There is no "one scientific method", and anyone telling you there is has something to sell to you (likely academic careerism). https://en.wikipedia.org/wiki/Against_Method
I think it is fine to put some trust into concrete individual scientists who have proven themselves reliable.
It is not fine to put trust in scientists in general just because they walk around in a lab coat with a PhD label on the front.
How does this work for things like COVID vaccines, where waiting for a reproduction study would leave hundreds of thousands dead? Ultimately there needs to be some level of trust in scientific institutions as well. I do think placing higher value on reproducibility studies might help the issue somewhat, but I think there also needs to be a larger culture shift of accountability and a higher purpose than profit.
You're far from a scientist, so it's easy for you to put scientists/academia on a pedestal.
For most of the people who end up in these scandals, this is just the day job that their various choices and random chance led up to. They're just ordinary humans responding to ordinary incentives, in light of whatever consequences and risks they may or may not have considered.
Other careers, like teaching, medicine, and engineering have similar problems.
And management consulting. But probably not plumbing, where the work product is very concrete and easy to judge.
As a scientist, I agree, although for not quite the reason you gave. Scientists are given tremendous freedom and resources by society (public dollars, but also private dollars like at my industry research lab). I think scientists have a corresponding higher duty for honesty.
Jobs at top institutions are worth much more than their nominal salary, as evidenced by how much those people could be making in the private sector. (They are compensated mostly in freedom and intellectual stimulation.) Unambiguously faking data, which is the sort of thing a bad actor might do to get a top job, should be considered at least as bad a moral transgression as stealing hundreds of thousands or perhaps a few million dollars.
(What is the downside? I have never once heard a researcher express feeling threatened or wary of being falsely/unjustly accused of fraud.)
In my view, prosecuting the bad actors alone will not fix science. Science is by its nature a community, because only a small number of people have the expertise (and university positions) to participate. A healthy scientific discipline and a healthy community are the same thing. Just as "tough on crime" initiatives alone often do not help a problematic community, harshly punishing scientific fraud alone will not fix the problem. Because the community is small, to catch the bad actors you will either have insiders policing themselves or non-expert outsiders rendering judgments. It's easy for a well-intentioned policing effort to turn into a power struggle.
This is why I think the most effective way is to empower good actors: ensure open debate, limit the power of individuals, and prevent over-concentration of power in a small group. These efforts are harder to implement than you might think, because they run against our desire to have scientific superstars and celebrities, but I think they would go a long way toward building a healthy community.
"Broken windows" policing has been shown to work, which I think is what you mean by "tough on crime."
I agree with you, science fraud is terrible. It pollutes and breaks the scientific method. Enormous resources are wasted, not just by the fraudster but also by all the other well meaning scientists who base their work on that.
In my experience, no, most fraudsters are not evil people; they just follow the incentives and face almost non-existent disincentives. Being a scientist has become just a job, and you find all kinds of people there.
As far as I know, no one goes to jail. The worst possible outcome (and a very rare one) is losing the job; most likely you just lose the reputation.
"most fraudsters are not evil people, they just follow the incentives and almost non-existent disincentives"
Maybe I'm too idealistic, but why is following incentives with no regard for secondary consequences not evil?
That is what evil usually looks like. You expected horns and fire?
It's complicated. Historically scientific fraud could be construed as 'good-intentioned' - typically a researcher in a cutting edge field might think they understood how a system worked, and wanting to be first to publish for reasons of career advancement, would cook up data so they could get their paper into print before anyone else.
Indeed, I believe many academic careers were kicked off in this manner. Where it all goes wrong is when other, more diligent researchers fail to reproduce said fraudulent research. This is what brought down famous fraudster Jan Hendrik Schön in the field of plastic-based organic electronics, which involved something like 9 papers in Science and Nature; there are good books and documentaries on that one. This will only get worse with AI data generation, as most of those frauds were detected through banal data duplication, obvious cut-and-pastes, etc.
However, when you add a big financial driver, things really go off the rails. A new pharmaceutical brings investors sniffing for a big payout, and the temptation to cook data so the patentable 'discovery' looks better than it is becomes a strong incentive for egregious fraud. Bug-eyed greed makes people do foolish things.
People like us think scientists care about big-money things, but they largely don't care about that stuff as much as they care about prestige in their field. Prominent scientists get huge rewards of power and influence, as well as indirect money from leveraging that influence. When you start to think that way, the incentives for fraud become very "minor" and "petty" compared to what you are thinking of.
> Stuff like this seems to bother me more than it rationally should.
It's bothering you a rational amount, actually. These people have done serious damage to lots of lives and humanity in general. Society as a whole has at least as much interest in punishing them as it does for financial fraudsters. They should burn.
There was a period of time when science was advanced by the aristocrats who were self funded and self motivated.
Once it became a distinguished profession the incentives changed.
"When a measure becomes a target, it ceases to be a good measure"
> There was a period of time when science was advanced by the aristocrats who were self funded and self motivated.
From a distance the practice of science in early modern and Enlightenment times might look like the disinterested pursuit of knowledge for its own sake. If you read the detailed history of the times you'll see that the reality was much more messy.
Goodhart's Law!
Generally, the fields that have a Nobel in them attract the glory hounds and therefore the fraudsters. The ones that don't, like geology or archeology for example, don't get the glory hounds.
Anytime you see champagne bottles up on a professor's top shelf with little tags for Nature publications (or something like that), then you know they are a glory hound.
When you see beer bottles in the trash, then you know they're in it for more than themselves.
It seems like this could ultimately fall under the category of financial fraud, since the allegations are that he may have favorably misrepresented the results of drug trials where he was credited as an inventor of the drug that's now worth hundreds of millions of dollars.
Evil is a much simpler explanation than recognizing that if you were in the same position with the same incentives, you would do the same thing. It's not just one event, it's a whole career of normalizing deviation from your values. Maybe you think you'd have morals that would have stopped you, maybe those same morals would have ensured you were never in a position to PI research like that.
Scientific fraud can also compound really badly because people will try to replicate it, and the easiest results to fake are usually the most expensive...
I also watched almost all episodes of PBS Spacetime. Some of them multiple times. I'm so happy that Spacetime exists and also that Matt was recruited as a host (in place of Gabe). Highly recommended channel, superb content!
It is the same flavor of fraud as financial fraud. It is about personal gain, and avoiding loss.
This kind of fraud happens because scientists are rewarded greatly for coming up with new, publishable, interesting results. They are punished severely for failing to do that.
You could be the department's best professor in terms of teaching, but if you aren't publishing, your job is at risk at many universities.
Scientists in Academia are incentivized to publish papers. If they can take shortcuts, and get away with it, they will. That's the whole problem, that's human nature.
This is why you don't see nearly as many industry scientists coming out with fraudulent papers. If Shell's scientists publish a paper, they aren't rewarded for that; if they come up with some efficient new way to refine oil, they are rewarded, and they also might publish a paper if they feel like it.
> If Shell's scientists publish a paper
A lot of companies reward employees for publications. Mine certainly does. Also an oil company may not be such a great example since they directly and covertly rewarded scientists for publishing papers undermining climate change research.
Can you go to jail for knowingly defrauding another entity out of money (such as grants). Yes. Absolutely.
Are you going to go to jail for fudging some numbers on your paper, not likely.
IMO faking a paper and then sticking that in your CV on a grant application is fraud enough to be worthy of jail.
As a collective endeavor to seek out higher truth, maybe some amount of fraud is necessary to train the immune system of the collective body, so to speak, so that it's more resilient in the long-term. But too much fraud, I agree, could tip into mistrust of the entire system. My fear is that AI further exacerbates this problem, and only AI itself can handle wading through the resulting volume of junk science output.
This is pretty funny. I usually hear this kind of language when a religious person is so devastated when their priest or pastor does something wrong that it causes them to leave their religion altogether. Are you going to do the same thing for scientism?
I'm not a particularly religious person; I didn't realize what you described happens with any great frequency. Nevertheless, I suppose one can leave a particular place of worship without leaving a religion. As with any way people form views on something societal like this, it's on a spectrum: religion, politics, science, sex, education, whatever.
Science is the search for truth. Lying is anathema to that.
How this happens given the near reverence provided to “peer review” is another question.
This sort of behavior is only going to worsen in the coming decades as academics become more desperate. It's a prisoner's dilemma: if everyone is exaggerating their results you have to as well or you will be fired. It's even more dire for the thousands of visa students.
The situation is similar to the "Market for lemons" in cars: if the market is polluted with lemons (fake papers), you are disincentivized to publish a plum (real results), since no one can tell it's not faked. You are instead incentivized to take a plum straight to industry and not disseminate it at all. Pharma companies are already known to closely guard their most promising data/results.
Similar to the lemon market in cars, I think the only solution is government regulation. In fact, it would be a lot easier than passing lemon laws since most labs already get their funding from the government! Prior retractions should have significant negative impact on grant scores. This would not only incentivize labs, but would also incentivize institutions to hire clean scientists since they have higher grant earning potential.
My recommendation is for journals to place at least equal importance to publishing replications as for the original studies.
Studies that have not been replicated should be published clearly marked as preliminary results. And then other scientists can pick those up and try to replicate them.
And institutions need to give near equal weight to replications as to original research when deciding on promotions. Should be considered every researchers responsibility to contribute to the overall field.
We can solve this at the grant level. Stipulate that for every new paper a group publishes from a grant, that group must also publish a replication of an existing finding. Publication would happen in pairs, so that every novel thing would be matched with a replication.
Replications could be matched with grants: if you receive $100,000 grant, you'd get the $100,000 you need, plus another $100,000 which you could use to publish a replication of a previous $100,000 grant. Researchers can choose which findings they replicate, but with restrictions, e.g. you can't just choose your group's previous thing.
I think if we did this, researchers would naturally be incentivized to publish experiments that are easier to replicate and of course fraud like this would be caught eventually.
I bet we could throw away half of publications tomorrow and see no effect on the actual pace of progress in science.
Replication is over-emphasised. Attempts to organise mass replications have struggled with basic problems like papers making numerous claims (which one do you replicate?), the question of whether you try to replicate the original methodology exactly or whether you try to answer the same question as the original paper (matters in cases where the methodology was bad), many papers making obvious low value findings (e.g. poor children do worse at school) and so on.
But the biggest problem is actually that large swathes of 'scientists' don't do experiments at all. You can't even replicate such papers because they exist purely in the realm of the theoretical. The theory often isn't even properly written down! They will tell you that the paper is just a summary of the real model, which is (at best) found in a giant pile of C or R on some github repo that contains a single commit. Try to replicate their model from the paper, there isn't enough detail to do so. Try to replicate from the code, all you're doing is pointlessly rewriting code that already exists (proves nothing). Try to re-derive their methodology from the original question and if you can't, they'll just reject your paper as illegitimate criticism and say it wasn't a real replication.
Having reviewed quite a lot of scientific papers in the past six years or so, the ones that were really problematic couldn't have been fixed with incentivized replication.
So then, how on earth does this stuff even get published? What exactly is it that we're all doing here?
If a finding either cannot be communicated enough for someone else to replicate it, or cannot be replicated because the method is shoddy, can we even call that science?
At some level I know that what I'm proposing isn't realistic because the majority of science is sloppy. P-hacking, lack of detail, bad writing, bad methods, code that doesn't compile, fraud. But maybe if we tried some version of this, it would cause a course correction. Reviewers, knowing that someone actually would attempt to replicate a paper at some point down the road, would be far more critical of ambiguity and lack of detail.
Papers that are not fit to be replicated in the future, whose claims cannot be tested independently, are actually not science at all. They are worth less than nothing because they take up air in the room, choking out actual progress.
That's correct. Fundamentally the problem is that foundations and government science budgets don't care. As long as voters or Bill Gates or whoever believes they're funding science and progress, the money flows like water. There's no way to fix it short of voting in a government that totally defunds the science budget. Until then, everyone benefits from unscientific behaviour.
> can we even call that science?
The amazing thing is that it all works out in the end and science is still making (quite a lot of) progress.
That's also the reason why we shouldn't spend all of our time and money checking and replicating things just to make sure no one publishes fraudulent/shoddy results. (We should probably spend a little more time and money on that, but not as much more as some people here seem to suggest.)
Most research is in retrospect useless nonsense. It's just impossible to tell in advance. There is no point in checking and replicating all of it. Results that are useful or important will be checked and replicated eventually. If they turn out to be wrong (which is still quite rare), a lot of effort is wasted. However, again, that's rare.
If the fraud/quality issues get worse (different from "featuring more frequently and prominently in the news"), eventually additional checks start to make sense and be worth it overall. I think quite a lot of progress is happening here already, with open data, code, pre-registration of studies, better statistical methods, etc, becoming more common.
I think a major issue is the idea that "papers are the incontestable scientific truth". Some people seem to think that's the goal, or that it used to be the case and fraud is changing that now, however, this was never the case and it's not at all the point of publishing research. I think a major gain would be to separate in the public perception the concepts, understanding and reputations of science vs. scientific publishing.
> many papers making obvious low value findings (e.g. poor children do worse at school) and so on.
Why are these obvious low value papers a) getting grants, b) getting published, c) not permanently damaging the researchers' careers?
If you do bad work you eventually get fired, why don't we do the same thing with research academics who do bad work?
Isn't that the point, though? If they couldn't have been fixed, they were problematic in the first place.
This stuff happens in Computer Science too. Back around 2018 or so I was working on a problem that required graph matching (a relaxed/fuzzy version of the graph isomorphism problem) and was trying algorithms from many different papers.
Many of the algorithms I tried to implement didn't work at all, despite considerable effort to get them to behave. In one particularly egregious (and highly cited) example, the algorithm in the paper differed from the provided code on GitHub. I emailed the authors trying to figure out what was going wrong, and they tried to get funding from me for support.
My manager wanted me to write a literature review paper skewering all of these bad papers, but I refused since I thought it would hurt my career. Ironically, the algorithm that ended up working best was from one of the more obscure papers, with few citations.
You should be able to build an entire career out of replications: hired at the best universities, published in the top journals, social prestige and respect. To the point where every novel study is replicated and published at least once. Until we get to that point, there will be far fewer replications than needed for a healthy scientific system.
Replications are not very scientifically useful. If there were flaws in the design of the original experiment, replicating the experiment will also replicate the flaws.
What we should aim for is confirmation: a different experiment that tests the underlying phenomenon that was the subject of the first paper.
I'd be careful about that. Faking replications is even easier than faking research, so if you place a lot of importance on them, expect the rate of fraud in replication studies to explode.
This is a very difficult problem to solve.
The problem with putting the onus on the journals is there is no incentive for them to reward replications. Journals don't make money on replicated results. Customers don't buy the replication paper they just read the abstract to see if it worked or not.
I do like the idea of institutions giving tenure to people with results that have stood the test of time, but again, there is no incentive to do so. Institutions want superstar faculty, they care less about whether the results are true.
The only real incentive that I think can be targeted is still grant money, but I would love to be proved wrong.
> And then other scientists can pick those up and try to replicate them.
Unless there are grants specifically for that purpose, it's not going to happen, and it's hard to apply for a grant just to replicate someone else's results verbatim. (Usually you're testing the theory but with a different experiment and set of data, which is much more interesting than simply repeating what they did with their data; in fact, replicating it with a different set of data is important in order to see whether the results were cherry-picked to fit the original dataset.)
I think it’s a great idea. It would also give the army of phds an endless stream of real tangible work and a way to quickly make a name for themselves by disproving results.
Journals have zero incentive to care about any of this.
It seems surprisingly hard to counter scientific fraud via a system change. The incentives are messed up all the way around.
If the older author is your advisor and you feel one of their juniors (or the elder themselves) is cutting corners, you had better think twice about which move will help your career. If confirming a recent result counts toward tenure, then presto, you have an incentive for fraudulent replication (what's the chance it's incorrect anyway? The original author is a big shot). Going against a previous acclaimed result takes guts, especially in a small field where it might kill your career if YOU got it wrong somehow; you need much stronger results than the original research, and good luck with that. We might say "this is perfect work for aspiring student researchers, and done all the time" (reimplementing some legendary science experiment), but no, not when it's a leading-edge, poorly understood experiment, and not when that same grad student is already racing to produce original research themselves.
The big funders might dedicate money to replicated research that everybody is enthusiastic about (before everyone relies on it). But some research takes years to run. Other research is at the edge of what's possible. Other research is led by a big shot nobody dares to take on. Etc etc. So where is the incentive then? The incentive might be to take the money, fully intending to return an inconclusive result.
Some research does get retested, but only AFTER it's relied on by lots of people, or much later, when better ideas have had time to emerge on how to test it more cleverly, i.e. cheaper and faster. And that's not great, because of all the effort others have wasted building on a fraudulent result, and all the mindshare the bad result now has.
This is messed up.
While Akerlof's Market for Lemons did consider cases where government intervention is necessary to preserve a market, like with health insurance markets (Medicare), he describes the "market for lemons" in the used car market as having been solved by warranties.
If someone brings a plum to a market for lemons, they can distinguish the quality of their product by offering a warranty on its purchase, something that sellers of lemons would be unwilling to do, because they want to pass the cost burden of the lemon onto the purchaser.
The full paper is fairly accessible, and worth a read.
Not sure how this could be applied to academia, one of the problems is that there can be significant gaps between perpetrating fraud and having it discovered, so the violators might still have an incentive to cheat.
> if everyone is exaggerating their results you have to as well or you will be fired.
Is this really the case, though? Isn't the whole point of tenure (or a big selling point, at least) insulating academics from capricious firings?
The big question I have is that there are names on these fraudulent papers, so why are these people still employed? If you generate fictitious data to get published, you should lose any research or teaching job you have and have to work at McDonald's or a warehouse for the rest of your life. There are plenty of people who want to be professors, so we can eliminate the ones who will lie while doing it without losing much (perhaps anything). If your job was funded by taxpayer money, there should be criminal charges for willfully and knowingly fabricating data, results, or methods. At that point you're literally lying in order to steal taxpayer funds; it's no different from a city manager embezzling or grabbing a stack of $20 bills out of the cash register.
Well you aren’t going to get tenure unless you distort your results and it’s hard to change established habits.
That, and you select for the kind of people who are willing to fake results to further their own careers.
I wonder if there are any studies on whether fraud increased after the Bayh-Dole Act. There's certainly fraud for prestige, that's pretty expected. But mixing in financial benefits increases the reward and brings administrators into play.
> ... as academics become more desperate.
Yes and ... we're already there.
The incentive structures in science have been relatively stable since I entered the field in 1980 (neuroscience, developmental biology, genetics). The quality and quantity of science is extraordinary, but peer review is worse than bad. There are almost no incentives to review the work of your colleagues properly: it does not pay the bills, and you can make enemies easily.
But there was no golden era of science to look back on. It has always been a wonderful, productive mess, much like the rest of life. At least it moves forward, and now exceedingly rapidly.
Almost unbelievably, there are far worse crimes than fraud that we completely ignore.
There are crimes associated with social convention in science of the type discussed by Karl Herrup with respect to 20 years of misguided focus on APP and abeta fragments in Alzheimer’s disease:
https://mitpress.mit.edu/9780262546010/how-not-to-study-a-di...
This could be called the “misdemeanors of scientific social inertia”. Or the “old boys network”.
There is also an invisible but insidious crime of data evaporation. Almost no funders will fund data preservation. Even genomics struggles but is way ahead in biomedical research. Neuroscience is pathetic in this regard (and I chaired the Society for Neuroscience’s Neuroinformatics Committee).
I have a talk on this socio-political crime of data evaporation.
https://www.youtube.com/watch?v=4ZhnXU8gV44&embeds_referring...
You don't need regulation for a stable durable-goods market. Income and credit shocks cause turnover of good-quality stock in the secondary market.
It could also have a chilling effect on a lot of breakthrough research. If people are willing to put out what they mostly think is right, it might set back progress decades as well.
BS governmental desperation to show any "result" (even a fake one) is what brought us here, as scientists have to show ever more fake results to get more grants.
Removing the government from science could help, not the other way around.
Good luck with that sentiment here.
People just went through the last five years and will go to their graves defending what they saw first hand. To admit that maybe those moves and omissions weren’t helpful would be to admit their ideology was wrong. And that can not be.
If I have learned anything over 40 years, it is that the number of people who actually live by the hypothesis-testing, data-collection, evidence-evaluation framework required to have scientific confidence in future actions or even claims is effectively zero.
That includes people who consider themselves professional scientists, PhDs, authors, leaders, etc.
The only people I know who live “scientifically” consistently are people considered “neurodivergent”, along the autism-adhd-odd spectrum, which forces them into creating the type of mechanisms that are actually scientific and as required by their conditions.
Nevertheless, we should expect better from people, and on average we need to do better at aligning how they think with science, which, when robustly demonstrated, shows with staggering predictability how the world works, compared to all other methods of understanding the universe.
The fact that the people carrying the torch of science don’t live up to the standard is expected - hence peer review.
This is an indictment of the incentives; the pace at which bad science is revealed (as in this case) is always too slow. But science is the one place where eventually you either get exposed as a fraud or are never followed in the first place.
There’s no other philosophy that has a higher bar of having to conform with all versions of reality forever.
I would just like to point out the irony of claiming that people live in a way inconsistent with scientific rigour, based solely on personal experience.
I think you’re suggesting that I’m making a conclusion without sufficient evidence - hence the “irony”
Recall that I'm discussing how people live, namely that they don't live according to their own claims about how to live. You'd have to evaluate my behaviors to determine whether my claim is ironic.
However, I'm happy to provide that epistemological chain if requested.
It's disheartening to think that the virtues you are told to have as a kid are considered "weak sauce" once you are an adult.
The reason many people hate children is that children are not satisfied with the level of epistemology most people can provide them, and have no compunction about saying "that answer is unsatisfactory."
Hence institutional pedagogy is so often rote and has nothing to do with understanding, even though the science of learning says that every human craves understanding (Montessori, Piaget, etc.).
In fact, the shortest way to break the majority of people’s brains is to ask them one of the following questions:
- Can you Explain the reasoning behind your behavior?
- How would you test your hypothesis?
- What led you to the conclusion you just stated?
- Can you clarify the assumptions embedded in your claim?
- Have you evaluated the alternatives to your position?
But what virtues were rewarded when you were a kid?
My parents and grandparents would say I should be charitable, helpful, and share, but what actually made them happy? Beating other kids.
It's a dilemma- do you want to be virtuous or do you want to maximize your money? I get a sense around here that only the law matters (morals be damned) and we do whatever work pays best.
> effectively zero
That feels extreme. Zero is a cold, dark, lonely number. Maybe it's correct; I don't know. I've worked on only a couple of projects in this space, and while the incentives certainly involved publishing, I don't feel that it equated to abandoning the scientific method. Instead, it was the cost to pay for the ability to continue doing science.
Can you really stand by ZERO? How about 1%? Meet me somewhere above zero, or, if you'd be so kind, make a compelling case for why we're truly at rock bottom.
OP said effectively zero, not zero. These are semantically very different.
It's funny you mention autism, adhd, and similar. It's something I believe the science is quite shaky on.
I've met so many people who self-diagnose with those "conditions", because, I think, they want the world to feel sorry for them, or something.
The American version of the cultural Revolution is about to begin, and everybody recognizes that the labor class is coming
So everybody’s trying to align themselves with a victimized group as closely to reality as possible
To such an extent where people are actively making up victimization reasons such that they can find themselves in an affinity group with other victims so they are safe from prosecution during the troubles
I was the victim of a pretty bizarre super in-your-face academic theft.
Someone snooped a half-finished draft of mine off GitHub and...actually got it published in a real journal: https://forbetterscience.com/2024/05/29/who-are-you-matthew-...
In spite of having a full commit log (with GitHub verified commits!!!) of both the code AND the paper, both arxiv and the journal didn't seem to care or bother at all.
Anyhow, I highly recommend reading the for better science blog. It's incredible how rampant fraud truly is. This applies to multiple nobel prize winners as well. It's nuts.
This guy sounds like someone who would be fun for Javier Leiva's PRETEND podcast [0]. You should reach out to Javier.
[0] https://pretendradio.org/
Can you speak more to the “not caring at all” bit? I believe you, but how did you engage them? Did you end up publishing your work eventually?
forbetterscience seems like a good idea, but the writing style, the images, and even the about page gave me pause about whether this is a reliable site for trustworthy science commentary.
After almost a year, with an insurmountable amount of open-source evidence, and with the thief having had every single paper he has ever written retracted for fraud, the best the journal did was add a notice: https://www.mdpi.com/2674-113X/2/3/20
Arxiv cared even less. They allowed the thief to DMCA strike me multiple times. He even managed to take down the real version of the paper by claiming that it was his: https://arxiv.org/abs/2308.04214
> Did you end up publishing your work eventually
No. When I tried to do so, I was rejected from a conference because their plagiarism-detection system flagged me for trying to publish something that had already been published (the stolen version).
It was very traumatic.
>I believe you, but how did you engage them?
From the article, it seems the engagement came in the form of DMCA take-down requests from university lawyers... which the publication then largely ignored for a considerable period of time (possibly due to counter-DMCA).
In an unrelated scientific field, EE, I recently witnessed how the DMCA process could be used by an "engineer" to silence criticism of his hybrid vehicle battery "upgrades" [2] — similar to Australian company DCS's snafu/lawsuit [1].
Just disgusting, these vultures that know [how to steal/lie/obfuscate] just well-enough to be dangerous... including how to manipulate our DMCA system to their dishonest advantage.
[1] youtube.com/watch?v=_QNMVMlx48E&pp
[2] theautopian.com/toyota-prius-owners-can-soon-swap-tired-old-batteries-for-sodium-ion-cells-but-drama-rages/
Huh. Sounds like the research needs to be forked to several different hosting providers, preferably ones not based in the US with its insane DMCA laws.
As a scientist who has published in the neuroscience space, I don't know what to say other than that the incentives in academia are all messed up. Back in the late 90s, NIH made a big push for "translational research": researchers were strongly encouraged to demonstrate that their research had immediate, real-world benefits or applications. Basic research, the careful, plodding work needed to nail down and really answer a narrow question, was discouraged as academic navel-gazing.
On one hand, it seems the push for immediate real world relevance is a good thing. We fund research in order that society will benefit, correct? On the other hand, since publications and ultimately funding decisions are based on demonstrating real world relevance, it’s little surprise scientists are now highly incentivized to hype their research, p-hack their results, or in rare cases, commit outright fraud in an attempt to demonstrate this relevance.
Doing research that has immediate translational benefits is a tall order. As a scientist you might accomplish this feat a few times in your career if you’re lucky. The rest of the corpus of your work should consist of the careful, mundane research the actual translational research will be based upon. Unfortunately it’s hard to get that foundational, basic, research published and funded nowadays, hence the messed-up incentives.
There's evidence that the turning point was in the 90s, but I suspect the real underlying problem is indirect funds as a revenue stream for universities, combined with a for-profit business model expectation imposed by politicians at the state and other levels. The expectation changed from "we fund universities to teach and do research" to "universities should generate their own income", which isn't really possible with research, so federal funding filled the gap. This led to the indirect-fund firehose of cash, pyramid-scheme labs, and so forth. It became a feedback loop, and now we are where we are today.
Translational research is probably part of it but I think it's part of a broader hype and fad machine tied to medicine, which has its own problems related to rent-seeking, regulatory capture, and monopolies, among other things. It's one giant behemoth of corruption fed by systemic malstructurings, like a biomedical-academic complex of problematic intertwined feedback loops.
I say this as someone whose entire career has very much been part of all of it at some level.
Good points, thanks. As I’m sure you’re aware, the indirect rates at some universities are above 90%. That is, for every dollar that directly supports the research, almost another dollar goes to the university for overhead. Much of this overhead is legitimate: facilities and equipment expenses, safety training, etc… but I suspect a decent portion of it goes to administrative bloat, just as much as the education-only part of the university has greatly increased administrative bloat over the last 30-40 years.
Another commentator made a separate point about how professors don’t always get paid a lot, but they make it up in reputation. Ego is a huge motivator for many people, especially academics in my observation. Hubris plays no small part in the hype machine surrounding too many labs.
The Retraction Watch website does a good job of reporting on various cases of retractions and scientific misconduct [1].
Like many others, I hope that a greater focus on reproducibility in academic journals and conferences will help reduce the spread of scientific misconduct and inaccuracy.
[1]: https://retractionwatch.com/
There may be a dark twist to this story.
The expose article writes:
> "UCSD neuroscientist Edward Rockenstein, who worked under Masliah for years, co-authored 91 papers that contain questioned images, including 11 as first author. He died in 2022 at age 57."
They say nothing else about this. But looking at Rockenstein's obituary, indications are that it was suicide. (It was apparently sudden, at quite a young age, and there are many commenters on his memorial page "hoping that his soul finds peace," and expressing similar sentiments.)
I shared this article with an MD/PhD friend who has done research at two of the three most famous science universities in America ... and she said "this [not this guy, this phenomenon] is why I left science."
Maybe it's like elite running - everyone who stays competitive above a certain level is cheating, and if you want to enjoy watching the sport, you just learn to look the other way. Except that the stakes for humanity are much higher in science than in sport.
Blatant fraud is rare in physics, engineering, chemistry. Lying is rare. Quality is high at the highest institutions of physics and chemistry. Exaggerated claims occur, but much less than in day to day life. Top visibility work is quickly reproduced. Reproduction is the essence of science.
Did you google "fraud in physics" or "fraud in chemistry"? (I just did.)
> Exaggerated claims occur, but much less than in day to day life.
"Day-to-day life" does not lay the foundation for millions of dollars in followup research, or set the direction of a grad student's research, i.e. their career.
> Reproduction is the essence of science.
Did you read OP's link to the Science article?
> It seems like a strange thing to take someone with a long and respected career and subject them to what would essentially be a Western blot and photomicrograph audit before offering them a big position.
This is absolutely something that we should routinely be doing, though.
It's pretty similar to the level of distrust in the software engineering job interview process.
Pick your poison, to some extent. It would be better not to have to do this after the fact, and instead to vet better at every intermediate step, but that's hard. Just a very difficult people problem.
Agreed. There's uproar over coding interviews, which makes no sense to me. We give easy-peasy code reviews to smoke-check claimed skills. 4 of 5 candidates do absolutely terribly on very easy stuff relative to their claimed skillset. No, our bar isn't high; fraud (resume fraud) is sadly very real.
I agree. To their defense, he had a few hundred papers published before joining, according to PubMed, and was a leader in his field: https://pubmed.ncbi.nlm.nih.gov/?term=Masliah%20E&filter=dat...
My concern is that with AI getting better and easier to use (e.g., in Photoshop) fraud will be extremely hard to detect.
We might need to re-think how research is done and results verified.
Maybe we need instruments that sign results cryptographically and use a blockchain mechanism to establish provenance. We should have cameras that can establish that published images have not been modified (or at least provide raw and adjusted pairs--digital radiology has the concept of a "presentation state" that I think could work).
In theory, at least, research should be auditable back to a lab notebook. The problem with photos and such is that you can't tell if one was modified before it was pasted onto the page, and large datasets just can't be put into a paper notebook. And the electronic notebooks I've used tend to be even more annoying than paper (too rigidly formatted and not adaptive to workflow optimization, but it's difficult to explain).
Anyway, those sorts of provenance-establishing measures should also protect against deep learning. You may be able to create fake data, but can you deep-fake data that was signed by Nikon device #XYZ, with cryptographically confirmed hashes published to a blockchain three years ago when the data was generated?
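To make that concrete, here's a minimal sketch of what device-side signing could look like, assuming Python's hashlib plus the third-party cryptography package; the device ID and record fields are purely illustrative:

    import hashlib, json, time
    from cryptography.hazmat.primitives.asymmetric import ed25519

    # In a real instrument this key would live in a tamper-resistant secure
    # element; here we just generate one for illustration.
    device_key = ed25519.Ed25519PrivateKey.generate()

    def sign_capture(raw_bytes: bytes, device_id: str) -> dict:
        """Hash the raw sensor output and sign hash + metadata at capture time."""
        record = {
            "device_id": device_id,                      # hypothetical ID
            "sha256": hashlib.sha256(raw_bytes).hexdigest(),
            "captured_at": time.time(),                  # ideally a trusted clock
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["signature"] = device_key.sign(payload).hex()
        return record

    # An auditor recomputes the hash of the submitted raw file and verifies the
    # signature against the manufacturer-published public key; any tampering
    # after capture makes verify() raise InvalidSignature.
    rec = sign_capture(b"...raw CCD dump...", "Nikon-XYZ")
    payload = json.dumps({k: v for k, v in rec.items() if k != "signature"},
                         sort_keys=True).encode()
    device_key.public_key().verify(bytes.fromhex(rec["signature"]), payload)

Publishing each record's hash to a public ledger at capture time is what would give you the "three years ago" guarantee.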
Yeah, it sounds a little absurd to me. It's just basic due diligence. You don't skip the background check on a potential employee just because their resume looks good and they got a reference. In those cases you still say, "Annoying that we have to wait, since we want this person on board NOW, and it's a fairly shallow investigation that 99% of the time doesn't reveal anything even when there is something, but it's the standard procedure."
I'm not a researcher or academic, but when I think about how long it takes me to do meaningful deep work and produce a project of any significance, I'm struck that his 800 papers aren't considered a red flag. Even if you allocate ~3 months per paper, that's over 200 years of work. Is it common for academics to produce research papers in a matter of days?
From the article: Masliah appeared an ideal selection. The physician and neuropathologist conducted research at the University of California San Diego (UCSD) for decades, and his drive, curiosity, and productivity propelled him into the top ranks of scholars on Alzheimer’s and Parkinson’s disease. His roughly 800 research papers, many on how those conditions damage synapses, the junctions between neurons, have made him one of the most cited scientists in his field.
It's kind of like when reporters say a CEO built [insert ridiculously complex product here], e.g., ascribing the success of OpenAI to Sam Altman, or Apple to Steve Jobs. Sure, they were important in setting the direction and allocating resources, but they didn't actually do the work.
Similarly, the heads of famous science labs have lots of talented scientists who want to work with them. The involvement of a lab director varies wildly, but for the hyper productive, famous ones, it's largely the director curating great people, providing scientific advice, and setting a general research direction. The lab director gets named on all these papers that get generated from this process.
So 800 papers isn't necessarily a red flag if the director is great at fundraising and has lots of graduate students/post docs doing the heavy lifting.
800 papers is still a red flag, even if throughout those years he had 30 postdocs at any given time (which is only true for large labs).
800 is a lot, even for someone with a big lab.
More than likely many of those authorships were "honorary", that is Masliah "lent" his (once-famous) name to help others publish their own work. He likely provided little actual contribution to many of these papers.
As such one would normally only give an author "full" credit (and responsibility) if they appear as either first or last in the list of authors. In the biosciences these are the positions indicating substantial contributions to the published work.
His co-authors are now going to be very annoyed as association with this "honorary" author will now cast doubt upon their own work.
Over 20 years, that’s 40 per year on average. He’s emeritus from UCSD and I don’t see his old lab page online, not sure how big it was. But my PI’s lab had 13 last year and has 11 people. If Masliah had around 33 people that would be a pretty normal papers per capita.
Most neuroscience papers of the type Masliah published are the result of at least 2 person-years of hands-on work (and up to 10 or 15 person-years for large papers).
800 papers over 25 years would therefore need a minimum staff of 64 full time researchers for the entirety of those 25 years. Masliah didn't have this.
For most papers on which Masliah is an author, the majority of the work will have been performed in other labs, with Masliah and those under him contributing to a greater or lesser extent. Such collaborative work is not a bad thing (assuming everyone is honest).
Web.archive has a shot of his now-deleted lab page:
https://web.archive.org/web/20240303093209/https://www.nia.n...
Among other things my physics career taught me: anyone who is listed as an author on more than 200 papers is almost definitely a plagiarist, in the sense of a manager who adds his or her name to the papers of the underlings in his or her lab. When I was still bothering to go to conferences I would sometimes have fun with them (the male variety is easy to spot: look for the necktie) by asking detailed questions about the methodology of the research. They never have any idea how the work was actually done.
Similar to
> Founder, CEO, and chief engineer of SpaceX. CEO and product architect of Tesla, Inc. Owner, CTO and Executive Chairman of X (formerly Twitter). Founder of The Boring Company, X Corp., and xAI. Co-founder of Neuralink, OpenAI, Zip2, and X.com (part of PayPal)
It can only be a fraud.
Depends on your definition of fraud. Musk is obviously not chief engineer of SpaceX while actively working at Twitter, Tesla, and Neuralink. The founding claims aren’t that unbelievable though, founding 10 companies in 30 years isn’t that hard. I would call it heavy exaggeration.
Yet he is able to answer most deep technical questions related to his technologies right on the spot. And his answers are well thought out, concise, and factual, i.e., not the handwavy crap you'd expect from a CEO of his scale.
People are listed as authors if they advised or contributed to the papers of their grad students or other people in their lab.
This also seems like a problematic practice. Perhaps we should start expecting a shorter author list, and have separate credit for advisers, small contributions, overseeing, etc.
They still need to read the science and agree with it.
The amazing part about this to me is that the only reason the authors were caught is image manipulation. The fraud in numbers and text? Not so easy to uncover.
Prediction: papers stop using pictures entirely
When I read a paper, I first look at the images and tables. A paper (in this area) without images would be very suspicious.
GenAI will make faking western blots fantastically easy
Many journals now require all versions of a gel image that is used in a figure. So, you’d have to fake the full image that is cropped down to the lanes used in the figure. I think there aren’t as many of those raw images around to train AI on… yet.
I predict it will get even worse than that. In the next couple of decades, I expect any document or work that carries a substantial reward (financial, career advancement, a grade for critical coursework in one's major) or penalty (such as indictment or conviction) to be backed by a time-stamped stack of developing documentation: drafts and revisions, with the timestamps validated against a trusted custodial clock, and a seed random string marking the start of work, all recorded in some immutable public form.
Accompanying the finished document will be a hash of all of these works along with their timestamps, the originals of which can be produced if necessary to prove a custodial chain of development over a plausible period of time and across multiple iterations: a kind of signed time-lapse slideshow of its genesis from blank page to finished product, as if a mandatory, global "track changes" flag had been enabled from the very beginning, by which the entire process can be shown to be an original, human-collaborated work and not an insta-generated AI fiction.
I actually thought digital timestamps would have been a great use case for blockchains. They are publicly available and auditable, and if you're working from hashes, you don't necessarily need to make the raw data public, just the hash. It's a use case with intrinsic value to both the data generator and a future auditor (so you could charge something for it). I know there was some work done on this, but I think it lost momentum amid the push to make crypto a store of value.
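As a rough illustration of the idea (plain hashlib only; anchoring to any particular blockchain or timestamping service is out of scope here), each draft can be chained to its predecessor, and only the resulting hashes published:

    import hashlib, time

    def chain_draft(prev_hash: str, draft_text: str) -> dict:
        """Link a draft to its predecessor so history can't be quietly rewritten."""
        h = hashlib.sha256((prev_hash + draft_text).encode()).hexdigest()
        return {"hash": h, "prev": prev_hash, "timestamp": time.time()}

    # A random seed recorded at the start of work anchors the chain.
    seed = hashlib.sha256(b"seed string published when work begins").hexdigest()
    history = [chain_draft(seed, "draft 1: outline and hypotheses")]
    history.append(chain_draft(history[-1]["hash"], "draft 2: methods added"))
    # Only each entry's hash needs to be published (to a ledger, notary, or
    # trusted custodial clock); the drafts themselves can stay private until
    # an auditor asks to verify the chain.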
The gold bugs really set back that entire field: the quasi-religious pursuit of "trustless" designs made everything more expensive, when so many problems are far more tractable with trusted third parties, both for cost and for reduced attack potential. Institutional and professional reputations are harder to build than n% consensus on a cryptocurrency, and they don't have the built-in bug-bounty problem.
For example, imagine if university libraries ran storage systems based on Merkle trees with PKI signatures and researchers used those for their papers, code, data inventory (maybe not petabytes of data but the hashes of that data), etc. If there were allegations of misconduct you’d be able to see that whole history establishing that when things were changed and by whom, and someone couldn’t fudge the data without multiple compromised/complicit people in a completely different department (a senior figure can pressure a grad student in their field but they have far less leverage over staff at the library), and since you’re not sharing a database with the entire world you have a much easier time scaling with periodic cross checks (e.g. MIT and Caltech could cross-sign each other’s indexes periodically so you could have confidence that nobody had altered the inventory without storing the actual collection).
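For what it's worth, the core of that design is small. Here is a toy Merkle-root computation (SHA-256 with pairwise hashing; a real system would add canonical serialization plus the PKI signatures mentioned above):

    import hashlib

    def merkle_root(leaves: list[bytes]) -> bytes:
        """Fold leaf hashes pairwise up to the single root a library would sign."""
        level = [hashlib.sha256(leaf).digest() for leaf in leaves]
        while len(level) > 1:
            if len(level) % 2:                 # duplicate the last node if odd
                level.append(level[-1])
            level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                     for i in range(0, len(level), 2)]
        return level[0]

    # One library signs the root over its whole archive; a peer institution
    # periodically countersigns it. Altering any archived file afterwards
    # changes the root and invalidates both signatures.
    root = merkle_root([b"paper.pdf bytes", b"analysis code", b"data manifest"])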
Sounds complicated. You could just demand the lab log books. They are supposed to be dated and countersigned. Standard practice is to counter sign outside your group.
The YC company that wanted to sell fake survey results (yes, they really did a Launch HN with that idea) will surely be the first to sell fake science results next. YC: disrupting the sciences.
Yep! Right now you don't even need GenAI; you can just run a dummy gel with X, Y, and Z proteins that will give you the desired band patterns.
Eventually AI will also be able to reliably audit papers and report on fraud.
There may be newer AI methods of fraud, but they will only buy you time. As both sides progress, a fraud generated by today's technology and committed to the record will almost certainly be detectable by a later technology.
I would guess that we're within 10 years of being able to automatically audit the majority of papers currently published. That thought must give the authors of fraudulent papers the heebie-jeebies.
The problem is that detecting fraud is fundamentally harder than generating plausible fraud. This is because ultimately a very good fraud producer can simply produce output that is identically distributed to non-fraud.
For the same reason, tools that try to detect AI-generated text are ultimately going to lose the arms race.
It's not a race though. Once the fraud is committed to record it can no longer advance in sophistication. Mechanisms for detection will continue to advance.
I think the argument is that if you produce your fraud from an appropriate probability distribution, any "detection" method other than independently verifying the results is snake oil.
Is there no liability for the author? There are billions of dollars wasted in drug trials and research that can be tied to this fraud. Surely they can face some legal issues due to this?
Not only are there billions of dollars wasted, there are many, many lives wasted. If the billions had gone in a direction that was actually promising, maybe there would be treatments that would have saved millions of person-years of quality lifetime. This person is basically a mass-murderer.
Like all things in life that have risks of fraud, negligence or potential failure, insurance could be the answer.
Want to publish in a peer reviewed paper? Well then your institution or you should take out a bond or insurance policy that guarantees your work is accurate. The insurance amount would fluctuate based on how big of impact this study could have. Is it a drug that will be consumed by millions? Big insurance policy. Is it a behavioral study without much risk... small insurance policy.
Was a person at an institution caught committing fraud? Well, now all papers from that institution carry higher premiums.
Did you sign off on a peer-reviewed paper that turned out to be fraudulent? Well, now your premiums are going up too.
Insurance costs too high to publish? Well then keep doing research until the underwriters are satisfied that your work isn't fraudulent and adjust the premiums down.
It adds a direct, near-term economic incentive to publish honestly and punishes those who abuse the system.
In other words, you are suggesting more stringent peer review conducted by insurance companies. And because insurance companies are too small to have sufficient in-house expertise on every topic, the reviews would usually be done by external consultants. The costs might range from $10k for simple papers to hundreds of thousands for large, complex papers.
The insurance model does not really work when the cost of evaluating the risks far outweighs the expected risks.
That is like saying my insurance company has to follow me around for a week while I drive before they can underwrite a policy. If there is money to be made, and money to be lost, the actuaries will find a way.
The problem is that it may become impossible to publish certain kinds of papers that are well supported and valuable, because no institution can afford the insurance.
You are not the first person in the world to own a home or drive a car. Insurance companies can offer you cost-effective insurance, because you are doing effectively the same things as many other people.
Science is largely about doing novel things and often being the first person in the world to try something. In order to understand the risks, you have to understand the actual research, as well as the personalities and personal lives of the people doing it.
Then there is the question of perverse incentives. Research fraud is not a random event but an intentional action by the people who take the insurance. If they manage to convince you to underwrite their research, they know that the consequences of getting caught will be less severe than without the insurance, making fraud more likely. Normally intentional fraud would not be covered by the policy, but here covering it would be the explicit purpose of the insurance.
Insurance companies insure one-off events all the time. You can literally insure anything; it's just a matter of whether the premiums outweigh what you perceive as the risk. "Uninsurable" just means the price is too high to be practical.
The research might be novel, but the procedures for research and publication are very similar. So insurance companies would just make sure that you followed a protocol which minimizes their risk.
Perverse incentives are taken into account by insurance. Insuring someone is always an adversarial back-and-forth to determine whether they are being truthful, which is why life insurance companies require a physical. They don't just have you self-report and then accept it as fact.
Industry professionals like lawyers and doctors carry malpractice insurance. A lawyer can still commit fraud. Insurance isn't a black and white thing. It is a sliding scale that ties risk to a monetary value.
It's not rocket science. Just actuarial science. ;)
> The research might be novel, but the procedures for research and publication are very similar.
This is wrong.
Some time ago, I completed the checklists for publishing a paper in a somewhat prestigious multidisciplinary journal. Large parts of the lists were about complying with various best practices and formal requirements in different fields. I often didn't even understand the questions outside my field. And the questions nominally within my field were often category errors. They assumed a mode of doing research that was far from universal. Overall, the process was more frustrating than (let's say) applying for a US visa.
I think you are desperately trying to force this into black and white rather than recognizing that there is a spectrum of research: some of it is similar to prior work and can easily have procedures for insuring it, while other work is more complex and requires more diligence from the insurance company. Just like nearly every single thing an insurance company does.
Yes, there is novel research that has never been done before. So what? That doesn't change whether you can get insurance or not. That's a failed argument from the beginning.
Anyway, you don't seem to be having this discussion in earnest; you seem to be intentionally disregarding large pieces of the above arguments and shoehorning in your idea that unique research makes risk impossible to assess. Kinda silly.
The cases that would require more diligence from the insurance company are the kind of research that should be encouraged. Breakthroughs are more likely to happen when people take risks and try something fundamentally new, instead of adhering to the established forms. Your insurance model would discourage such research by making it more expensive.
Additionally, even if we assume that the insurance model is a good idea, it should be tied to individual researchers, not universities. The entire model of university research is based on loose networks of independent professionals nominally employed by various organizations. Universities don't do research, they don't own or control the projects, and they don't have the expertise to evaluate research. They are just teaching / administrative organizations that provide services in exchange for grant overheads.
> that it may become impossible to publish certain kinds of papers that are very well supported and valuable because no institution can afford the insurance.
What type of research would that be? Just publish it online without insurance and everyone will treat it as unverified and uninsured... separate from other research, that is.
Once the risk of the published research has gone down (i.e. reputable peers approve, or the findings were replicated), the cost of the insurance goes down too.
If something is so costly to insure, there would be a reason, and thus the system works.
If it is possible to advance your career by publishing uninsured research then we've just renamed the problem, although I do like the idea of adding this structure. Eventually there could be so much of it that it would become an accepted norm that your research isn't actually published in a journal until five years after you informally publish it. Other scientists in the field have to be abreast of the latest findings, so now these informal publications are the true journals.
I see your point, the success of this would have to align with a change in the broader academia to only cite research from insured researchers.
The "organic" way this would happen is if there was a shift so that journals with insured research are far more valuable than uninsured research. Or perhaps if companies started suing researchers for negligence and fraud and recuperate costs if they used research that was later proved to be fraud.
In the literary world, anyone can publish a book, but a book from O'Reilly carries a different level of authority and diligence than a self-published book or blog post.
So the shift would have to be that your career can't advance without publishing a bonded and insured paper.
But that is not how research works in Academia. They have to follow the bleeding edge of the field, or they may be doing work of their own that is already irrelevant. They will not wait until a consortium of insurance companies and underwriters have done the actuarial analysis and come up with an underwriting product that the institution has funded (and what is the institution's business model for recovering this cost in a field of pure research, anyway?)
> you are suggesting more stringent peer review conducted by insurance companies
Absolutely not. Underwriters are smart. They use other variables and methods for determining risk. They don't need to directly recreate and peer review the research themselves.
I was thinking about it: If I come across someone seriously injured, try to help them, and accidentally hurt them, I'm protected (in many places) by Good Samaritan laws.
But if a health care professional does the same thing, and does something negligent, then they are usually liable. They are professionals and are held to a different standard. Similarly, that's why lawyers keep writing: this is not legal advice and you are not my client.
Perhaps a professional in science should have higher standards. Obviously they shouldn't be sued for being wrong - that would destroy science, disregard the scientific method's means to address inaccuracy, and go against science's nature as the means to develop new knowledge. But intentionally deceiving people perhaps should be illegal and/or create liability: When you publish something, people depend on its fundamental honesty and will act on it.
Here’s a deterrent:
1) revoke all of their academic accreditations and degrees
2) put them on a public “do not publish” list permanently banning them from being named on any paper in a journal
Ya seems obvious tbh.
The US has the Office for Research Integrity which can prosecute scientific fraud cases, but it only does a handful of cases per year.
To put the scale of this problem in perspective, the ORI was set up in the 1970s after Congress became concerned at widespread reports of scientific fraud. It clearly didn't work, but hangs around regardless.
It's ultimately a culture problem. Until academics have the same level of respect as ordinary corporate employees, you're going to get judges and juries who let them off scot-free.
It would improve things a lot if the Office of Research Integrity had the same clout as the SEC.
He could be prosecuted under current fraud laws, but this hardly ever happens.
I wrote a blog post on how to make this easier, including a new criminal statute specifically tailored for scientific fraud. https://news.ycombinator.com/item?id=41672599
Sorry, here's the correct link https://chris-said.io/2024/06/17/the-case-for-criminalizing-...
Any lawyers here know whether it's wire fraud to get paid to do academic research and lie about the results?
The line between outright fraud, bad methods correctly implemented, messy data, and implementation bugs is fuzzy. Trying to criminalize anything not very, very clearly #1 quickly turns into a case of "show me the man and I'll show you the crime". You think groupthink in academia is bad? Just wait until professional disputes lead to jail time for the loser.
The fact that some areas are gray shouldn't prevent us from demanding legal consequences when the fraud is gross and deliberate, as appears to be the case here.
There are unfortunately very rarely consequences for academic fraud. It's not just that we only catch a small fraction — mostly the most brazen image manipulation — but these cases of blatant fraud happen again and again, to resounding silence.
Ever so rarely, there may be an opaque, internal investigation. Mostly, it seems that academia has a desire to not make any waves, keep up appearances, and let the problem quiet down on its own.
The people doing the investigation have a vested interest in keeping it quiet.
It's like the old quote... "If you commit fraud as an RA that's your problem. If you commit fraud as the head of department that's the university's problem."
And occasionally a grad student who discovers academic dishonesty, and complains internally (naively trusting administrators to have humility and integrity), has their career ended.
I suppose a silver lining to all the academic fraud exposés of the last few years is that more grad students and faculty now know that this is a thing, and one that many will try to cover up, so trust no one.
Another silver lining might be that fellow faculty are more likely to believe an accusation, and (if they are one of the awful people) less likely to think they can save funding/embarrassment/friend by neutralizing the witness.
(ProTip: If the success of your dishonesty-reporting approach is predicated on an internal administrator having humility and integrity, realize that those qualities are the opposite of what has advanced a lot of academic careers.)
Only fix I can see is making scientific fraud criminal. But it has to be straight fraud and not just bad science.
I can't imagine any other vocation where you can take public and private money, cheat the stakeholders into thinking they got what they paid for, and then just walk away from it all when you are found out. Picture a contractor claiming to have built a high-rise for a developer, doctoring photos of it, and then just going "oops, money's all gone" with no consequences when the empty lot is discovered years later.
It seems like a strange thing to take someone with a long and respected career and subject them to what would essentially be a Western blot and photomicrograph audit before offering them a big position.
I really feel stupid asking experienced developers to do FizzBuzz. Not one has ever failed. But I have heard tons of anecdotes of utterly incompetent developers being weeded out by it.
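For anyone who hasn't run into it, FizzBuzz really is the entire test; a minimal Python version:

```python
# Print 1..100, replacing multiples of 3 with "Fizz",
# multiples of 5 with "Buzz", and multiples of both with "FizzBuzz".
for i in range(1, 101):
    if i % 15 == 0:
        print("FizzBuzz")
    elif i % 3 == 0:
        print("Fizz")
    elif i % 5 == 0:
        print("Buzz")
    else:
        print(i)
```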
Everyone seems to acknowledge this is a problem, but refuses to believe it actually affects anything when it comes time to "trust the science". Yes, science is corrupted, but all the results can be trusted, and the correct answer is always reached in the end. So, is it really a problem? Or not?
Another example of the phenomenon where people can realize something when considering it from an abstract perspective but not at a realtime object level is psychological bias and imperfect rationality. If the topic of discussion is an article about bias, rare is the person who will deny the phenomenon, and many enthusiastically admit to suffering from the problem themselves. But if the topic of discussion is something else and one was to suggest the phenomenon may be in play: opposite reaction. During realtime cognition, that knowledge is inaccessible.
I honestly think if some serious attention was paid to this and various other real world paradoxes around us, we could actually make some forward progress on these problems for a change.
A key skill for any scientist is to differentiate between quality work and science that can easily be faked.
The Alzheimer's and Parkinson's fields are too easy to fake, and too difficult to replicate. The new ideas are only ~20 years old. Big pharma companies are understandably wary of published papers.
When people say "trust the science", they often refer to things like masks, and antibiotics, and vaccines. That science is hundreds of years old and have been replicated thousands of times.
TL;DR: Some science should absolutely be trusted, some shouldn't. It's not surprising that you can't make blanket statements on a superfield ranging from germ theory to cold fusion.
> When people say "trust the science", they often refer to things like masks, and antibiotics, and vaccines. That science is hundreds of years old and has been replicated thousands of times.
When people say "trust the science" they're usually referring to fairly recent developments. Covid vaccines were in development and testing for just over 18 months before being mandated and were certainly not replicated on a large scale by disinterested 3rd parties before being mandated. The idea that we can have effective scientific policy without trust in scientific institutions is just... not accurate.
Exactly. Nobody needs to be told to "trust the science" on gravity and electricity, nobody asks to consult scientific consensus. The argument only arises for the more suspicious niches.
Yeah, we can trust that Ozempic works, because faking or fudging weight loss is close to impossible.
Maybe this indicates the need for a better metric.
mRNA is very new...
It's a matter of how established the science actually is.
Questioning novel science is one thing, but questioning whether the Earth is flat or Germ Theory holds is another thing altogether. The problem with skeptics is that they sometimes hang around conspiracists.
It's hard not to discount these people when the person next to them thinks black people are biologically inferior. And when those skeptics don't distance themselves from or explicitly condemn those bad actors, it calls into question whether their positions are born of skepticism or of some strange prejudice, with skepticism merely constructed as cover.
For example, during the Covid pandemic there was a lot of questioning around masks. In hindsight, the answer is obvious: it doesn't really matter if masks were or were not effective, because they're essentially free to wear. Even in the worst case, nobody is actually hurt.
But there were many, maybe millions, of mask deniers who would simply refuse to wear them. They were doing this because of institutional distrust and political motivations, not because they truly believed the masks were dangerous. And this is the trouble: these people are skeptics, but they're skeptics with an end-goal of political destabilization, i.e. they're dangerous.
When you mix it all together, which people often do to themselves, it discredits the very thought process.
Your mind reading and soothsaying are perfectly rational though I suppose, it is only us conspiracy theorists who suffer from delusional cognition?
> it is only us conspiracy theorists who suffer from delusional cognition
Of course not, but if you, say, think the Earth is flat you are delusional. That's just what it is, and I'm not gonna hand hold crazy people when I tell them they're crazy.
The issue is when crazy people assimilate, or rather try to infiltrate, groups of educated skeptics. Now they all look crazy, and that's a problem.
And what of the much larger harm your kind causes to the world? Nothing a nice story can't wash clean I suspect, but then that doesn't stop the harm.
My kind? What... round-earthers?
Or do you mean people who didn't deny Covid? Hmm, I would demonstrably cause less harm than you. Because even if masks are almost useless, that's better than nothing, right?
Simply being contrarian for the sake of it isn't impressive, it's kind of sad. Sometimes the big dogs are just right. If you can't articulate their motivation, objective value gain, methods, etc, then you're probably just crazy.
> My kind? What... round-earthers?
No, normative thinkers.
Nobody fucking cares about politics, mate. People can't breathe with them on hot, crowded city buses or for 9 hours straight when working. You're just a cuck who wore his face nappy trying to justify his cowardice to himself. Nobody else is interested in your shit ideas and theories
See, this is what I mean. People who take a skeptical approach to masks aren't doing it for scientific reasoning, they're doing it to avoid being a "cuck".
This type of mentality actively discredits skeptics, because nobody wants to be lumped in with that. There are genuinely very smart people who were/are skeptical of many Covid policies, but unfortunately, they have to stand next to you. Which, of course, makes them look very stupid. It's a tough problem.
> Which, of course, makes them look very stupid
Because of normative cognition, which you suffer from, and which is causing the planet to warm up.
No problem though, you carry on just as you are, no need for you and yours to improve yourselves.
> which is causing the planet to warm up
Yes, if you don't believe in Global Warming, you are just stupid. I'm not gonna hold your hand when I make you aware of your intellectual insufficiencies - you are stupid.
Now that you know you're stupid, you can either choose to reinforce your stupidity by living in a delusion or do a bit of research and catch up to the average human. I don't care either way, but you're past the point of claiming ignorance. Eventually the stupidity becomes self-enforced, meaning you and others will go out of your way to ensure you stay stupid.
There has surely got to be a better way to discuss important matters like climate change than this.
There are, and they've been in practice for many decades.
However, I give people the benefit of the doubt and assume they have a functional brain. Therefore, I conclude if someone "doesn't believe" in climate change, that is a choice. Not a matter of ignorance.
I do not pity you enough to spit in your face with hand-holding and euphemisms. There is a deliberate choice here, and I'll treat it as such.
Is imagining shortcomings on my behalf and then categorizing them as factual to use as evidence in an argument a part of these superior approaches you mention?
If I was to do the same to you, would you not protest?
I'm not imagining a shortcoming, rather I'm doing the opposite. I'm assuming you've done the proper research around climate change so I'm not going to patronize you with it. Therefore, I conclude you are not ignorant, you're willfully contrarian.
If you interpret that as a worse outcome, here's a thought: stop being willfully contrarian. Sometimes the most popular and most researched opinion is correct. You gain nothing by being contrarian.
Being skeptical is good. Being skeptical means you require a wealth of evidence to believe something. Well, if you don't believe in climate change, you're NOT skeptical - you're just an obnoxious contrarian. Because we have a wealth of evidence and I'm assuming you've reviewed it.
The virus is just going to go into people’s eyes dumbass. There are millions of cucks and weak men in western societies that didn’t exist 50 years ago. These men would have had deeper voices, excellent eye sight, thick heads of hair, followed logic, been brave ... now we have porn addicted gamer simps with nasally voices pretending to be scared of catching a head cold because they’re only too happy to bow down and be submissive with the added bonus that they can hide their disgusting eyes and faces in public , essentially enforcing mass cardboard box over head wearing with these “face nappies”.
> There are millions of cucks and weak men in western societies that didn’t exist 50 years ago
Yes, go back 50 years ago then. When we had so much more racism, when homosexuals were treated like dogs, when women were beaten for sport and nobody cared.
Those types of people died off not by some conspiracy. They died off because they were a cancer on society, a tumor on mankind. They died off because nobody liked them, except others of their ilk.
What you call "weak" I consider strong. We have the strength today to solve problems. We don't lynch black people anymore, we don't beat women anymore. Men are no longer scared to be themselves. I mean, people like you shiver in your timbers when you see a slightly feminine man - do you not understand the irony in that? How pathetic that makes you? Are you really so stupid that it's right in front of your eyes and you can't see it?
If it's the past you crave, I have doubts about your character. Go talk to an older gentleman and see what they've seen. We've moved on, either figure it out or die in the past. We're not gonna wait around and hold the hands of the weakest of our kind to catch up - you will be left behind.
I wonder if there's evidence of fraud _increasing_ or if the detection methods are just improving.
In my last workplace, self-evaluation (and, therefore, self-promotion) was mandatory on a semi-annual cycle and heavily tied to compensation. It's not surprising that it became a breeding ground for fraud. Absent a strong moral conviction (which I would argue is in decline), these sorts of systems will likely always be targets for fraudulent behavior.
You're definitely seeing the consequences of papers written in an era when large-scale fraud analysis wasn't feasible; now you have all this tech that can scoop them up and look for those "needle in a haystack" instances of fraud.
I'm thinking about all the plagiarism issues uncovered in the publications of former Harvard president Claudine Gay (and, similarly, of Neri Oxman, Bill Ackman's wife, who was basically exposed due to Ackman's campaign against Gay). I looked over all the instances of plagiarism in detail, and, while not excusing them, they seemed less like egregious theft of others' ideas and more like laziness/sloppiness. But I could easily imagine that laziness/sloppiness being fostered by an idea of "How could someone really check this word-for-word anyway?"
Well, now we have tech that makes it almost trivially easy to expose this type of misconduct.
That's an interesting perspective. We tend to make judgments based on what's possible today, not twenty years from now (obviously, there are exceptions, like privacy). So it's easy to fall into sloppiness and not expect consequences. Maybe there's a lesson here...
I’ve even hacked peer reviews. Nobody really wants to write yours, so offer to do it for them.
Obviously making it mindlessly laudatory.
I've said so many times, but we need to go back to a system where it is possible to make a career in science and get funding for replicating other people's work to verify the results.
This leads to a tragedy of the commons. Say a random nation, Sweden for instance, devotes 100% of its governmental and university research budgets to replication.
70% of the studies they attempt are successfully replicated. 20% are inconclusive or equivocal. 10% are clearly debunked.
Now the world is richer, but Sweden? No return on investment for the Swedes, other than perhaps a little advanced notice on what hot new technologies their sovereign funds and investors ought not to invest in.
A bloc of nations, say NAFTA/CAFTA-DR, or the European Union, might be more practical.
That's the carrot. As for the stick, bad lawyers can get disbarred, bad doctors can get "unboarded". Some similar sort of international funding ban/blacklist for bad researchers would be useful.
Could it be part of the training for young researchers - replicate existing experiments?
I applaud that approach. The first year of a Ph.D. program could be reformulated to become 75% replicating the research of others, preferably that of unaffiliated research organizations.
A lot of this research is very involved and esoteric, requiring specialized equipment found only in one place, so some would be very hard to replicate. If what Theranos was doing (or claiming to do) was easy to replicate, it would've imploded years prior to when it did. So not all fraud could be detected, but a lot of the low-hanging fraud, especially in the psychological and pharmacological fields, could be quickly identified. Such a system would be a substantial upgrade and I applaud your suggestion. A smaller country could blaze the trail, because "big boys", like the U.S., are too set in their ways.
I think this suggestion contains the implicit bias that “replication isn’t important or challenging”, hence you leave it to trainees. Actually, replication is incredibly challenging. Put PhD students on it, and they’ll be convinced the original study was fraud for 4 years until they finally have the skill to get it right!
Alternately, look at one recent example of massive waste as a result of accepting fraudulent research as valid.
> Hundreds of millions of dollars and [16] years of research across an entire field may have been wasted due to potentially falsified data that helped lay the foundation for the leading hypothesis of what causes Alzheimer’s disease.
https://globalnews.ca/news/9016221/alzheimers-research-poten...
Wouldn’t science in total be impossible to fund if this argument were true? What advantage does Sweden have from doing science and publishing if everyone else gets to use it and they could just wait for someone else to do it? If this was how it worked, wouldn’t every scientist work in secret and never publish anything?
Anecdotally, during my (fairly short-lived) academic career, in which I did research with three different groups, 2/3 of them were engaging in fraudulent research practices. Unfortunately the one solid researcher I worked for was in a field I wasn't all that interested in continuing in, and as a naive young person who believed in the myth of academic freedom and didn't really understand the funding issue, I jumped ship to another field, and found myself in a cesspool of data manipulation, inflated claims, and all manner of dishonest skullduggery.
It all comes down to lab notebooks and data policies. If there is no system for archiving detailed records of experimental work, if data is recorded in pencil so it can later be erased and changed, if the PI isn't in the habit of regularly auditing the work of grad students and postdocs with an eye on rigor and reproducibility, then you should turn around and walk out the door immediately.
As to why this situation has arisen, I think the corporatization of American academics is at fault. If a biomedical researcher can float a false claim for a few years, they can spin their research off to a startup and then sell that startup to a big pharmaceutical conglomerate. If it fails to pan out in further clinical trials, well, that's life. Cooking the data to make it look attractive to an investor - in the almost completely unregulated academic environment - is a game that many bright-eyed eager beavers are currently playing.
As supporting evidence, look at mathematical and astronomical research, the most fraud-free areas of academics. There's no money to be made in studying things like galactic collisions or exoplanets, the data is all in the public domain (eventually), and with mathematics, you can't really cook up fraudulent proofs that will stand the test of time.
> ...[bio people make money by] spin their research off to a startup... ...mathematical and astronomical research [is] fraud-free...
You are talking about a part of the academy that, relative to medicine, very few people work in.
Show up to a bank looking like someone who knows math, and they'll cut you a huge check. Is that not fraud?
> As supporting evidence, look at mathematical and astronomical research
Is there evidence of the fraud levels in those fields?
I imagine how common fraud is has more to do with the relative number of researchers in a field and the chance of getting caught.
Sure money could be a factor, but the desire for prestige can motivate people just as easily.
> mathematical and astronomical research, the most fraud-free areas of academics. There's no money to be made
So we're systemically safeguarding the quality of astronomy research, by setting up a gradient (at MIT: restaurant catering for business talks, pizza for CS, stale cookies for astronomy) to draw off some flavors of participants and thus concentrate others?
Seems to be censored from the NIH staff directory now https://www.nia.nih.gov/about/staff/masliah-eliezer
Most recent working archive.org snapshot:
https://web.archive.org/web/20240303093209/https://www.nia.n...
woopsies
When I was in my doctoral program I had some pretty promising early results applying network analysis to metabolic networks. My lab boss/PI was happy to advertise my work and scheduled a cross-departmental talk to present my research in front of ~100 professors or so. While I was making a last-minute slide for my presentation I realized one chart looked a little off and I started looking into the raw data. I soon realized that I had a bug in my code that invalidated the last 12 months of calculations run on our HPC cluster. My conclusions were flat out wrong and there was nothing to salvage from the data. I went to my lab boss the night before the talk and told him to cancel it and he just told me to lie and present it anyways. I didn't think that was moral or scientifically sound and I refused. It permanently damaged my professional relationship with him.
No one else I talked to seemed particularly concerned about this, and I realized that a lot of people around me were bowing to pressure to fudge results here and there to keep up the cycle of publicity, results, and funding that the entire academic enterprise relied upon. It broke a lot of the faith I had been carrying in science as an institution, at least as far as it is practiced in major American research universities.
Coding errors are a really common source of fraud unfortunately. You did the right thing but the vast majority don't. Given a choice between admitting the grant money was wasted, the exciting finding isn't real, everyone who cited your work should retract their papers or just covering it up, the pressure to do the latter is enormous.
During COVID I talked to a guy who used to do computational epidemiology. He came to me because I'd written about the fraud that's endemic in that field and wanted to get stuff off his chest. He was a research programmer, assisting scientists. One of the stories he told involved checking the code for a model written in FORTRAN. He discovered it was mis-using an FFI and using pointer values in equations instead of the dereferenced values. Everything the program had ever calculated was garbage. He checked and it had been used in hundreds of papers. After emailing the authors with a bug report, he got a reply half an hour later saying the papers had been checked and the results didn't change so nothing needed to be done.
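I haven't seen the FORTRAN in question, but that class of bug is easy to reproduce in any FFI. A contrived Python/ctypes sketch of what "pointer values in equations instead of the dereferenced values" looks like:

```python
import ctypes

x = ctypes.c_double(3.14)

# Buggy: a memory address sneaks into the arithmetic.
wrong = ctypes.addressof(x) * 2.0   # a huge, meaningless number

# Correct: dereference first, then compute.
right = x.value * 2.0               # 6.28

print(wrong, right)
```

The insidious part is that the buggy version still produces numbers, so "if it looks plausible, it is right" review never catches it.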
Little known fact: the COVID model that drove lockdowns in the UK and USA was nothing but bugs. None of the numbers it produced can be replicated. When people pointed out that this was a problem academics went on the attack, claimed none of the criticism was legitimate because it didn't come from experts, and of course simp journalists went along with all of it. They got away with it completely and will probably do it again in future. Even in this thread you can see people defending COVID science as the good stuff! It was all riddled with fraud.
Part of the issue is that scientists are mostly self taught coders who aren't interested in coding. They frequently use a standard of: if it looks right, it is right. The whole thing becomes a circular exercise in reinforcing their priors.
>the COVID model that drove lockdowns in the UK and USA was nothing but bugs
I would love a source, but I believe this given my experience with coding standards in the computational biology space, especially since some of my own previous work and teaching touched on those models. I couldn't believe any of the models that were publicized, because they were at odds with what I thought the scientific consensus was on spread and containment (96 hours after patient 0, it's pointless to attempt to restrict movement).
For a source, I wrote this a few years ago:
https://dailysceptic.org/2020/05/06/code-review-of-fergusons...
It was based primarily on the model's own github bug tracker.
> My conclusions were flat out wrong and there was nothing to salvage from the data.
Wow, that’s pretty crazy. I have to say, many times in my career I’ve been writing a paper and realized “** there’s a bug”, and had to redo everything. But the overall conclusion never changed, because the idea was grounded from several different angles (usually the pieces fit together even better). One bug might invalidate your result, but even if your code was correct, the underlying assumptions behind the code could be wrong! I think the real issue was that your boss wasn’t active enough in your work to make it robust to coding mistakes.
>I think the real issue was your boss wasn’t active enough in your work to make it robust to coding mistakes.
That's a major issue and it went beyond coding, I picked a very well known and influential advisor and eventually discovered he didn't really direct any research or write papers, that was all the research assistants and postdocs. I was pretty much left on my own and expected to surface with a paper to put his name on.
It’s time someone started something similar in appearance to GitHub, but for science (datasets, images, calculations, scripts). Then, if journals required it, it just might get traction and make fraudulent science easy to spot.
Edit: found this article which says everything I wanted to say but couldn’t put into words. https://slate.com/technology/2017/04/we-need-a-github-for-ac...
Add another aspect here that LaTeX is a bit outdated in 2024 (I know that’s controversial! Sorry) and that we can do a lot better for digesting and displaying information than A4 sheets of paper, for example responsiveness, audit/comment logs/references to individual paragraphs/revision logs, and the ability to click figures and see underlying data or high resolution copies. This would be great in a web-based editor medium. Also the ability to “fork” a paper would be fantastic. And to automatically track and generate references, then roll it up as back/forward reference analytics for the authors so they can see impact.
> Add another aspect here that LaTeX is a bit outdated in 2024 (I know that’s controversial! Sorry)
I met Leslie Lamport seven or eight years ago and asked him what a completely modern LaTeX might look like. He replied “well, we won’t be using PDFs in twenty years” and so it would need to be something completely different. Something interactive, with depth. Remembering, of course, to focus on quality content first and quality presentation second.
In a world with LLMs, this question becomes ever more interesting - why write a literature review if one can be generated?
I think the important thing is capturing information in basic blocks (text, images, etc) and having the flexibility to reflow it later for any modern presentation mode, be it ingestion by LLM, listening to it, or just rendering it on desktop, mobile.
Translation and a11y is another important consideration here.
I'm surprised that people are surprised by science being done in non-scientific ways.
I got a taste of this in my high school honors biology class. I decided to do a survey of redwing blackbirds in my town. I had a great time, there was a cemetery across the street from my house with a big pond, where 6-8 males hung out. I was excited when later in the season several females also arrived and took up residence.
I eagerly wrote up my results in a paper. I thought I did "A" level work but was distressed when the teacher gave me B- or C+. She said "My husband and I are birdwatchers who have published papers on redwing mating habits in the area, and we haven't seen any females this year. Neither did one of your classmates who watched redwings in her neighborhood." While she did not directly in writing accuse me of fraud, she strongly implied it.
I told her to grab her binoculars and hang out at the cemetery one morning. She declined, as she was a published authority and didn't need to actually observe with her own eyes. IIRC I had photos but they were from faraway with a Kodak Instamatic (this was the mid-'80s), so she didn't accept those as evidence.
I often wonder if my life would have gone in a different direction if I had a science teacher who actually followed the scientific method of direct observation! It didn't come easy to me, but I was very interested in science before this showed me clearly that science is just another human endeavor, replete with bias, ego, horseshit, perverse incentives, and gatekeeping.
Scale this experience out to tens of thousands of young people. These kinds of people should not be teaching! A good teacher is capable of fearlessly admitting to a room of children that they were wrong and the students were right, or better yet that they have no idea what the answer is!
We have done a great disservice to human intellect to have mistaken the gift that empiricism gives of predicting the world, with knowledge of the world itself of which we possess almost nil.
In the future, those who commit fraud are not likely to leave traces in a Western blot and photomicrograph audit.
When the experiments are significant, double-blind is not enough. You need external auditors when conducting experiments, preferably with a separate team performing the experiments from the one that designed them.
My career has been in this space (medical research, not neuroscience) and I honestly cannot fathom how this happened. I don't understand how a researcher can wake up one day, manipulate data, and then show it to others. I feel bad for everyone whose time was wasted building off this research; others' careers were likely charted on the basis of it. What a shame.
What I don't get is that people claim the incentives are skewed because highly cited papers get you the top jobs. However, if a significant subset of the citations come from work that requires the fraudulent result, then this should increase the chance of it being exposed... and quickly.
That is, assume a person publishes a result: "factor X seems to lead to outcome Y". Many other scientists will then start chasing the low-hanging-fruit result: "something that looks like factor X seems to lead to something that looks like outcome Y". In other words, they will be performing a sort of replication, but in a novel way. If the result is fraudulent, then none of these results will materialize. So I don't get how a paper can be fraudulent AND highly cited without attracting scrutiny, unless we are talking about a fraud mafia.
Here I am using the field of pure mathematics as a mental model. Assume a person publishes a mathematical result with a flawed proof that escapes scrutiny. If this result is used by sufficient number of mathematicians (especially the lemmas used to prove the theorem) then fairly quickly it will end up generating self contradictory results.
And yet, it's common. And it can pay off.
https://en.wikipedia.org/wiki/Francesca_Gino
That guy has a channel on fraud.
Also check out Data Colada: https://en.wikipedia.org/wiki/Data_Colada
This isn't theoretical, there's a lot of it going on.
I am more interested in how it occurs in the hard sciences.
How can a scientific result accumulate a huge number of citations without its effect being replicated in many different contexts?
Consider how hard it is to publish a negative result, especially when it fails to reproduce "established" science.
For all the complaints about AI-generated content showing up in scientific journals, I'm excited for the flip side, where an LLM can review massive quantities of scientific publications for inaccuracies/fraud.
Ex: Finding when the exact same image appears in multiple publications, but with different captions/conclusions.
The evidence in this case came from one individual willing to volunteer hundreds of hours producing a side by side of all the reports. But clearly that doesn't scale.
I'm hoping it won't have the same results as AI Detectors for schoolwork, which have marked many legitimate papers as fraud, ruining several students' lives in the process. One even marked the U.S. Constitution as written by AI [1].
It's fraud all the way down, where even the fraud detectors are fraudulent. Similar story to the anti-malware industry, where bugs in security software like CrowdStrike, Sophos, or Norton cause more damage than the threats they protect against.
[1] https://www.reddit.com/r/ChatGPT/comments/11ha4qo/gptzero_an...
> For all the complaints about AI-generated content showing up in scientific journals, I'm excited for the flip side, where an LLM can review massive quantities of scientific publications for inaccuracies/fraud.
How would this work? AI can't even detect AI generated content reliably.
Not in a zero shot approach. But LLMs are more than capable of solving a similar scenario to the one presented:
- Parse all papers you want to audit
- Extract images (non AI)
- Diff images (non AI)
- Pull captions / related text near each image (LLM)
- For each image > 99% similarity, use LLM to classify if conclusions are different (i.e. highly_similar, similar, highly_dissimilar).
Then aggregate the results. It wouldn't prove fraud, but could definitely highlight areas for review. i.e. "This chart was used in 5 different papers with dissimilar conclusions"
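A minimal sketch of the non-AI middle steps, assuming the Pillow and imagehash libraries (the folder name and distance threshold are made up, and the LLM caption comparison is left as a comment):

```python
from itertools import combinations
from pathlib import Path

from PIL import Image
import imagehash

def hash_images(folder: str) -> dict[Path, imagehash.ImageHash]:
    """Perceptually hash every extracted figure in a folder."""
    return {p: imagehash.phash(Image.open(p))
            for p in Path(folder).glob("*.png")}

hashes = hash_images("extracted_figures")

# Near-duplicate figures across papers show up as a small Hamming
# distance between perceptual hashes; 4 is a guess to be tuned.
for (p1, h1), (p2, h2) in combinations(hashes.items(), 2):
    if h1 - h2 <= 4:
        print(f"possible reuse: {p1} ~ {p2}")
        # next step: feed both captions to an LLM classifier
```

Perceptual hashing, unlike a byte-level diff, also flags images that were rescaled or recompressed between papers, which is exactly how reused gels tend to reappear.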
How would that be possible? Novelty is a known weakness of LLMs, and ideally the only things published in peer-reviewed journals are novel.
Detecting images and data that's reused in different places has nothing to do with novelty.
Wouldn’t it be cool if people got credit for reproducing other people’s work instead of only novel things? It’s like having someone on your team who loves maintaining but not feature building.
LLMs might find some specific indications of possible fraud, but then fraudsters would just learn to avoid those. LLMs won’t be able to detect when a study or experiment isn’t reproducible.
Of course, but increasing the difficulty of committing fraud is still good. Fraudsters learn to bypass captchas as well, but they still block a ton of bad traffic.
Won't the scientist use some relatively secure/private model to fraud-check their own work before submitting? If it catches something, they would just improve the fraud.
This is terribly, terribly frustrating. For every one of these cheats there are hundreds of honest, extremely hard-working, ETHICAL scientists who toil 60 hours a week doing the thing they love. It is also terribly frustrating that, being human after all, smooth talkers with a confident stride, an easy smile, and an eagerness to shake hands can and do quickly climb the academic ladder, especially the administrative one. This makes me terribly sad.
> For every one of these cheats there are hundreds of honest, extremely hard-working ETHICAL scientists
Every one of these /discovered/ cheats.
Remember this particular cheat was one of your ethicals until a few moments ago.
And in the behavioral and social “sciences,” the fraud is just off the charts. If psychologists wanted to prove that healing crystals worked — if that was the cause du jour — there’d be journals filled to the brim with “research” “proving” their efficacy.
I spent almost 10 years of my life as a founder of a mental health technology startup and the day we got acquired was a huge relief — I could finally get out of that industry — an industry that is much more about academic politics than actually solving anything. Seeing the maneuverings behind the scenes of the DSM-V, diagnostic codes, etc., was profound enough to destroy any idealism I might have felt towards that industry. (And yes, it’s an industry.)
Luckily in fields such as climate science or virology, there is never fraud. Good thing too since a lot of our governmental policies result from those fields. (And yes, that is sarcasm.)
“Science” feels very much like the Catholic Church — many people with good intentions, but there have been enough people participating in bad things that it poisons the entire institution and degrades whatever little faith people might have had remaining.
Follow the science indeed.
On a tangent, this video[0] from Sabine Hossenfelder about academics in general is eye opening. In comments, veritasium[1] agrees:
>After finishing my PhD I went to a university-led session on ‘What Comes Next.’ What I heard sounded a lot like “now, you beg for money.” It was so depressing to think about all the very clever people in that room who had worked so very hard only to find out they had no financial security and would be spending most of their days asking for money. I realised that even what I thought of as the ‘safe path’ was uncertain so I may as well go after what I truly want. That led me here.
EDIT: Typos
[0]. https://youtu.be/LKiBlGDfRU8
[1]. https://www.youtube.com/@veritasium
I can't manage to be really surprised. We already know many people will cheat when the incentives are right. And when the law of the land is “publish or perish”, then some will publish by any means necessary. Thinking “this subsegment of society is so honorable, they won't cheat” would be incredibly naive.
Don’t worry. The next generation will use generative algorithms to make fake images that are indistinguishable from the real deal.
But if the NIH had done that in 2016, they wouldn't be in the position they're in now, would they? How many people do we need to check? How many figures do we have to scrutinize? What a mess.
This is the core problem with science today. Everyone is trying desperately to publish as much, and as fast, as they can. Quantity over quality. That quantity dictates jobs, fellowships, grants, and careers. Dare I say we have a "doping" problem in science and not enough controls, especially when it comes to some countries' feverish output of papers that have little to no scientific value, cannot be replicated, and are full of errors, but at least they're published and the authors can get a job.
For a long time the numbers have been manipulated and continue to be so, seemingly due to national pride.
https://en.wikipedia.org/wiki/List_of_countries_by_number_of...
https://www.science.org/content/article/china-rises-first-pl...
Scholars disagree about the best methodology for measuring publications’ impact, however, and other metrics suggest the United States is still ahead—but barely.
> There's also a proposed Alzheimer's therapy called cerebrolysin, a peptide mixture derived from porcine brain tissue. An Austrian company (Ever) has run some small inconclusive trials on it in human patients and distributes it to Russia and other countries (it's not approved in the US or the EU). But the eight Masliah papers that make the case for its therapeutic effects are all full of doctored images, too.
> cerebrolysin
This was discussed here recently: https://news.ycombinator.com/item?id=41239161
I am wondering whether some kind of bounty program that required sufficient proof of fraud would work. Sadly, I don't think anyone will fund it. And those participating likely won't be looked upon well in the circles...
Interestingly, this and other cases like it suggest that one of the most valuable skills some scientists have is Photoshop.
As a scientist, I'm so glad that we're forced to publish all our primary/secondary data along with the publication itself. It's stored in a repository which is "locked" when the DOI (digital object identifier) is generated. Overall, the publishing process is tedious and frustrating, but this extra work is crucial and cases like this makes that very clear. However, in most of the recent cases you didn't even need to look at the data as even the publication itself shows the misconduct.
Duplication of the same image with different captions about armed conflicts is a technique mainstream news likes too.
> But if the NIH had done that in 2016, they wouldn't be in the position they're in now, would they? How many people do we need to check? How many figures do we have to scrutinize?
All of them
Why worry about fraud, deception, and misleading uses of AI when we have the old kind of fraud?
Or, on the other hand, now you don't even have to manipulate images; you can just generate the ones you need.
I personally know two PhDs who faked a large portion of their data in order to complete the dissertation process. The reality is that you can get stuck in the research phase because genuine, large sample-size quantitative data is often extremely difficult if not impossible to obtain, and in the cases I personally know, they simply mocked it in a realistic way. And there’s no way to know since the surveys are often anonymous.
Reminder that these people are only caught because they photoshopped Western blots.
Even more widespread is when PIs just throw out data that don't agree with their hypothesis, and make you do it again until the numbers start making sense.
It's atrocious, but so common that if you're not doing this, you're considered dumb or weak and not going to make it.
Many PIs end up mentally justifying this kind of behavior (need to publish / grant deadline / whatever), even over the protests of most of the lab members.
Those who refuse to re-roll their results — those who want to be on the right side of science — get fired and black balled from the field.
And this is at the big famous universities you've all heard of
Technical/academic people might hate "influencer" culture for its crassness, but whenever fame/popularity is the primary goal, this is the only social dynamic.
People are not outraged in academia that the primary goal is fame/popularity (rather than knowledge, technical ability), they're outraged that someone is cheating in this game to get ahead.
This is happening across the spectrum, tbh, as the world becomes increasingly monocultural and winner-takes-all in its social schema. People talk about the Anthropocene, but look at human social cultures: the millions of ways of living (with dignity, mind you) sustained by a population under 1B as recently as 100 years ago are now down to 1 or 2 at best.
In such a vast pool, this kind of stuff is not only bound to happen, but is the optimal way forward (okay, maybe not such blatant stuff). Honor codes etc. are BS unenforceable measures that are game-theoretically unstable (and kill off populations that stick to them). See what the finance industry does, for instance.
> and others appear to be running for cover.
In every industry right now there appear to be a lot of people running cover. I have a personal belief that, with the exception of a few industries, 50% of managers are simply running cover. This is easy to explain:
1/ Nothing follows people
2/ Jobs were easy to get in the last 3 years (this is changing FAST)
3/ Rinse and repeat and stay low until you're caught.
Perhaps the root of all evil is "publish or perish". I am long out of research, working at a teaching college, and yet I am still expected to publish. Idiocy.
Academic fraud is also enabled by lack of replication. No one gets published by replicating someone else's work. If one could incentivize quality replication, that could help.
>>>> "..sleuths began to flag a few papers in which Masliah played a central role, posting to PubPeer, an online forum where research publications are discussed and allegations of misconduct often raised. In a few cases, Masliah or a co-author replied or made corrections. Soon after, Science spotted the posts, and because of Masliah’s position and standing decided to take a deeper look."
I am comforted that there are still real journalists, such as those at Science, doing fantastic work and pulling on a thread wherever it may lead, reputations be damned.
Kudos to the PubPeer scientists for spotting the problem. Hat tip to you.
Last but not least, never forget that the free flow of information allowed this fraud to be uncovered. Truth and "moderation" (of the censorship/disinformation kind) cannot simultaneously exist.
Why would we expect academia to be different from anything else these days? Fraud is how you get ahead. It is how you gain competitive advantage. When everyone is cheating, the only way to win is to cheat smarter. Fraud is the end result of the dreams that motivate people to be better than they are.
This stuff just ENRAGES me.
With that off my moobs ... for those interested in the broader topic, I highly recommend Science Fictions, by Stuart Ritchie. The audiobook is also excellent.
I'm not a working scientist, and I found it completely engaging. Worth it just for the explanation of p-hacking.
Do folks here know how expensive it is to develop a drug? How much work it takes to get it through the pipeline? How much time, heartache, effort, has gone wasted? How many patients given false hope? This is tragic on so many levels
While I agree this is a big problem, science should never be defined by a single article.
I was always taught that science is a tree of knowledge where you build off previous positive results, all of which collapse when an ancestor turns out to be false.
In this particular case, the person of interest published 800 widely cited papers. That seems like a considerable collapse.
I see this as a pruning process and an inevitable part of science.
But I would further argue against what others were saying about personal ethics. Science must remove the human as much as possible from the process.
As the funders of almost all of these research studies, we also need to introduce mechanisms that impart a compounding fear in the minds of these criminals as the years pass.
Basically, wrong study results may, over the years, end up affecting millions (if not billions) of people. Someone (at every level of the chain) should pay a compounding penalty for verified fraud.
At the same time, this shouldn't prevent an upcoming scientist from being bold. After all, science is all about pushing the boundaries of understanding and doing.
> But at the same time I have a lot of sympathy for the honest scientists who have worked with Masliah over the years and who now have to deal with this explosion of mud over parts of their records.
This really is quite unfortunate.
Is it? Is it possible that none of them knew? Should they have responsibilities to go with the benefits of putting their names on major discoveries?
Ah, I see how you could misunderstand. In context, this sentence was contrasting between those who knew, and those who didn't know about the fraud. To make my point more clear:
It's impossible that none of them knew.
It's definitely possible that some of them didn't know.
I'm not a scientist, because of fraud and other reasons related to academia, but I thought one of the tenets of an experiment was reproducibility. Were his experiments reproduced independently? Why not?
Tangential but related: My young tween child a couple of days ago:
“I hate AI, I don’t know what’s real anymore”
I think we’re about to see something much more extreme than the early ‘net days of “photoshop!” rage at clever fakes
I think major scandals such as this one are essential, and we need more of them.
Why? The misaligned incentives that drive (in my opinion) otherwise-well-meaning human beings to fraud in the biomedical sciences stem from competition for increasingly-scarce resources, and the deeply and fundamentally-broken culture that develops as a result. The only thing that will propel the needed culture shift is for the people who provide the money to see, from the visibility provided by such scandals, just how bad the problem is, and to basically withdraw funding unless and until the changes happen.
Some of those changes include:
1. Reducing competition for funds by reducing the number of research-focused faculty positions (a.k.a. principal investigators, or PIs) across the board. When people's livelihoods depend on the ridiculous 5% odds of winning an important grant competition, they WILL cheat. As it stands, 20 well-funded scientists are probably more productive than 100 modestly- to poorly-funded ones, most of whom will do nothing meaningful or useful while trying to show "productivity" until the next funding cycle.
2. Reducing competition for funds by providing reasonably-assured research funding, tied to a diversity of indices of productivity, NOT just publications. As an example, a PI should be hired with the understanding that they'll need `x` dollars over the next 10 years to do their work. If those dollars aren't available, the person shouldn't be hired.
3. Reducing the number of PhD- and post-doctoral trainees across the board. These folks are mostly used as cheap labor by people who are well-aware, and don't care, that there will likely be no jobs for them.
4. Turning those PhD and post-doctoral positions into staff scientist positions, for people who want to do the research, but don't want the hassle of lab management. Staff scientist positions already exist, but in the current environment, when a PI can pay a postdoc $40k a year to work 80 - 100 hours a week, versus a staff scientist $80k a year to work 40 hours a week, guess which they pick.
5. Professionalizing the PhD stream. A person with a PhD in the biomedical sciences should be a broadly-capable individual able to be dropped, after graduation, into an assortment of roles, either academic or industrial. Right now, the incentive to produce publications tends to create people who are highly expert in a tiny, niche area, while having variable to nil competencies in anything else. Professionalization increases the range of post-PhD options for these folks, only one of which is academia. As it stands now, there's the tendency to feel that one has nothing if one doesn't have publications -- which increases the tendency towards fraud.
I don't know why this would be surprising. There's nothing more obvious than the fact that research is riddled with both fraud and laughably shoddy work.
If you're an academic and want to use the fastest publishing stack ever created that also helps guide you to building the most honest, true thing you could create, I have built Scroll and ScrollHub specifically for you.
https://hub.scroll.pub/?template=paper
Happy to provide personal help onboarding those who want to use this to publish their scientific work. breck7@gmail.com
YouTube video of porting a paper to Scroll: https://www.youtube.com/watch?v=oNJBBAR-F2s
Once, at 3Com, Bob Metcalfe introduced a talk by one of his MIT professors with the little joke, "The reason academic politics is so vicious is that nothing's at stake."
The guy said, "That depends on whether you consider reputation 'nothing.' "
I guess what that shows is, you can always negotiate and compromise over money, but reputation is more of a binary. An academic can fake some work, and as long as he's never called on it, his reputation is set.
So yeah, a little more fear of having one's reputation ruined would go a long way towards fixing science.
> reputation
A caveat that "reputation", like competence, is more variegated and localized than is often appreciated. As with someone who is highly competent and well regarded in their own subfield, while simultaneously rather a nutter about some nearby subfield where they don't actually work.
One can have a reputation like "good, but beware they have a thing for <mechanism X>". Or "ignore their results using <technique> - they see what they want to see". Subtext that gets passed among professors chatting at conferences, and to some extent to their students, but otherwise isn't highly accessible.
When people speak of creating research AIs using just papers... that's missing seemingly important backchannels, and correspondence with authors. An AI attempting research from papers alone would be like a professionally isolated, developing-world professor.
But this is really a societal/political issue: since we decided that economic capital is king and symbolic capital not that much… (This is really the story of the last four decades or so.)
There are some people who think everything is "a societal/political issue."
But that one-dimensional view is boring. Life is more than politics.
Well, this is about Pierre Bourdieu, and he had a few things to say about academia, as in Homo Academicus.
And I'm not sure what example could illustrate the problem with the lopsided valuation of economic capital and the general devaluation of symbolic capital (as compared to pre-1980s, we have since undergone a social revolution of considerable dimensions, which is also why there isn't an easy fix) better than this one.
Socio-economic issues aren't one-dimensional, in fact they're very complex. Most of our systems and beliefs are socially constructed.
Humans are, by our biology, social creatures. Modern humanity more than ever before. If you're not considering the social effects, then IMO you're not addressing anything of value.
... and if ALL you can see is social effects, then you're reducing the rich complexity of human nature to a gray goo.
Life may be more than politics but all of life is inherently political.
No it isn't. Myopia on your part (and destructive myopia, too).
>But this is really a societal/political issue
Bang on.
Not many people in academic/technical circles realize this, often for their entire lives. In their naive worldview, they cannot even imagine that people can stoop that low.
(embarrassingly and shamefully I used to be one of those naive people)
The person who did this fraud decided that symbolic capital > economic capital (by being in academia/govt).
The problem being, we have "economized" academia, through things like "publish or perish", a citation pseudo-stock-market, and third-party funding, and all incentives are built around this pseudo-economy. Which also imports all the common incentives found in the economy…
I have always said that while professors get paid less money than in industry, they are compensated in reputation to make up for it. Status and reputation are the currency of academia.
Intrinsic to the article is, arguably, a significant cause of fraud in this field: The article talks about fraud as if it's done by the 'other' - by someone else, other than the article's author (or their audience).
The solution starts when you say, 'we committed fraud - our field, our publication, the scientific enterprise. What are we going to do?'
Does the author really have no idea about these things? That they occur?
Does anyone know of an up-to-date or live visualization of the amount of scientific fraud? And perhaps also measuring the second order effects? i.e. poisoning of the well via citations to the fraudulent papers.
It's hard to tell at this point if it's just selection bias or if the scientific fraud problem has outgrown the scope of self-correction.
Title should be changed to be more specific. It appears as if it's referring to an industry rather than just a person.
So things haven't changed in the 30 years since I left academic medicine. Par for the course given how grants and funding are carried out. This will continue to happen as the system design guarantees this outcome.
I would rather die than deliberately cause a humongous speed bump in the history of human understanding of the universe like this guy did. And the choice is never that stark. It's usually "I'd rather work in a less highly paid role".
To selfishly discard the collective attention of scientific experts for undue gain is despicable and should disqualify a person from all professional status indefinitely in addition to any legal charges.
I deeply respect anyone whose desires align with winning the collective game of understanding that science should be. I respect even more those folks who speak up when their colleagues or even friends seek to hack academia like this guy did.
I'm a recovering academic, and have not published since not long after defending my dissertation.
I blame this behavior entirely on "publish or perish". The demand for novel, thoughtful, and statistically significant findings is tremendous in academe, and this is the result: cheating.
I left professional academia because I resented the grind, and the push to publish ANYTHING (even reframing and recombining the same data umpteen times in different publications) in an effort to earn grants or attain tenure.
The academia system is broken, and it cannot be repaired with minor edits, in my opinion. This is a tear out and do over scenario for the academic culture, I'm afraid.
What about all the authors citing these papers? Didn't they find any incongruities in their own research?
I've been saying this for years and have been punished for that. Even here.
I've done Biology and CS for almost 20 years now, I've worked at four of the top ten research institutions in the world. The ratio of honest to bullshit academics is alarmingly low.
Most of these people should be in jail. Not only do they commit academic fraud, many of them commit other types of crimes as well. When I was a PhD student, my 4 year old daughter was kidnapped by staff at KAUST. Mental and physical abuse is quite common and somewhat "accepted" in these institutions. Sexual harassment and sexual abuse is through the roof.
I am very glad that, slowly, these things are starting to vent out. This is one real swamp that needs to be drained.
Some smartass could come up and say "where is your evidence for this?". This is what allows this abhorrent behavior to thrive. Do you think these people are not smart enough to commit these crimes in covert ways? The reason why they do it is because they know no one will find out and they will get away with it.
What's the solution? I've thought about this a lot, a lot. I think a combination of policies and transparency could go a long way.
Because of what they did to me, I am fully committed to completely destroying and expunging from academia the people who do these things. If you, for whatever reason, would like to help me on this mission, shoot me an email; there are a few ideas already taking shape towards that goal.
"Four of the top ten" research institutions is probably part of the reason for your experiences. I went to an elite private undergrad as a scholarship student and was sexually abused by the son of high powered lawyers, probably awful people themselves, who targeted scholarship students, international students, etc. because we were vulnerable with no recourse. I then went to a highly ranked but not super sexy public school for my PhD and my experience has been significantly better.
Bad actors are attracted to glamor and prestige because they're part of the cloaks and levers they use to escape consequences. Bad actors are far less attracted to, just as an example, living in Wisconsin, Michigan, or Indiana and telling people at conferences that they work at UW rather than Cambridge. UCs are also vastly more welcoming and supportive of working and middle class students than HYPSM even at the graduate level. That doesn't mean that you won't find any assholes at these places, and go too low in the rankings and you'll see ugly competition over scarce resources, but there's a sweet spot where more honorable people who aren't chasing prestige cluster and you'll find more support and recourse. Public schools ranked 5-15 are best for students without significant, significant social savvy and other forms of protection, IMO.
> Public schools ranked 5-15 are best for students without significant, significant social savvy and other forms of protection
Do you think these are also the best for incoming faculty?
So sorry to know you're one of the victims of these idiots.
>scholarship students, international students, etc. because we were vulnerable with no recourse
That's very accurate, this is a big target group prone to being abused.
>Bad actors are attracted to glamor and prestige because they're part of the cloaks and levers they use to escape consequences.
Yes, it could definitely be that the higher you go the more rotten it becomes, for the reasons you mentioned. The Epsteins of the world hang around those places for a reason.
Shoot me an email (check profile), I'll be very glad to get your feedback on what is being done to fight against this.
I hope your daughter is ok?!
We are fine now. That was four years ago. Our embassy intervened and eventually she was released and we were able to fly back home.
I'm not 100% satisfied with how they handled the situation (they took a while to react to the issue) but in the end we were able to leave that place and I'm happy with that.
If there's this much overt, deliberate fraud and dishonesty in all of our research institutions, the quantities of soft lying and fudging are inconceivable.
We need to seriously rethink our approach to stewarding these institutions and ideas; public trust is rightfully plummeting.
NIH page for Eliezer Masliah is returning access denied:
https://www.nia.nih.gov/about/staff/masliah-eliezer
Here is a relevant video that I watched recently: https://www.youtube.com/watch?v=nfDoml-Db64
He's probably referring to Sylvain Lesné's previously detected Alzheimer's fraud, a hugely influential doctored paper. And now the #1 Alzheimer's researcher, Eliezer Masliah, is also a fraudster.
"Trust the Science"
Science is the best way we have of understanding reality, but sadly it is mediated by humans. Just because a human is a scientist, it doesn't make them infallible.
I think the worst part has been lost in the noise.
There were, and currently are, people suffering from Parkinson's disease who are being subjected to greater suffering, knowingly, to further this person's career.
This is Nazi and Tuskegee experiment level evil. This person should go to jail. Not US jail, international jail. These are crimes against humanity.
We need a checksum for institutions:
https://youtu.be/PbAVTEbGF3c?si=nAivVaZEI0gMRzX9
(Such a good discussion!)
Oh wow, it was not just some guy publishing fraudulent papers in fraudulent journals that nobody reads or cites. He had giant impact, and was cited tens of thousands of times!
"Publish or perish" incentivizes publication volume, which is going to lead directly to all kinds of attempts to pad publication counts.
You get what you incentivize.
I hate the thought that researchers and drug developers may have wasted their effort and dollars developing drugs based on one extremely selfish person's bogus results.
Don't worry, a lot of them are fudging their numbers too, it's no biggie
You can’t fake human clinical trials.
Is it time for periodic AI-driven audits of papers? Some types of audits may be easy: Western blots, for example (a rough sketch of one cheap screening pass follows below). But many edge cases will require lots of sleuthing, or preferably open access to all files and data. Obviously, paying for your own audit sets up the incentives the wrong way.
Alzheimer’s research has been a mess for 30 years as Karl Herrup argues persuasively in How Not to Study a Disease:
https://mitpress.mit.edu/9780262546010/how-not-to-study-a-di...
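On the "easy" end, you wouldn't even need a large model: perceptual hashing can flag near-duplicate panels among a paper's extracted figures. A minimal sketch, assuming the Pillow and imagehash Python packages and a hypothetical directory of extracted figure images; splices and local clones would need far more sophisticated tooling, and every hit still needs human review:

    from itertools import combinations
    from pathlib import Path

    import imagehash
    from PIL import Image

    def flag_near_duplicates(image_dir, max_distance=4):
        """Yield pairs of figure images whose perceptual hashes nearly match."""
        hashes = {
            path: imagehash.phash(Image.open(path))
            for path in Path(image_dir).glob("*.png")
        }
        for (p1, h1), (p2, h2) in combinations(hashes.items(), 2):
            if h1 - h2 <= max_distance:  # Hamming distance between 64-bit hashes
                yield p1, p2, h1 - h2

    for a, b, dist in flag_near_duplicates("extracted_figures"):
        print(f"possible reuse: {a} ~ {b} (distance {dist})")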
I guess we need to find a way to incentivize good practice rather than interesting results? Turns out that science is so hard that people cheat.
That’s what you get when you let bean counters take over academia and the worth of scientists is measured by the number of papers and citations.
I love how often STEM people point at things like Sokal as fundamental criticisms of the humanities, and then stuff like this happens.
I'm a little puzzled:
> Splicing, cloning, overlaying, copy-and-pasting
Is there no third-party verification?
No requirement to send the original blot papers somewhere?
Doctored neuroscience papers. I'm shocked.
Surely the incentive mismatch isn’t this simple:
Big results are rewarded, the process is considered worthless?
Verifiable computing and data lineage are important mitigations to be developed here.
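For the data-lineage half, even a simple hash chain over each analysis step would make silent retouching of raw data detectable after the fact. A minimal sketch in Python; the file names are hypothetical, and this illustrates the idea rather than any established standard:

    import hashlib
    import json

    def sha256_file(path):
        """Hash a file's bytes so any later edit to it is detectable."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    def record_step(prev_record_hash, inputs, output):
        """Append one analysis step to the lineage chain."""
        record = {
            "prev": prev_record_hash,  # links the steps into a chain
            "inputs": {p: sha256_file(p) for p in inputs},
            "output": {output: sha256_file(output)},
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["self"] = hashlib.sha256(payload).hexdigest()
        return record

    # Hypothetical usage: hash the raw gel scan, then each derived figure.
    # step1 = record_step("genesis", ["raw_blot.tif"], "figure3_panel_b.png")
    # step2 = record_step(step1["self"], ["figure3_panel_b.png"], "quantification.csv")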
Not clear whether it would be a net benefit, adding constraints and complexity to the scientific process, which will be skipped whenever possible by underpaid lab rats. Also, GIGO.
Need to tackle the incentives directly.
So I guess it’s time for the hourly Hackernews propaganda push about “science bad”.
This is all because science is systematic (step by step), not systemic (considering the whole system).
Both perspectives are required to understand reality. [1]
It's time to update science.
[1] "Systems Engineering: A systemic and systematic methodology for solving complex problems." Joseph Kasser, 2019. p. 17
Were their papers peer reviewed? How does something like this happen?
I misread as "Freud, so much Freud". Which is also true
Were the papers peer reviewed? How does something like this happen?
Peer reviews are very surface-level, often delegated to inexperienced students, and not incentivized well to do any deep analysis except checking for proper references (the incentive here being making the author cite you or your friends). Been that student.
If this trend continues, science will be more like religion
It has been. Look at the history of science. Or look at well-educated lazy people. They explicitly say they believe in science.
At what point does scientific fraud become criminal?
Straight to jail.
Same as white collar crime, except in academia.
There is so much junk science these days, and the problem is that the incentives are wrongly set (quantity over quality).
The quantity over quality incentive has ruined so, so much about modern life.
At least science has mechanisms for dealing with fraud, for recognizing fraud and recovering from it. Can't be said for religion or politics.
Glad the title here is "Fraud, so much fraud" and not "Research misconduct". I hope that Masliah is charged with federal wire fraud.
In cases like this where the fraud is so blatant and solely done for the purposes of aggrandizing Masliah's reputation (and getting more money), and where it caused real harm, we need to treat these as the serious crimes that they are.
Just a lark, not to be taken too seriously:
I wonder if a market-driven approach could work here, where hedge funds hire external labs to attempt to reproduce the research underlying new pharmaceutical companies or trials and then short the companies whose results they can’t replicate before results get reported.
Trust the science.
And now a whole generation of doctors will probably be “treating patients” using these “findings”. See e.g. COVID, where it became obvious that the ventilators were killing people, and then we kept hooking people up to them for several more months.
I didn’t get the Covid vaccine because of all the medical research fraud I’ve witnessed as a grad student.
Remember things like this the next time you try to mandate injections with no long term research.
Distrust in science is already a big problem as it is, but this is really making it so much worse.
Good luck convincing an anti-vaxer now.
It was already hard before, but now they have plenty of ammunition.
I did not know Madoff did Science....
> A former NIA official who would only speak if granted anonymity says he assumes the agency did not assess Masliah’s work for possible misconduct or data doctoring before he was hired.
> Indeed, NIH told Science it does not routinely conduct such reviews, because of the difficulty of the process. “There is no evidence that such proactive screening would improve, or is necessary to improve, the research integrity environment at NIH,” the agency added.
LOL. Here are your tax dollars at work, Americans.
Aw shucks, better luck next time. I bet each of you hackers possess exactly the humanist, ethics focused, inclusive, science based, data driven solution "we" need to fix this problem. If only it wasn't for those bad people who made this bad system turning all the good people into bad people!
This shit should be a crime. Imagine how many person-hours and how much money has been wasted.
Wait until image diffusion is used to fake blots and panels. :(
We already have fake rat 50x priapic erections.
If you are familiar with academia you'll realize the academic dishonesty policy is essentially the playbook by which academics behave. The author is surprised that Eliezer Masliah purportedly had instances of fraud spanning 25 years. I bet the author would be even more surprised to find out that most academics are like that for the entire duration of their careers. My favorite instance is Shing-Tung Yau, who is still a Harvard professor, who attempted to steal Grigori Perelman's proof of the Poincaré conjecture (a millennium prize problem <https://www.claymath.org/millennium-problems/> that comes with a $1MM prize and $10k/mo for the rest of one's life; Perelman rejected all of it.)
I mean, get this: an extremely gifted mathematician living on a measly salary in Russia nearly had his millennium prize stolen by a Harvard professor. What more evidence do you need?
You've given two examples. Please explain why you can extrapolate to all of academia.
From personal experience, it is all I've seen. Could anyone be in a position to extrapolate to all of academia without speaking from personal experience? I'm not speaking of all academics (hence 'most'). It's a statement similar to "Hollywood has a drug problem" or something of that sort.
My advice to anyone going into Hollywood would be to stay away from drugs; my advice to anyone going into academia is to treat every interaction as if you've just sat at a poker table in Las Vegas.
I work in Hollywood. I am not sure it has more of a drug problem than, say, tech or finance. Maybe it does; I don't know. The point is, when a celebrity is a drug addict you hear about it. When a banker or a lawyer is, you don't.
Our experience of things has a lot of bias toward what we want to hear. Generalization plays into stereotypes and ideology.
I believe that tech and finance also have a drug problem. Those that sell expensive drugs like cocaine go after rich clients. You work in Hollywood, but have you been attending wild private parties? I've worked in academia and I was in the thick of it, I've experienced first hand the fraud I'm talking about, and it was a large part of my experience, not some side note. Perhaps it's an uncomfortable truth that academia is in the state it is in, but again, it is of utmost importance to warn younger people to its perils. (Act as if you're at a poker table at all times.) In any case, how do you know that it isn't your biases that prevent you from considering what I describe? What is so surprising with the claim that people who are very incentivized to steal and commit fraud do so if they are not punished for it?
edit: and it's not things I've heard; these are direct experiences, i.e. people stole my work, and things like that. As a graduate student, to watch professors come to you with problem X, take what you've said (an actual solution), and publish a paper without attribution, that sort of thing; to report it and have nothing be done about it, et cetera, and on it goes. It's just instance after instance of such behavior, or the million ways in which they are careful to trick you into working on their problems without receiving attribution. One such trick, for example, that again happened to me: after a conference talk I got into an e-mail discussion where I explained my approach; I was told that "they already have these results" (the trick here was to divulge less in the talk than what was currently known, in order to be able to avoid "significant progress by another person" in case another person does share new progress they have already established, and hence avoid having to share attribution). It turned out that our discussion was enough for them to go from n=3,4 to a general formula involving primes, because I pointed out a certain property they had not noticed. This is just a single example of the sorts of tricks, aside from total fraud, that happen, and one of the milder incidents that happened to me.
> you'll realize the academic dishonesty policy is essentially the playbook by which academics behave
If you are unable to "extrapolate to all of academia" then I suggest you be more selective in your statements.
I extrapolate to all of academia, but not to all academics (persons working in academia). My methodology is based on my intuition and my experiences. Already in this YC article the comments appear to be akin to the first meeting between battered housewives. You don't have to believe me or others, I'm just issuing a warning to anyone thinking of getting into academia: be alarmed and alert, and always careful. It's nothing like the movies portray academia to be, instead it's a thieves den, or a poker table, etc, you get the point.
Sorry that your experience in academia has been so negative in this regard. If it is any consolation, mine has been the opposite.
Sounds like someone should write a paper that makes fraudulent claims about the extent of fraud in all of academia!
The damage this person and his accomplices dealt to science and the reputation of medical research at this moment in time is enormous. The first thing that comes to mind is that this outing of such blatant fraud will inevitably be quoted by hordes of anti-vaxxers and anti-science cultists for years to come.
From "The Big Crunch" by David Goodstein" (1994) https://www.its.caltech.edu/~dg/crunch_art.html
Unfortunately, sometimes someone becomes a bad example. That doesn't make them a "scapegoat", the favored defense of people like that.
A scapegoat is something that takes on all the sins of a lot of others who skate free. If Masliah is the only one who ever suffers, then he IS a scapegoat, but if this article serves to uncover a lot of other bad actors, then he's not. And if his example serves to warn a lot of other scientists to clean up their acts, then his suffering is a benefit.
The language of the article is as low as it is loaded. This is just Derek Lowe covering for the fact that “Science” magazine and the like have let this scoundrel (and many more like him) carry on, without hindrance, for an entire career; pointing the finger anywhere and everywhere but at the journals themselves. None of this is an isolated incident. It is widespread! There is a new scapegoat every month.
> there is a new scapegoat every month.
Correction: there is a new scoundrel every month. It would be nice to expose them all instantaneously, but unfortunately that's not possible.
I had a feeling academia was just run by people letting blatant fraud, exploitation and abuse of PhD students, stealing during peer review, and other forms of plagiarism, fraud, and exploitation slide by. They let it slide because correcting these things would lead to massive changes in academia that might put them out of jobs.
Every year that feeling becomes more certain. Glad I quit the track in grad school.
I feel terrible for all the incredibly smart and hard-working academics who remain honest and try to make it work. They do what they love; otherwise they wouldn't do such intensive work with so much sacrifice.
It is really disheartening too, because academia only turns on the "honesty filter" when it comes to minor grad students who pissed off the wrong people. But you can do all this fraud constantly and become president of Harvard if you know the right politics.
Dishonest lot. I hope karma is real so they get what is coming to them for taking advantage of people who just love to increase humanity's knowledge.
Alright, am I being downvoted because of my hostility toward those leading academia, or because I was too sympathetic to the people being exploited?
They would be out of jobs.
You're being downvoted because you're correct—HN is an echo chamber for zealous regurgitation of the opinions of the academy and media—institutions that have decayed. It's been happening slowly for a while, but now things are starting to come apart at the seams.
It is really annoying because a common response is
"We know academia is bad. But this is the best we have and it is hard to improve"
when that is false on two counts.
1. If you had said the same thing before 2016 or COVID, people would not agree that academia is rife with fraud or worthy of skepticism.
2. The same people who dismiss suggestions for how the system can be improved are the ones who would suffer from disruption, as you say. They have the power to dismiss these arguments to begin with.
When I hear someone say, 'We know academia is flawed, but it's the best we have, and it's hard to improve,' I can't help but feel a deep, seething frustration.
It's profoundly insulting and grotesque—on par with excusing the inexcusable.
Accepting this degree of mediocrity is as repulsive as tolerating the most heinous acts imaginable. I've confronted people directly with this, to their face, because to me, it's inconceivable how anyone can be okay with such a vile acceptance of the status quo.
If society was even slightly capable of rational action . . . (legally, I cannot complete this sentence).
You are being downvoted because you're extrapolating from one fraud case to calling all scientists dishonest.
I can do it too: a person named SpaceManNabs made a bad post. Therefore all posts by SpaceManNabs, and probably all posts on HackerNews, are bad. A dishonest lot.
> from one fraud case to call all scientists dishonest
I specifically mention that the majority of scientists are not dishonest. The majority of scientists are not running academia. The majority of scientists are suffering from this system, to differing degrees.
If I were as rude as you, I'd extrapolate on reading ability, especially since it is not just one fraud case.
Regardless, even if I was wrong on that, all my other criticisms of academia still stand, like exploitation of the phd students. I really hope the grad student unions get what they want.
I appreciate your response though. Makes me feel confident that it is just salty people on HN that hate truth, because otherwise, why would you mischaracterize what I said?
Humans being humans
It's wrong to think that because there are reports of fraud or systematic error in science, you shouldn't trust it. I'm sure all those things exist. But they also exist in every other institution, with a lot less self-reflection and self-correction.
Nassim Taleb said that people think weathermen are terrible predictors of the future. He says meteorology is among the most accurate sources of predictions in our lives, but we can easily validate it, so we see the mistakes. If we had as much first-hand experience with other types of predictions, we'd appreciate the accuracy of weathermen. My point is: just because you know the flaws in a system, don't assume it isn't better than another.
blablabla - tldr, you're very very smart.
Where is Eliezer Masliah from?
“MASLIAH, 65, TRAINED in medicine and neuropathology at the National Autonomous University of Mexico (UNAM), earning his medical degree in 1982 and completing a residency in pathology in 1986. He married a U.S. resident who also studied medicine at UNAM. They relocated to San Diego after Masliah’s training.”
— <https://www.science.org/content/article/research-misconduct-...>
What kind of "from"? For example, ethnically, given his Hebrew name, he seems to be from Israel or at least the Levant.
Universities became tax funded and the consequence is warm bodies filling chairs. I have experience with a number of big-name unis in the U.S.; they are all about office and national politics. It's not about the work, and hasn't been for a while now.
Defund universities. No more student loans, make them have to earn their place in the market or we will continue to suffer under the manipulated system that is actually killing students.
> Defund universities. No more student loans, make them have to earn their place in the market or we will continue to suffer under the manipulated system that is actually killing students.
This... it's no longer about value, it's about optics... The problem exists in most industries now. The pendulum needs to swing back the other way before it's too late to stop the decay...
On the plus side, this is the kind of stuff you could screen pretty easily with large-model machine learning. Not that there is a business in identifying scientific fraud; doing that with fraudulent government documents would probably have a better ROI (at least for the taxpayer). But clearly we need a repository of every image/graph that has been published as evidence to start.
It would be something you could offer to journals perhaps as a business. Sort of "peer reviewed and fraud analyzed" kinda service.
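Concretely, that repository could start as nothing fancier than an index of perceptual hashes of published figures, with each new submission checked against it. A hedged sketch along the lines of the imagehash-based idea above; the index path and format are assumptions for illustration:

    import json

    import imagehash
    from PIL import Image

    # Hypothetical corpus index: {figure_id: hex-encoded perceptual hash}
    INDEX_PATH = "published_figure_hashes.json"

    def check_submission(image_path, max_distance=4):
        """Return previously published figures that nearly match a new image."""
        with open(INDEX_PATH) as f:
            index = json.load(f)
        new_hash = imagehash.phash(Image.open(image_path))
        return [
            fig_id
            for fig_id, hex_hash in index.items()
            if new_hash - imagehash.hex_to_hash(hex_hash) <= max_distance
        ]

    # Every hit is only a candidate; a human reviewer makes the call.
    print(check_submission("submitted_figure2.png"))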
What is truly sad for me is the 'wrong paths' many hard-working and well-meaning scientists get deflected down while someone cheats to get more 'impact' points.