26 comments

  • withinboredom a day ago

    There's this guy I usually have on in the background on YouTube who replicates chemistry experiments -- or attempts to. It's pretty rare to see him find a paper that doesn't exaggerate yields or that gives enough detail; he usually has to guess at things.

    • datadrivenangel a day ago

      You don't exaggerate yields, you just publish the best one you get out of a dozen attempts. Chemistry is messy.

      • thyristan a day ago

        That, in science, is called "lying".

        You publish the range of results, the average plus standard deviation, or the average plus standard deviation of a subset together with the exclusion criteria and exclusion range. Picking a single result is a lie, plain and simple, and messiness is not an excuse.
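
        A minimal sketch of that kind of honest reporting, in Python, with hypothetical yield numbers and a hypothetical `report_yields` helper:

```python
import statistics

def report_yields(runs, exclude_first=0, reason=""):
    """Summarize every run: full range, mean +/- sample standard deviation
    of the kept subset, with the exclusion count and criterion stated up front."""
    kept = runs[exclude_first:]
    return {
        "n_total": len(runs),
        "range_pct": (min(runs), max(runs)),       # range over ALL runs, not just kept ones
        "mean_pct": round(statistics.mean(kept), 1),
        "stdev_pct": round(statistics.stdev(kept), 1),
        "n_excluded": exclude_first,
        "exclusion_reason": reason,
    }

# Twelve hypothetical percentage yields; the first two runs are excluded,
# with the criterion reported rather than hidden.
runs = [41, 55, 78, 81, 79, 83, 80, 77, 82, 84, 89, 80]
print(report_yields(runs, exclude_first=2, reason="operator inexperience"))
```

        The point of the sketch: the reader still sees the full range (including the 41% run) and the spread, not just the single best number.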

        • passwordoops a day ago

          Hence the crisis we have in science today.

          As an aside, I'm working at a QC chem lab now, with results that have a direct impact on revenue calculations for clients. The reports go to accountants, so error bars don't exist. We recently had a case where we reported 41.7 when the client expected 42.0 on a method that's +/- 1.5... They insisted we remeasure because our result was "impossible". The repeat gave 42.1, and the client was happy to be charged twice.

        • mattmanser a day ago

          See my comment too, you jump to lying, but as the GP said, chemistry is messy.

          • thyristan a day ago

            Any other science is messy as well.

            Truck passing by on the nearby road? Oops, my physics experiment got shaken, the results look messy. Lab animal caught a cold? Oops, my genetics experiment now has messy data. Atmosphere is turbulent and some shitty Starlink satellite passed by at the wrong moment? Oops, my stellar spectra are messy now. Imperfection in my test ingot? Oops, now my tensile strength measurements have messy data because a few samples ripped too early...

            It is the nature of experimental science to deal with messiness. And dealing with it means being honest about it. You write it up as it happened, find the problems in the messy parts of your data, exclude them, and explain the why and how. Hand-picking results and omitting data you find inconvenient is not science, it's fraud.

            If I am allowed to just pick one result, I can show you a perpetuum mobile, cold fusion, superhuman intelligence in mice and tons of other newsworthy things...

            • mattmanser 15 hours ago

              Can I ask if you've done any actual commercial work in any science?

              From the way you're talking, I'm going to guess you're an armchair commentator.

              One person performing an unfamiliar experiment once is going to get lower yields and occasional failures.

              • thyristan 4 hours ago

                I've done scientific work in science. I've been paid for it, but by a public university, so not "commercial" in the strictest sense of the word.

                Do you mean to suggest that "commercial work" in science takes shortcuts and ignores the essentials of the scientific method? That commercial science, or at least commercial chemists writing science-like papers, systematically misrepresent their results? Do you think the standards for good scientific conduct don't apply to chemists or commercially working scientists? Because any of that would mean that "commercial work" in science is just fraud dressed up as science.

                And yes, of course an experienced experimenter will get better, easier, more consistent results; everyone knows that. The issue is not about that at all. The issue is about suppressing results and data that you don't like. Those may come from initial inexperience, bad luck, normal variation in measurements, or whatever. You present all your data, with statistics and an explanation, and if that explanation is "the initial 20 values are excluded from the reported average because I was heavy-handed with the frobnicator", then that is fine. People can check your values and your reasoning and convince themselves that your reporting is right and your experiment works to the extent you reported.

                If you just say "the yield is 89%" without mentioning that all the other yields were worse, without mentioning any kind of variance, range or exclusions, you are lying. That 89% was your single best yield; being the best, you were never able to reproduce it, so it might as well have been leftover product from improperly cleaned glassware...

                Are you really trying to convince me that all chemists are crooked like that? Or all commercial work in science is crooked?

      • awjlogan a day ago

        Compare the yields in a typical JACS (or any high-end journal) paper versus those in OrgSyn and I think it's pretty clear that yields in many papers are exaggerated, to put it mildly. It's a single untraceable number and the outcome of your PhD depends on it - the incentive is very clear. Leave a bit of DCM in, weigh, high vac to get rid of the singlet at 5.30ppm and no one's any the wiser...

    • mattmanser a day ago

      I did a lot of chemistry for a year when I worked as a QA for a pharmaceuticals company before going to uni.

      So much so that when I did Chemistry at uni I got asked if I was cheating a few times in labs, until I explained.

      It's actually really hard to get any experiment perfect the first time.

      Even with a year's practice of measuring and mixing and titration and all the other skills you need, I'd still get low yields, or bad results occasionally. Better than everyone else, but still not perfect.

      I also noticed that the more you do a particular process, the better results you will get. Just like practicing a solo on an instrument lots, or a particular pool shot, or cooking a particular meal. There's a level of learning and experience needed for each process, not all chemistry in general.

    • zipy124 a day ago

      Was it perhaps "that chemist"? He has some decent videos on complete bogus papers but I don't think he does reproductions, I'd be interested in that channel if you happen to find it in your watch history.

      • 8note 21 hours ago

        NileBlue/NileRed typically pulls his processes from papers that have some dubious documentation, and his results vary from the papers'.

        He's not going out of his way to reproduce papers; it's just on the way to turning peanut butter into toothpaste, or something of the sort.

  • drgo a day ago

    I think what publishers need to do is retain reviewers, possibly on a part-time basis; many retired scientists could benefit from those opportunities, and it is a way to keep senior scientists engaged in their fields. For most submitted papers, there is no need for the reviewer to be sub-specialized in the paper's field (most reviews assigned to sub-specialists are actually done by their postdocs and grad students), and the hiring process (and subsequent evaluation) ought to be more effective and speedier than randomly contacting people to beg for reviews. Until the review process is taken more seriously by publishers and journal editors, the quality of published science will continue to deteriorate.

  • jruohonen a day ago

    > Some 53% of researchers accepted the invitation to review when offered payment, compared with 48% of those who received a standard, non-paid offer. On average, paid reviews came in one day earlier than unpaid ones.

    Neither sounds like a notable effect. (I was once offered payment for a peer review, but declined it.)

    • mmooss a day ago

      Don't overlook the other experiment's results.

  • mmooss a day ago

    What are the requirements of a review? And what is the marketplace for someone meeting those requirements?

    What expertise is required - someone who researches the same questions? Same general domain? Adjacent domain?

    And how long does it take? I imagine that depends on many details.

    Finally, what are they reviewing for? Is it a once-over for errors in method? Something like grading a student paper?

    • tsumnia a day ago

      Speaking as a CS Education reviewer, sometimes the main criterion is simply "signing up to review", though solicitations are often sent to professionals in the domain (through personal requests or blanket email campaigns), as well as through the respective mailing lists. I review papers for, I think, 4-5 conferences, mostly because I have colleagues who serve/publish in those spaces (you declare conflicts of interest to avoid bias).

      Each publisher/conference has its own reviewing guidelines, but at least for the conferences I've reviewed for, they include: a summary (2-5 sentences tops), the strengths and weaknesses of the research, and potentially your opinion on the piece. You are typically asked to state your familiarity with the research space, since you may be reviewing methodologies you were not explicitly trained in. This all distills into a metric that effectively says "this paper should be accepted/not accepted", which is then handed to a 'senior' reviewer to summarize for the conference to decide. All of my conferences are double-blind single submission, but I have colleagues whose venues let authors respond to reviewer critiques.

      Most conferences recognize that things like grammatical issues happen, so reviewers are asked to only point them out rather than use them as a basis for rejection; if the paper is riddled with mistakes, though, that can be grounds for rejection. Likewise, since CS Education is a combination of CS and cognitive psychology, some of the discussion concerns "appropriateness for CS education research". For example, I once reviewed a paper that was clearly about theater-based education techniques but had CS shoehorned into one paragraph (that was it). Alternatively, measuring time delays in student responses to a tutoring system can help distinguish when students become distracted or take a break.

      • mmooss a day ago

        Thanks. Someone told me that the 'blind' review doesn't often work because they already know who is doing what in their field.

        • tsumnia a day ago

          It can depend on the field and the methodologies used - there have been some papers I've reviewed where I could guess who the authors were based on the contents. I can't really offer a counterpoint on non-blinded reviews, as I've only done blind ones. I have heard some reviewers use the anonymity to be particularly rude; I've only ever experienced that once, and I used our 'discussion' phase to express my concerns.

    • goosedragons a day ago

      Generally they want to know whether the paper is worth publishing and what needs fixing, clarification, etc. The reviewers should be people who understand the topics in the paper so they can identify issues; these are usually people who have published articles on similar topics, or people those people recommend. It's more in-depth than grading a student paper.

  • westurner 5 days ago

    > USD $250

    How much deep research does $250 yield by comparison?

    Knowledge market > Examples; Google Answers, Yahoo Answers, https://en.wikipedia.org/wiki/Knowledge_market#Examples

    • pjdesno a day ago

      I'm not sure why one would compare reviews by acknowledged experts in a field with stuff written by anonymous randos, and it seems highly unlikely that anyone with the appropriate qualifications would be lurking on some Mechanical Turk-like site.

      I'm also deeply suspicious of the confidentiality of anything sent to one of those sites.

      However this does suggest the idea that a high-powered university in a low-income country might be able to cut a deal to provide reviewing services...

    • moomin a day ago

      It’ll get you an electrician for about three hours in London. How long do these papers take to read critically?

      • voxl a day ago

        One full work day to do it decently. Two full work days to do it well.

    • tdeck a day ago

      You can get 50 reviews on Fiverr for that price!

  • odyssey7 a day ago

    Peer review is work. The workers are subject to capitalism. Pay them, or capitalism will optimize the quality unfavorably.