Is particle physics dead, dying, or just hard?

(quantamagazine.org)

209 points | by mellosouls 3 days ago

377 comments

  • mattlangston 3 days ago

    Experimental particle physicist here. It's just hard.

    I measured the electron's vector coupling to the Z boson at SLAC in the late 1990s, and the answer from that measurement is: we don't know yet - and that's the point.

    Thirty years later, the discrepancy between my experiment and LEP's hasn't been resolved.

    It might be nothing. It might be the first whisper of dark matter or a new force. And the only way to find out is to build the next machine. That's not 'dead', that's science being hard.

    My measurement is a thread that's been dangling for decades, waiting to be pulled.

    • sashank_1509 3 days ago

      What would the cost of the “next machine” be? Is it going to be tens of billions, or can we make progress with less money? If it's going to be tens of billions, then maybe we need to invest in engineering to reduce that cost, because spending thirty years and tens of billions for every incremental improvement isn't sustainable.

      • sigmoid10 3 days ago

        This kind of slow, incremental improvement that costs tens of billions of dollars and takes decades gave us the microchips that ultimately enabled you to type this comment on your phone/computer. The return on that investment is obvious.

        But it is not just about making money: The entire field of radiation therapy for cancer exists and continues to improve because people figured out ways to control particle beams with extreme precision and in a much more economical way to study particle physics. Heck, commercial MRIs exist and continue to improve because physicists want cheaper, stronger magnets so they can build more powerful colliders. What if in the future you could do advanced screening quickly and without hassle at your GP's office instead of having to wait for an appointment (and possibly pay lots of money) at an imaging specialist center? And if they find something they could immediately nuke it without cutting you open? We're talking about the ultimate possibility of Star Trek level medbays here.

        Let the physicists build the damn thing however they want and future society will be better off for sure. God knows what else they will figure out along the way, but it will definitely be better for the world than sinking another trillion dollars on wars in the middle east.

        • carefree-bob 2 days ago

          Jack Kilby at Texas Instruments and Robert Noyce at Fairchild did not require tens of billions of dollars. Sherman Fairchild invested $1.3 million and the treacherous eight each put in $500. Fairchild did have the right to purchase the firm for $3 million, which of course he exercised. Similarly, Shockley's lab was funded by a $1 million grant in the 50s.

          There is a lot of handwaving going on here to equate the incredibly cheap, mostly privately funded investments that launched the computer generation with the massively expensive, extremely gradual gains we are making now with particle accelerators. Part of it is that people just can't imagine how little was invested in R&D to get those stunning results, given how much we have to invest today to get much less impressive ones, so they assume that semiconductors could not have been invented without tens of billions of dollars of research.

          There are diminishing returns, just as a 90nm process is really all you need to get 90% of the benefits of computerization -- you can drive industrial automation just fine, all the military applications are fine, etc. But going from a 90nm process to a 3nm process is an exponential increase in costs. In a lot of fields we are at that tail end, where costs are incredibly high and gains are very low. New fields will need to be discovered where there is low-hanging fruit, and those fields will not require "tens of billions" of dollars to harvest it.

          Even with particle accelerators, SLAC cost $100 million to build and generated a massive bounty of discoveries, dwarfing the discoveries made at CERN.

          To pretend that there is no such thing as a curve of diminishing returns, and to say that things have always been this way, is not to paint an accurate picture of how science works. New fields are discovered, discoveries come quickly and cheaply, then the field matures and discoveries become incremental and exponentially more expensive. That's how it works. For someone in a field at the tail end of that process, "things have always been this way and have always cost this much" is just bad history.

          • sigmoid10 2 days ago

            Duh. The first cyclotron was built for, like, a thousand bucks. Many of the colliders that followed were also ridiculously cheap by comparison. But in the same way the semiconductor industry now spends billions on EUV research to keep making progress, particle physics spends billions on colliders. And when you account for real GDP growth, collider costs have actually been stagnant for decades.

        • brazzy 3 days ago

          > This kind of slow, incremental improvement that costs tens of billions of dollars and takes decades gave us the microchips that ultimately enabled you to type this comment on your phone/computer.

          No. These two cases are absurdly different, and you're even completely misunderstanding (or misrepresenting) the meaning of the "tens of billions of dollars" figure.

          Microchips were an incremental improvement where the individual increments yielded utility far greater than the investment.

          For particle physics, the problem is that the costs have exploded with the size of facilities needed to reach higher energies (the "tens of billions of dollars" is for one of them), but the results in scientific knowledge (let alone technological advances) have NOT. The early accelerators cost millions or tens of millions and revolutionized our understanding of the universe. The latest ones cost billions and have confirmed a few things we already thought to be true.

          > Let the physicists build the damn thing and future society will be better off for sure.

          Absolutely not.

          • sigmoid10 2 days ago

            >Microchips were an incremental improvement where the individual increments yielded utility far greater than the investment.

            You should look up how modern EUV lithography was commercialised. It was essentially a big plasma physics puzzle. If ASML hadn't taken a ridiculous gamble on the research (financially on the same order of magnitude as a new collider, especially for a single company), Moore's law would have died long ago and the entire tech industry would have suffered for it. And there was zero proof beforehand that it was going to work.

            • varjag 2 days ago

              EUV lith would have absolutely been achieved if LHC wasn't ever built.

              • skeptic_ai 2 days ago

                The LHC mastered high vacuum, high-precision lenses from Zeiss, precision lasers, and specialized magnets, all of which are needed by EUV lithography.

                So it would at least have been delayed.

                • varjag 2 days ago

                  High vacuum in enormous volumes, maybe. Otherwise it was a problem solved decades ago.

                  Not sure what role EUV optics played in the LHC. But Zeiss will develop anything on the frontier of optics for you if you have deep enough pockets.

                  The rest I don't know enough about to comment on, but as far as the technology goes, both the LHC and EUV lithography are bespoke systems. I seriously doubt there is any path dependency. A huge part of the LHC's cost was earthworks and the precision construction of complex machinery at enormous scale.

                • chuckadams 2 days ago

                  EUV uses mirrors rather than lenses, and the precision surfaces on those are something that more likely came out of space programs. But honestly, I have no problem with throwing a few billion at basic science that might go nowhere. It's a drop in the ocean compared to war and corporate welfare.

          • plastic-enjoyer 2 days ago

            > Absolutely not

            Engineers not being able to fathom that building these huge-ass, complicated machines to answer questions about the fundamentals of nature also solves other problems and spawns inventions that improve and change our lives will never not be funny to me

            • eviks 2 days ago

              This is a pretty common mistake - why not invest directly in trying to solve those problems instead of hoping to learn something by chance from different activities?

            • brazzy 2 days ago

              Just as funny as armchair science enthusiasts not being able to fathom that research budgets are limited and it makes sense to redirect them into other, more promising fields when a particular avenue of research is both extremely expensive and has shown diminishing returns for decades.

              • XorNot 2 days ago

                The more important question is: are you content with dismantling all progress in accelerator science for the next century? Because the LHC's successors won't be online until the 2050s at the earliest. If you don't fund them now and start the work, then no one does the work, no one studies the previous work (because there's no more grant money in it), the next generation of accelerator engineers and physicists doesn't get trained, and the knowledge and skill base withers and literally dies.

                Because the trade-off of no new accelerators is the definite end of accelerator science for several generations.

              • whatever120 2 days ago

                Real scientists don’t call others armchair scientists, it’s just belittling. Do you resort to ad hominem because you feel like your argument is not strong enough, so you have to try to attack the person as well?

              • hugh-avherald 2 days ago

                Does targeting research towards 'more promising' fields actually produce greater economic returns?

                • bluGill 2 days ago

                  There is no way to answer that - we have limited money/people/time. Whatever we fund, we get whatever the returns are, but there is no way to know what we missed because we didn't fund some other thing. Even if we fund that other thing a few years later, what we get out of it is shaped by everything we already know, so the result also reflects the research we funded first.

                  The only exception is research that reveals nothing - though even that isn't a useful claim: "it doesn't work" still revealed something.

                • brazzy 2 days ago

                  Given that you can do a lot more research in different fields at the same time for the amount of money the next bigger particle accelerator would cost, the answer is very likely yes.

                  • pixl97 2 days ago

                    Ok, which field? How much money will be needed? What potential experiments are lined up in those fields that need money to go forward?

                    Particle physics has told us a lot about the basic nature of our universe and affirmed the Standard Model. The fruits of these labors can still take decades to make their mark on our world.

                    And, we still are working on those other things at the same time too. It turns out with 8 billion people on the planet and modern technology we can get an absolute fuckload done at once.

                  • SmirkingRevenge 2 days ago

                    Not a physicist, but I think building state of the art particle accelerators probably requires doing a lot of research in many different fields

                • bryanrasmussen 2 days ago

                  The field of Elon Musk has been promising shit for years, what do you think?

                  • danparsonson 2 days ago

                    Well we definitely have a lot more Elon Musk now

                  • verzali 2 days ago

                    To be fair, he has delivered a lot of (bull)shit

                    • __patchbit__ a day ago

                      The way SpaceX and Tesla patent new industrial-scale techniques and technologies and open them up for any competitor to use is the bull case for bringing the future forward faster. Look at Starlink.

          • bryanrasmussen 2 days ago

            >Absolutely not.

            A statement that certainly needs some backing.

            You might say that the statement you were replying to also needs some backing, but they did give some, although you believe it was incorrect.

            It just seems that "absolutely not" goes against the conventional wisdom that knowledge for knowledge's sake will, somewhere down the road, lead to a greater return than was expended on getting that knowledge - which really has been one of the main underlying ideas of Western civilization since before Newton.

            "Absolutely not" means future society will not be better off! That is a big, weird, absurdly pompous and conceited statement to make unless you have a time machine, or at least a big mess of statistics showing that scientific advances in physics have failed to provide a return for a significant amount of time - and even that would not rise to the certainty of "absolutely not".

          • bl0rg 2 days ago

            > The latest ones cost billions and have confirmed a few things we already thought to be true.

            Yes, but we had hopes that it would lead to more. And had it led to more - something only known to be false in hindsight - who knows where that would have ended us up? What if it had upended the standard model instead of reinforcing it?

            > Absolutely not.

            What are we supposed to do then? As humans, I mean. No one knows why we're here, what the universe really is like. We have some pretty good models that we know are wrong and we don't know what wonders the theoretical implications of any successor models might bring. That said, do we really need to motivate fundamental research into the nature of reality with a promise of technology?

            I'm not arguing for mindlessly building bigger accelerators, and I don't think anyone is - there has to exist a solid line of reasoning to warrant the effort. And we might find that there are smarter ways of getting there for less effort - great! But if there isn't, discrediting the avenue of particle accelerators because of its high upfront cost and historical results would be a mistake. We can afford it, and we don't know the future.

            • nobodyandproud 2 days ago

              > I'm not arguing for mindlessly building bigger accelerators, and I don't think anyone is

              You sure about that?

              The GP whose position you’re defending wrote this:

              > Let the physicists build the damn thing however they want and future society will be better off for sure.

            • tokai 2 days ago

              >I'm not arguing for mindlessly building bigger accelerators, and I don't think anyone is

              But you are, and they are. Just from the comments here, it's clear that even suggesting not to spend untold billions on maybe pushing theoretical physics a little forward is met with scorn. The value proposition, in either knowledge or technology, is just not well argued anymore beyond hand waving.

              • bl0rg 2 days ago

                No, I'm not, and neither is anyone else. It's common sense that we should explore options that require less effort, just as one would in any project. I'm saying that we can't discredit huge particle accelerators based on, in the grandest scheme of things, a small economic cost and the past results of a different experiment.

              • davrosthedalek 2 days ago

                Or, you know, we have read the physics case and are of the opinion that it's worth it. Have you?

            • brazzy 2 days ago

              > Yes, but we had hopes that it would lead to more. And had lead to more, something only known to be false in hindsight, who knows where that would have ended us up? What if it upended the standard model instead of reinforcing it?

              Sure, but it didn't. Which is knowledge that really should factor into the decision to build the next, bigger one.

              > What are we supposed to do then? As humans, I mean.

              Invest the money and effort elsewhere, for now. There are many other fields of scientific exploration that are very likely to yield greater return (in knowledge and utility) for less. You could fund a hundred smaller but still substantial initiatives instead of one big accelerator, and be virtually guaranteed an exciting breakthrough in a few of them.

              And who knows, maybe a breakthrough in material science or high-voltage electrophysics will substantially reduce the costs for a bigger particle accelerator?

              • nobody9999 2 days ago

                >> Yes, but we had hopes that it would lead to more. And had lead to more, something only known to be false in hindsight, who knows where that would have ended us up? What if it upended the standard model instead of reinforcing it?

                >Sure, but it didn't. Which is knowledge that really should factor into the decision to build the next, bigger one.

                Not this week, no. And if, next week (or next year, or next decade), we resolve some of the most significant problems in modern physics, were any expenditures in those fields a waste?

                You've repeatedly bashed particle physics based on your perception of a lack of progress vis-a-vis the costs, and claimed that other fields should be prioritized. Which fields? What would you hope to gain from those fields?

                Is there no room for basic research that attempts to validate the bases (Standard Model, Quantum Field Theory, the marriage of the former with General Relativity, etc.) of modern physics? If not why not? Our models are definitely wrong, but they're measurably less wrong than previous models.

                Should we not continue to hone/probe those models to find the cracks in the theories underpinning those models? If we don't, how will we solve these extant issues?

              • bl0rg 2 days ago

                > Which is knowledge that really should factor into the decision to build the next, bigger one.

                It was always factored in, and of course it would be in any next iteration.

                > Invest the money and effort elsewhere, for now. There are many other fields of scientific exploration that are very likely to yield greater return (in knowledge and utility) for less. You could fund a hundred smaller but still substantial initiatives instead of one big accelerator. And be virtually guaranteed to have an exciting breakthrough in a few of them.

                I agree with this to a large extent. I'm just not against particle accelerators as an avenue for scientific advancement, and in the best of worlds we could do both.

                • brazzy 2 days ago

                  I'm not against them in principle either. Just at this time, at this cost, at this state of development in the field.

          • bayindirh 2 days ago

            > Absolutely not.

            I'd not be so sure about that. Doing this research will probably let us answer the "it works but we don't know exactly why" cases in things we use every day (e.g. li-ion batteries). Plus, while the machines are getting bigger, the understood tech is getting smaller, as far as the laws of physics allow.

            If we are going to insist on the "absolutely not" path, we should start with proof-of-work crypto farms and AI datacenters, which consume county- or state-sized amounts of electricity and water for low-quality slop.

            • brazzy 2 days ago

              That "probably" is really more of a "maybe" given the experience with the current big accelerators, and really needs to be weighed against the extreme costs - and other, more promising avenues of research.

              > If we are going to insist on "Absolutely not" path, we should start with proof-of-work crypto farms and AI datacenters which consume county or state equivalents of electricity and water resources for low quality slop.

              Who exactly is the "we" that is able to make this decision? The allocation of research budgets is completely unrelated to the funding of AI datacenters or crypto farms. There is no organization on this planet that controls both.

              And if you're gonna propose that the whole of human efforts should somehow be organized differently so that these things can be prioritized against each other properly, then I'm afraid that is a much, MUCH harder problem than any fundamental physics.

              • pixl97 2 days ago

                >and other, more promising avenues of research.

                Which are? Just asking for the purposes of this discussion.

          • crispyambulance 2 days ago

            >> Let the physicists build the damn thing and future society will be better off for sure.

            > Absolutely not.

            And what do YOU mean, "absolutely not"? You have no more say in what happens than anyone else unless you're a high-level politician, and even then you'd be beholden to your constituents.

            And yet big science, like particle accelerators, STILL gets funding. There's plenty to go around. Sure, every once in a while a political imperative will "pull the plug" on something deemed wasteful or too expensive and maybe sometimes that's right. But we STILL have particle physics, we STILL send out pure science space missions, there are STILL mathematicians and theorists who are paid for their whole careers to study subject matter that has no remotely practical applications.

            Not everything must have a straight-line monetary ROI.

          • Iulioh 2 days ago

            I'm torn between "yes, these experiments are way too expensive and the knowledge is too niche to be really useful" and "we said this about A LOT of things and found utility in surprising ways, so it could be a gamble worth taking".

            That's the problem with cutting-edge research... you don't even know if you will ever need it, or if a trillion-dollar industry is waiting on just one number to be born.

            • brazzy 2 days ago

              Yes, we don't really know. But at some point the gamble is just too big.

              Because the costs aren't just numbers. They represent hundreds or thousands of person-years of effort. You're proposing that a large number of people should spend their entire lives supporting this (either directly as scientists, or indirectly through funding it) - and maybe end up with nothing to show for it.

              And there's the opportunity costs. You could fund hundreds of smaller, yet still substantial scientific efforts in many different fields for the cost of just one particle accelerator of the size we think is sufficient to yield some new observations.

        • gosub100 2 days ago

          Why can't some of these trillion dollar companies invest back in the quantum tech that got them there, if it's so certain there will be benefits? Why not Apple and Nvidia fund the next particle collider, and give something back to society instead of letting tax payers fund it so billionaires can privatize the profits?

          • davrosthedalek 2 days ago

            Do you want the results of the research be open and available to all, or should it become IP of nvidia or apple?

            • gosub100 2 days ago

              Why wouldn't it be open to all? The person I replied to just said how great it is for open information to trickle down to everyone.

              • davrosthedalek 2 days ago

                Why would nvidia/apple/whoever open up research they paid for to help anybody but them?

          • sigmoid10 2 days ago

            Fundamental physics research has an extremely profitable returns ratio, but it takes decades to amortize. This does not work with capitalist corporations who only care about immediate profits. Even for governments this is a difficult sell, but at least they don't have to soothe shareholders every quarter. Generational projects take a different kind of economic thinking.

            • butlike 2 days ago

              Is that just because there's shareholder anxiety about whether their investment will "be vested" by the time they need to pull it out for retirement?

              If that's the case it seems like it might be shrewd for younger investors to buy into physics research on a 15-20 year timeline?

              • sigmoid10 2 days ago

                Unfortunately, younger people usually have neither the money nor the foresight for this.

          • parineum 2 days ago

            > Why not Apple and Nvidia fund the next particle collider, and give something back to society instead of letting tax payers fund it so billionaires can privatize the profits?

            Where do you think that tax money comes from?

            Apple and Nvidia are creating the economies that produce tax revenue at every step of the way.

            • awkwardleon 2 days ago

              I believe the point was these companies benefited greatly and specifically from basic research funded by the government: they should therefore "give back" in kind (vs simply contributing to the tax base and relying on a government to figure out what to fund). The reality is these companies care only about shareholder value, and the current US administration has been terminating grants and cutting funding in basic research. I think it's fair to question, in this environment, what these companies' ethical responsibilities really should be.

            • bigfudge 2 days ago

              Yeah, except corporations don’t pay tax like they did in the 50s and 60s…

          • boringg 2 days ago

            I think your starting premise is obviously false - and where are you getting that billionaires are privatizing the profits from the particle collider? (Sounds like a talking point.) No one can guarantee that there are benefits - we can surmise that there are, but there are still massive risks associated with large-scale science experiments.

            Government has always been the backbone of basic science research - no one else can reasonably bear the risk and the advances are public domain.

        • accidentallfact 2 days ago

          I'm so sick of this "good guy approach". It didn't give us progress, it gave us the likes of Watt and Intel, highly celebrated bullshitters who stopped being relevant as soon as their IP deadlock expired.

          I suppose the only solution is underground science. Make enough progress in silence, and don't disseminate the results until the superiority becomes so obvious that armed resistance becomes unthinkable.

      • toast0 2 days ago

        Spending tens of billions every thirty years is pretty sustainable actually.

        "Fundamental Research" may or may not pan out, but the things that happen along the way are often valuable... I don't think there's any practical applications related to generating Higgs Bosons, but it's interesting (at least for particle physicists) and there's a bunch of practical stuff you have to figure out to confirm them.

        That practical work can often generate or motivate industrial progress that's generally useful. For example, LHC generates tons of data and advances the state of the art in data processing, transmission, and storage; that's useful even if you don't care about the particle work.

        • ajam1507 2 days ago

          You could say the same thing about the world wars or porn. Any human pursuit taken to an extreme can produce knock-on effects, that isn't an argument in a vacuum to continue to fund any one area.

          • toast0 2 days ago

            Spending tens of billions every 30 years on world wars would be pretty awesome. Much better than what we currently spend.

            Porn seems to be sustainably self funding; no need for government stimulus.

            • ajam1507 2 days ago

              > Porn seems to be sustainably self funding; no need for government stimulus.

              Only because you haven't seen the plans for the Large Hardon Collider

      • snowwrestler 2 days ago

        In the scope of international cooperation, tens of billions of dollars is not very much money. For context, the U.S. economy generates $10 billion every ~3 hours. One private company, Google, spends $10 billion in about 2 weeks.

        So look at it this way. Let’s take a bunch of the smartest people alive, train them for decades, give them a month of Google money, and they’ll spend 30 years advancing engineering to probe the very fabric of reality. And everything they learn will be shared with the rest of humanity for free.

        Sounds like a pretty good deal to me.
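        As a sanity check, those ratios can be reproduced with a few lines of arithmetic. The GDP and Google-spend figures below are rough assumptions of mine (circa 2024), not numbers from the comment:

```python
# Back-of-envelope check: how long does it take the US economy, or Google,
# to generate/spend the ~$10B cost of a large collider?
# Assumptions (mine, roughly 2024-era): ~$29T US GDP, ~$280B annual Google spend.
US_GDP_PER_YEAR = 29e12        # dollars (assumed)
GOOGLE_SPEND_PER_YEAR = 280e9  # dollars (assumed)
HOURS_PER_YEAR = 365 * 24

collider_cost = 10e9  # "tens of billions" lower bound

hours_of_gdp = collider_cost / (US_GDP_PER_YEAR / HOURS_PER_YEAR)
weeks_of_google = collider_cost / (GOOGLE_SPEND_PER_YEAR / 52)

print(f"~{hours_of_gdp:.1f} hours of US GDP")           # ~3 hours
print(f"~{weeks_of_google:.1f} weeks of Google spend")  # ~1.9 weeks
```

        Even doubling the cost estimate only moves the answer from hours to a day of GDP, which is the whole point of the comparison.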

        • WarmWash 2 days ago

          Takes like this are an optical illusion meant to create the idea that there is an insane amount of money freely floating around, just being hoarded.

          But just as that money is generated, it is also all spent.

          So the actual hard part is deciding what not to spend money on so we can build some crazy physics machines with a blurry ROI instead.

          • snowwrestler 2 days ago

            It’s not an optical illusion; there actually is a large amount of money available to do things. This is all public data - you can check it yourself.

        • aleph_minus_one 2 days ago

          > Let’s take a bunch of the smartest people alive, train them for decades, give them a month of Google money

          Unpopular opinion: Google makes an insane amount of money, so they can afford those salaries. CERN (or whatever your favourite research institute is), on the other hand, is no money-printing machine.

          • alphawhisky 2 days ago

            Every step towards understanding subatomic physics is a step towards cold fusion. The second we're able to understand and capture this energy, money literally doesn't exist. Infinite energy means infinite free energy, which would also abolish money from a fundamental market value perspective. I'll continually preach that we need to plan for this economically as a species because none of our current government or economic systems will survive the death of scarcity.

            • llbbdd 2 days ago

              Unless cold fusion allows everyone to literally pull infinite energy out of thin air with no maintenance or labor costs, I don't buy that premise. Many other utilities are effectively free already in some places, but you still need metering to deter bad actors, which is what money is. Otherwise I'm going to take all available cold fusion capacity in existence and use it to build my own artificial sun with my face on it.

            • bluGill 2 days ago

              > Every step towards understanding subatomic physics is a step towards cold fusion.

              Is it?

              You are assuming cold fusion is possible. We don't know that. It might be one more step before we finally prove it is never possible.

              You are also assuming that cold fusion is something this path of research will lead us to. However this might be a misstep that isn't helpful at all because it doesn't prove anything useful about the as yet unknown physical process that cold fusion needs.

              We just don't know, and cannot know at this point.

            • aleph_minus_one 2 days ago ago

              > The second we're able to understand and capture this energy, money literally doesn't exist. Infinite energy means infinite free energy[.]

              Similar claims were already made about nuclear fission power plants in the 70s.

              • pixl97 2 days ago ago

                And your point is? Sometimes we make predictions that take hundreds of years to be turned into products.

                • aleph_minus_one 2 days ago ago

                  My point is that you shouldn't believe in marketing claims that are obviously too good to be true, like

                  > The second we're able to understand and capture this [cold fusion] energy, money literally doesn't exist. Infinite energy means infinite free energy, which would also abolish money from a fundamental market value perspective.

                  • pixl97 2 days ago ago

                    I mean obviously this statement is false as we live in a finite section of the visible universe.

                    That said, beyond the marketing there is a reality: if cold fusion did show up, it would be a singularity event, and predictions past that point will almost always fail because the world would change very rapidly.

      • Uehreka 2 days ago ago

        There are people in this thread saying tens of billions isn't that much in the long term (I'd agree) but there's a bigger point that comes into play whatever the price: The universe doesn't care if exploring it is expensive. You can't make a "that's not sustainable" argument to the universe and have it meet you half way. And that's who you're arguing against: not the scientists, the universe. The scientists don't decide how expensive future discoveries will be.

      • raverbashing 3 days ago ago

        The next machine is not necessarily a longer LHC

        There are talks of a muon collider, a spallation source is being built in Sweden(?), and there is also talk of an electron 'Higgs factory' (and while the LHC was built for the Higgs boson it is not a great source for it; it was built as a generic tool that could produce and see the Higgs)

      • ForgotIdAgain 2 days ago ago

        I think the engineering progress made while building those machines is maybe more relevant for practical technical development than the discoveries they make.

        • api 2 days ago ago

          Better superconductors here. Would you like a $20 MRI down at your local drug store to detect cancer at early stage 1?

          • amanaplanacanal 2 days ago ago

            The problem isn't the cheaper MRI. The problem is the expert that needs to interpret the results. Detecting millions of cancers that don't actually exist doesn't help anybody.

            • api 2 days ago ago

              This is a problem domain AI is good at. Have AIs do first-pass, then when they flag something an actual doctor reviews it. Then if they concur it goes to your doctor, who knows you, who can review it.

    • hippich 2 days ago ago

      Is it hard as in:

      1) we know what to do, but it is expensive

      2) we don't know what to do exactly, but many more people involved can increase search speed, so just need more people

      3) it is purely sequential problem, and therefore it takes a lot of time

      • samus 2 days ago ago

        A combination to some degree. Scientists yearn to stumble upon something hitherto unexplainable that requires a new theory or validates or definitely rules out some of the more fringe theories.

        While other natural sciences often suffer from an abundance of things that "merely" need to be documented, or where simulation capability is the limit, particle physics is mostly based on a theoretical framework from the middle of the 20th century that has mostly been explored.

        Getting ahead in particle physics comprises measuring many arcane numbers to as high precision as possible until something doesn't line up with existing theories or other measurements anymore. More people could help with brainstorming and measuring things that don't require humongous particle accelerators.

        • aleph_minus_one 2 days ago ago

          > Scientists yearn to stumble upon something [that] definitely rules out some of the more fringe theories

          The existing measurements at CERN ruled out a lot of the "more natural" variants of string theory. Until now this insight has not led to a big scientific breakthrough.

    • kakacik 3 days ago ago

      It's a clickbait article title (from an otherwise good place); of course it's not dead... we are now getting an understanding of all the things we don't know yet, discrepancies like yours, unified theory and so on.

      Everybody knows we are not there yet, nor how the final knowledge set will look, or if it's even possible to cover it (i.e. are quarks the base layer, or can we go deeper, much deeper, all the way to Planck scales? dynamics of singularities etc.)

    • KolibriFly 2 days ago ago

      So, if the answer were obvious or quick, it wouldn't be worth building machines that take decades to design

    • orbifold 2 days ago ago

      I guess we will find out in 20+ years once the next electron-positron collider at CERN has been built

    • htx80nerd 2 days ago ago

      >"It might be the first whisper of dark matter"

      Come now.

      • mattlangston 2 days ago ago

        Fair - that sounds hyperbolic. But my point is specific: if the weak mixing angle is shifted from the Standard Model value, one of the standard explanations is a heavier cousin of the Z boson mixing in.

        Many of those models naturally include a dark matter candidate. I didn't mean to imply 'we found dark matter' — it's that the theories which could explain the discrepancy often come with one attached.

    • alpineidyll3 a day ago ago

      But.. are you saying your vector coupling isn't explained by the existing standard model, that the measurement lacked sufficient resolution, or that existing calculations don't agree with your measurement?

      • mattlangston 13 hours ago ago

        Good question. It's mostly the third — but let me unpack that.

        The Standard Model predicts a specific value for the weak mixing angle, which determines the electron's vector coupling. My measurement at SLAC, along with other SLD measurements, consistently preferred a slightly different value than what LEP (the European competitor experiment) found using a different technique.

        The key word there is "different technique." SLD used a polarized beam of electrons — a completely novel approach at the time — which gave us direct access to the left-right asymmetry without needing to untangle final-state effects. LEP extracted the same parameter from b-quark forward-backward asymmetry. Two fundamentally different methods probing the same physics, with different systematic exposures, giving different answers.

        Both experiments had good resolution. We spent enormous effort characterizing the systematics, and they're small compared to the statistical uncertainty. But the two most precise determinations of this parameter disagreed at roughly the 3-sigma level — and that disagreement has never been explained. The world average splits the difference, and the Standard Model prediction is consistent with that average, so you could say "the SM is fine" if you squint. But nobody knows why the two experiments don't agree with each other.
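        For a rough sense of the scale involved, here is a back-of-the-envelope significance estimate. The input values are approximate (close to the published LEP/SLD electroweak combination, but illustrative rather than authoritative):

```python
import math

# Approximate effective weak mixing angle determinations -- illustrative
# values close to the published LEP/SLD combination, not authoritative:
sld, sld_err = 0.23098, 0.00026   # SLD, from the left-right asymmetry A_LR
lep, lep_err = 0.23221, 0.00029   # LEP, from the b-quark forward-backward asymmetry

# Significance of the disagreement, assuming independent Gaussian errors:
sigma = (lep - sld) / math.hypot(sld_err, lep_err)
print(f"discrepancy: {sigma:.1f} sigma")
```

        With these inputs the split comes out at roughly 3 sigma, the level of tension described above.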

        It could be an unidentified systematic error in one experiment. It could be that something beyond the Standard Model is subtly shifting one measurement and not the other. That ambiguity is exactly what makes it a "dangling thread" rather than a resolved question.

  • bananaflag 3 days ago ago

    It's basically the opposite situation from 150 years ago.

    Back then, we thought our theory was more or less complete while having experimental data which disproved it (Michelson-Morley experiment, Mercury perihelion, I am sure there are others).

    Right now, we know our theories are incomplete (since GR and QFT are incompatible) while having no experimental data which contradicts them.

    • Sniffnoy 3 days ago ago

      I wouldn't say that we have no experimental data which contradicts them. Rather, we do have experimental data which contradicts them, but no experimental data that points us in the direction of a solution (and whenever we go looking for the latter, we fail).

      Consider e.g. neutrino masses. We have plenty of experimental data indicating that neutrinos oscillate and therefore have mass. This poses a problem for the standard model (because there are problems unless the mass comes from the Higgs mechanism, but in the standard model neutrinos can't participate in the Higgs mechanism due to always being left-handed). But whenever we do experiments to attempt to verify one of the ways of fixing this problem -- are there separate right-handed neutrinos we didn't know about, or maybe instead the right-handed neutrinos were just antineutrinos all along? -- we turn up nothing.
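      The oscillation evidence rests on a simple closed form. Here's a minimal sketch of the standard two-flavor approximation (the mixing angle and mass splitting below are ballpark atmospheric-sector numbers for illustration, not fitted values):

```python
import math

# Two-flavor neutrino oscillation probability (standard approximation):
#   P(nu_a -> nu_b) = sin^2(2*theta) * sin^2(1.27 * dm2 * L / E)
# with dm2 in eV^2, baseline L in km, and energy E in GeV.
def oscillation_probability(theta, dm2_ev2, L_km, E_GeV):
    return math.sin(2 * theta) ** 2 * math.sin(1.27 * dm2_ev2 * L_km / E_GeV) ** 2

# Ballpark inputs: near-maximal mixing, dm2 ~ 2.5e-3 eV^2, and a 1 GeV
# neutrino crossing the Earth's diameter (~12700 km):
p = oscillation_probability(math.pi / 4, 2.5e-3, 12700, 1.0)
print(f"P = {p:.2f}")
```

      The point is that the probability vanishes identically when dm2 = 0, which is why observed oscillations imply at least one nonzero neutrino mass.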

      • T-A 3 days ago ago

        > the standard model neutrinos can't participate in the Higgs mechanism due to always being left-handed

        This again? It's only true if you insist on sticking with the original form of Weinberg's "model of leptons" from 1967 [1], which was written when massless neutrinos were consistent with available experimental data. Adding quark-style (i.e. Dirac) neutrino mass terms to the Standard Model is a trivial exercise. If doing so offends some prejudice of yours that right-handed neutrinos cannot exist because they have no electric or weak charge (in which case you must really hate photons too, not to mention gravity) you can resort to a Majorana mass term [2] instead.

        That question (are neutrinos Dirac or Majorana?) is not a "contradiction", it's an uncertainty caused by how difficult it is to experimentally rule out either option. It is most certainly not "a problem for the standard model".

        [1] https://journals.aps.org/prl/pdf/10.1103/PhysRevLett.19.1264

        [2] https://en.wikipedia.org/wiki/Majorana_equation#Mass_term
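        Schematically (suppressing flavor indices, and taking this as a textbook-convention sketch rather than the full Lagrangian), the two mass-term options are:

```latex
% Dirac mass term: requires a right-handed field \nu_R
\mathcal{L}_{\mathrm{Dirac}} = -\, m_D \left( \bar{\nu}_L \nu_R + \bar{\nu}_R \nu_L \right)

% Majorana mass term: built from \nu_L alone via charge conjugation
\mathcal{L}_{\mathrm{Majorana}} = -\, \tfrac{1}{2}\, m_M \left( \overline{\nu_L^{\,c}}\, \nu_L + \mathrm{h.c.} \right)
```

        The Majorana option is what neutrinoless double-beta decay experiments are designed to test.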

        • Sniffnoy 18 hours ago ago

          So, I'm not actually a particle physicist. My understanding had been (based on something I'd read somewhere -- should try to find it again) that there is some problem caused by just declaring "neutrinos just have innate masses, they're not from the Higgs mechanism", but I could be mistaken. Obviously, if that is mistaken, then as you say it is merely a question rather than a contradiction. Should try to dig that up though.

          Edit: Doing some quick searching seems to indicate that giving neutrinos a bare mass term would violate electroweak gauge invariance? I don't know enough to evaluate that claim, or TBH really even to understand it. But I believe that's what I was thinking of, so maybe you can say how true and/or pertinent that is.

        • TheOtherHobbes 2 days ago ago

          It's trivial to add a matrix to account for neutrino masses, but that doesn't explain their origin.

          That is not a trivial problem at all. It certainly has not been solved, and it's possible experiments will say "Both the current ideas are wrong."

          • T-A 2 days ago ago

            > It's trivial to add a matrix to account for neutrino masses

            The matrix you are thinking of is presumably the PMNS matrix [1]. It's equivalent to the CKM matrix for quarks [2]. The purpose of both is to parametrize the mismatch between flavor [3] and mass eigenstates, not "to account for neutrino masses" or "explain their origin".

            As far as the standard model is concerned, neutrino masses and quark masses all originate from Yukawa couplings [4] with the Higgs field. Adding such terms to Weinberg's original model of leptons is very much a trivial exercise, and was done already well before there was solid evidence for non-zero neutrino masses.

            > it's possible experiments will say "Both the current ideas are wrong."

            Assuming that by "Both current ideas" you mean Dirac vs Majorana mass, those are the only available relativistic invariants. For both to be wrong, special relativity would have to be wrong. Hopefully I don't need to explain how extraordinarily unlikely that is.

            [1] https://en.wikipedia.org/wiki/Pontecorvo%E2%80%93Maki%E2%80%...

            [2] https://en.wikipedia.org/wiki/Cabibbo%E2%80%93Kobayashi%E2%8...

            [3] https://en.wikipedia.org/wiki/Flavour_(particle_physics)

            [4] https://en.wikipedia.org/wiki/Yukawa_coupling
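            In equation form, the mismatch the PMNS matrix parametrizes is just a change of basis (standard-convention sketch):

```latex
% Flavor eigenstates as superpositions of mass eigenstates via the PMNS matrix U:
|\nu_\alpha\rangle \;=\; \sum_{i=1}^{3} U_{\alpha i}^{*}\, |\nu_i\rangle ,
\qquad \alpha \in \{e,\ \mu,\ \tau\}
```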

            • Yossarrian22 2 days ago ago

              Thanks Lord Kelvin

              • T-A 13 hours ago ago

                Poor Lord Kelvin gets maligned a lot:

                https://arxiv.org/abs/2106.16033

                That aside, a distinction should be made between

                1) claiming that physics is pretty much done (what he's often accused of) and

                2) pointing out factual errors in claims about the current state of knowledge (what I am doing).

                If you absolutely must make flattering comparisons, may I suggest Feynman instead, especially on lying to laymen?

                https://calteches.library.caltech.edu/51/2/CargoCult.htm

                I should add that I am not in complete agreement with what he said in that speech: calling it "not essential to the science" strikes me as naive. Once you start juggling two standards of communication, you are on a slippery slope. If it's OK to lie to the funding public at large, what about politicians, funding bodies, colleagues in other disciplines competing for the same funding, journal editors asking you to review a rival's work in your own field? Where do you draw the line? Do you draw a line, or do you descend into a state of generalized charlatanry?

              • limonstublechew 2 days ago ago

                [dead]

    • atakan_gurkan 3 days ago ago

      I disagree, but maybe only because we are using different definitions. For example, we have neutrino oscillations, this requires neutrino mass, which is not part of the standard model of particle physics. In cosmology, there is "lithium problem" (amongst others), which cannot be explained by Lambda-CDM. We know our physical theories are incomplete not only because our mathematical frameworks (GR & QFT) are incompatible (similar to the incompatibility of Maxwell's equations and the Galilean transformations that form the basis of Newtonian mechanics), but also there are these unexplained phenomena, much like the blackbody radiation at the turn of previous century.

    • Paracompact 3 days ago ago

      What about underexplained cosmological epicycles like dark matter (in explaining long-standing divergences of gravitational theory from observation), or the Hubble tension?

      • joe_the_user 3 days ago ago

        The dark matter theory broadly is that there is an amount of invisible matter that obeys the laws of Einsteinian gravity but isn't otherwise visible. By itself, it has considerable experimental evidence. It doesn't resemble Ptolemaic theories of planetary motion, notably in that it doesn't and hasn't required regular updating as new data arrives.

        It really fits well with the OP's comments. Nothing really contradicts the theory, but there's no deeper theory beyond it. Another comment mentioned the "nightmare" scenario of dark matter only having gravitational interactions with other matter. That would be very unsatisfying for physicists but wouldn't be something that really disproves any given theory.

        • geysersam 3 days ago ago

          When you say dark matter theory doesn't require updates when new data arrives, it sounds like you don't count the parameters that describe the dark matter distribution to be part of the theory.

      • XorNot 3 days ago ago

        This is your regular reminder that epicycles were not an incorrect theory addition until an alternative hypothesis could explain the same behavior without requiring them.

        • Paracompact 3 days ago ago

          Sure, but in that regard dark matter is even more unsatisfying than (contemporary) epicycles, because not only does it add extra complexity, it doesn't even characterize the source of that complexity beyond its gravitational effects.

          • dataflow 3 days ago ago

            FYI, very recently (as in this has been in the news the past few days, and the article is from December) an article was published that suggested we might already have experimental evidence for dark matter being primordial black holes, though there are reasons to doubt it as well. I just posted the article: https://news.ycombinator.com/item?id=46955545

            But this might be easier to read: https://www.space.com/astronomy/black-holes/did-astronomers-...

          • cozzyd 3 days ago ago

            Even better, there are the "nightmare" scenarios where dark matter can only interact gravitationally with Standard Model particles.

            • Paracompact 3 days ago ago

              Personally—and this is where I expect to lose the materialists that I imagine predominate HN—I think we are already in a nightmare scenario with regard to another area: the science of consciousness.

              The following seem likely to me: (1) Consciousness exists, and is not an illusion that doesn't need explaining (a la Daniel Dennett), nor does it drop out of some magical part of physical theory we've somehow overlooked until now; (2) Mind-matter interactions do not exist, that is, purely physical phenomena can be perfectly explained by appeals to purely physical theories.

              Such are the stakes of "naturalistic dualist" thinkers like David Chalmers. But if this is the case, it implies that the physics of matter and the physics of consciousness are orthogonal to each other. Much like it would be a nightmare to stipulate that dark matter is a purely gravitational interaction and that's that, it would be a nightmare to stipulate that consciousness and qualia arise noninteractionally from certain physical processes just because. And if there is at least one materially noninteracting orthogonal component to our universe, what if there are more that we can't even perceive?

              • jemmyw 3 days ago ago

                I don't think any of this is particularly nightmarish. Just because we don't yet know how this complex system arises from another lower level one doesn't make it new physics. There's no evidence of it being new or orthogonal physics.

                Imagine trying to figure out what is happening on someone's computer screen with only physical access to their hardware minus the screen, and an MRI scanner. And that's a system we built! We've come exceedingly far with brains and minds considering the tools we have to peer inside.

                • Paracompact 2 days ago ago

                  Knowing how to build a brain is different from knowing whether that brain has consciousness in the sense that you or I do. The question of consciousness appears to demand new/orthogonal physics because according to our existing physics, there's no sense in which you or I should "feel" any differently than a rock does, or a computer does, or Searle's room does, or a Chinese brain does, or the universe as a whole does, etc.

                  • squeefers 2 days ago ago

                    > The question of consciousness appears to demand new/orthogonal physics because according to our existing physics, there's no sense in which you or should "feel" any differently than a rock does,

                    deepak chopra may interest you

                  • jemmyw 2 days ago ago

                    I don't believe in the hard consciousness problem. Yes, materialist. And yes, it might be that we can never actually put together the path of physical level to how it feels, just like we might never find the fundamental physical rules of the universe. At this time both our positions are belief.

              • galaxyLogic 3 days ago ago

                I don't think there is any mystery to what we call "consciousness". Our senses and brain have evolved so we can "sense" the external world, so we can live in it and react to it. So why couldn't we also sense what is happening inside our brains?

                Our brain needs to sense our "inner talk" so we can let it guide our decision-making and actions. If we couldn't remember sentences, we couldn't remember "facts" and would be much worse for that. And talking with our "inner voice" and hearing it, isn't that what most people would call consciousness?

                • jacquesm 3 days ago ago

                  This is not nearly as profound as you make it out to be: a computer program also doesn't sense the hardware that it runs on, from its point of view it is invisible until it is made explicit: peripherals.

                  • dgfl 3 days ago ago

                    You also don’t consciously use your senses until you actively think about them. Same as “you are now aware of your breathing”. Sudden changes in a sensation may trigger them to be conscious without “you” taking action, but that’s not so different. You’re still directing your attention to something that’s always been there.

                    I agree with the poster (and Daniel Dennett and others) that there isn’t anything that needs explaining. It’s just a question framing problem, much like the measurement problem in quantum mechanics.

                • tehjoker 3 days ago ago

                  another one that thinks they solved the hard problem of consciousness by addressing the easy problem. how on earth does a feedback system cause matter to "wake up"? we are making lots of progress on the easy problem though

                  • dgfl 3 days ago ago

                    This is not as good a rebuttal as you think it is. To me (and I imagine, the parent poster) there is no extra logical step needed. The problem IS solved in this sense.

                    If it’s completely impossible to even imagine what the answer to a question is, as is the case here, it’s probably the wrong question to pose. Is there any answer you’d be satisfied by?

                    To me the hard problem is more or less akin to looking for the true boundaries of a cloud: a seemingly valid quest, but one that can’t really be answered in a satisfactory sense, because it’s not the right one to pose to make sense of clouds.

                    • Paracompact 2 days ago ago

                      > If it’s completely impossible to even imagine what the answer to a question is, as is the case here, it’s probably the wrong question to pose. Is there any answer you’d be satisfied by?

                      I would be very satisfied to have an answer, or even just convincing heuristic arguments, for the following:

                      (1) What systems experience consciousness? For example, is a computer as conscious as a rock, as conscious as a human, or somewhere in between?

                      (2) What are the fundamental symmetries and invariants of consciousness? Does it impact consciousness whether a system is flipped in spacetime, skewed in spacetime, isomorphically recast in different physical media, etc.?

                      (3) What aspects of a system's organization give rise to different qualia? What does the possible parameter space (or set of possible dynamical traces, or what have you) of qualia look like?

                      (4) Is a consciousness a distinct entity, like some phase transition with a sharp boundary, or is there no fundamentally rigorous sense in which we can distinguish each and every consciousness in the universe?

                      (5) What explains the nature of phenomena like blindsight or split-brain patients, where seemingly high-level recognition, coordination, and/or intent occurs in the absence of any conscious awareness? Generally, what behavior-affecting processes in our brains do and do not affect our conscious experience?

                      And so on. I imagine you'll take issue with all of these questions, perhaps saying that "consciousness" isn't well defined, or that an "explanation" can only refer to functional descriptions of physical matter, but I figured I would at least answer your question honestly.

                      • dgfl 2 days ago ago

                        I think most of them are valid questions!

                        (1) is perhaps more of a question requiring a strict definition of consciousness in the first place, making it mostly circular. (2) and especially (3) are the most interesting, but they seem part of the easy problem instead. And I’d say we already have indications that the latter option of (4) is true, given your examples from (5) and things like sleep (the most common reason for humans to be unconscious) being in distinct phases with different wake up speed (pun partially intended). And if you assume animals to be conscious, then some sleep with only one hemisphere at a time. Are they equally as conscious during that?

                        In my imaginary timeline of the future, scientific advancements would lead to us noticing what’s different between a person’s brain in their conscious and unconscious states, then somehow generalizing that to a more abstract model of cognition decoupled from our biological implementation, and then eventually tackling all your questions from there. But I suspect the person I originally replied to would dismiss them as part of the easy problem instead, i.e. completely useless for tackling the hard problem! As far as I’m concerned, it’s the hard problem that I take issue with, and the one that I claim isn’t real.

                        • galaxyLogic 2 days ago ago

                          I much agree, especially on the importance of defining what we mean by the word "consciousness" before we say we cannot explain it. Is a rock conscious? Sure, according to some definition of the word. Probably everybody would agree that there are different levels of consciousness, and maybe we'd need different names for them.

                          Animals are clearly conscious in that they observe the world and react to it and even try to proactively manipulate it.

                          The next level of consciousness, and what most people probably mean when they use the word, is the human ability to "think in language". That opens up a whole new level of consciousness, because now we can be conscious of our inner voice. We are conscious of ourselves, apart from the world. Our inner voice can say things about the thing which seems to be the thing uttering the words in our mind. Me.

                          Is there anything more to consciousness than us being aware that we are conscious? It is truly a wondrous experience which may seem like a hard problem to explain, hence the "Hard Problem of Consciousness", right? But it's not so mysterious if we think of it in terms of being able to use and hear and understand language. Without language our consciousness would be on the level of most animals I assume. Of course it seems that many animals use some kind of language. But, do they hear their "inner voice"? Hard to say. I would guess not.

                          And so again, in simple terms, what is the question?

                          • dgfl 2 days ago ago

                            This is precisely the matter, I wholeheartedly agree. The metacognition that we have, that only humans are likely to have, is the root behind the millennium-long discussions on consciousness. And the hard problem stems from whatever was left of traditional philosophers getting hit by the wall of modern scientific progress, not wanting to let go of the mind as some metaphysical entity beyond reality, with qualia and however many ineffable private properties.

                            The average person may not know the word qualia, but “is your red the same as my red” is a popular question among kids and adults. Seems to be a topic we are all intrinsically curious about. But from a physical point of view, the qualia of red is necessarily some collection of neurons firing in some pattern, highly dependent on the network topology. Knowing this, then the question (as it was originally posed) is immediately meaningless. Mutatis mutandis, same exact argument for consciousness itself.

                            • galaxyLogic 15 hours ago ago

                              Talking of "qualia" I think feeling pain is a good example. We all feel pain from time to time. It is a very conscious experience. But surely animals feel pain as well, and it is that feeling that makes them avoid things that cause them pain.

                              Evolution just had to give us some way to "feel", to be conscious, about some things causing us pain while other things cause us pleasure. We are conscious of them, and I don't think there's any "hard question" about why we feel them :-)

              • Dylan16807 3 days ago ago

                How can consciousness have information about the material world if it doesn't interact with it in any way?

                And when your fingers type that you experience qualia, are they bullshitting because your fingers have never actually received any signals from your consciousness in any direct or indirect way?

              • eucyclos 3 days ago ago

                I think the old theory of the planes of existence has a lot of utility here - if you substitute "the dimensionality at which you're analyzing your dataset" for the hermetic concept of "planes of existence" you get essentially the same thing, at least in lower dimensions like one (matter) or two (energy). Mind, specifically a human mind, would be four-dimensional under the old system, which feels about right. No idea how you'd set up an experiment to test that theory though. It may be completely impossible because experiments only work when they work in all contexts, and only matter is ever the same regardless of context.

              • geysersam 3 days ago ago

                That would certainly be a difficult scenario. But it doesn't seem very likely. For example, consciousness and material systems seem to interact. Putting drugs in your blood changes your conscious experience etc.

              • TheOtherHobbes 2 days ago ago

                Yes, but it doesn't even need mysticism or duality.

                There's a more straightforward problem, which is that all of science is limited by our ability to generate and test mental models, and there's been no research into the accuracy and reliability of our modelling processes.

                Everything gets filtered through human consciousness - math, experiment, all of it. And our definition of "objective" is literally just "we cross-check with other educated humans and the most reliable and consistent experience wins, for now."

                How likely is it that human consciousness is the most perfect of all possible lenses, doesn't introduce distortions, and has no limits, questionable habits, or blind spots?

              • im3w1l 3 days ago ago

                I've thought about this possibility but come to reject it. If mind-matter interactions did not exist, then matter could not detect the presence of mind. And if the brain cannot detect the mind then we wouldn't be able to talk or write about the mind.

                • meindnoch 3 days ago ago

                  Or, the mind is in spectator mode?

                  • skeptic_ai 2 days ago ago

                    From a physics point of view it should be, as every effect is caused by the previous state. And the next tick is always the next tick, except quantum mechanics has some randomness, but let’s assume it’s a seeded randomness.

                    I think every tick is predictable from previous state. Inevitable. Therefore I really like how you put it: mind is just spectating.

                    • Dylan16807 2 days ago ago

                      That doesn't answer the question though.

                      If a rock starts moving in one tick, it affects other things in the next tick. Despite being deterministic, that rock is not a spectator.

                      So if the mind is a spectator, it's not for that reason, it's some other reason.

              • tim333 2 days ago ago

                Yeah - the nightmare situation doesn't exist if you take a materialist approach. Maybe that's evidence for it?

        • 3 days ago ago
          [deleted]
        • suddenlybananas 3 days ago ago

          Scientific theories are not curve-fitting.

    • tim333 2 days ago ago

      >GR and QFT are incompatible

      I did physics at uni and kind of dropped out when it got too hard.

      I've long guessed the incompatibility is because the maths is just too hard for human brains, though I'm probably biased there, and we'll get a breakthrough when AI can handle much more complex maths than us. Probably not so long till we find out on that one.

      I once tried to write a simplified explanation for why a spin-2 quantum theory naturally results in something like general relativity and totally failed - man that stuff's hard.

      • jfengel 2 days ago ago

        The math is hard, but I don't think that's the problem. Hard math eventually succumbs.

        I think that even if AI were to find a good unification of GR and QM, we wouldn't be able to test it. We might accept it without additional confirmation if it were sufficiently natural-feeling (the way we accepted Newtonian gravity long before we could measure G), but there's no guarantee that we'd ever be able to meaningfully test it.

        We could get lucky -- such a theory might point at a solution to some of the few loose threads we get out of existing collider and cosmological measurements -- but we might not. We could be stuck wishing we had a galaxy-sized collider.

        • tim333 2 days ago ago

          It might explain some of the many physics observations that we don't have explanations for like why do we have the particles we have and why those properties.

    • klipt 3 days ago ago

      Doesn't that imply our theories are "good enough" for all practical purposes? If they're impossible to empirically disprove?

      • hackingonempty 3 days ago ago

        Yes, for all practical purposes. This is the position of physicist Sean Carroll and probably others. We may not know what is happening in the middle of a black hole, or very close to the big bang, but here on Earth we do.

        "in the specific regime covering the particles and forces that make up human beings and their environments, we have good reason to think that all of the ingredients and their dynamics are understood to extremely high precision"[0]

        0: https://philpapers.org/archive/CARCAT-33

        • throwaway81523 3 days ago ago

          ER=EPR says something completely shocking about the nature of the universe. If there is anything to it, we have almost no clue about how it works or what its consequences are.

          Sean Carroll's own favorite topics (emergent gravity, and the many worlds interpretation) are also things that we don't have any clue about.

          Yes there is stuff we can calculate to very high precision. Being able to calculate it, and understanding it, are not necessarily the same thing.

      • Legend2440 3 days ago ago

        Typically whenever you look closely at an object with complex behavior, there is a system inside made of smaller, simpler objects interacting to produce the complexity.

        You'd expect that at the bottom, the smallest objects would be extremely simple and would follow some single physical law.

        But the smallest objects we know of still have pretty complex behavior! So there's probably another layer underneath that we don't know about yet, maybe more than one.

        • jhanschoo 3 days ago ago

          I agree, and I think that your claim is compatible with the comment that you are responding to. Indeed, perhaps it's turtles all the way down and there is systematic complexity upon systematic complexity governing our universe that humanity has been just too limited to experience.

          For a historical analogy, classical physics was and is sufficient for most practical purposes, and we didn't need relativity or quantum mechanics until we had instruments that could manipulate them, or that at least experienced them. While I guess that macroscopic quantum phenomena still existed, perhaps they could have just been treated as empirical material properties, without a systematic universal theory accounting for them, as long as instruments were not precise enough to explore and exploit the predictions of such a theory.

          • adrianN 3 days ago ago

            The experiments that led to the invention of quantum theory are relatively simple and involve objects you can touch with your bare hands without damaging them. Some are done in high school, e.g. the photoelectric effect.

            • jhanschoo 2 days ago ago

              While I did hedge my point regarding macroscopic quantum phenomena, I think that the quantum nature of the photoelectric effect would have been harder to discern without modern access to pure-wavelength lighting. You could still rely on precise optics to purify mixed light, I suppose, but without even optics it would be harder still.

              • adrian_b 2 days ago ago

                All the 19th century experiments that desired monochromatic light, including those that have characterized the photoelectric effect, used dispersive prisms, which separated the light from the Sun or from a candle into its monochromatic components. These are simple components, easily available.

                This allowed experiments where the frequency of light was varied continuously, by rotating the prism.

                Moreover, already during the first half of the 19th century, it became known that using gas-discharge lamps with various gases or by heating certain substances in a flame you can obtain monochromatic light corresponding to certain spectral lines specific to each substance. This allowed experiments where the wavelength of the light used in them was known with high accuracy.

                Already in 1827, Jacques Babinet proposed the replacement of the platinum meter standard with the wavelength of some spectral line, as the base for the unit of length. This proposal has been developed and refined later by Maxwell, in 1870, who proposed to use both the wavelength and the period of some spectral line for the units of length and time. The proposal of Babinet has been adopted in SI in 1960, 133 years later, while the proposal of Maxwell has been adopted in SI in 1983, 113 years later.

                So there were no serious difficulties in the 19th century for using monochromatic light. The most important difficulty was that their sources of monochromatic light had very low intensities, in comparison with the lasers that are available today. The low intensity problem was aggravated when coherent light was needed, as that could be obtained only by splitting the already weak light beam that was available. Lasers also provide coherent light, not only light with high intensity, thus they greatly simplify experiments.

        • SAI_Peregrinus 2 days ago ago

          > You'd expect that at the bottom, the smallest objects would be extremely simple and would follow some single physical law.

          That presupposes that there's a bottom, and that each subsequent layer gets simpler. Neither proposition is guaranteed, indeed the latter seems incorrect since quantum chromodynamics governing the internal structure of the proton is much more complex than the interactions governing its external behavior.

        • epsilonsalts 3 days ago ago

          Yeah that's the outcome theorized by Gödel.

          Incompleteness is inherent to our understanding as the universe is too vast and endless for us to ever capture a holistic model of all the variables.

          Gödel says something specific about formal axiomatic systems, a special case so to speak, but arguably it generalizes to physical reality too. A written system becomes physical once you write it out, and is never complete. That suggests our grasp of physical systems themselves is always incomplete.

          • drdeca 3 days ago ago

            Gödel’s incompleteness says almost nothing about this. I wish people wouldn’t try to apply it in ways that it very clearly is not applicable to.

            An environment living in Conway’s Game of Life could be quite capable of hypothesizing that it is implemented in Conway’s Game of Life.
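
            The update rule of Conway's Game of Life referenced above is simple enough to sketch in a few lines of Python (the set-of-live-cells representation is my own illustrative choice, not anything from the thread):

```python
# Minimal Game of Life step. Representing the world as a set of live
# (x, y) cells is an illustrative choice, not from the discussion above.
from collections import Counter

def step(live):
    """One generation: a cell lives next tick iff it has exactly
    3 live neighbours, or 2 neighbours and it is currently alive."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The "blinker" oscillates with period 2:
blinker = {(0, 0), (1, 0), (2, 0)}
print(step(step(blinker)) == blinker)  # True
```

Everything the world ever does follows from these two clauses, which is what makes it a handy toy model for "simple rules at the bottom".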

            • longfacehorrace 2 days ago ago

              That's not what they were saying.

              Systems can hypothesize about themselves but they cannot determine why the rules they can learn exist in the first place. Prior states are no longer observable so there is always incomplete history.

              Conway's Game of Life can't explain its own origins, only itself, because the origins are no longer observable after they occur.

              What are the origins of our universe? We can only guess without the specificity of direct observation. Understanding is incomplete with only simulation and theory.

              So the comment is right. We would expect to be able to define what is now but not completely know what came before.

            • bananaflag 3 days ago ago

              Indeed, as I think I commented before here, this kind of self-reference is exactly what makes Gödel's proof work.

            • mastermage 3 days ago ago

              Now the question is: are we in Conway's Game of Life?

      • andreareina 3 days ago ago

        The fundamental theories are good enough in that we can't find a counterexample, but they're only useful up to a certain scale before the computational power needed becomes infeasible. We're still hoping to find higher-level emergent theories to describe larger systems. By analogy, in principle you could use Newton's laws of motion (1687) to predict what a gas in a room is going to do, or how fluid will flow in a pipe, but in practice it's intractable and we prefer the higher-level language of fluid mechanics: the ideal gas law, the Navier-Stokes equations, etc.
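
        To make the two levels of description concrete: instead of integrating Newton's laws for ~10^25 molecules, the ideal gas law compresses the whole system into one line (the numbers below are purely illustrative):

```python
# Higher-level description: the ideal gas law PV = nRT replaces
# tracking every molecule. Numbers below are purely illustrative.
R = 8.314  # molar gas constant, J/(mol*K)

def pressure(n_mol, temp_kelvin, volume_m3):
    """Ideal gas law solved for pressure: P = nRT / V."""
    return n_mol * R * temp_kelvin / volume_m3

# One mole of gas at 293 K spread through a 1 m^3 box:
print(round(pressure(1.0, 293.0, 1.0)), "Pa")  # 2436 Pa
```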

      • PlatoIsADisease 3 days ago ago

        If I have to make a guess, we are at the level of pre-copernicus in particle physics.

        We are finding local maxima (induction) but the establishment cannot handle deduction.

        Everything is an overly complex bandaid. At some point someone will find something elegant that can predict 70% as well, and at some point we will realize: 'Oh, that's great, the sun is actually at the center of the solar system. Copernicus was only slightly wrong in thinking the planets move in circles - we just needed to use ellipses!'

        But with particles.

        • davrosthedalek 2 days ago ago

          The sun is not at the center of the solar system. The intellectual leap was not to replace earth with the sun. Earth does not "revolve around the sun". The intellectual leap was to realize that the situation is somewhat symmetric -- they both attract each other, and they orbit around their center of gravity (which, yes, is in the sun. But not because the sun is the center.)

          This sounds like a distinction without consequence, but I think that's wrong. The sun is not special. It just has a lot of mass. If somebody learns "the earth orbits the sun", they won't understand how two black holes can orbit each other. If somebody learns "the sun and the earth orbit their center of mass", they will be able to understand that.
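
          A quick back-of-envelope check of that framing (using standard approximate values, not figures from this thread): the Sun-Earth center of mass does sit inside the Sun, but only because the Sun is so heavy, not because it is "the center":

```python
# Sun-Earth barycenter distance from the Sun's center, using standard
# approximate values (not from the thread).
M_SUN = 1.989e30    # kg
M_EARTH = 5.972e24  # kg
AU = 1.496e11       # mean Sun-Earth distance, m
R_SUN = 6.957e8     # solar radius, m

# Two-body center of mass, measured from the Sun's center:
r_bary = AU * M_EARTH / (M_SUN + M_EARTH)

print(round(r_bary / 1000), "km from the Sun's center")  # ~449 km
print(r_bary < R_SUN)  # True: well inside the Sun
```

For a Jupiter-mass companion the same formula puts the barycenter outside the solar surface, which is the two-black-holes intuition in miniature.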

      • colechristensen 3 days ago ago

        Classical physics was indeed "good enough for all practical purposes" as well at the time... but those purposes didn't include electronics, nuclear power, most of our basic understanding of materials, chemistry, and a tremendous number of other things.

        The point being it's not at all clear what we might be missing without these impractical little mysteries that so far are very distant from everyday life.

      • sixo 3 days ago ago

        The point is not to make better predictions of the things we already know how to predict. The point is to determine what abstractions link the things we don't presently understand--because these abstraction tend to open many new doors in other directions. This has been the story of physics over and over: relativity, quantum theory, etc, not only answered the questions they were designed to answer but opened thousands of new doors in other directions.

      • recursivecaveat 3 days ago ago

        Maybe? We seem to be able to characterize all the stuff we have access to. That doesn't mean we couldn't, say, produce new and interesting materials with new knowledge. Before we knew about nuclear fission, nothing let us predict that anything interesting would happen inside a big chunk of uranium, let alone its useful applications. New physics might be quite subtle or specific but still useful.

        • A_D_E_P_T 2 days ago ago

          All the stuff we have access to?

          There isn't even a general physical theory of window glass -- i.e. of how to resolve the Kauzmann paradox and define the nature of the glass transition. Glass is one of man's oldest materials, and yet it's still not understood.

          There's also, famously, no general theory for superconducting materials, so superconductors are found via alchemical trial-and-error processes. (Quite famously a couple of years ago, if you remember that circus.)

          Solid-state physics has a lot of big holes.

      • adrian_b 2 days ago ago

        The existing theories are extremely far from being good enough for practical purposes.

        There exists a huge number of fundamental quantities that should be calculated from the parameters of the "standard model", but we cannot compute them, we can only measure them experimentally.

        For instance, the masses and magnetic moments of the proton, of the neutron and of all other hadrons, the masses and magnetic moments of the nuclei, the energy spectra of nuclei, of atoms, of ions, of molecules, and so on.

        The "standard model" can compute only things of negligible practical importance, like the statistical properties of the particle collisions that are performed at LHC.

        It cannot compute anything of value for practical engineering. All semiconductor devices, lasers and any other devices where quantum physics matters are not designed using any consistent theory of quantum physics; they are designed using models based on a great number of empirical parameters determined by measurement, for which quantum physics is only an inspiration for how the model should look, not a base from which the model can be derived rigorously.

        • jhrmnn 2 days ago ago

          This depends very much on what "practical purposes" are. For almost all conceivable technology, relativistic quantum mechanics for electrons and light, i.e. QED, is a sufficient fundamental theory. This is unlike before quantum mechanics, when we basically didn't have fundamental laws for chemistry and solid-state physics.

          • adrian_b 2 days ago ago

            The vast majority of useful things cannot be computed with QED from fundamental principles. You cannot compute even simple atomic energy spectra.

            The fundamental laws of chemistry have not been changed much by quantum physics, they just became better understood and less mysterious. Quantum mechanics has explained various cases of unusual chemical bonds that appeared to contradict the simpler rules that were believed to be true before the development of quantum physics, but not much else has practical importance.

            Solid-state physics is a much better example, because little of it existed before quantum physics.

            Nevertheless, solid-state physics is also the most obvious example that the current quantum physics cannot be used to compute anything of practical value from first principles.

            All solid-state physics is based on experimentally measured parameters, which cannot be computed. All mathematical models used in solid-state physics are based on guesses about how the solutions could behave, e.g. by introducing various fictitious averaged potentials into equations like the Schroedinger equation. They are not based on computations from primary laws; the only justification for the guesses is that, once the model is completed with the experimentally measured values of its parameters, it makes reasonably accurate predictions.

            Using empirical mathematical models of semiconductor materials, e.g. for designing transistors, is perfectly fine and entire industries have been developed with such empirical models.

            However, the fact that one must develop custom empirical models for every kind of application, instead of being able to derive them from what are believed to be the universal laws of quantum physics, demonstrates that these are not good enough.

            We can live and progress very well with what we have, but if someone would discover a better theory or a mathematical strategy for obtaining solutions, that could be used to compute the parameters that we must now measure and which could be used to model everything that we need in a way for which there would be guarantees that the model is adequate, then that would be a great advance in physics.

            • dgfl 2 days ago ago

              You seem to be familiar with the field, yet this is a very strange view? I work on exactly this slice of solid state physics and semiconductor devices. I’m not sure what you mean here.

              The way we construct Hamiltonians is indeed somewhat ad hoc sometimes, but that’s not because of lack of fundamental knowledge. In fact, the only things you need are the mass of the electron/proton and the quantum of charge. Everything else is fully derived and justified, as far as I can think of. There’s really nothing other than the extremely low energy limit of QED in solid state devices, then it’s about scaling it up to many body systems which are computationally intractable but fully justified.

              We don’t even use relativistic QM 95% of the time. Spin-orbit terms require it, but once you’ve derived the right coefficients (only needed once) you can drop the Dirac equation and go back to Schrödinger. The need for empirical models has nothing to do with fundamental physics, and all to do with the exorbitant complexity of many-body systems. We don’t use QFT and the standard model just because, as far as I can tell, the computation would never scale. Not really a fault of the standard model.

            • jcranmer 2 days ago ago

              > The fundamental laws of chemistry have not been changed much by quantum physics, they just became better understood and less mysterious. Quantum mechanics has explained various cases of unusual chemical bonds that appeared to contradict the simpler rules that were believed to be true before the development of quantum physics, but not much else has practical importance.

              Um, false? The fundamentals of chemistry are about electron orbitals (especially the valence ones) and their interactions between atoms to form molecules. All of my college chemistry courses delved somewhat into quantum mechanics, with the biggest helping being in organic chemistry. And modern computational chemistry is basically modeling the QED as applied to atoms.

            • davrosthedalek 2 days ago ago

              What are you talking about? The spectrum of hydrogen is very well understood and a textbook example for students to calculate.

              We use spectra to test QED calculations to something like 14 digits.

              • adrian_b 2 days ago ago

                The hydrogenoid atoms and ions, with a single electron, are the exception that proves the rule, because anything more complex cannot be computed accurately.

                The spectrum of hydrogen (ignoring the fine structure) could be computed with the empirical rules of Rydberg before the existence of quantum physics. Quantum physics has just explained it in terms of simpler assumptions.

                Quantum physics explains a great number of features of the atomic spectra, but it is unable to compute anything for complex atoms with an accuracy comparable with the experimental measurements.

                The QED calculations with "14 digits" of precision are for things that are far simpler than atomic spectra, e.g. for the gyromagnetic ratio of the electron, and even for such things the computations are extremely difficult and error-prone.
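
                For reference, the pre-quantum empirical rule being discussed here is short enough to sketch (the constant below is the Rydberg constant for infinite nuclear mass; measured lines differ slightly via the reduced-mass correction):

```python
# Rydberg's empirical formula for hydrogen spectral lines, which
# predates quantum mechanics. R_H is the infinite-nuclear-mass
# Rydberg constant; real lines shift slightly with reduced mass.
R_H = 1.0973731e7  # Rydberg constant, 1/m

def wavelength_nm(n_lower, n_upper):
    """1/lambda = R_H * (1/n_lower**2 - 1/n_upper**2)"""
    inv_lambda = R_H * (1 / n_lower**2 - 1 / n_upper**2)
    return 1e9 / inv_lambda

# Balmer H-alpha (n=3 -> n=2): the familiar red line near 656 nm.
print(round(wavelength_nm(2, 3), 1))
```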

                • aleph_minus_one 2 days ago ago

                  > The hydrogenoid atoms and ions, with a single electron, are the exception that proves the rule, because anything more complex cannot be computed accurately.

                  Rather: there is no known closed-form solution (and there likely won't be any).

                  • dgfl 2 days ago ago

                    If you let the computer run for long enough, it will compute any atomic spectrum to arbitrary accuracy. Only QFT has non-divergent series, so at least in theory we expect the calculations to converge.

                    There’s an intrinsic physical limit to which you can resolve a spectrum, so arbitrarily many digits of precision aren’t exactly a worthy pursuit anyway.

        • davrosthedalek 2 days ago ago

          Lattice-QCD can, by now, actually calculate the masses of the proton, neutron from first principles pretty accurately.

          This is of course a brute-force approach. We currently lack, in all fields, theory for emergent properties. And the mass of the proton definitely is such.

          • adrian_b 2 days ago ago

            There have been claims about this, starting with "Ab Initio Determination of Light Hadron Masses" (Science, 2008).

            Nevertheless, until now I have not seen anything that qualifies as "computing the masses".

            Research papers like that do not contain any information that would allow someone to verify their claims. Moreover, such papers are much more accurately described as "fitting the parameters of the Standard Model, such as quark masses, to approximately match the measured masses", and not as actually computing the masses.

            The published results of hadron masses are not much more accurate than you could compute mentally, without using any QCD, much less Lattice QCD, by estimating approximate quark masses from the composition in quarks of the hadrons and summing them. What complicates the mass computations is that while the heavy quarks have masses that do not vary much, the effective masses of the light quarks (especially u and d, which compose the protons and neutrons) vary a lot between different particles. Because of this, there is a very long way between a vague estimate of the mass and an accurate value.

      • doctoboggan 3 days ago ago

        The theories don't answer all the questions we can ask, namely questions about how gravity behaves at the quantum scale. (These questions pop up when exploring extremely dense regions of space - the very early universe and black holes).

      • 3 days ago ago
        [deleted]
      • csomar 3 days ago ago

        I think the problem is that GR and QFT are at odds with each other? (I am not quite versed in the subject and this is my high-level understanding of the “problem”)

      • idiotsecant 3 days ago ago

        Absolutely not. Newtonian physics was 'good enough' until we disproved it. Imagine where we would be if all we had was Newtonian physics.

        • nancyminusone 3 days ago ago

          You would still make it to the moon (so I've heard). Maybe you wouldn't have GPS systems?

        • mikkupikku 3 days ago ago

          Newtonian physics is good enough for almost everything that humans do. It's not good for predicting the shit we see in telescopes, and apparently it's not good for GPS, although honestly I think without general relativity, GPS would still get made but there'd be a fudge factor that people just shrug about.

          For just about anything else, Newton has us covered.

          • idiotsecant 3 days ago ago

            Oh sure, nothing major. Just transistors, lasers, MRI, GPS, nuclear power, photovoltaics, LEDs, x-rays, and pretty much anything requiring Maxwell's equations.

            Nothing major.

          • z3phyr 3 days ago ago

            Microchips? A lot of quantum physics is applied there, off the top of my head.

            • mikkupikku 2 days ago ago

              Quantum mechanics is relevant to humanity because we build things which are very small. General relativity is not, because we're more or less incapable of actually doing things on a scale where it matters.

              • chuckadams 2 days ago ago

                General relativity is pretty relevant to GPS satellites.
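
                The size of that relevance is easy to estimate. A back-of-envelope sketch (orbit numbers are approximate published values, not figures from the thread) reproduces the well-known net clock offset of roughly +38 microseconds per day:

```python
# Rough estimate of relativistic clock effects on a GPS satellite.
# Constants are approximate published values, not from the thread.
C = 2.998e8        # speed of light, m/s
GM = 3.986e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6  # mean Earth radius, m
R_ORBIT = 2.657e7  # GPS orbital radius, m
DAY = 86400.0      # seconds per day

V_SAT = (GM / R_ORBIT) ** 0.5  # circular orbital speed, ~3.9 km/s

# Special relativity: orbital motion slows the satellite clock.
sr_us_per_day = -(V_SAT**2 / (2 * C**2)) * DAY * 1e6

# General relativity: weaker gravity at altitude speeds it up.
gr_us_per_day = GM * (1 / R_EARTH - 1 / R_ORBIT) / C**2 * DAY * 1e6

# Net: satellite clocks gain roughly 38 microseconds per day, which
# uncorrected would accumulate kilometers of ranging error daily.
print(round(sr_us_per_day, 1), round(gr_us_per_day, 1))
print(round(sr_us_per_day + gr_us_per_day, 1))
```

Without the correction this drift could indeed be absorbed as a "fudge factor", but the two terms have opposite signs and different origins, which is exactly what GR explains.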

          • cozzyd 3 days ago ago

            quantum mechanics (also very much not Newtonian) is much more important to our day-to-day lives.

            • momoschili 3 days ago ago

              this kind of distinction is quite stupid in general as plenty of things that we rely on for day-to-day activities such as our houses, desks, chairs, beds, shoes, clothes, etc are all based on Newtonian/classical mechanics. Basically everything that we use which existed pre-transistor strictly speaking only required classical physics.

              • cozzyd 2 days ago ago

                I mean sure, but the transistor is pretty important to the way I live my life now!

                • momoschili 2 days ago ago

                  I'd argue so is the bed you sleep in every night, and the roof over your head. Best not to take those for granted, as I don't think the transistor would last so long if it wasn't sheltered from the environment.

                  The argument is that these kind of distinctions between how "classical" and "quantum" physics affects our lives is just a pointless endeavor that even academics don't waste their time with.

            • refulgentis 3 days ago ago

              Is it?

              • nerdsniper 3 days ago ago

                Flash memory (quantum tunneling), lasers (stimulated emission), transistors (band theory), MRI machines (nuclear spin), GPS (atomic transition), LED's (band gap), digital cameras (photoelectric effect), ...the list does, in fact, go on, and on, and on.

                • narcraft 3 days ago ago

                  Did you intentionally list things that are clearly not essential to day-to-day life?

                  • refulgentis 3 days ago ago

                    I'd argue flash memory and transistors certainly are.

      • light_triad 3 days ago ago

        There are still huge gaps in our understanding: quantum gravity, dark matter, what happens before the Planck time, the thermodynamics of life, and many others.

        Part of the problem is that building bigger colliders, telescopes, and gravitational wave detectors requires huge resources and very powerful computers to store and crunch all the data.

        We're cutting research instead of funding it right now and sending our brightest researchers to Europe and China...

    • throw_m239339 3 days ago ago

      I find the idea that reality might be quantized fascinating: it would mean all the information that exists could be stored in a big enough storage medium.

      It's also kind of interesting how causality allegedly has a speed limit and it's rather slow all things considered.

      Anyway, in 150 years we absolutely came a long way. We'll figure that out eventually, but as always, figuring it out might lead to even bigger questions and mysteries...

      • tsimionescu 3 days ago ago

        Note that "reality" is not quantized in any existing theory. Even in QM/QFT, only certain properties are quantized, such as mass or charge. Others, like position or time, are very much not quantized - the distance between two objects can very well be 2.5π Planck lengths. And not only are they not quantized, the math of these theories does not work if you try to discretize space or time or other properties.

      • cvoss 2 days ago ago

        > all information that exists could be stored in a storage medium big enough

        Why is quantization necessary for information storage? If you're speculating about a storage device external to our universe, it need not be constrained by any of our physical laws and their consequences, such as by being made up of finitely many atoms or whatever. It might have components like arbitrary precision real number registers.

        And if you're speculating about a storage device that lives within our universe, you have a contradiction, because its maximum information capacity can't exceed the information content of its own description.

      • csomar 3 days ago ago

        If reality is quantized, how can you store all the information out there without creating a real simulation? (Essentially cloning the environment you want stored)

    • KolibriFly 2 days ago ago

      This era might be one where we have to earn the next clue much more slowly

  • tasty_freeze 3 days ago ago

    Here is one fact that seems, to me, pretty convincing that there is another layer underneath what we know.

    The charge of electrons is -1 and protons +1. It has been experimentally measured out to 12 digits or so to be the same magnitude, just opposite charge. However, there are no theories why this is -- they are simply measured and that is it.

    It beggars belief that these just happen to be exactly (as far as we can measure) the same magnitude. There almost certainly is a lower level mechanism which explains why they are exactly the same but opposite.

    • andyferris 3 days ago ago

      The hint from quantum field theory (and things like lattice gauge theory) is that charge emerges from interesting topological states/defects of the underlying field (by "interesting topological shapes" I mean - imagine a vortex in the shape of a ring/doughnut). It's kind of a topological property of a state of the photonic field, if you will - something like a winding number (which has to be an integer). Electric charge is a kind of "defect" or "kink" in the photonic field, while color charge (quarks) are defects in the strong-force field, etc.

      When an electron-positron pair is formed from a vacuum, we get all sorts of interesting geometry which I struggle to grasp or picture clearly. I understand that the fact that these are fermions with spin-1/2 can similarly be explained by localized defects in a field of particles with integer spin (possibly a feature of the exact same "defect" as the charge itself, in the photonic field, which is what defines an electron as an electron).

      EDIT:

      > However, there are no theories why this is -- they are simply measured and that is it.

      My take is that there _are_ accepted hypotheses for this, but solving the equations (of e.g. the standard model, in full 3D space) to a precision suitable to compare to experimental data is currently entirely impractical (at least for some things like absolute masses - though I think there are predictions of ratios etc that work out between theory and measurement - sorry not a specialist in high-energy physics, had more exposure to low-energy quantum topological defects).

      • phkahler 2 days ago ago

        Have you seen this: https://www.researchgate.net/publication/281322004_The_elect...

        Or any of the more recent work that references it?

      • empath75 2 days ago ago

        > something like a winding number (which has to be an integer). Electric charge is a kind of "defect" or "kink" in the photonic field, while color charge (quarks) are defects in the strong-force field, etc.

        Quarks don't have integer charge.

        • franktankbank 2 days ago ago

          Redefine the down quark charge as the fundamental unit and you lose nothing.

          • marcosdumay 2 days ago ago

            > you lose nothing

            For some reason electrons have charge -3 then, that coincides with the proton charge for no good reason.

          • AnimalMuppet 2 days ago ago

            Right, but then you have the questions of 1) why do leptons have (a multiple of) the same fundamental unit as quarks, and 2) why does that multiple equal the number of quarks in a baryon, so that protons have a charge of exactly the same magnitude as electrons?

            I mean, I guess you could say that charge comes from (or is) the coupling of the quark/lepton field to the electromagnetic field, and therefore if it's something that's quantized on the electromagnetic side of that, then quarks and leptons would have the same scale. I'm not sure that's the real answer, much less that it's proven. (But it might be - it's a long time since my physics degree...)

            • franktankbank 2 days ago ago

              > it's a long time since my physics degree...

              me too, just addressing that a fraction might as well be an integer with some redefinition of the fundamental charge.

      • ndsipa_pomu 2 days ago ago

        Is this the same idea behind Williamson & Van der Mark's electron model?

        https://www.youtube.com/watch?v=hYyrgDEJLOA

      • RupertSalt 3 days ago ago

        > interesting topological states/defects of the underlying field

        eddies in the space-time continuum?

      • quchen 3 days ago ago

        (Note the post you’ve replied to mentioned electrons and _protons_, not positrons.)

    • Paracompact 3 days ago ago

      Technically, the charge of a proton can be derived from its constituent 2 up quarks and 1 down quark, which have charges 2/3 and -1/3 respectively. I'm not aware of any deeper reason why these should be simple fractional ratios of the charge of the electron, however, I'm not sure there needs to be one. If you believe the stack of turtles ends somewhere, you have to accept there will eventually be (hopefully simple) coincidences between certain fundamental values, no?
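      That bookkeeping is easy to check exactly with rational arithmetic (a minimal sketch; the quark content and charges are the standard ones quoted above):

```python
from fractions import Fraction

up = Fraction(2, 3)     # up-quark charge, in units of e
down = Fraction(-1, 3)  # down-quark charge, in units of e

proton = 2 * up + down   # uud
neutron = up + 2 * down  # udd

print(proton, neutron)  # 1 0
```

      Using Fraction keeps the sums exact, so the integer results aren't rounding artifacts.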

      • auntienomen 3 days ago ago

        There does appear to be a deeper reason, but it's really not well understood.

        Consistent quantum field theories involving chiral fermions (such as the Standard Model) are relatively rare: the charges have to satisfy a set of polynomial relationships with the inspiring name "gauge anomaly cancellation conditions". If these conditions aren't satisfied, the mathematical model will fail pretty spectacularly. It won't be unitary, can't couple consistently to gravity, won't allow high and low energy behavior to decouple,..

        For the Standard Model, the anomaly cancellation conditions imply that the sum of electric charges within a generation must vanish, which they do:

        3 colors of quark * ( up charge 2/3 - down charge 1/3) + electron charge -1 + neutrino charge 0 = 0.

        So, there's something quite special about the charge assignments in the Standard Model. They're nowhere near as arbitrary as they could be a priori.

        Historically, this has been taken as a hint that the Standard Model should come from a simpler "grand unified" model. Particle accelerators and cosmology have turned up at best circumstantial evidence for these so far. To me, it's one of the great mysteries.
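        The quoted cancellation is a one-liner to verify exactly (this checks only the linear sum of electric charges within one generation, not the full set of anomaly conditions):

```python
from fractions import Fraction

# Electric charges in one Standard Model generation, in units of e.
# Each quark is counted once per color (3 colors).
charges = (
    3 * [Fraction(2, 3)]     # up quark
    + 3 * [Fraction(-1, 3)]  # down quark
    + [Fraction(-1)]         # electron
    + [Fraction(0)]          # neutrino
)
print(sum(charges))  # 0
```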

        • AnimalMuppet 2 days ago ago

          So they have to cancel, or we don't have a universe? ("Have to" not because we need electrical neutrality for large-scale matter - though we do need that - but because you can't build a quantum field that doesn't explode in various ways without it.)

          • auntienomen 2 days ago ago

            There's always some risk of confusing the model with the reality, but yeah, if you have chiral fermions interacting through gauge fields and gravity, the charges have to satisfy all of the anomaly cancellation conditions (there are about half a dozen) or the model will be inconsistent.

      • tasty_freeze 3 days ago ago

        I'm aware of the charge coming from quarks, but my point remains.

        > you have to accept there will eventually be (hopefully simple) coincidences between certain fundamental values, no?

        When the probability of coincidence is epsilon, then, no. Right now they are the same to 12 digits, but that undersells it, because that is just the trailing digits. There is nothing which says the leading digits must be the same, eg, one could be 10^30 times bigger than the other. Are you still going to just shrug and say "coincidence?"

        That there are 26 fundamental constants and this one is just exactly the same is untenable.

        • jacquesm 3 days ago ago

          I think I agree with you. It could be just a matter of static bias or some other fairly simple mechanism to explain why these numbers are the same.

          Imagine an object made of only red marbles as the 'base state'. Now you somehow manage to remove one red marble: you're at -1. You add a red marble and you're at +1. It doesn't require any other marbles. Then you go and measure the charge of a marble and you end up at some 12-digit number. The one state will show negative that 12-digit number, the other will show positive that 12-digit number.

          Assigning charge as being the property of a proton or an electron rather than one of their equivalent constituent components is probably a mistake.

        • Paracompact 3 days ago ago

          If you imagine the universe is made of random real fundamental constants rather than random integer fundamental constants, then indeed there's no reason to expect such collisions. But if our universe starts from discrete foundations, then there may be no more satisfying explanation to this than there is to the question of, say, why the survival threshold and the reproduction threshold in Conway's Game of Life both involve the number 3. That's just how that universe is defined.
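          For concreteness, in the Game of Life analogy the thresholds are bare integers baked into the rule's definition, with nothing underneath them (a minimal sketch of the B3/S23 rule for one cell):

```python
def next_state(alive: bool, live_neighbors: int) -> bool:
    """Conway's Game of Life rule (B3/S23) for a single cell."""
    if alive:
        return live_neighbors in (2, 3)  # survival thresholds
    return live_neighbors == 3           # reproduction threshold

# The 3s above are not derived from anything deeper; they are
# simply how that particular universe is defined.
```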

          • tasty_freeze 3 days ago ago

            Why do you assume the two have to be small integers? There is nothing currently in physics which would disallow the electron to be -1 and the proton to be +1234567891011213141516171819. The fact they are both of magnitude 1 is a huge coincidence.

            • Paracompact 3 days ago ago

              I'm not assuming they have to be small integers—I'm saying that if the universe is built on discrete rather than continuous foundations, then small integers and coincidences at the bottom-turtle theory-of-everything become much less surprising. You're treating the space of possible charge values as if it's the reals, or at least some enormous range, but I consider that unlikely.

              Consider: in every known case where we have found a deeper layer of explanation for a "coincidence" in physics, the explanation involved some symmetry or conservation law that constrained the values to a small discrete set. The quark model took seemingly arbitrary coincidences and revealed them as consequences of a restrictive structure. auntienomen's point about anomaly cancellation is also exactly this kind of thing. The smallness of the set in question isn't forced, but it is plausible.

              But I actually think we're agreeing more than you realize. You're saying "this can't be a coincidence, there must be a deeper reason." I'm saying the deeper reason might bottom out at "the consistent discrete structures are sparse and this is one of them," which is a real explanation, but it might not have the form of yet another dynamical layer underneath.

              • light_hue_1 3 days ago ago

                Sparsity != symmetry.

                It's simple to say "Ah well, it's sparse" that doesn't mean anything and doesn't explain anything.

                Symmetries are equivalent to a conserved quantity. They exist because something else is invariant with respect to some transformation and vice versa. We didn't discover arbitrary constraints we found a conserved quantity & the implied symmetry.

                "There are integers", "the numbers should be small" all of these are nothing like what works normally. They aren't symmetries. At most they're from some anthropic argument about collections of universes being more or less likely, which is its own rabbit hole that most people stay away from.

            • jaybrendansmith 3 days ago ago

              Perhaps only visible matter is made up of particles with these exactly matching charges? If they did not match, they would not stay in equilibrium, and would not be so easily found.

              • thegabriele 3 days ago ago

                I like this survivorship bias: "evolution" works on everything, so why not in the shaping of the "constants" of the universe as we know it?

            • IsTom 2 days ago ago

              If they were, I'd assume that there wouldn't be anyone in the universe to observe that.

            • ImHereToVote 2 days ago ago

              And why does this hole fit my shape perfectly? Asked the puddle.

            • anon84873628 2 days ago ago

              You seem to be contradicting yourself, having already said:

              >I'm aware of the charge coming from quark

              So it's not +huge_number because the number of quarks involved is small. Sure we still don't understand the exact reason, but it's hardly as surprising that, uh, charge is quantized...

      • elfly 2 days ago ago

        Well yes, but the coincidence that quarks have charges that are multiples of another particle's charge, a particle that is not made up of quarks, should raise your brow, shouldn't it?

        Like we could accept coincidences if at the bottom is all turtles, but here we see a stack of turtles and a stack of crocodiles and we are asking why they have similar characteristics even if they are so different.

      • JumpCrisscross 3 days ago ago

        > you have to accept there will eventually be (hopefully simple) coincidences between certain fundamental values, no?

        No. It’s almost certainly not a coïncidence that these charges are symmetric like that (in stable particles that like to hang out together).

        • Paracompact 3 days ago ago

          Whence your confidence? As they say in math, "There aren't enough small numbers to meet the many demands made of them." If we assume the turtle stack ends, and it ends simply (i.e. with small numbers), some of those numbers may wind up looking alike. Even more so if you find anthropic arguments convincing, or if you consider sampling bias (which may be what you mean by, "in stable particles that like to hang out together").

          • JumpCrisscross 3 days ago ago

            > if you find anthropic arguments convincing

            Which makes every constant fair game. Currently, we don’t have a good process for explaining multiple universes beyond divine preference. Hence the implausibility of a random number settling on mirrored whole sums.

        • hackyhacky 3 days ago ago

          > coïncidence

          Nïce

      • idiotsecant 3 days ago ago

        Shrugging and calling it a coincidence is generally not an end state when figuring out how something works.

      • 3 days ago ago
        [deleted]
    • jiggawatts 3 days ago ago

      This is "expected" from theory, because all particles seem to be just various aspects of the "same things" that obey a fairly simple algebra.

      For example, pair production is:

          photon + photon = electron + (-)electron
      
      You can take that diagram, rotate it in spacetime, and you have the direct equivalent, which is electrons changing paths by exchanging a photon:

         electron + photon = electron - photon
      
      There are similar formulas for beta decay, which is:

         proton = neutron + electron + (-)neutrino
      
      You can also "rotate" this diagram, or any other Feynman diagram. This very, very strongly hints that the fundamental particles aren't actually fundamental in some sense.

      The precise why of this algebra is the big question! People are chipping away at it, and there's been slow but steady progress.

      One of the "best" approaches I've seen is "The Harari-Shupe preon model and nonrelativistic quantum phase space"[1] by Piotr Zenczykowski, which makes the claim that just like how Schrödinger "solved" the quantum wave equation in 3D space by using complex numbers, it's possible to solve a slightly extended version of the same equation in 6D phase space, yielding matrices that have properties that match the Harari-Shupe preon model. The preon model claims that fundamental particles are further subdivided into preons, the "charges" of which neatly add up to the observed zoo of particle charges, and a simple additive algebra over these charges matches Feynman diagrams. The preon model has issues with particle masses and binding energies, but Piotr's work neatly sidesteps that issue by claiming that the preons aren't "particles" as such, but just mathematical properties of these matrices.

      I put "best" in quotes above because there isn't anything remotely like a widely accepted theory for this yet, just a few clever people throwing ideas at the wall to see what sticks.

      [1] https://arxiv.org/abs/0803.0223
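      To make the "simple additive algebra" concrete, here is a sketch of the rishon charge bookkeeping (the T = +1/3, V = 0 assignments are the standard Harari-Shupe ones; this is arithmetic illustration, not a derivation of the model):

```python
from fractions import Fraction

# Harari-Shupe rishon charges, in units of e
T = Fraction(1, 3)  # "Tohu" rishon
V = Fraction(0)     # "Vohu" rishon

positron = 3 * T         # TTT  -> +1
up       = 2 * T + V     # TTV  -> +2/3
down     = -(T + 2 * V)  # anti-(TVV) -> -1/3
electron = -positron     # anti-(TTT) -> -1

proton = 2 * up + down   # uud
# The proton and electron charges come out equal in magnitude
# "for free": both reduce to three net units of T-charge.
print(proton, electron)  # 1 -1
```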

      • tasty_freeze 3 days ago ago

        > This is "expected" from theory, because all particles seem to be just various aspects of the "same things" that obey a fairly simple algebra.

        But again, this is just observation, and it is consistent with the charges we measure (again, just observation). It doesn't explain why these rules must behave as they do.

        > This very, very strongly hints that the fundamental particles aren't actually fundamental in some sense.

        This is exactly what I am suggesting in my original comment: this "coincidence" is not a coincidence but falls out from some deeper, shared mechanism.

        • jiggawatts 3 days ago ago

          > this is just observation

          Sure, but that's fundamental to observing the universe from the inside. We can't ever be sure of anything other than our observations because we can't step outside our universe to look at its source code.

          > It doesn't explain why these rules must behave as they do.

          Not yet! Once we have a theory of everything (TOE), or just a better model of fundamental particles, we may have a satisfactory explanation.

          For example, if the theory ends up being something vaguely like Wolfram's "Ruliad", then we may be able to point at some aspect of very trivial mathematical rules and say: "the electron and proton charges pop out of that naturally; it's the only way it can be, nothing else makes sense".

          We can of course never be totally certain, but that type of answer may be both good enough and the best we can do.

    • cozzyd 3 days ago ago

      As soon as charge is quantized, this will happen. In any quantization scheme you will have some smallest charge. There are particles with charge +2 (the Delta++, for example), but ... anything that can decay while preserving quantum numbers will decay, so you end up with protons in the end. (ok, the quarks have fractional charge but that's not really relevant at scales we care about QED)

      If the question is, why is quantum mechanics the correct theory? Well, I guess that's how our universe works...

    • rjh29 3 days ago ago

      One argument (while unsatisfying) is there are trillions of possible configurations, but ours is the one that happened to work which is why we're here to observe it. Changing any of them even a little bit would result in an empty universe.

      • libraryofbabel 3 days ago ago

        There’s a name for that: the Anthropic principle. And it is deeply unsatisfying as an explanation.

        And does it even apply here? If the charge on the electron differed from the charge on the proton at just the 12th decimal place, would that actually prevent complex life from forming? Citation needed for that one.

        I agree with OP. The unexplained symmetry points to a deeper level.

        • squeefers 2 days ago ago

          > There’s a name for that: the Anthropic principle. And it is deeply unsatisfying as an explanation.

          i feel the same about many worlds

        • 3 days ago ago
          [deleted]
        • krzat 3 days ago ago

          I find the anthropic principle fascinating.

          I was born into this world at a certain point in time. I look around, and I see an environment compatible with me: air, water, food, gravity, time, space. How deep does this go? Why am I not an ant or a bacterium?

          • GordonS 3 days ago ago

            Presumably your parents weren't ants?

    • phkahler 2 days ago ago

      I'm convinced there are some semi-classical explanations that just haven't been figured out.

      Electrons are helically moving photons: https://www.researchgate.net/publication/281322004_The_elect...

      That's some interesting/wacky stuff, but there has been more research to improve those calculations - like deriving the electron charge and magnetic moment.

      Personally I like the idea that a proton is somehow literally an electron and 3 up quarks (a neutron gets 2 electrons and 3 up quarks). I am not a physicist though, so I'm sure there are reasons they "know" this is not the case.

      I find it fascinating that some physicists say wave functions are somehow "real" and then we've got Jacob Barandes saying you don't even need wave functions to do the computations of QM: https://www.youtube.com/watch?v=7oWip00iXbo

      IMHO there is a lot of exploration to be done without particle accelerators.

      • aleph_minus_one 2 days ago ago

        > Electrons are helically moving photons

        How do you explain that electrons have a rest mass, but photons don't (otherwise photons couldn't move with the speed of light according to special relativity)?

    • PaulHoule 3 days ago ago

      If it wasn't the case then matter wouldn't be stable.

      • jiggawatts 3 days ago ago

        An interesting early theory of gravity was: "What if opposite charges attract slightly more strongly than identical charges repel each other?"

        If you tally up the forces, the difference is a residual attraction that can model gravity. It was rejected on various experimental and theoretical grounds, but it goes to show that if things don't cancel out exactly then the result can still leave a universe that would appear normal to us.

      • tasty_freeze 3 days ago ago

        Agreed (well, assuming the delta is more than a small fraction of a percent or whatever). But this is begging the question. If they are really independent then the vast, overwhelming fraction of all possible universes simply wouldn't have matter. Ours does have matter, so it makes our universe exceedingly unlikely. I find it far more parsimonious to assume they are connected by an undiscovered (and perhaps never to be discovered) mechanism.

        Some lean on the multiverse and the anthropic principle to explain it, but that is far less parsimonious.

        • PaulHoule 3 days ago ago

          Also note that the proton is not an elementary particle so it is really a question of "are the various quarks really 1/3, 2/3 of an electron charge".

          Crackpots have found thousands of formulas that try to explain the ratio of the proton to electron mass, but there is no expectation that there is a simple relationship between those masses, since the proton mass is the sum of all sorts of terms.

          • gsf_emergency_6 3 days ago ago

            Crackpots are downstream of the "physics community" awarding cultural cachet to certain types of questions -- those with affordances they don't necessarily "deserve"-- but not others.

            (I use quotes because those are emergent concepts)

            Same as "hacker community" deciding that AI is worth FOMO'ing about

            • PaulHoule 2 days ago ago

              Well, I'm not sure I believe that "hierarchy problems" in HEP are real, but I do think the nature of the neutrino mass is interesting (we know it has a mass so it is a something and not a nothing) as is the nature of dark matter, the matter-antimatter asymmetry, and the non-observation of proton decay. That article has nothing to say about non-accelerator "big science" in HEP such as

              https://en.wikipedia.org/wiki/Super-Kamiokande

              which targets many of those questions.

              As for the "hacker community" I think AI is really controversial. I think other people find the endless spam of slop articles about AI more offensive than I do. It's obvious that these are struggling to make it off the "new/" page. The ones that offend me are the wanna-be celebrity software managers [1] who think we care what they think about delivering software that almost works.

              [1] sorry, I liked DHH's industry-changing vision behind Ruby-on-Rails, but his pronouncements about software management were always trash. You might make the case that Graham worked with a lot of startups so his essays might have had some transferable experience but they didn't. Atwood and Spolsky, likewise. Carmack is the one exception, he's a genius

              • gsf_emergency_6 2 days ago ago

                Carmack is the Midwestern middle middle-class (culturally) dropout amongst them. Classic

      • libraryofbabel 3 days ago ago

        Is that actually true, if the charges differed at the 12th decimal place only? That’s non-obvious to me.

        • baggy_trough 3 days ago ago

          Yes because matter would have a residual charge that would massively overpower gravity even at that small a discrepancy.
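          A rough order-of-magnitude check of that claim (a sketch using approximate SI constants, for two hydrogen atoms each carrying the residual charge a 1-part-in-10^12 mismatch would leave):

```python
# Ratio of residual electrostatic force to gravity between two
# hydrogen atoms, if |q_p| and |q_e| differed by 1 part in 10^12.
k = 8.9875e9      # Coulomb constant, N m^2 / C^2
G = 6.674e-11     # gravitational constant, N m^2 / kg^2
e = 1.602e-19     # elementary charge, C
m_H = 1.6735e-27  # hydrogen atom mass, kg

delta = 1e-12  # fractional charge mismatch
ratio = k * (delta * e) ** 2 / (G * m_H ** 2)
print(f"{ratio:.2e}")  # ~1e12: residual repulsion still dwarfs gravity
```

          Even with the mismatch suppressed by (10^-12)^2, the usual ~10^36 electric-to-gravitational force ratio leaves a residual force about a trillion times stronger than gravity.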

          • PaulHoule 2 days ago ago

            To be devil's advocate: maybe there is a surplus or deficit of 1 part in 10^12 in electrons relative to protons.

    • andyfilms1 3 days ago ago

      For a given calculation on given hardware, the 100th digit of a floating point decimal can be replicated every time. But that digit is basically just noise, and has no influence on the 1st digit.

      In other words: There can be multiple "layers" of linked states, but that doesn't necessarily mean the lower layers "create" the higher layers, or vice versa.
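      A concrete version of this: Python's decimal module can expand the double nearest to 0.1 exactly. The deep digits are perfectly reproducible, yet they say nothing about the value you meant to store; they are artifacts of the binary representation:

```python
from decimal import Decimal

# Exact decimal expansion of the IEEE-754 double closest to 0.1.
# Every conforming machine reproduces the same trailing digits,
# but they carry no information about "0.1" itself.
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
```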

    • wvbdmp 3 days ago ago

      Aren’t things like this usually explained by being the only viable configuration, or is that not the case here?

    • throwup238 3 days ago ago

      Or why the quarks that make up protons and neutrons have fractional charges, with +1 protons mixing two +2/3 up quarks and one -1/3 down quark, and the neutral neutron is one up quark and two down quarks. And where are all the other Quarks in all of this, busy tending bar?

      • david-gpu 3 days ago ago

        They have fractional charges because that is how we happen to measure charge. If our unit of charge had been set when we knew about quarks, we would have chosen those as fundamental, and the charge of the electron would instead be -3.

        Now, the ratios between these charges appear to be fundamental. But the presence of fractions is arbitrary.

        • jcranmer 3 days ago ago

          > If our unit of charge had been set when we knew about quarks, we would have chosen those as fundamental, and the charge of the electron would instead be -3.

          Actually, I doubt it. Because of their color charge, quarks can never be found in an unbound state but instead in various kinds of hadrons. The ways that quarks combine cause all hadrons to end up with an integer charge, with the ⅔ and -⅓ charges on various quarks merely being ways to make them come out to resulting integer charges.

        • throwup238 3 days ago ago

          Isn’t charge quantized? Observable isolated charges are quantized in units of e. You can call it -3 and +3 but that just changes the relative value for the quanta. The interesting question is still why the positive and neutral particles are nonelementary particles made up of quarks with a fraction of e, the math made possible only by including negatively charged ones (and yet electrons are elementary particles).

    • smnplk 3 days ago ago

      There are layers science can not access.

      • f30e3dfed1c9 3 days ago ago

        Well OK then! Let's tell all the physicists they can close up shop now. They might not have realized it, but they're done. All their little "theories" and "experiments" and what not have taken them as far as they can go.

        • paganel 2 days ago ago

          > Let's tell all the physicists they can close up shop now.

          Yes, that's part of the plan. I mean, not to all the physicists, just to those whose work doesn't bring in results anymore, and it hasn't for 30 to 40 years now. At some point they (said physicists) have to stop their work and ask themselves what it is that they're doing, because judging by their results it doesn't seem like they're doing much, while consuming a lot of resources (which could have been better spent elsewhere).

        • albatross79 3 days ago ago

          We're already in the realm of virtual particles, instantaneous collapse, fields with abstract geometric shape and no material reality, wave particle duality, quantized energy etc. The project of physics was to discover what the universe was made of. None of these things can answer that. If intelligibility was the goal, we lost that. So in an important sense, they might as well have closed up shop. If you're interested in the specific value of a certain property to the nth decimal place, there is work to do, but if you're interested in the workings of the universe in a fundamentally intelligible sense, that project is over with. What they're doing now is making doodles around mathematical abstractions that fit the data and presenting those as discoveries.

      • jacquesm 3 days ago ago

        By observing the discrepancies between theories we are accessing those layers. Whether we can access them with instruments is a different matter but with our minds we apparently can.

  • pjmlp 3 days ago ago

    As a CERN alumnus: this isn't easy. The data is endless, processing it takes time, usually everything is new technology, and it also needs to be validated before being put into use.

    Thousands of people, across all engineering branches, worked on bringing the LHC up over a few decades before the Higgs came to be.

    This stuff is hard, and there is no roadmap on how to get there.

    • alt227 2 days ago ago

      > Higgs came to be

      Did it? I thought the whole point was that the data that came from LHC showed that it was inconclusive and needed a bigger more powerful machine to prove it. Happy to be proved wrong.

      • franktankbank 2 days ago ago

        Is this some Mandela effect going on? No, they discovered the Higgs and made rough measurements.

      • pjmlp 2 days ago ago

        I have been out of CERN since 2004, and only return there during Alumni-related events, so I haven't kept up with what has happened with the Higgs over the last few years.

  • GlibMonkeyDeath 3 days ago ago

    It's hard. Particle physics faces the problem that in order to dig down to ever smaller scales, ironically, ever larger experiments are needed. We've pretty much built large enough colliders for our current understanding. No one really knows how much more energy would be needed to expose something new - it might be incremental, within current technical reach, or it might be many orders of magnitude beyond our current capabilities. The experiments have become expensive enough that there isn't a lot of appetite to build giant new systems without some really good reason. The hard part is coming up with a theory to justify the outlay, if you can't generate compelling data from existing systems.

    Physics advances have been generally driven by observation, obtained through better and better instrumentation. We might be entering a long period of technology development, waiting for the moment our measurements can access (either through greater energy or precision) some new physics.

  • threethirtytwo 3 days ago ago

    All of science is getting harder as the easiest discoveries are all pretty much behind us.

    LLMs were a breakthrough I didn't expect and it's likely the last one we'll see in our lifetime.

    • iterance 3 days ago ago

      Specific fields may not advance for decades at a time, but we are hardly in a scientific drought. There have been dramatic advances in countless fields over the last 20 years alone and there is no good reason to expect such advances to abruptly cease. Frankly this is far too pessimistic.

      • threethirtytwo 3 days ago ago

        I don't understand what is wrong with pessimism. That's not a valid critique. If someone is pessimistic but his description of the world matches REALITY, then there's nothing wrong with his viewpoint.

        Either way this is also opinion based.

        There hasn't been a revolutionary change in technology in the last 20 years. I don't consider smart phones to be revolutionary. I consider going to the moon revolutionary and catching a rocket sort of revolutionary.

        Actually I take that back I predict mars as a possible break through along with LLMs, but we got lucky with musk.

        • andrewflnr 3 days ago ago

          You imply your view "matches REALITY", then fall back to "Either way this is also opinion based." Nicely played. But the actual reality is that scientific discovery is proceeding at least as fast as it ever has. These things take time. 20 years is a laughably short time in which to declare defeat, even ignoring the fact that genetic and other biological tech has advanced leaps and bounds in that time. There's important work happening in solid state physics and materials science. JWST is overturning old theories and spawning new ones in cosmology. There's every reality-based reason to believe there will be plenty of big changes in science in the next 20 years or so.

          • threethirtytwo 3 days ago ago

            [flagged]

            • andrewflnr 2 days ago ago

              No, your opinions bias toward negativity, and we can see it in this comment by the way you shift the goalposts for every achievement until you can poo-poo it. Oh, except for the ones you just omitted from your quote, maybe because even you can't rationalize why CRISPR isn't a step change.

              • threethirtytwo 2 days ago ago

                >No, your opinions bias toward negativity, and we can see it in this comment by the way you shift the goalposts for every achievement until you can poo-poo it. Oh, except for the ones you just omitted from your quote, maybe because even you can't rationalize why CRISPR isn't a step change.

                Not true at all. CRISPR isn't a step change because it only made genetic engineering more efficient and it didn't affect the lives of most people. It's still a research thing.

                I didn't poo-poo AI, did I? That's the favorite thing for everyone to poo-poo these days, and ironically it's the one thing that affects everyone's life and is causing paradigm-shifting changes in society right now.

                • andrewflnr 2 days ago ago

                  CRISPR "only made genetic engineering more efficient" which is no big deal. Smartphones don't count though, despite both requiring scientific breakthroughs in multiple fields and turning society upside down, because... reasons. Your standards are incoherent.

                  BTW, for someone who claims not to poo-poo AI, I find it hilarious that you still don't think we're due for another breakthrough or two in that area in the next decade or so. I hate the current genAI craze and I still think that's coming.

                  • threethirtytwo 2 days ago ago

It’s no longer a breakthrough because the breakthrough already happened. Everything subsequent to LLMs is an incremental optimization, not a breakthrough. Even if some breakthrough occurs, it will be dragged through shit and ridiculed for being overused for generating slop.

                    Smartphones required zero breakthroughs. It’s just existing technology made smaller and more efficient. What changed is how we used technology. Under your reasoning dating apps would be a breakthrough.

        • tehjoker 3 days ago ago

          genetic technology and computing technology have been the biggest drivers for a while. i do think it is remarkable to video call another continent. communication technology is disruptive and revolutionary though it looks like chaos. ai is interesting too if it lives up to the hype even slightly.

catching a rocket is very impressive, but it's just a lower-cost method for reaching earth orbit. it does unlock megaconstellations tho

          • threethirtytwo 3 days ago ago

Yeah, none of those are step-function changes. Video calling another continent is a tiny step from TV. I already receive video wirelessly on my TV; I'm not that amazed when I can stretch the distance further with a call that has video. Big deal.

AI is the step-function change. The irony is that it became so pervasive and intertwined with slop that people like you forget that what it does now (write all code) was unheard of just a couple of years ago. AI surpassed the hype; now it's popular to talk shit about it.

            • incr_me 3 days ago ago

              A step in which function are you talking about, exactly?

              • threethirtytwo 3 days ago ago

                If you want it stated precisely, the function is human cognitive labor per unit time and cost.

                For decades, progress mostly shifted physical constraints or communication bandwidth. Faster chips, better networks, cheaper storage. Those move slopes, not discontinuities. Humans still had to think, reason, design, write, debug. The bottleneck stayed human cognition.

                LLMs changed that. Not marginally. Qualitatively.

                The input to the function used to be “a human with training.” The output was plans, code, explanations, synthesis. Now the same class of output can be produced on demand, at scale, by a machine, with latency measured in seconds and cost approaching zero. That is a step change in effective cognitive throughput.

                This is why “video calling another continent” feels incremental. It reduces friction in moving information between humans. AI reduces or removes the human from parts of the loop entirely.

                You can argue about ceilings, reliability, or long term limits. Fine. But the step already happened. Tasks that were categorically human two years ago are now automatable enough to be economically and practically useful.

                That is the function. And it jumped.

                • 3 days ago ago
                  [deleted]
        • iterance 3 days ago ago

          My critique is not due to pessimism, it is due to afactuality. Breakthroughs in science are plenty in the modern era and there is no reason to expect them to slow or halt.

          However, from your later comments, it sounds as though you feel the only operating definition of a "breakthrough" is a change inducing a rapid rise in labor extraction / conventional productivity. I could not disagree more strongly with this opinion, as I find this definition utterly defies intuition. It rejects many, if not most, changes in scientific understanding that do not directly induce a discontinuity in labor extraction. But admittedly if one restricts the definition of a breakthrough in this way, then, well, you're probably about right. (Though I don't see what Mars has to do with labor extraction.)

          • threethirtytwo 2 days ago ago

            That’s only one dimension. The step function is multidimensional. My critique is more about the Euclidean distance between the initial point and the end point.

            To which AI is the only technology that has enough distance to be classified as a “breakthrough”.

        • layer8 2 days ago ago

          > If someone is pessimistic but his description of the world matches REALITY, then there's nothing wrong with his view point.

          A description that matches reality is realist, not pessimist.

          • threethirtytwo 2 days ago ago

            Technically this is true. Practically speaking most realists are perceived to be pessimists. There are tons of scientific studies to back this up as well. People who are judged to be pessimistic experimentally have more accurate perceptions of the real world.

            This means that most people who you would term as "realists" are likely optimists and not realists at all.

    • j-krieger 2 days ago ago

      The additional irony here is that LLMs are a tool that is likely forever damned to regurgitate knowledge of the past, with the inability to derive new information.

      • kingstnap 2 days ago ago

        It depends on what you mean, specifically on your distance metric.

        If you mean nearest neighbours search like autocorrect then LLMs are extrapolative.

        You can easily generate combinations not seen before. I mean you can prove this with parametric prompting.

        Like "Generate a poem about {noun} in {place} in {language}" or whatever. This is a simplistic example but it doesn't take much to come up with a space that has quadrillion of possibilities. Then if you randomly sample 10 and they all seem to be "right" then you have proven it's not pure neighbour recall.

        The same is true of the image generators. You can prove it's not memorizing because you can generate random variants and show that the number of realizable images is more than the training data could possibly contain.

        If you mean on the underlying manifold of language and ideas, it's definitely interpolation, which is fundamentally a limitation of what can be done using data alone. But I know this can be expanded over iteration (I have done experiments related to this). The trick to expanding it is actually running experiments/simulations on values at the boundary of the manifold. You have to run experiments on the unknown.

        • threethirtytwo 2 days ago ago

          It is interpolation, but that is what human thinking is as well. Interpolation is so broad it can cover AGI conceptually.

          But I get it, the interpolation you’re talking about is limited. But I think you missed this insight: human interpolation is limited too. In the short term everything we do is simply recombination of ideas as you put it.

          But that’s the short term. In the long term we do things that are much greater. But I think this is just an aggregation of small changes. Change the words in a poem 5000 times; have the LLM do the same task 5000 times. Let it pick a random word. The result is wholly original. And I think in the end this is what human cognition is as well.

          • kingstnap a day ago ago

            That a chatbot is exactly and only a short-term recombination of existing ideas is exactly my point.

            Even if an LLM came up with a theory of quantum gravity in some random chain of thought via chance, once the context is wiped everything is gone.

            Expanding the frontier of knowledge (true extrapolation) requires iteration and layering of simpler ideas. If you lose the layers and have to start from scratch every time, then you fundamentally will never move further out than what you already know (the interpolation).

            • threethirtytwo a day ago ago

              >A chatbot is exactly and only a short term recombination of existing ideas is exactly my point.

              You missed my point. I'm saying humans have finite context windows as well.

              Look at how Claude keeps passing its context window down the chain. It creates a summary. It can spend thousands of tokens to coalesce on a conclusion, and only that conclusion needs to be passed on to the next context window. The research can be tossed. That's how human discovery works. We don't need the whole context window; we produce major discoveries because we pass the conclusion down the chain.

              LLMs can do it too. We just never fully tried it.

      • threethirtytwo 2 days ago ago

        This is not true at all. Just query any LLM and ask it for new information. Literally ask it to create something that doesn't exist.

        It will give it to you.

    • 8note 2 days ago ago

      famous last words before quantum physics hit

  • bsder 3 days ago ago

    Theoretical physics progresses via the anomalies it can't explain.

    The problem is that we've mostly explained everything we have easy access to. We simply don't have that many anomalies left. Theoretical physicists were both happy and disappointed that the LHC simply verified everything: theories were correct, but there weren't really any pointers to where to go next.

    Quantum gravity seems to be the big one, but that is not something we can penetrate easily. LIGO just came online, and could only really detect enormous events (like black hole mergers).

    And while we don't always understand what things do as we scale up or in the aggregate, that doesn't require new physics to explain.

    • beezle 3 days ago ago

      Please do not conflate the broad "theoretical physics" with the very specific "beyond the standard model" physics questions. There are many other areas of physics with countless unsolved problems/mysteries.

      • bsder 3 days ago ago

        Sure, there are things like "Really, how do superconductors work?", but nobody (mostly) believes that understanding things like that requires "new physics".

        And, I think, most people would place that kind of stuff under "solid state physics" anyway.

        • squeefers 2 days ago ago

          oh i don't know, being able to predict the path of a particle seems pretty basic to me, and it cannot be done for any given particle.

    • mhandley 3 days ago ago

      Neutrino mass is another anomaly, which is at least slightly easier to probe than quantum gravity: https://cerncourier.com/a/the-neutrino-mass-puzzle/

  • benreesman 3 days ago ago

    It is almost always the case that when progress stops for some meaningful period of time, a parochial taboo needs violating to move forwards.

    The best known example is the pre- and post-Copernican conceptions of our relationship to the sun. But long before and ever since: if you show me physics with its wheels slipping in mud I'll show you a culture not yet ready for a new frame.

    We are so very attached to the notions of a unique and continuous identity observed by a physically real consciousness observing an unambiguous arrow of time.

    Causality. That's what you give up next.

    • fatbird 3 days ago ago

      This is a common framing of the Copernican revolution, and it's wrong.

      Copernicus was proposing circular orbits with the sun at the center instead of the earth. The Copernican model required more epicycles for accurate predictions than the considerably well-proven Ptolemaic model did, with the earth at the centre.

      It wasn't until Kepler came along and proposed elliptical orbits that a heliocentric solar system was obviously a genuine advance on the model, both simpler and more accurate.

      There was no taboo being preserved by rejecting Copernicus's model. The thinkers of the day rightfully saw a conceptual shift with no apparent advantage and several additional costs.

      • kubanczyk 2 days ago ago

        > The thinkers of the day rightfully saw a conceptual shift with no apparent advantage and several additional costs.

        I'm holding a big fat Citation Needed banner. Seemingly none of these "thinkers of the day" took it far enough to write down the thoughts.

        While at it, were the "thinkers of the day" fond of the idea of Ptolemy's equant?

    • gary_0 3 days ago ago

      I'm pretty sure quantum mechanics already forgoes conventional causality. Attosecond interactions take place in such narrow slices of time that the uncertainty principle turns everything into a blur where events can't be described linearly. In other words, the math sometimes requires that effect precedes cause. As far as we can tell, causality and conservation of energy are only preserved on a macroscopic scale. (IANAQP, but I'm going off my recollections of books by people who are.)
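A rough sense of the scale behind the "blur" claim comes from the energy-time uncertainty relation ΔE·Δt ≥ ħ/2; the one-attosecond window below is an illustrative choice, the constants are standard.

```python
hbar = 1.054571817e-34  # reduced Planck constant, J*s
eV = 1.602176634e-19    # joules per electronvolt
dt = 1e-18              # a one-attosecond time window, s (illustrative)

# Energy-time uncertainty: dE >= hbar / (2 * dt)
dE_joules = hbar / (2 * dt)
dE_eV = dE_joules / eV
print(round(dE_eV))  # ~329 eV of irreducible energy spread
```

Hundreds of eV of spread is enormous on atomic scales (hydrogen's binding energy is 13.6 eV), which is why events in such slices resist a neat linear ordering.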

    • raincole 3 days ago ago

      It's easy to give up existing concepts. It's called being a crackpot and you can find thousands of papers doing that online.

      • kubanczyk 2 days ago ago

        Yes. But crackpots are still vital.

        Let me put it this way. Once upon a time people didn't know about solar eclipses. But then a day came when a certain somebody was instantly promoted to Lead Staff Senior Astronomer, just because they predicted to the hour that the sun was going to disappear.

        Well, but think about the field just one day before that:

        - maybe 10 theories that said "it's just a reformulation/refactoring, nothing to see here, all business as usual, no new predictions, very safe for the author",

        - maybe 100 crackpot theories. Undoubtedly, unashamedly crackpot, with wild predictions all over. Of which 99% were in fact pure trash, so, retrospectively, people were rightfully considering them trash. Yet 1 was the key to progress.

      • indymike 3 days ago ago

        I'm not sure the crackpot is what we're talking about here. We're talking about something that violates the prevailing opinion in a way that can be verified, and results in a change in what we know to be true. The crackpot is mostly the result of a very aspirational worldview, and usually under the hood has bias and error that is often quite obvious.

    • mastermage 3 days ago ago

      the fuck do you mean, giving up causality?

  • ggm 3 days ago ago

    I am sure others will say it better, but the cat-in-the-box experiment is a shockingly bad metaphor for the idea behind quantum states and observer effect.

    I will commit the first sin by declaring, without fear of contradiction, that the cat actually IS either alive or dead. It is not in a superposition of states. What is unknown is our knowledge of the state, and what collapses is that uncertainty.

    If you shift this to the particle, not the cat, what changes? Because if very much changes, my first comment about the unsuitability of the metaphor is upheld, and if very little changes, my comment has been disproven.

    It would be clear I am neither a physicist nor a logician.

    • plomme 3 days ago ago

      Well, you are in luck, because that was the point of Schrödinger's cat; it was constructed to show the impossibly odd implications of quantum mechanics.

      From the wikipedia page: “This thought experiment was devised by physicist Erwin Schrödinger in 1935 in a discussion with Albert Einstein to illustrate what Schrödinger saw as the problems of Niels Bohr and Werner Heisenberg's philosophical views on quantum mechanics.”

    • BalinKing 3 days ago ago

      There are various theories about what's actually happening in quantum mechanics. Some theories have hidden variables, in which case the issue is simply one of measurement (i.e. there really is an "objectively correct" value, but it only looks to us like there isn't).[0] However, this is not known to be the case, and many theories really do claim that position and momentum fundamentally cannot both be well-defined at once. (The "default" Copenhagen interpretation is in the latter camp; AFAIK it's convenient in practice, and as a result it's implicitly assumed in introductory QM classes.)

      [0] Well, and the hidden variables are non-local, which is a whole 'nother can of highly non-intuitive worms.

      • ggm 3 days ago ago

        I'm not qualified to say. But, because of inductive reasoning, I have some concern that underneath the next level of "oooh we found the hidden variable" will be a Feynman moment of saying "yea, thats defined by the as-yet unproven hidden-hidden variables, about which much conjecture is being made but no objective evidence exists, but if you fund this very large machine...."

    • codethief 19 hours ago ago

      > What is unknown is our knowledge of the state, and what collapses is that uncertainty.

      Unless you believe in a hidden-variables theory, this is provably false, though, see Bertlmann's Socks etc.
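The standard arithmetic behind "provably false" is the CHSH inequality: any local account in which the outcome was secretly fixed all along caps a certain correlation sum at 2, while quantum mechanics predicts 2√2. A minimal sketch, using the textbook singlet correlation and angle choices:

```python
import math

# Quantum correlation for a singlet pair measured at angles a, b
def E(a, b):
    return -math.cos(a - b)

# Textbook CHSH measurement angles (radians)
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

# Local hidden-variable ("we just didn't know the state yet") models
# require |S| <= 2; the quantum prediction exceeds that bound.
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ~ 2.828
```

Experiments measure values near 2.83, which is what rules out reading the collapse as mere removal of ignorance about a preexisting fact.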

    • sliken 3 days ago ago

      Along similar lines, the double-slit experiment, seems simple. Two slits let light though and you get bands where they constructively or destructively interfere, just like waves.

      However, I still find it crazy that when you slow down the laser and one photon at a time goes through either slit, you still get the bands. Which raises the question: what exactly is it constructively or destructively interfering with?

      Still seems like there's much to be learned about the quantum world, gravity, and things like dark energy vs MOND.
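The one-at-a-time buildup can be mimicked with a toy Monte Carlo. This is not a physical simulation of a single photon; it just samples landing positions from the standard two-slit intensity I(x) ∝ cos²(πdx/(λL)), with all the apparatus numbers chosen for illustration.

```python
import math, random

d, lam, L = 1e-4, 5e-7, 1.0  # slit spacing, wavelength, screen distance (m), illustrative

def sample_photon(rng):
    # Rejection-sample one landing position on a 1 cm wide screen from
    # the two-slit intensity I(x) ~ cos^2(pi * d * x / (lam * L)).
    while True:
        x = rng.uniform(-0.005, 0.005)
        if rng.random() < math.cos(math.pi * d * x / (lam * L)) ** 2:
            return x

rng = random.Random(0)
hits = [sample_photon(rng) for _ in range(5000)]

# Histogram the arrivals: bright and dark bands emerge even though
# every "photon" landed one at a time, independently of the others.
bins = [0] * 20
for x in hits:
    bins[min(19, int((x + 0.005) / 0.0005))] += 1
print(bins)
```

Each sample is independent, yet the histogram shows fringes, which is exactly why the "what is it interfering with?" question bites: the pattern is in the per-photon probability distribution, not in photon-photon interaction.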

      • ggm 3 days ago ago

        I had a conversation about this on HN some months back. It's a surprisingly modern experiment: it demanded the ability to reliably emit single photons. Young's theory may be from 1800, but single-photon emission is from the 1970s-80s.

        (This is what I was told while exploring my belief that it had always been fringes in streams of photons, not fringes emerging over repeated applications of single photons, and I was wrong.)

        • lefra 3 days ago ago

          To get single photons, you just need to stack up enough stained glass in front of a light source. That's been achievable for aeons (though the photon will go through at a random time).

          The difficult part is single photon _detectors_, they're the key technology to explore the single-photon version of Young's experiment (which originally showed that light has wave-like properties).

      • jasonwatkinspdx 3 days ago ago

        The simplest answer here is that "fields are real; particles are excitation patterns of fields." And that's generally the practical way most physicists think of it today, as I understand it.

        If I make the equivalent of a double slit experiment in a swimming pool, then generate a vortex that propagates towards my plywood slits or whatever, it's not really surprising that the extended volume of the vortex interacts with both slots even though it looks like a singular "particle."

        • el_nahual 3 days ago ago

          And yet if you place a detector at the slits to know which slit the single photon goes through, you get no interference pattern at the end.

      • squeefers 2 days ago ago

        > However I still find it crazy that when you slow down the laser and one photon at a time goes through either slit you still get the bands.

        why does nobody mention the fact that the photon doesn't keep going through the same hole? like why is it randomly moving through the air in this Brownian way? the laser gun doesn't move, the slit doesn't move, so why do different photons end up going through different holes?

  • GMoromisato 3 days ago ago

    The use of "AI" in particle physics is not new. In 1999 they were using neural nets to compute various results. Here's one from Measurement of the top quark pair production cross section in p¯p collisions using multijet final states [https://repository.ias.ac.in/36977/1/36977.pdf]

    "The analysis has been optimized using neural networks to achieve the smallest expected fractional uncertainty on the t¯t production cross section"

    • BrandoElFollito 2 days ago ago

      I did my PhD in physics using NNs back in 1997. The field was not thriving yet, but it was quite advanced already.

      I remember I used a library (THE library) from a German university which was all the rage at that time.

    • jdshaffer 3 days ago ago

      I remember back in 1995 or so being in a professor's office at Indiana University and he was talking about trying to figure out how to use neural networks to automatically track particle trails in bubble chamber results. He was part of a project at CERN at the time. So, yeah, they've been using NNs for quite a while. :-)

      • elashri 3 days ago ago

        Particle identification using NN classifiers was actually one of the early success stories of NNs. These are pretty standard algorithms in tracking and trigger software in HEP experiments now. There are even standard tools in the field to help you train your own.

        What is more interesting currently is things like anomaly detection using ML/NNs, foundation models, etc.

  • beezle 3 days ago ago

    I never liked that the physics community shifted from 'high energy' particle physics (the topic of the article) to referring to this branch as just 'particle physics' which I think leaves the impression that anything to do with 'particles' is now a dead end.

    Nuclear physics (ie, low/medium energy physics) covers diverse topics, many with real world application - yet travels with a lot of the same particles (ie, quarks, gluons). Because it is so diverse, it is not dead/dying in the way HEP is today.

    • grebc 3 days ago ago

      What’s the saying… if your only tool is a collider?

  • davidw 3 days ago ago

    It's impossible to tell without opening the box the particle physics is in.

  • jahnu 3 days ago ago

    I find the arguments from those who say there is no crisis convincing. Progress doesn't happen at a constant rate. We made incredible, unprecedented progress in the 20th century. The most likely scenario is for that to slow down for a while. Perhaps for hundreds of years again! Nobody can know. We are still making enormous strides compared to most of scientific history.

    • Insanity 3 days ago ago

      Although we do have many more people working on these problems now than at any time in the past. That said, science progresses one dead scientist at a time, so it might still take generations for a new golden era.

  • mhandley 3 days ago ago

    One interesting gap in the standard model is why neutrinos have mass: https://cerncourier.com/a/the-neutrino-mass-puzzle/

  • ktallett 3 days ago ago

    Is it more that even the most dedicated and passionate researchers have to frame their interests in a way that will get funding? Particle physics is not the thing those with the cash will fund right now. AI and QC are the focus.

    • Legend2440 3 days ago ago

      Well, it's hard to make an argument for a $100 billion collider when your $10 billion collider didn't find anything revolutionary.

      Scaling up particle colliders has arguably hit diminishing returns.

  • mastermage 3 days ago ago

    It's probably just very hard, in my opinion as a physicist.

  • KolibriFly 2 days ago ago

    This feels less like a story about particle physics "failing" and more like a story about a field running out of easy leverage

    • davrosthedalek 2 days ago ago

      Exactly. The field has been a tick-tock between times of discovery and times of precision. We are now just swinging back from a discovery period. The next machine will first be a precision machine and then be upgraded to be a discovery machine again.

  • aatd86 3 days ago ago

    Isn't it the mathematics that is lagging? Amplituhedron? Higher dimensional models?

    Fun fact: I got to read the thesis of one my uncles who was a young professor back in the 90's. Right when they were discovering bosons. They were already modelling them as tensors back then. And probably multilinear transformations.

    Now that I am grown I can understand a little more, I was about 10 years old back then. I had no idea he was studying and teaching the state of the art. xD

    • elzbardico 3 days ago ago

      Tensors are pretty old in physics; they are a central concept in Einstein's General Relativity.

      You can find tensors even in some niche stuff in macroeconomics.

    • ecshafer 3 days ago ago

      Tensors are like 200 years old in mathematics. Gauss talked about Tensors.

      • aatd86 3 days ago ago

        What was new was not tensors. It was the representation in SU of mesons for photon-photon collisions. But even saying that is skimming the surface. I can't read beyond the knowledge gap.

        • aatd86 2 days ago ago

          SO(3)*, not SU

  • Rury 3 days ago ago

    It's just hard. I mean... it could very well be that there are so many deeper layers underneath what we know in particle physics, but from our scale it's also so infeasible to build something to analyze and decompose the nuanced behavior happening at that level that it's practically impossible to do so. Just like it is impossible to split an atom with your bare hands...

  • kachapopopow 2 days ago ago

    None of the comments seem to mention that it's also really really really really really expensive.

    • layer8 2 days ago ago

      That depends on what you compare it to.

      • kdavis 2 days ago ago

        Almost anything, e.g. the next-generation accelerator[1] at CERN is about 15B CHF, which is about 20B USD.

        [1] https://home.cern/science/accelerators/future-circular-colli...

        • padjo 2 days ago ago

          So roughly half the annual budget of ICE?

        • bloggie 2 days ago ago

          About a third of Madoff's fund, a little over a month of Google AI spend, about nine months of Ozempic sales. Fun!

      • kachapopopow 2 days ago ago

        "quite literally everything else", well you got iter etc but those are on the same spectrum. It's rare that military projects in the US get that kind of funding.

  • JackFr 2 days ago ago

    Maybe a dumb question here, but how would they discover a dark matter particle, if dark matter is basically invisible to us except for its gravitational effects?

  • sprash 2 days ago ago

    It is obviously not dead, but it should be dead: almost all of the technical and economic progress made in the last century was achieved with macroscopic quantum effects. Particle physics spends a lot of energy and material resources to measure microscopic effects. The priorities are essentially inverted. At this point it is not even about discovery; experiments are relegated to precision measurements. What practical use will it be if we know the mass/charge distribution/polarizability of some particles more precisely by a few percent? About none.

  • nephihaha 2 days ago ago

    When the model appears to have massive problems, maybe it's time to go back and revise it.

    • squeefers 2 days ago ago

      or if you're Michio Kaku, just parrot it on low-grade TV shows and public appearances, because it's easier to gain notoriety than to do science

      • nephihaha 2 days ago ago

        Yes, that too LOL. He really is a grifter. But he is entertaining on TV and has that eccentric professor look about him.

        Heinz Wolff used to fill a similar role on British TV.

  • tariky 3 days ago ago

    To my uneducated eye it looks like they have been stuck in limbo for 120 years. Nothing practical has been created based on those theories. It is just words and calculations spinning in circles.

    I wish those people would focus on practical, real-world physics, so we all can enjoy new innovations.

    • WantonQuantum 3 days ago ago

      The device you used to make this comment relies heavily on quantum effects to make efficient transistors. The necessary theoretical understanding of semiconductors did not exist 120 years ago.

    • potamic 3 days ago ago

      You're right. If you were educated, you would have learnt about the numerous applications of particle physics in modern technologies.

    • jacquesm 3 days ago ago

      > Nothing practical has been create based on those theories.

      Ever used GPS?

      A CD player?

      A laser?

      Semiconductors?

      • gary_0 3 days ago ago

        Einstein laid the theoretical foundations for lasers in 1917, and it took over 40 years of "impractical" scientific work before the first functioning laser was built. It took decades more for them to become a cheap, ubiquitous technological building-block. The research is still continuing, and there's no reason to assume it will stop eventually bearing fruit (for the societies that haven't decimated their scientific workforce, anyways). Look at the insanity required to design and build the EUV lasers in ASML's machines, which were used to fabricate the CPU I'm using right now, over a century after Einstein first scribbled down those obscure equations!

        • jacquesm 2 days ago ago

          I sincerely wonder how someone that is unaware of any of this finds their way onto HN, but at the same time it is an educational opportunity. 'nothing practical' indeed...

        • davrosthedalek 2 days ago ago

          In addition, lasers were long believed to be a scientific novelty without any real world use.

    • padjo 2 days ago ago

      You should probably invest in your education so.

  • Razengan 3 days ago ago

    Maybe this is all we can learn from home and we need to get out more.

  • tehjoker 3 days ago ago

    It's kind of legitimate, but it's kind of sad to see some of the smartest people in society just being like "maybe AI will just give me the answer," a phrase that has a lot of potential to be thought terminating.

    • emmelaich 3 days ago ago

      That's mentioned in the article too:

      >Cari Cesarotti, a postdoctoral fellow in the theory group at CERN, is skeptical about that future. She notices chatbots’ mistakes, and how they’ve become too much of a crutch for physics students. “AI is making people worse at physics,” she said.

      • yalok 3 days ago ago

        this. Deep understanding of physics involves building a mental model & intuition how things work, and the process of building is what gives the skill to deduce & predict. Using AI to just get to the answers directly prevents building that "muscle" strength...

      • gowld 3 days ago ago

        AI chatbots are also making people better at physics, by answering questions the textbook doesn't or the professor can't explain clearly, patiently. Critical thinking skills are critical. Students cheating with chatbots might not have put in the effort to learn without chatbots.

    • 0x3f 3 days ago ago

      I'm quite happy that it might give me, with pre-existing skills, more time on the clock to stay relevant.

  • gowld 3 days ago ago

    Information content of the article:

    The discovery of the Higgs boson in 2012 completed the Standard Model of particle physics, but the field has since faced a "crisis" due to the lack of new discoveries. The Large Hadron Collider (LHC) has not found any particles or forces beyond the Standard Model, defying theoretical expectations that additional particles would appear to solve the "hierarchy problem"—the unnatural gap between the Higgs mass and the Planck scale. This absence of new physics challenged the "naturalness" argument that had long guided the field.

    In 2012, physicist Adam Falkowski predicted the field would undergo a slow decay without new discoveries. Reviewing the state of the field in 2026, he maintains that experimental particle physics is indeed dying, citing a "brain drain" where talented postdocs are leaving the field for jobs in AI and data science. However, the LHC remains operational and is expected to run for at least another decade.

    Artificial intelligence is now being integrated into the field to improve data handling. AI pattern recognizers are classifying collision debris more accurately than human-written algorithms, allowing for more precise measurements of "scattering amplitude" or interaction probabilities. Some physicists, like Matt Strassler, argue that new physics might not lie at higher energies but could be hidden in "unexplored territory" at lower energies, such as unstable dark matter particles that decay into muon-antimuon pairs.

    CERN physicists have proposed a Future Circular Collider (FCC), a 91-kilometer tunnel that would triple the circumference of the LHC. The plan involves first colliding electrons to measure scattering amplitudes precisely, followed by proton collisions at energies roughly seven times higher than the LHC later in the century. Formal approval and funding for this project are not expected before 2028.

    Meanwhile, U.S. physicists are pursuing a muon collider. Muons are elementary particles like electrons but are 200 times heavier, allowing for high-energy, clean collisions. The challenge is that muons are highly unstable and decay in microseconds, requiring rapid acceleration. A June 2025 national report endorsed the program, which is estimated to take about 30 years to develop and cost between $10 and $20 billion.
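    The instability constraint is easy to quantify with a back-of-envelope calculation (a rough sketch; the beam energies below are illustrative, not from the article). A muon at rest lives about 2.2 microseconds, but time dilation stretches its mean lab-frame decay length to roughly gamma * c * tau:

    ```python
    TAU_MUON = 2.197e-6   # muon proper lifetime, seconds (PDG value)
    M_MUON = 0.10566      # muon mass, GeV/c^2 (PDG value)
    C = 2.998e8           # speed of light, m/s

    def lab_decay_length(energy_gev: float) -> float:
        """Mean decay length in meters for a muon of the given total energy,
        in the ultrarelativistic limit (beta ~ 1)."""
        gamma = energy_gev / M_MUON   # Lorentz factor
        return gamma * C * TAU_MUON

    for e in (10.0, 100.0, 1000.0):  # beam energies in GeV
        print(f"{e:7.0f} GeV -> mean decay length ~ {lab_decay_length(e) / 1000:.0f} km")
    ```

    At collider energies a muon survives for tens to thousands of kilometers of travel, but getting it there means cooling and accelerating the bunch within a few proper lifetimes, which is the hard engineering problem the program has to solve.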

    China has reportedly moved away from plans to build a massive supercollider. Instead, they are favoring a cheaper experiment costing hundreds of millions of dollars—a "super-tau-charm facility"—designed to produce tau particles and charm quarks at lower energies.

    On the theoretical side, some researchers have shifted to "amplitudeology," the abstract mathematical study of scattering amplitudes, in hopes of reformulating particle physics equations to connect with quantum gravity. Additionally, Jared Kaplan, a former physicist and co-founder of the AI company Anthropic, suggests that AI progress is outpacing scientific experimentation, positing that future colliders or theoretical breakthroughs might eventually be designed or discovered by AI rather than humans.

  • stefantalpalaru 2 days ago ago

    [dead]

  • AIorNot 3 days ago ago

    Curious what everyone thinks about this physicists idea

    - the universe as a Neural Network (yes yes moving the universe model paradigm from the old Clockwork to machine to computer to neural network)

    I found it interesting and speculative but also fascinating

    See video here:

    https://youtu.be/73IdQGgfxas?si=PKyTP8ElWNr87prG

    AI summary of the video:

    This video discusses Professor Vitaly Vanchurin's theory that the universe is literally a neural network, where learning dynamics are the fundamental physics (0:24). This concept goes beyond simply using neural networks to model physical phenomena; instead, it posits that the universe's own learning process gives rise to physical laws (0:46).

    Key takeaways from the discussion include:

    • The Universe as a Neural Network (0:00-0:57): Vanchurin emphasizes that he is proposing this as a promising model for describing the universe, rather than a definitive statement of its ontological nature (2:48). The core idea is that the learning dynamics, which are typically used to optimize functions in machine learning, are the fundamental physics of the cosmos (6:20).

    • Deriving Fundamental Field Equations (21:17-22:01): The theory suggests that well-known physics equations, such as Einstein's field equations and the Dirac and Klein-Gordon equations, emerge from the learning process of this neural-network universe.

    • Fermions and Particle Emergence (28:47-32:15): The conversation delves into how particles like fermions could emerge within this framework, with the idea that network configurations useful for learning survive, similar to natural selection.

    • Emergent Quantum Mechanics (44:53-49:31): The video explores how quantum behaviors, including the Schrödinger equation, could emerge from the two distinct dynamics within the system: activation and learning. This requires the system to have access to a "bath" or "reservoir" of neurons.

    • Natural Selection at the Subatomic Scale (1:05:10-1:07:34): Vanchurin suggests that natural selection operates on subatomic particles, where configurations that are more useful for minimizing the loss function (i.e., for efficient learning) survive and those that are not are removed.

    • Consciousness and Observers (1:15:40-1:24:09): The theory integrates the concept of observers into physics, proposing a three-way unification of quantum mechanics, general relativity, and observers. Consciousness is viewed as a measure of learning efficiency within a subsystem (1:30:38).

  • albatross79 3 days ago ago

    Why are we even trying to look deeper? To fit our mathematical curves better? Abstract spacetime, fields, virtual particles, wave function collapse, quantized energy, wave-particle duality, etc. This is all BS. And I'm not disputing the theories or the experimental results. These concepts are unintelligible. They are self-contradictory. They are not even abstractions; they are mutually exclusive paradigms forced together into a bewilderment. I'm not disputing that the math fits the observations. But these are not explanations. If this is what it's come to, all we can expect from here on is to better fit the math to the observation. And in the end, an equation that tells us nothing about what we really wanted to know, like "what is it, really?" Nobody is going to be satisfied with an equation, so why are we still funding this enterprise? For better lasers to kill bad guys?

    • drdeca 3 days ago ago

      The universe is not obligated to appeal to your aesthetic tastes in its innermost functioning.

      Maybe you aren’t going to be satisfied with the sort of complicated mathematics which appears to be correct (or, on the right track).

      If you have complaints about the aesthetics of how the universe works, take it up with God.

      Personally, I think there is a lot of beauty to be found in it.

      I’ll admit that there are a few parts that go against my tastes (I don’t like needing to resort to distributions instead of proper functions), but that’s probably just intellectual laziness on my part.

      • squeefers 2 days ago ago

        > The universe is not obligated to appeal to your aesthetic tastes in its innermost functioning.

        This is truly a copout. When science falters in explaining the world, we get answers like this. His argument isn't with the universe, but with our own scientific theories. If you don't want your theories about the physical world to explain the physical world, then be an engineer. Science explains the world; engineers use those theories. QM has large gaps and doesn't actually explain much, but I guess the universe doesn't care whether our theories are wildly off the mark or not.

      • albatross79 2 days ago ago

        It's not a matter of taste. This is like going to a restaurant, expecting a delicious meal, and being brought a dish with a fancy name made out of the actual menu itself. Would anyone go back there to eat?

    • WantonQuantum 3 days ago ago

      I find quite a lot of it very satisfying. For example, the deep mathematical symmetries of gauge theory and how they relate to the observed forces of the universe is truly amazing.

      The excellent Arvin Ash has a very accessible video about it: https://www.youtube.com/watch?v=paQLJKtiAEE

      • squeefers 2 days ago ago

        Maybe that's the problem. Satisfaction isn't understanding. String theory is exciting maths, but fits nothing in reality. Maybe scientists should go back to explaining reality instead of whatever this current paradigm is.

        • drdeca 2 days ago ago

          Your conception of an “explanation of reality” is deeply flawed.

          • squeefers 2 days ago ago

            You can correctly predict reality whilst having absolutely no idea how it works (e.g., the path of a photon in the double-slit experiment).
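            As a toy illustration of that point (unit amplitudes assumed; purely a sketch, not a real optics model): the quantum prediction comes from adding complex amplitudes for the two paths, and that sum contains no fact about which path was taken.

            ```python
            import cmath

            def quantum_intensity(phase: float) -> float:
                """Detection probability from summing the two path amplitudes."""
                a1 = 1.0 + 0j                  # amplitude via slit 1 (unit, for illustration)
                a2 = cmath.exp(1j * phase)     # amplitude via slit 2, shifted by path difference
                return abs(a1 + a2) ** 2       # the cross term here produces the fringes

            def classical_intensity() -> float:
                """What adding probabilities (i.e., a definite path) would predict."""
                return abs(1.0 + 0j) ** 2 + abs(1.0 + 0j) ** 2  # always 2, no fringes

            print(quantum_intensity(0.0))        # constructive fringe: 4.0
            print(quantum_intensity(cmath.pi))   # dark fringe: ~0.0
            print(classical_intensity())         # flat 2.0 everywhere
            ```

            The formalism predicts the fringe pattern exactly, yet "which slit did the photon go through?" has no answer inside it.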

            • drdeca 2 days ago ago

              Sometimes nature tells us that the questions we are inclined to ask, are flawed questions.

              The “What path did the photon take?” question is one of those times. The answer to the question is Mu.

              Similar to the questions “How much phlogiston is there in iron?” or “Does sulphur have more earth than air, or more air than earth?”.

              • albatross79 2 days ago ago

                But the question is "what is the universe made of?", and the answer given is "mathematical abstractions that fit the data".

                • drdeca 21 hours ago ago

                  Asking what it “is made of” seems like a somewhat ambiguous question to me. Still, the answer would not be “mathematical abstractions that fit the data”, but “these mathematical abstractions”. (And, there is a lot of meaning behind these “abstractions”. For example, there is a close correspondence between the Higgs mechanism for mass and superconductivity.)

                  Really, what possible answer could you ask for that wouldn’t be of this form?

                  When you describe an idea sufficiently precisely, you do mathematics; that’s almost what mathematics is.

                  It feels to me like complaints like yours tend to derive from an unwillingness to believe that things aren’t at their core made of solid objects or fluids or other stuff which behaves like macroscopic objects we have everyday experience with.

                  Can you describe an explanation that wouldn’t be like that but which (if it were true) you would find satisfying?

                  If you can’t describe how an explanation could (if it were true) satisfy you without being like that, then, if the universe isn’t like that, you have to be disappointed. And, in that case, again, I have to say, take it up with God.

                  On the other hand, if you can describe how an explanation (if it were true) could possibly satisfy you without saying “at its core, the universe works based on [behavior that you have plenty of physical intuition for based on your everyday interactions with macroscopic stuff]”, I would very much like to hear it.

                  • albatross79 18 hours ago ago

                    I think probably in the past what one might have expected to find is akin to a magical material that couldn't be further probed. That would have been satisfying in a sense, because it brings a wonder back into it while connecting you to the fundamental "thing".

                    What we have now is not that; it's still very much a mechanistic explanation where the "magic" is hidden within abstractions that make no sense to anyone, i.e. abstract fields with properties but no material reality, instantaneous wave function "collapse", wave-particle duality, virtual particles, etc. The reality of these things is glossed over.

                    But my point is that if that's what we've been driven to, why are we still engaged in this enterprise? We're just receding further into these abstractions. What are we going to find next year or next decade? A better mathematical model to fit the data? The mission has gone from finding out what the universe is made of to finding a better abstract model. Particles aren't real, they're excitations in a field, etc. It's an engineering enterprise now. So we're not going to get a satisfying answer; we're just going to get better lasers or whatever the next tech is.

                • squeefers a day ago ago

                  exactly. i yearn for more.

  • meindnoch 3 days ago ago

    Maybe it's time for physicists to switch to agile? Don't try to solve the theory of the Universe at once; that's the waterfall model. Try to come up with just a single new equation each sprint!