49 comments

  • mrighele a day ago ago

    Not everybody plays FPS games (as in First Person Shooter) and needs to squeeze every ms of latency.

    For those, some "free FPS" (as in Frames Per Second) is a good thing.

    edit: clarified the two FPS

  • wmf 2 days ago ago

    There's nothing I want less than multi-frame generation. I guess some people want to feel like they're getting their money's worth from their 240 Hz monitors.

    • boyter 2 days ago ago

      If you have a high frame rate to start with it’s pretty nice and feels smoother. But a low frame rate turned into a high one looks good but feels laggy.

      So arguably you never need frame gen for a game, since it only really works when it’s already pretty nice.

      • out_of_protocol 2 days ago ago

        FPS gets increased but latency doesn't improve, and that's what's important.

      • ece 2 days ago ago

        Gamers chased high FPS, that's what they got.

        • boyter 2 days ago ago

          Chased the wrong thing. It’s the 1% lows that matter more generally.

          • formerly_proven a day ago ago

            You will never ever get decent 1% lows in most titles; the software stack in the popular engines is architecturally fucked and can't do it. You would need a CPU that's literally 100x faster than today's top models to compile shaders on demand within a single frame without hitching. (Or maybe it's more accurate to say that there's a massive gulf between what the hardware/drivers need - compiled pipeline objects built and known ahead of time - and what game engines actually do, building pipelines on the fly, on demand, surfacing new permutations frame by frame.)

            • orbital-decay a day ago ago

              Why not compile asynchronously ahead of time?

              • formerly_proven a day ago ago

                This requires knowing what to compile, which these engines don't really do, because the necessary data is pooped out by arbitrary game logic / scripts. That's why precompiling shaders in e.g. UE5 basically relies on running the engine through a pre-recorded gameplay loop and then making a list of the shaders/PSOs used; those are then pre-compiled. Any shader not used in that loop will cause stutter. A newer UE5 technique is to have heuristics which try to guess which PSOs might be needed ahead of time.

                There's this article from Unreal on the topic: https://www.unrealengine.com/en-US/tech-blog/game-engines-an...

                If you read their proposed solutions, it's quite clear they only have patchy workarounds, and the inability to actually pre-compile the needed PSOs and avoid shader and traversal stutter is architectural. It should be noted that these engines are also stuttering on console, but it's not as noticeable since performance is generally much lower anyway.
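
                As a toy model of the record-then-precompile approach described above (made-up names, not UE5's actual API; just to illustrate why permutations the recording never saw still hitch):

                    # Record which pipeline "keys" a playthrough actually used,
                    # then compile that list up front on the next run.
                    import time

                    def compile_pso(key):
                        time.sleep(0.1)  # stand-in for a costly driver compile
                        return f"pso:{key}"

                    cache, recorded = {}, set()

                    def draw(key):
                        recorded.add(key)        # log usage for the next run
                        if key not in cache:     # cache miss at draw time = a hitch
                            cache[key] = compile_pso(key)
                        return cache[key]

                    # Next run: precompile everything the recorded session touched.
                    # Any material/permutation the recording never hit still stutters.
                    for key in recorded:
                        cache.setdefault(key, compile_pso(key))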

          • ece a day ago ago

            When getting rid of actual performance bottlenecks is too hard or costs too much, just make something up.

            XeSS is actually pretty great; I played Talos Principle 2, a UE5 game, on the Steam Deck at 800p 30fps thanks to XeSS.

    • Tyumyu a day ago ago

      You can perceive certain effects up to 1000 fps and beyond.

      Frame generation has access to motion vectors and can predict motion quite well.

      I think it's a great thing to have, what's your concern tbh?

      • rabf a day ago ago

        My concern is that it will make developers even lazier about optimising their code. What one hand giveth, the other taketh away. When has any advancement in hardware not led to the same or worse software performance a few years later? There surely must be a name for this paradox. This will not result in you getting 1000fps; you will end up with the same "acceptable" refresh rates with worse rendering through novel hacks.

    • bigyabai 2 days ago ago

      It's a great option to have. Once you reach the 2-7ms frame time territory, you're approaching the CPU bottleneck for many game engines even on the fastest hardware. For newer titles like GTA VI, framegen might be the only reliable path to 120+ FPS without pinning all of your cores.

      Framegen is also a good fit for low-end hardware like the Steam Deck, which can hit 30 or 45 FPS in stuff like Elden Ring but is far from the max 90hz of the OLED model's panel. For a handheld, trading a bit of 720p visual clarity for locked 90hz gameplay is a solid trade if you can get it working.
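
      Back-of-the-envelope, the budget looks like this (plain arithmetic, not measurements):

          # Frame budget at a display target vs. what the renderer must
          # actually hit if 2x frame generation fills in every other frame.
          def frame_time_ms(fps):
              return 1000.0 / fps

          print(frame_time_ms(120))      # ~8.3 ms per displayed frame
          print(frame_time_ms(120 / 2))  # ~16.7 ms per *rendered* frame with 2x framegen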

      • Borealid 2 days ago ago

        Would you say a game is running at 90fps if, 45 times per second, two frames are produced, the second of which is a linear interpolation of the frame before and after it?

        How about if the two frames are 100% identical?

        Does either of these situations differ substantially from what is being discussed, wherein the render pipeline can only produce a new render 45 times per second?

        • Incipient 2 days ago ago

          My understanding is that frame generation uses motion vectors to (slightly?) adjust the scene to produce a "highly plausible" next frame to drop in before the following "real" frame.

          I've only seen videos, so from a somewhat unrealistic perspective, it seems like an acceptable compromise for low end hardware in particular.

          Boosting 120hz to 240hz admittedly seems silly.
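
          To make that motion-vector idea concrete, here's a toy sketch (made-up shapes, grayscale frames; real implementations also have to handle disocclusion, shading changes, etc.):

              # Guess the next frame by pushing each pixel of the last real
              # frame along its per-pixel motion vector.
              import numpy as np

              def extrapolate(frame, motion):  # frame: HxW, motion: HxWx2 (dy, dx)
                  h, w = frame.shape
                  ys, xs = np.indices((h, w))
                  ny = np.clip(ys + motion[..., 0], 0, h - 1).astype(int)
                  nx = np.clip(xs + motion[..., 1], 0, w - 1).astype(int)
                  out = frame.copy()           # crude hole fill: keep the old pixel
                  out[ny, nx] = frame[ys, xs]  # splat pixels forward
                  return out

              frame = np.zeros((90, 160)); frame[40:50, 70:80] = 1.0
              motion = np.zeros((90, 160, 2)); motion[..., 1] = 3  # everything drifts right
              guess = extrapolate(frame, motion)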

          • Borealid a day ago ago

            My comment isn't denigrating frame generation, which can be useful.

            It's pointing out the absurdity of treating "45fps plus 1-for-1 frame generation" as if it were in any sense "90fps". It's not, and you aren't hitting a 90Hz refresh rate target any more with it than you were without it. In point of fact, it lowers real FPS because it consumes resources that would otherwise have been available to the render pipeline.

            I wish reviewers in particular would stop saying e.g. "120fps with DLSS FG enabled" and instead call out the original render rate. It makes the discourse very confusing.

          • jasomill 18 hours ago ago

            120 Hz is around the point where I'd start to consider frame generation in the first place, assuming everything else in the system is optimized for minimal latency.

            At 100 Hz or less, I've yet to experience frame generation in any form that doesn't result in unacceptably floaty input relative to the same system with framegen disabled.

        • close04 a day ago ago

          > the second of which is a linear interpolation of the frame before and after it

          If I understand what you describe, this is generating a frame "in the past", an average between 2 frames you already generated, so not very useful? If you already have frames #1 and #2, you want to guess frame #3, not generate frame #1.5.

          The higher the "real frame" rate, the smaller the differences from one frame to the next. This makes it easier to predict those differences and to "hide" a bad prediction. On the other hand, if you have 10 FPS you have to "guess" 100ms worth of changes to the frame, which is a lot to guess or hide if the algorithm gets it wrong.
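
          As rough arithmetic (a toy 1-D example, nothing engine-specific):

              # Time a generated frame has to account for, and why frame 1.5
              # (interpolation) is safe but late while frame 3 (extrapolation)
              # is on time but only a guess.
              for fps in (10, 30, 60, 120):
                  print(fps, "real fps ->", round(1000 / fps, 1), "ms between real frames")

              x1, x2 = 100.0, 110.0      # object position in real frames #1 and #2
              frame_1_5 = (x1 + x2) / 2  # interpolated: correct, but delays showing frame #2
              frame_3 = x2 + (x2 - x1)   # extrapolated: shown on time, but a guess
              print(frame_1_5, frame_3)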

          • Borealid a day ago ago

            I chose the two scenarios I did to illustrate that "frames per second" is clearly not meant to be measured in terms of times the display refreshed, but rather in terms of times content was actually rendered by the game engine.

            In my opinion it is quite difficult to provide a definition of "fps" that somehow makes 45-fps-native-with-frame-doubling be counted as 90 but doesn't also make either of the ludicrous examples I presented be counted as 90.

            • close04 a day ago ago

              I understand now, but I think any full frame that comes out of the GPU frame buffer is a frame. A real rendered frame or a generated frame using some algorithm. Even in the silly "I duplicate each frame" example, you are outputting that number of FPS. If you stand still in a game and nothing changes in the frame you're still counting all those practically identical frames.

              A measure for "FPS effectiveness" sounds interesting. Like how much detail, change, or information you can discretely convey per second relative to what the game is continuously generating.

              A Nyquist criterion of sorts. Are you just duplicating samples? Are you sampling a high-frequency signal (fast motion in the game) at a high enough rate (lots of discrete FPS)?

              • Borealid a day ago ago

                I would say the correct missing metric is similarity to what would have been rendered had the GPU kept up.

                "90fps at 95% fidelity" is a meaningful way to describe performance. AFAIK nobody measures this when discussing xess or dlss or fsr.

    • Razengan a day ago ago

      And PC gamers think only Apple rips people off :')

      • bigyabai a day ago ago

        The majority of PC gamers don't even acknowledge that Macs exist.

    • joe_mamba 2 days ago ago

      If you're on Intel integrated graphics, it's a free potential upgrade that makes use of existing silicon. I don't get the hate. Just don't turn it on if you don't want it.

      I get that people want more real frames rather than more "fake" frames, but in that case you wouldn't be buying integrated graphics, or if you did end up with an iGPU, you'd be aware of the limits and be happy about any improvements arriving via software.

      It's like people let their hate of AI and LLM bubble blind them, and their brains can't compartmentalize good from bad news anymore.

      • nodja a day ago ago

        > It's like people let their hate of AI and LLM bubble blind them, and their brains can't compartmentalize good from bad news anymore.

        DLSS is also AI and people like it.

        People don't like framegen because the manufacturers are not being honest about it and are using it for deceptive hype marketing. Anyone with a brain knows that it introduces latency and is only useful if you're already at 40+ FPS; we also know that companies will use it to pad benchmarks. NVIDIA themselves said that the 5070 had 4090 performance because it supports framegen.

        • cubefox a day ago ago

          > we also know that companies will use it to pad benchmarks.

          Unlike Nvidia, Intel explicitly doesn't use it to pad benchmarks.

        • joe_mamba a day ago ago

          XeSS is also like DLSS; it's not just frame gen.

  • user2722 a day ago ago

    What does this mean? That the Arc series is not being discontinued after all?

  • bpavuk 2 days ago ago

    I can't believe that some people are enjoying MFG, however small that group is. Me personally? I hate the cognitive dissonance of "it looks like 120 FPS yet input lag is more like 40-60 FPS". Plus, FG itself has a performance tax, which in my case means an input lag tax.

    It's input lag that defines the experience, not frame time. I am comfortable with 30 FPS (sometimes fewer frames even fit the style of the game, e.g. Dishonored 2, Clair Obscur) as long as the game responds instantaneously.

    • kingstnap 2 days ago ago

      I think frame or even multi frame generation combined with Asynchronous Reprojection / Frame Warp might be a very good idea.

      https://youtu.be/f8piCZz0p-Y?si=OLq9iZUjuRMYKPDo

      If you have never heard of it, the basic idea is that you make low FPS feel responsive in first person games by having the mouse motion warp the existing frame independently of when a new frame is actually rendered.

      This could be combined with some AI techniques to help sort out the edge artifacts you get from this.
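
      A crude sketch of the warp itself (toy numbers, a flat 2D shift; real reprojection works in camera space and has to fill in the edges it reveals):

          # Every display refresh, shift the last rendered frame by however far
          # the mouse has moved since that frame was rendered, so aiming feels
          # immediate even when rendering is slow.
          import numpy as np

          def reproject(frame, yaw_delta_deg, deg_per_pixel=0.1):
              shift_px = int(round(yaw_delta_deg / deg_per_pixel))
              return np.roll(frame, -shift_px, axis=1)  # the rolled-in edge is the artifact zone

          last_frame = np.zeros((720, 1280))
          warped = reproject(last_frame, yaw_delta_deg=2.0)  # mouse turned 2 degrees right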

      • cubefox a day ago ago

        Exactly. Meta has successfully used this technology for years in their VR headsets. It's baffling that (to my knowledge) not a single normal FPS game has adopted the technology, even several years later.

        • DemetriousJones a day ago ago

          Nvidia announced their version of this technology, called Reflex 2, around a year ago, but sadly we haven't had any news since then.

          • cubefox a day ago ago

            I'm not sure how Reflex 2 works exactly, but it doesn't improve frame rate, only latency, unlike conventional VR reprojection, which improves both (or at least camera frame rate). So apparently it's not quite the same.

    • PacificSpecific 2 days ago ago

      I remember around 2012 having discussions about how the 6-frame input lag of the Ultimate Marvel vs Capcom 3 PS3 version was making the game borderline unplayable, and that's why the PS3 version was not used at tournaments. Can't believe how far we've blasted past that benchmark.

      Completely agree, input lag is the most important thing.

    • short_sells_poo 2 days ago ago

      At 30 fps you already have 33ms between frames, so you aren't getting anything close to instant input.

      • bpavuk a day ago ago

        If my eyes see 120 FPS, my brain expects input lag to match. Dissonance causes dissatisfaction.

        • short_sells_poo a day ago ago

          That's very individual. Particularly for games where twitch reflexes are irrelevant, I much prefer smoothness to minimizing input lag.

  • KronisLV 2 days ago ago

    Overall, I'm pretty happy with my Intel Arc B580. If framegen helps me squeeze a little more life out of that card before it becomes more or less obsolete, then I'll gladly take it.

    Though, to be honest, with the amount of UE5 slop out there, I'll probably need to give an unreasonable amount of my money to Nvidia or AMD sooner or later (since many games don't exactly let you turn off Lumen and Nanite).

    It's just unfortunate that Intel themselves won't provide the much needed market competition in the form of a B770.

    • cubefox a day ago ago

      An Arc B580 should easily be enough to play most games with Lumen and Nanite at reasonable resolution and frame rate, especially if the game also supports XeSS or even XeSS 3.

      • KronisLV a day ago ago

        It’s very hit or miss.

        Something like Split Fiction is delightful, Satisfactory is satisfactory with the right settings, Incursion Red River is pushing it, STALKER 2 is barely playable and The Forever Winter is unplayable trash.

        All at 1080p by the way, on progressively lower graphics settings and recent drivers; the latter half can't get a stable 60 FPS.

        I even lowered everything and ran Forever Winter at 10% resolution scale, and it still wasn't a smooth 60, whereas others report better success on different hardware.

        It's abysmal because something like War Thunder easily gets hundreds of FPS (with RT off) and the likes of Cyberpunk and KDC also run great. It saddens me how much of a mess UE5 can be; its defaults should be way different.

  • enjoykaz 2 days ago ago

    The Steam Deck case is the clearest test. Boss fight in Elden Ring: your inputs are still 45hz, eyes see 90.

    For readable patterns it's probably fine; for reaction-window timing you're being misled.

    • izacus 2 days ago ago

      What are you being "misled" about exactly?

      • enjoykaz 2 days ago ago

        Frame gen creates frames the game engine never rendered. You see an enemy wind-up at an interpolated timestamp — react to it, and your input lands on the next real frame, up to 22ms later. At least that's my understanding of how it works — happy to be corrected.
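
        The arithmetic behind that 22ms figure (just the frame interval at the base rate):

            # At 45 real fps with 2x frame generation, reacting to something you
            # first saw on a generated frame can cost up to one real-frame interval.
            base_fps = 45
            real_frame_interval_ms = 1000 / base_fps  # ~22.2 ms between real frames
            print(round(real_frame_interval_ms, 1))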

        • a day ago ago
          [deleted]
      • jaen a day ago ago

        Don't bother engaging with the grandparent; it's an LLM-generated comment that regurgitates what bigyabai said upthread.

  • DeathArrow a day ago ago

    Does Intel even try to compete with Nvidia? Or are they content with the bottom end?

    • mdre a day ago ago

      Looks like they are acknowledging the gap and trying to offer something usable before they close it. The B60 is still 8x weaker than a 5090 in real compute, which is sad.

      • Edman274 a day ago ago

        Okay, and a B60 is also 5.25x cheaper than a 5090 in real dollars and has 75% of the VRAM, so maybe less sad? I wouldn't expect a 650 dollar card to have the same performance as a 3500 dollar card, would you?