Ryzen 9000X3D performance according to MSI

(videocardz.com)

74 points | by doener 12 hours ago ago

60 comments

  • mahmoudhossam 11 hours ago ago

    Most games aren't CPU bound anyway; it would be interesting to see compiler benchmarks or other tasks that actually stress the CPU.

    • diggan 9 hours ago ago

      > Most games aren't CPU bound anyway

      With the important disclaimer that this obviously is different for everyone. I play a lot of simulation/strategy games (like Cities: Skylines and Hearts of Iron) and most of the games I play actually are CPU bound, especially in later stages of the games.

      • lrae 9 hours ago ago

        Same for many (competitive) FPS games, some of which have quite a lot of players (Valorant, CS, CoD, all the Battle Royales, ...)

        So, while saying most games is true, it's still a good junk of players that do play CPU bound games.

        • yunohn 4 hours ago ago

          > good junk of players

          Did you mean chunk, or is junk the actual term?

          • lrae 4 hours ago ago

            they might be both, but yes, lol.

        • scotty79 9 hours ago ago

          Why are Valorant and such CPU bound?

          • rcxdude 8 hours ago ago

            Mainly because they're relatively simple graphically but they want to run at super high framerates, so the bottleneck is the CPU feeding the GPU each frame.
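
            As a rough sketch of that bottleneck (made-up numbers, nothing measured): if the CPU needs a fixed couple of milliseconds per frame for simulation plus draw-call submission, the frame-time budget at esports-style framerates gets eaten by the CPU long before the GPU runs out of headroom.

            ```cpp
            // Frame-budget arithmetic with an assumed (hypothetical) fixed CPU cost per frame.
            #include <cstdio>

            int main() {
                const double cpu_ms_per_frame = 2.5;            // assumed: sim + draw-call submission
                const int targets[] = {60, 144, 240, 360, 500}; // target framerates

                for (int fps : targets) {
                    double budget_ms = 1000.0 / fps;            // time available per frame
                    std::printf("%3d FPS -> %5.2f ms budget, CPU uses %3.0f%%%s\n",
                                fps, budget_ms, 100.0 * cpu_ms_per_frame / budget_ms,
                                cpu_ms_per_frame > budget_ms ? "  <- CPU bound" : "");
                }
            }
            ```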

            • olavgg 8 hours ago ago

              Which is basically a memory bandwidth problem, which a large L3 cache helps a lot with. I've seen the same thing with ClickHouse: a larger L3 cache and fewer CPU cores can increase performance significantly.

              • chronid 7 hours ago ago

                Doesn't a lot of optimization target making your code and data more cache friendly because memory latency (not bandwidth?) kills performance absolutely (among other things like port usage, I guess)?

                If something is in L3 it is better for CPU "utilization" than stalling and reaching out to RAM. I guess there are eventually diminishing returns with too much cache, but...
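
                A minimal sketch of that latency effect (an illustrative micro-benchmark, not a rigorous one; the 64 MiB size and the timings are arbitrary and machine-dependent): chasing dependent pointers through a working set that spills out of L3 is dominated by cache-miss latency, which is exactly what a bigger L3 postpones.

                ```cpp
                // Pointer-chase over ~64 MiB: sequential order is prefetcher-friendly,
                // a random single cycle is not, so it pays (L3 or DRAM) latency per load.
                #include <chrono>
                #include <cstdio>
                #include <numeric>
                #include <random>
                #include <utility>
                #include <vector>

                static void chase(const char* label, const std::vector<size_t>& next) {
                    auto t0 = std::chrono::steady_clock::now();
                    size_t idx = 0;
                    for (size_t i = 0; i < next.size(); ++i) idx = next[idx];  // dependent loads
                    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                                  std::chrono::steady_clock::now() - t0).count();
                    std::printf("%-10s %lld ms (ended at index %zu)\n", label, (long long)ms, idx);
                }

                int main() {
                    const size_t n = 64ull * 1024 * 1024 / sizeof(size_t);  // ~64 MiB, bigger than most L3s
                    std::vector<size_t> next(n);

                    std::iota(next.begin(), next.end(), size_t{1});         // ring: i -> i+1
                    next.back() = 0;
                    chase("sequential", next);

                    std::iota(next.begin(), next.end(), size_t{0});         // random single cycle (Sattolo)
                    std::mt19937_64 rng(42);
                    for (size_t i = n - 1; i > 0; --i) {
                        std::uniform_int_distribution<size_t> pick(0, i - 1);
                        std::swap(next[i], next[pick(rng)]);
                    }
                    chase("random", next);
                }
                ```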

            • jorvi 4 hours ago ago

              There is virtually no use (and it’s arguably detrimental due to frame pacing) in running your FPS above your refresh rate.

              Any mid+ CPU will easily pump out 240 FPS in Counter-Strike or Valorant.

              • toast0 3 hours ago ago

                I certainly don't care to do it, but running FPS above your refresh rate can reduce latency if the frame buffering policy is reasonable (or you don't mind tearing). The difference isn't very big, especially if you're at 240Hz, but getting stuff to the display one frame sooner makes a difference.

                But I've heard of triple buffering setups where you're displaying the current frame, and once you have a complete next frame, you can repeatedly render into the second-next frame, but the next frame isn't swapped out. In that case, it's hard to argue for any gain, since the next frame is already stale by the time it's swapped to current.

                Some rendering systems will do double buffering where the render is scheduled to start just in time to finish with a small margin before vBlank. If you've got that tuned just so, that's almost the same latency benefit as running unlocked, but if rendering takes too long you have a stutter.
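
                For what it's worth, a back-of-the-envelope model of that difference (idealized: it ignores scan-out time, driver queuing and input sampling details, so treat the numbers as illustrative): a well-timed vsync-locked double buffer shows a frame that started rendering one refresh interval before its vblank, while an uncapped renderer's newest complete frame is on average about 1.5 render-times old, so the gain only shows up well above the refresh rate and is a couple of milliseconds at best on a 240 Hz panel.

                ```cpp
                // Idealized "input age at vblank": vsync-locked vs. rendering uncapped on a 240 Hz panel.
                #include <cstdio>

                int main() {
                    const double hz = 240.0;                       // display refresh rate
                    const double locked_ms = 1000.0 / hz;          // frame started at the previous vblank
                    const double fps_options[] = {360, 500, 1000}; // uncapped render rates

                    for (double fps : fps_options) {
                        double uncapped_ms = 1.5 * 1000.0 / fps;   // ~1 frame to render + ~0.5 frame waiting
                        std::printf("uncapped %4.0f FPS: ~%.2f ms vs. locked ~%.2f ms\n",
                                    fps, uncapped_ms, locked_ms);
                    }
                }
                ```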

              • pton_xd 2 hours ago ago

                Higher FPS means lower input latency, so technically the game would continue to feel increasingly responsive even above the refresh rate.

    • Netcob 5 hours ago ago

      Interesting outlier: Factorio and probably other games from this genre.

      I usually upgrade my CPU just to prevent it from becoming a bottleneck (that's the official version, I very often buy technology for the sake of having cool new technology, who am I kidding). So a CPU upgrade hadn't been something that made a game "playable" for me since GPUs became a thing. But when I got my 7800X3D, that was the difference between having reached a limit with my Factorio megabase and being able to keep building it bigger!

      Also, since the simulation in that game has to churn through a lot of data in every tick, it's also a rare case in general where RAM speeds have a visible effect.

    • throwaway71271 11 hours ago ago

      many are cpu bound for stupid things like https://nee.lv/2021/02/28/How-I-cut-GTA-Online-loading-times...

      or various accidentally quadratic functions

      of course they shouldn't, but such is life

      • mahmoudhossam 11 hours ago ago

        That's a perfect storm of bad coding though, not something I'd buy a new CPU for.

        • homebrewer 8 hours ago ago

          If you spend as much time in online games as some people do (it's not rare to find a steam account with thousands of hours in GTA 5), it would absolutely make sense to get a new CPU which will save you 10-15 minutes of waiting time per day.

          Also, "perfect storm of bad coding" is the norm for games, and not the exception, no?

        • throwaway71271 10 hours ago ago

          well ultimately you buy a new CPU so that your software runs faster

          sadly, after a few years, faster CPUs are normalized and we write sloppier code that makes the programs slow again

          then AMD has to work on the next generation of speculative execution innovations and almost-AGI branch predictors :) and we go again

        • hu3 6 hours ago ago

          Much easier to buy a faster CPU than to decompile, debug and fix bottlenecks in games.

          • dundarious 4 hours ago ago

            When you socialize the benefit with 0 cost to players, that's "easier" still. It's all about perspective.

    • deng 10 hours ago ago

      You would think that, but multicore programming is really hard. Balancing all the tasks of your game across available cores is not easy, and given the intense crunch with which games are developed, there's often no time to do this properly. So single-thread performance is still very important, especially for newer titles, which is why Intel was still considered the best CPU for gaming (which is now changing, but mostly due to the stability problems).

      So this very much depends on which games you test and how you configure them graphics-wise. AMD was accused of heavily skewing their initial marketing numbers when introducing Zen5, almost exclusively testing with older games and testing them with very weak graphics cards to make them GPU-bound. In this case, MSI only tested with three games, which is a tiny, tiny data point. Channels like HardwareUnboxed test with up to 30 games to get a complete picture of the performance.

      • Epa095 10 hours ago ago

        How does the domination of a few big game engines (Unity/Unreal) change this? I get the impression that they handle more and more of the actual compute intensive stuff, and 'nobody' writes their own engines anymore?

        So then the economy of scale changes it a bit, and maybe they can make abstractions which use many cores under the hood, hiding the complexity?

        • deng 9 hours ago ago

          Yes, certain problems are very typical for Unreal and can be seen in many games using it, especially stutters from on-demand shader compilation and "traversal stutter" when entering new areas (so mostly problems with on-demand texture loading). These problems can be fixed, but there's no magic bullet, it simply requires a lot of work, so often this is relegated to later patches (if even that).

          But there are also certain games that are heavily bound by single-thread performance even though they use Unreal, probably the most prominent lately being Star Wars Jedi: Survivor, which isn't fixed to this day. You can watch Digital Foundry's video for details: https://www.youtube.com/watch?v=uI6eAVvvmg0

          Why exactly this is, no one apart from the developers themselves can say.

          • diggan 9 hours ago ago

            > Yes, certain problems are very typical for Unreal and can be seen in many games using it, especially stutters from on-demand shader compilation and "traversal stutter" when entering new areas (so mostly problems with on-demand texture loading).

            This problem is so obvious and widespread now, I wish there was a toggle/env var where I could decide I'm willing to have longer loading-screens rather than in-game stutters/"magically appearing objects", just like we had in the good old days.

            Some games use a small percentage of my available VRAM and/or RAM, yet they decide to stream in levels anyways, regardless of it not being needed for any resource-usage reasons.

            • kbolino 6 hours ago ago

              I don't think the problem is that easy to solve in general. For a lot of modern games, there are only two amounts of total memory that would really matter: enough to run it at all, and enough to hold all game assets uncompressed. The former is usually around 4-16 GB depending on settings and the latter is often over 200 GB.

              Very few gamers have that much RAM, none have that much VRAM. Many assets also aren't spatially correlated or indexed, so even though a whole "level" might be discrete enough to load specifically, the other assets that might be needed could still encompass nearly everything in the game.

              For these games, amounts of memory in between those two thresholds aren't especially beneficial. They'd still require asset streaming, they'd just be able to hold more assets at once. That sounds better, and in some cases might just be, but really the issue boils down to knowing what assets are needed and having already loaded them before the player sees them. That's a caching and prediction problem much more than a memory size problem.
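
              As a sketch of the caching half of that (hypothetical names and sizes, nothing from a real engine): the streaming system is essentially a budget-limited LRU cache, and the "prediction" part is whatever decides to call request() for an asset before the player can see it, so that the later real request is a hit instead of a hitch.

              ```cpp
              // Hypothetical asset cache: a byte-budgeted LRU. A miss on request() is where
              // an in-game hitch would come from; a predictor's job is to turn misses into hits.
              #include <cstdio>
              #include <list>
              #include <string>
              #include <unordered_map>

              class AssetCache {
              public:
                  explicit AssetCache(size_t budget_bytes) : budget_(budget_bytes) {}

                  // Returns true if the asset was already resident; otherwise "loads" it and
                  // evicts least-recently-used assets until we fit the budget again.
                  bool request(const std::string& id, size_t bytes) {
                      auto it = index_.find(id);
                      if (it != index_.end()) {                    // hit: just refresh recency
                          lru_.splice(lru_.begin(), lru_, it->second);
                          return true;
                      }
                      used_ += bytes;                              // miss: load + possibly evict
                      lru_.push_front({id, bytes});
                      index_[id] = lru_.begin();
                      while (used_ > budget_ && !lru_.empty()) {
                          used_ -= lru_.back().bytes;
                          index_.erase(lru_.back().id);
                          lru_.pop_back();
                      }
                      return false;
                  }

              private:
                  struct Entry { std::string id; size_t bytes; };
                  size_t budget_;
                  size_t used_ = 0;
                  std::list<Entry> lru_;
                  std::unordered_map<std::string, std::list<Entry>::iterator> index_;
              };

              int main() {
                  AssetCache cache(8ull << 30);                            // pretend 8 GiB of VRAM budget
                  cache.request("district_03/facade_atlas", 512ull << 20); // predictor prefetches this...
                  bool hit = cache.request("district_03/facade_atlas", 512ull << 20); // ...renderer hits it
                  std::printf("resident on real request: %s\n", hit ? "yes" : "no");
              }
              ```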

        • diggan 9 hours ago ago

          > How does the domination of a few big game engines (Unity/Unreal) change this? I get the impression that they handle more and more of the actual compute intensive stuff, and 'nobody' writes their own engines anymore?

          You still have to know what you're doing. Cities: Skylines 2 is a good example, as the first installment had awful performance when playing bigger cities, and it wasn't very good at parallelizing that work.

          For the second game, they seem to have gone all in with Unity ECS, which changes your entire architecture (especially if you adopt it wholesale like the developers of Cities: Skylines did), and which is something you have to do explicitly. Now the second game is a lot better at using all available cores, but it does introduce a lot of complexity compared to the approach they took in the first game.
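
          A rough illustration of the data-oriented layout that ECS-style designs push you toward (plain C++ as a sketch of the general idea, not Unity's actual API): components live in flat arrays, and an update "system" is a tight loop over a disjoint slice of those arrays, which is why the work spreads across cores so naturally.

          ```cpp
          // Sketch of a data-oriented "system": flat component arrays updated in parallel slices.
          #include <algorithm>
          #include <cstdio>
          #include <functional>
          #include <thread>
          #include <vector>

          struct Positions  { std::vector<float> x, y; };
          struct Velocities { std::vector<float> x, y; };

          // Integrate positions for entities in [begin, end); each worker owns a disjoint
          // slice, so no locks are needed.
          static void integrate(Positions& p, const Velocities& v, float dt, size_t begin, size_t end) {
              for (size_t i = begin; i < end; ++i) {
                  p.x[i] += v.x[i] * dt;
                  p.y[i] += v.y[i] * dt;
              }
          }

          int main() {
              const size_t n = 1'000'000;
              Positions  pos{std::vector<float>(n, 0.f), std::vector<float>(n, 0.f)};
              Velocities vel{std::vector<float>(n, 1.f), std::vector<float>(n, 2.f)};

              const size_t workers = std::max(1u, std::thread::hardware_concurrency());
              std::vector<std::thread> pool;
              for (size_t w = 0; w < workers; ++w) {
                  size_t begin = n * w / workers, end = n * (w + 1) / workers;
                  pool.emplace_back(integrate, std::ref(pos), std::cref(vel), 1.0f / 60.0f, begin, end);
              }
              for (auto& t : pool) t.join();

              std::printf("entity 0 after one tick: (%.4f, %.4f)\n", pos.x[0], pos.y[0]);
          }
          ```

          The trade-off is exactly the complexity mentioned above: game logic has to be expressed as these array passes rather than as methods on individual objects.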

      • cinntaile 9 hours ago ago

        Ever since the 5800X3D (released 2.5 years ago), AMD has produced the best gaming CPUs.

    • robdar 4 hours ago ago

      Flight simulator games (e.g. Digital Combat Simulator) tend to be CPU bound. Depending on the setup, high-end VR or enough monitors with enough pixels can quickly become GPU-limited, but the underlying simulation, systems modelling, sensor simulations, AI, etc. will still all be CPU limited.

    • guilamu 7 hours ago ago

      Wrong. Any competitive FPS is CPU bound, and those X3D AMD CPUs are doing great on the 1% low FPS.

    • alkonaut 10 hours ago ago

      The key question for gaming is how to split a certain budget between GPU and non-GPU components, and where the sweet spots are on that budget curve. The trend in recent years has been to go for a cheaper CPU and an expensive GPU, though not too cheap a CPU.

    • broodbucket 9 hours ago ago

      A lot of games are, especially MMOs. Single-threaded performance still matters, and the additional L3 cache from the X3D series is a ginormous upgrade.

    • KingOfCoders 10 hours ago ago

      Interested in that too, we probably need to wait for a Phoronix review.

    • Jamie9912 10 hours ago ago

      Rust certainly is. I get the same FPS at 4K as at 1080p.

      • time0ut 5 hours ago ago

        The impact of going from a non-X3D to X3D CPU is incredible in that game. I could be off on the details, but I recall benchmarks showing that just switching to an X3D has a much larger impact than jumping multiple generations of GPU. I get like 120 FPS with a 5600X and RTX 3080. I've been dreaming of a 9800X3D based build when it comes out, but realistically don't have the time to actually play.

    • heraldgeezer 6 hours ago ago

      If you go by the number of games and console ports, sure. But those are games you play once, or wait for a good price on and play the story for 8-20 hours.

      If you look at consistent player numbers, some top games are CS2, Valorant, WoW, FFXIV, LoL and Dota2.

      Competitive FPS are CPU bound as they are easy to run, but people want the highest framerates possible due to new 200+ Hz screens, and player input in general feels better with higher framerates.

      MMOs are CPU bound once you get into large groups and into big towns with lots of players, even brand new MMOs like Throne and Liberty.

      MOBAs I'm not sure about, but in general it's the same as with FPS games: more frames = better for player input. These aren't single-player games you lock at 30 or 60 FPS.

  • ramon156 8 hours ago ago

    > Rosen

    There's a third?! /j

  • 11 hours ago ago
    [deleted]
  • sylware 6 hours ago ago

    CPU bound for games does not mean the same thing as CPU bound in many other benchmarks.

    In a game, depending on its type, it means being able to do all the required work in less than a frame time without being bothered by other tasks stealing the CPU cores from time to time (or significantly messing with the cache).

    144 Hz/72 Hz means you must do all the work in less than roughly 7/14 ms (1000/144 ≈ 6.9 ms, 1000/72 ≈ 13.9 ms).

    Nowadays, it means you ask the kernel to favor CPU cores from the same CCD...
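
    A minimal sketch of that last point (Linux-only; it assumes the CCD you care about maps to logical CPUs 0-7, which is common but not guaranteed, so check lscpu or /sys/devices/system/cpu/cpu*/cache on the actual machine):

    ```cpp
    // Pin the calling process (and anything it then spawns) to CPUs 0-7, assumed here
    // to be the cores of the CCD that carries the stacked L3 cache.
    #ifndef _GNU_SOURCE
    #define _GNU_SOURCE   // cpu_set_t / CPU_SET / sched_setaffinity are glibc extensions
    #endif
    #include <sched.h>
    #include <cstdio>

    int main() {
        cpu_set_t set;
        CPU_ZERO(&set);
        for (int cpu = 0; cpu < 8; ++cpu)
            CPU_SET(cpu, &set);

        if (sched_setaffinity(0, sizeof(set), &set) != 0) {  // pid 0 = this process
            std::perror("sched_setaffinity");
            return 1;
        }
        std::printf("affinity restricted to CPUs 0-7\n");
        return 0;
    }
    ```

    From a shell, taskset -c 0-7 <game> does the same thing without any code; Windows has equivalent affinity controls.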

  • deafpolygon 12 hours ago ago

    If that’s the case, that’s slightly disappointing.

    • rcarmo 11 hours ago ago

      Well, not if they drive prices down for the 7xxx series. I’d rather not buy the bleeding edge stuff at premium prices.

    • Pingk 11 hours ago ago

      The whole 9000 series has been disappointing; in terms of price/performance you're better off getting something from the 7000 series.

      It seems like the 9000 series (and the newly announced Intel 200 series) involve a lot of restructuring work that lays the groundwork for future generations to push further.

      • adrian_b 11 hours ago ago

        It is disappointing only for gamers.

        For scientific and technical computing, a 9950X provides the greatest improvement in performance per dollar in five years, since 2019, when the first 16-core desktop CPU, the 3950X, was introduced by AMD.

        This is caused by the doubling of the AVX-512 throughput per core in desktop and server Zen 5.

        The new Intel Arrow Lake S desktop CPU, the 285K, also provides a 50% increase in AVX throughput over Alder Lake/Raptor Lake, but that remains small in comparison with the throughput doubling in the 9950X, which leaves the 9950X with about 4/3 of the 285K's AVX throughput (actually even more than that in practice, due to the better AVX-512 instruction set).

        For games and for applications that are not dominated by array operations, for instance for software project compilation, the performance of AVX or AVX-512 code does not matter much, but there are a lot of professional applications where 9950X will provide a great jump in performance.
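
        To make the "array operations" point concrete, a generic sketch (not tied to any particular application or benchmark) of the kind of kernel that benefits: one 512-bit instruction handles 16 floats, so doubling per-core AVX-512 throughput roughly doubles loops like this, while branchy integer code barely notices. Build with something like g++ -O2 -mavx512f and only run it on a CPU that actually reports AVX-512.

        ```cpp
        // AXPY (y[i] += a * x[i]) with AVX-512 intrinsics: 16 floats per fused multiply-add.
        #include <immintrin.h>
        #include <cstdio>
        #include <vector>

        static void saxpy_avx512(float a, const float* x, float* y, size_t n) {
            const __m512 va = _mm512_set1_ps(a);
            size_t i = 0;
            for (; i + 16 <= n; i += 16) {
                __m512 vx = _mm512_loadu_ps(x + i);
                __m512 vy = _mm512_loadu_ps(y + i);
                _mm512_storeu_ps(y + i, _mm512_fmadd_ps(va, vx, vy));
            }
            for (; i < n; ++i) y[i] += a * x[i];  // scalar tail
        }

        int main() {
            std::vector<float> x(1000, 2.0f), y(1000, 1.0f);
            saxpy_avx512(3.0f, x.data(), y.data(), x.size());
            std::printf("y[0] = %.1f\n", y[0]);   // 1 + 3*2 = 7
        }
        ```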

        For things like software project compilation, it is likely that the Intel 285K will provide the best performance per dollar, due to its 24 cores. Unlike the older Intel E-cores, the new Skymont cores have multithreaded performance similar to that of the old Zen 4 cores or of the P-cores of Alder Lake/Raptor Lake, except for applications where there is high contention between threads for their shared L2 cache.

        So this time one can truly consider the Intel 285K a 24-core CPU (without SMT) from the point of view of applications dominated by integer/pointer computations, while in Alder Lake and Raptor Lake the E-cores were better viewed as half cores, due to their lower performance, so the top Raptor Lake models were better thought of as equivalent to a 16C/32T CPU, not the "24-core" CPU that was advertised.

        The 9950X has become more important than past desktop CPUs because the prices of server CPUs have greatly increased during the last decade, so for most small businesses or individuals it is no longer cost-effective to use a "real" server CPU. Instead, it is much more economical to use servers with a 9950X (and ECC memory). Multiple servers with 9950X are much cheaper than a single server of similar capacity with an Epyc CPU.

        • lysp 10 hours ago ago

          I also believe (from my memory of rumours) that the 9000X3D series has the 3D cache on both CCXs, meaning no latency penalty from cross-CCX communication.

          • AnotherGoodName 4 hours ago ago

            I think that could honestly make it worse. It's not like the contents of the cache are duplicated. Instead it's split across a CCX boundary, and if the data is in the wrong cache you'll take a hit. Clever thread management can help avoid this, but so far the 9xxx series has shown terrible thread-affinity choices with many existing games and apps. I'll wait and see how the 3D cache helps here.

          • adrian_b 10 hours ago ago

            AMD claims that the 9000X3D series is the product that will provide the game performance increase expected by gamers for a new product generation.

            Of course, that remains to be seen, but it is plausible.

            • snovv_crash 9 hours ago ago

              Long term, as games start using AVX-512, I expect the 9000 series will be seen as a big step up over previous generations. One of those "fine wine" things.

  • yapyap 8 hours ago ago

    man autocorrect is a b*

  • smcleod 9 hours ago ago

    2-13% seems very disappointing considering how long the 7000 series has been out. I was hoping we'd see more like 80-200% based on the gains we've seen from Apple and Nvidia in this time.

    • homebrewer 8 hours ago ago

      AMD continues to carry 50 years of software compatibility. Apple broke it completely just a few years ago. It's not a fair comparison.

      • AshamedCaptain 8 hours ago ago
        • hu3 5 hours ago ago

          > There is code in Windows 95 through Me that performs the incorrect sequence of operations mentioned in the AMD manual.

          That's a small deprecation. Not to mention you'd have a much easier time running Windows 95 in VirtualBox or QEMU if you really needed it (good luck).

          That's completely different from going from i386 to the ARM architecture.

          • AshamedCaptain 4 hours ago ago

            Actually no: since running under VirtualBox is effectively still running the same code on the same processor, it will also crash and burn under VirtualBox.

            Just pointing out that they don't care about "50 year program compatibility", and never have. At best they care about the last 10 years.

            • hu3 4 hours ago ago

              Funny, I'm running Windows ME in VirtualBox on my new AMD processor right now. And so are others with WinME/Win95 if you bother to search.

              And phasing out 0.0001% of the spec is still caring a lot more than replacing 100% of the architecture.

              • AshamedCaptain 3 hours ago ago

                You are running a _patched_ version of Windows ME. The original Windows ME will just crash on first boot if you try to use it. The fact that _you have to search_ to find the fix to use it proves my point...

                Why does it matter if it's "0.0000001%" of the spec they broke, if it's exactly the part a major operating system needs to run? Does it matter if they break "0.1%" or "50%" of the spec if, as a consequence, your 50-year-old software doesn't run any more?

                • hu3 3 hours ago ago

                  It matters because running old software is still trivial.

      • smcleod 8 hours ago ago

        Most people don't want 50 years of old software to work. They want current software to run fast, with low power consumption.

        • vid 7 hours ago ago

          "Most people" could get by with a Chromebook. But "most people" is made up of many groups, if you ignore those groups you have nothing but a race to the bottom.

        • Dalewyn 7 hours ago ago

          You will take my Winamp and mIRC from my cold dead hands.

    • syspec 9 hours ago ago

      Why not 300% - 30,000% at that point? Since it's just hope

      • n0n0n4t0r 8 hours ago ago

        I was always told to be careful with what I'm hoping for:)

      • smcleod 8 hours ago ago

        Because that starts to be quite unrealistic.