• @noiserrB

      How dare anyone be positive about the fastest HEDT CPU on the market.

    • @GreenecakeB

      What makes you say that? His enthusiasm is hardly suspicious, even though his usual territory is gaming. These are very powerful CPUs, unrivalled even, if expensive.

    • @Edgar101420B

      Like Nvidia who paid off Steve to blame all connector issues on the user?

  • @surf_greatriver_v4B

    As per the original TR series, damn I wish I had a use case for these to mess around with.

  • @Pamani_B

    https://youtu.be/yDEUOoWTzGw?t=731

    > The 7970X required 3.6 min making the 7980X [2.2 min] 39% faster for about 100% more money. You’re never getting value for those top of the line parts though.

    Except that’s not it. The 7980X speed is 1/2.2 = 0.45 renders/minute, which is 64% faster than the 7970X (1/3.6 = 0.28 renders/minute). A faster way to do the math is 3.6/2.2 = 1.64 --> 64% faster. What Steve did is 2.2/3.6 = 0.61 --> 1-0.61 = 0.39 --> 39% faster.

    It’s not the first time I’ve seen GN stumble on percentages when talking about inverse performance metrics (think graphs where “lower is better”). Sometimes it doesn’t matter much because the percentage is small, like 1/0.90 = 1.11, where 11% ≈ 10%. But on bigger margins it gets very inaccurate.

    Another way to see this is by pushing the example to the extreme. Take the R7 2700 at the bottom of the chart, completing the test in 26.9 minutes. Using the erroneous formula (2.2/26.9 = 0.08 --> 1-0.08 = 0.92) we get that the 7980X is 92% faster than the 2700, which is obviously silly; in reality it’s about 12x faster.
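
    As a quick sanity check, here’s a minimal sketch of both formulas, using the completion times cited above:

    ```python
    # Completion times in minutes, from the chart in the video
    t_7970x, t_7980x, t_2700 = 3.6, 2.2, 26.9

    def correct_speedup(old_time, new_time):
        # "Lower is better" metric: speedup is old/new
        return old_time / new_time

    def erroneous_speedup(old_time, new_time):
        # The formula apparently used in the video: 1 - new/old
        return 1 - new_time / old_time

    print(correct_speedup(t_7970x, t_7980x))    # ~1.64 -> 64% faster
    print(erroneous_speedup(t_7970x, t_7980x))  # ~0.39 -> "39% faster"
    print(correct_speedup(t_2700, t_7980x))     # ~12.2 -> ~12x faster
    print(erroneous_speedup(t_2700, t_7980x))   # ~0.92 -> "92% faster"
    ```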

    • @VenditatioDelendaEstB

      Blender seems like it should be pretty close to embarrassingly parallel. I wonder how much of the <100% scaling is due to clock speed, and how much is due to memory bandwidth limitation? 4 memory channels for 64 cores is twice as tight as even the 7950X.

      Eyeballing the graphs, it looks like ~4 GHz vs ~4.6 GHz average, which…

      4000*64 / (4600*32) = 1.739
      

      Assuming a memory bound performance loss of x, we can solve

      4000*64*(1-x) / (4600*32) = 1.64
      

      for x = 5.7%.
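
      A minimal sketch of that back-of-the-envelope estimate (the ~4.0 and ~4.6 GHz clocks are the eyeballed figures above, and 1.64 is the measured render speedup):

      ```python
      # Eyeballed average clocks (MHz) and core counts from the comment above
      ideal = (4000 * 64) / (4600 * 32)   # ideal scaling from clocks*cores ≈ 1.739
      measured = 1.64                     # measured speedup from the render times

      # measured = ideal * (1 - x)  =>  x = 1 - measured / ideal
      x = 1 - measured / ideal
      print(f"implied memory-bound loss: {x:.1%}")  # ≈ 5.7%
      ```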

    • @GravityblastsB

      I wonder if LMG should make a video calling GN out for their incorrect numbers…doubt Steve would like that very much lol

    • @hieronymous-cowherdB

      > You’re never getting value for those top of the line parts though.

      Yeah, and saying “never” is not a good take either. Plenty of customers are willing to pay stiff upcharges to get the best performance, because it works for their use case!

      • @ZevemtyB

        The take is that you’re never getting more perf/$ (aka value) out of top-of-the-line parts compared to lower-tier ones. Whether you can utilise that extra, more expensive performance to make the worse value worth it for you is irrelevant to their take.
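
        As a rough, worked illustration using the numbers from the clip above: ~1.64x the performance for roughly 2x the price works out to about 1.64/2 ≈ 0.82x the perf/$ of the 7970X, so the absolute performance is higher but the value is measurably worse.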

      • @dern_the_hermitB

        Especially in professional settings where that extra minute or so, added up over multiple projects/renders and team members, can mean the difference of thousands or even tens of thousands of dollars, if not more. Looking at things in terms of percentages is useful but absolute values are important, as well.

    • @lyciumB

      … but they got the whiteboard out? :'(

    • @Exist50B

      This is the kind of stuff that GN would claim justifies a 3 part video series attacking another channel for shoddy methodology. But I guess they’ve never been shy about hypocrisy.

      • @GravityblastsB

        EXACTLY! I’ve been saying this since GN released their anti-LMG agenda, and no one believed me lol…

  • @ArtholosB

    I was surprised to see how well the 2019 TR 3970x holds up!

    Considering how cheap you can get last-gen Threadripper on eBay now, and how cheap the accompanying motherboards have become too, a 3970x could be a spankin’ good buy if your use case is right.

  • @vlakreehB

    I wish more tech outlets knew about benchmarking developer workloads; Chromium compile time is such an irrelevant benchmark. Benchmarking the time to make a clean release build of an incredibly large C++ codebase, especially one with tons of time dedicated to making the build as parallel as possible, isn’t at all representative of what 99% of programmers do in their day-to-day. I work on a large C++ codebase every day and it’s been months since I’ve done a clean release build on my local machine.

    A substantially better methodology would be to check out a commit 30 or so back, do a clean build and run the test suite once to populate caches, and then time the total duration it takes to build and run the test suite at each commit until you land back on the latest one (see the sketch below). Most programmers don’t do clean builds unless they absolutely have to, and they’ll have build caches populated. Do this for an array of languages with projects of varying sizes and then you’ll have a benchmark actually worth looking at.
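
    Something along these lines, as a rough sketch (the ninja build/test commands are just placeholders for whatever the project actually uses):

    ```python
    # Rough sketch: walk forward commit-by-commit and time warm, incremental
    # builds + test runs, instead of a single cold clean build.
    import subprocess
    import time

    REPO = "."
    BUILD_CMD = ["ninja", "-C", "out/release"]          # placeholder build command
    TEST_CMD = ["ninja", "-C", "out/release", "tests"]  # placeholder test command

    def run(cmd):
        subprocess.run(cmd, cwd=REPO, check=True)

    # Last 30 commits, oldest first
    commits = subprocess.run(
        ["git", "rev-list", "--reverse", "-n", "30", "HEAD"],
        cwd=REPO, check=True, capture_output=True, text=True,
    ).stdout.split()

    # Warm-up: initial build + test run at the oldest commit to populate caches
    run(["git", "checkout", commits[0]])
    run(BUILD_CMD)
    run(TEST_CMD)

    # Timed portion: incremental rebuilds as the tree advances back to HEAD
    start = time.monotonic()
    for commit in commits[1:]:
        run(["git", "checkout", commit])
        run(BUILD_CMD)
        run(TEST_CMD)
    print(f"total incremental build+test time: {time.monotonic() - start:.1f}s")
    ```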

    • @RonLazerB

      Most programmers aren’t using Threadrippers, and if your use-case isn’t parallelised, why would you be in the market for a high-core count CPU?

    • @TwoCylToiletB

      GN has asked for development benchmark suggestions on various platforms, and recently specifically for TRX50 (and perhaps WRX90). Try to reach out to his team.

    • @tomvorlostriddleB

      Wait, so your DevOps infrastructure runs builds centrally and/or in the cloud, but runs tests locally?

      Not trying to be difficult here, genuinely asking.

      • @teceduB

        I mean, unless you’re pushing out a release, local testing is way better, because cloud runners are expensive and that’s way too long to wait to find out what went wrong. I can compile my stuff and get feedback immediately.

    • @ShanixB

      The Chromium compile times are actually perfect for our gamedev workload, since we have our build machines compile binaries without FASTBuild or other caching when producing builds to submit to partners. And we can reasonably expect the performance scaling they see to apply to our general compile times too, though those are fast enough on developer machines that you can barely feel the difference lol

  • @claudia_553B

    honestly, Gamers Nexus is where it’s at. They go deep into testing and their attention to detail is top-notch. highly recommend checking them out if you want thorough reviews.

  • @IC2FlierB

    I realize that MacOS users will stay with Mac for a long time, but I wonder how much of a leap a TR+4090 rig is versus an Apple Silicon Mac Studio on M2 Max, power consumption be damned, on apps that are common to both MacOS and Windows.

    Cuz I still kinda think that AMD won the M2 (and now M3) keynotes despite Threadripper racking up way more wattage.

    • @GomaEspumaRegionalB

      Honestly, it looks like Apple is pretty much conceding the upper end of the professional market. So a TR + high-end NVIDIA GPU would likely obliterate a Mac Pro/Studio on those workloads, though perhaps some very specific video encoding workflows may still have an edge on AS.

      • @JShelbyJB

        > Apple is pretty much conceding the upper end of the professional market.

        This may not age well if local LLM-based apps become more commonplace, which I suspect they may.

        Can you run a 3rd party GPU with MacOS?

        • ferret

          On the cheesegrater AS Macs you explicitly couldn’t, despite the ample PCIe slots.

    • @rsta223B

      This isn’t quite what you’re asking for, but my wife just got a new M3 Pro MBP with 18GB RAM (5 performance/6 efficiency cores) and it’s about 2/3 the all core speed of my 5950X in CPU rendering and about 1/4 the speed of my 3090 in GPU rendering.

      It’s (irritatingly) quite a bit faster in single thread rendering though - it’s got about a 50% edge there.

    • @badjettasexB

      I have a 3990X and an M3 Max (full chip), but I don’t have an M2 Ultra to compare against.

      The 3990X with a 4090 obliterates the M3 Max, but I’m not really sure there was any question there. Even with a Titan RTX, it’s no competition. That being said, the MBP with that M3 is good enough for nearly anything, and far more efficient than a 3990X running full tilt. I would imagine that an M3 Ultra would be quite powerful, but I personally have no interest in a Mac desktop.

      I don’t think I’ll be getting the 7000 series for work for three reasons.

      1. I feel incredibly burned by AMD for investing heavily in the TR4 space. We were promised more, and purchased extra high end blocks early because of that.

      2. The 3990X has been troublesome in ways that never quite justify an RMA, or the cost of having that system down, but just enough to drive me insane, from month one through all the years to now.

      3. Anything the full-chip M3 Max MBP can’t run, the 3990X can still do, and anything that the 3990X is too unstable to do can get fed to a sidelined Sapphire Rapids build.

      I do think I’ll be getting a 7955WX and a WRX90 to play with. There are some super dumb niche things I want to try that I wasn’t able to fully pull off with a W3435X.

    • @hi_im_bored13B

      I believe the M2 Max GPU is at the level of a 4070 Ti at best, but the larger issue is that not many tools support Metal for compute. On the other hand, all memory is shared on the Mac, so you (theoretically) get up to 192 GB of video memory, along with the Neural Engine for basic inferencing and matrix extensions on the CPU.

      Essentially, the Max is simultaneously excellent and falling behind, depending on the industry, but you won’t know until you optimize and test your software for the specific use case, and after that you’re beholden to whatever the hardware puts out.

      CUDA is well supported and NVIDIA/AMD scale well for different applications; unless Apple picks up their software I don’t think the hardware matters much.

    • @cheekybeakykiwiB

      > I realize that MacOS users will stay with Mac for a long time, but I wonder how much of a leap in performance a TR+4090 rig is versus an Apple Silicon Mac Studio on M2 Max, power consumption be damned, on apps that are common to both MacOS and Windows.

      Ampere Altra + 4090 already shits on the Mac Pro for price to performance.

    • @deefopB

      It depends on the workload, but we’re comparing apples and oranges. No amount of Apple magic or Apple consumer money can change the wild performance difference between a high-end desktop and an “ultra portable”, generally speaking.

    • @moofunkB

      > I realize that MacOS users will stay with Mac for a long time, but I wonder how much of a leap in performance a TR+4090 rig is versus an Apple Silicon Mac Studio on M2 Max, power consumption be damned, on apps that are common to both MacOS and Windows.

      While there is a certain performance difference now between two such systems, when you go out and upgrade your PC to a 5090 in a couple of years and a 6090 in 4 years, the difference will be laughable.

      If you want your Mac Studio to stay on the cutting edge of GPU power, the cost of repeatedly replacing the whole Mac Studio, versus just swapping the PC’s GPU, quickly outweighs the initial cost of either system.

      The Mac Studio just cannot function long term as a GPU powerhouse. You can gloat about it for 6-12 months and that’s it. It’s a machine that can work solidly for you for 10 years, if you don’t demand cutting-edge GPU performance, but it will be relegated to “serviceable performance” in 5-7 years.

  • @RogueIsCrapB

    Do you need a high end water loop to cool these monsters for max efficiency? I’d think that they would throttle fast even with high end AIOs.

    • @0gopog0B

      Large die surface area helps a lot. In many ways heat density is a bigger challenge than total heat generated.
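
      As a made-up illustration: 300 W spread over ~600 mm² of silicon is only 0.5 W/mm², while the same 300 W concentrated in a ~150 mm² die is 2 W/mm², which is much harder for any cooler to pull out of the chip.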

  • @KrislainmB

    yeah, Gamers Nexus is a solid choice for more thorough testing and analysis. They really dive deep into the details.

  • @OkDimension8720B

    Will these efficiency gains trickle down to the Ryzen 8000 desktop parts, and will we see huge gains