• 2 Posts
  • 26 Comments
Joined 1 year ago
Cake day: October 28th, 2023


  • Lots of good responses regarding why 128-bit isn’t a thing, but I’d like to talk about something else.

    Extrapolating from two data points is folly. It simply can’t work. You can’t take two events, calculate the time between them, and then assume that the next event will happen after the same interval.

    Besides, your data points are wrong. (Edit: that has also been mentioned in another response.)

    x86 (the 8086) came out in 1978 as a 16-bit CPU. 32-bit arrived with the 386 in 1985. x86-64, although specified in 1999, didn’t ship in silicon until 2003.

    So now you have three data points: 1978 for 16-bit, 1985 for 32-bit and 2003 for 64-bit. Differences are 7 years and 18 years.

    Not that extrapolating from three points is good practice, but at least it’s more meaningful. You could, for example, conclude that moving from 32-bit to 64-bit took about 2.5 times as long as moving from 16-bit to 32-bit. Multiply 18 years by 2.5 and you get 45 years, so the move from 64-bit to 128-bit would be expected around 2003 + 45 = 2048.

    This is nonsense, of course, but at least it’s a calculation backed by some data (which is still rather meaningless data).
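
    For what it’s worth, here’s a minimal sketch of that calculation (the milestone years are the ones listed above; the rest is just the arithmetic spelled out):

    ```python
    # x86 bit-width milestones quoted above
    transitions = {16: 1978, 32: 1985, 64: 2003}

    gap_16_to_32 = transitions[32] - transitions[16]  # 7 years
    gap_32_to_64 = transitions[64] - transitions[32]  # 18 years

    # The 32->64 move took roughly 2.5x as long as the 16->32 move
    ratio = gap_32_to_64 / gap_16_to_32               # ~2.57

    # Naive extrapolation: assume the next gap grows by the same factor
    predicted_gap = round(gap_32_to_64 * 2.5)         # 45 years
    print(transitions[64] + predicted_gap)            # 2048
    ```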


  • ET3DB to AMD@hardware.watch · Need help understanding FSR

    First of all, it’s worth noting that FSR has several versions: 1, 2 and 3. FSR 1 (what the Steam Deck, RSR and Hellblade offer) simply upscales each frame spatially. FSR 2 takes motion data and accumulates detail across multiple frames, so it has significantly higher quality. FSR 3 adds frame interpolation.

    When it comes to the game vs. the driver when using FSR 1, the main difference is that the game will render the UI natively over an upscaled image, while the driver will upscale the entire frame, UI included. Therefore the UI in the game implementation will look better.

    So it’s better to enable FSR in the game rather than the driver, and fall back to the driver only if the game doesn’t implement FSR. Most games implement FSR 2, which is significantly better than what the driver can do, but even with FSR 1, the UI difference will make the game implementation preferable.
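
    Here’s a minimal sketch of why the two paths differ, with a nearest-neighbour upscale standing in for FSR’s actual algorithm (the resolutions and array shapes are illustrative assumptions):

    ```python
    import numpy as np

    def upscale_2x(frame):
        # Nearest-neighbour doubling; a crude stand-in for FSR 1's spatial upscaler.
        return frame.repeat(2, axis=0).repeat(2, axis=1)

    scene_540p = np.random.rand(540, 960)   # game renders the 3D scene at 960x540

    # Game-side FSR 1: upscale the scene first, then draw the UI at native 1080p,
    # so HUD text and menus stay pixel-sharp.
    frame_game = upscale_2x(scene_540p)     # 1920x1080
    ui_mask = np.zeros_like(frame_game, dtype=bool)
    ui_mask[20:60, 20:300] = True           # a native-resolution HUD element
    frame_game[ui_mask] = 1.0

    # Driver-side FSR 1 (e.g. RSR): the UI is already composited into the 540p
    # frame, so it gets upscaled (and softened) along with everything else.
    scene_540p[10:30, 10:150] = 1.0         # UI baked in at low resolution
    frame_driver = upscale_2x(scene_540p)
    ```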


  • 4 more cores would take about 20 mm² and 12 more CUs would take about 28 mm², so that would reach 210 mm² (ignoring ROPs and the like, which might take some more space). However, space is saved in some other places. For example, the Series S doesn’t have AV1 decoding while the Steam Deck does. Also, as mentioned, the memory controllers will be a different size.
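
    Backing the implied base die size out of those figures (all numbers are the rough estimates above, not measurements):

    ```python
    extra_cores_mm2 = 20   # 4 additional Zen 2 cores
    extra_cus_mm2   = 28   # 12 additional CUs
    total_mm2       = 210  # the estimate quoted above

    base_mm2 = total_mm2 - extra_cores_mm2 - extra_cus_mm2
    print(base_mm2)        # 162 mm² implied for the unmodified Steam Deck APU
    ```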



  • Upping the CUs from 96 to 128 (and the ROPs similarly) would increase the GCD size from ~305 to ~372 mm², based on the die image (and leaving some blank space at the side), and the total to 596 mm². Whether that would increase performance enough depends on the RAM bottleneck.
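
    For reference, the per-CU area that falls out of those numbers, plus a total that assumes six ~37.5 mm² MCDs (the MCD figure is my assumption based on published Navi 31 die shots, not derived from the numbers above):

    ```python
    gcd_96cu, gcd_128cu = 305, 372   # mm², estimates from the die image
    per_cu = (gcd_128cu - gcd_96cu) / (128 - 96)
    print(per_cu)                    # ~2.1 mm² per extra CU

    mcd_mm2 = 37.5                   # assumed Navi 31 MCD size
    print(gcd_128cu + 6 * mcd_mm2)   # ~597 mm², matching the ~596 total above
    ```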

    It’s also worth noting that RDNA 3 apparently didn’t reach its expected clocks; if AMD managed to solve that problem, it would be possible to get extra performance with little or no extra die space.

    In general, if you’re just aiming to reduce chip size by removing two MCDs, that’s not much of a cost saving, and it won’t turn the chip into the mid-range part that’s been rumoured.


  • > Now, for price-sensitive products (such as the Steam Deck, or the other game consoles), APUs seem to be the way to go. You can even make powerful ones, as long as they have enough bandwidth. It’d seem to me that it’d be clear that APUs provide a much better bang for the buck for manufacturers and consumers.

    It’s actually the other way round.

    APUs make sense in a power-constrained product, not a price-sensitive one.

    The Steam Deck is a good example. It has a pretty weak CPU/GPU combo (4 Zen 2 cores and 8 RDNA 2 CUs), but this doesn’t matter, because what matters is being able to run games on battery for a decent amount of time.

    When everything is on one chip, power requirements are lower, because there’s no need for inter-chip communication. Space is saved because there’s only one chip to use. This is great for small mobile products.

    What about price?

    APUs sell for cheap on the desktop because their performance is lower than other CPUs, but they aren’t cheap to make.

    For example, Raven Ridge was 210 mm², while Summit Ridge / Pinnacle Ridge were 213 mm². So the chips cost about the same to make, but the Ryzen 1800X debuted at $500 and later dropped to $330, where the 2700X also sold, while the top-of-the-range Raven Ridge part, the 2400G, sold for $170.

    So even though these chips cost AMD about the same to make, Raven Ridge sold for half the price (or a third of the 1800X’s launch price). AMD therefore made a lot less money on each Raven Ridge chip.
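
    Put as revenue per unit of silicon (prices and die sizes from the paragraphs above, treating cost per mm² as roughly equal for both dies):

    ```python
    chips = {
        # name: (die size in mm², selling price in $)
        "Raven Ridge 2400G":    (210, 170),
        "Pinnacle Ridge 2700X": (213, 330),
    }
    for name, (area, price) in chips.items():
        print(f"{name}: {price / area:.2f} $/mm²")
    # ~0.81 $/mm² vs ~1.55 $/mm²: the APU brings in roughly half as much
    # per mm² of silicon that costs about the same to produce.
    ```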

    The console prices are deceptive. Microsoft and Sony subsidise the consoles because they make money on game sales and subscriptions; the consoles are often sold at a loss. If they were not subsidised, they’d sell for double the price and would likely have lost to a similarly priced PC.

    Flexibility

    Even though laptops aren’t user-expandable, they are still configurable. When it comes to gaming laptops, there are a lot of CPU/GPU combinations. It’s impossible to create a dedicated chip for each combination, and binning alone is hardly enough to cover them.

    Without having separate CPUs and GPUs, you’d get an ecosystem similar to Apple’s, or the consoles, where there is a very small number of models available for purchase. That would kill one of the benefits of the Windows ecosystem, the ability to make your purchase fit performance and price targets.

    A silver lining

    Chiplets do make it possible to pair different CPUs and GPUs on the same chip, even together with RAM. You could call that an APU.


  • ET3DB to AMD@hardware.watch · Guide to AMD 7000 Mobile SKUs

    Thanks for the effort. I’d rewrite the list like this:

    Mobile SKU 7TFA# (T = tier digit, FA = family digits, # = the U/HS/HX suffix; see the decoder sketch after the list):

    FA (family):

    • FA=20: Mendocino (Zen2, RDNA2)
    • FA=30: Barcelo-R (Zen3, Vega)
    • FA=35: Rembrandt-R (Zen3, RDNA2)
    • FA=40: Phoenix (and 7545U) (Zen4, RDNA3)
    • FA=45: Dragon Range (except 7545U) (Zen4, RDNA2)

    T (tier):

    • T=3: Ryzen 3 (4 core)
    • T=4: Ryzen 3 (4 core Phoenix)
    • T=5: Ryzen 5 (4 core Mendocino, 6 cores otherwise)
    • T=6: Ryzen 5 (6 core Phoenix or Dragon Range)
    • T=7: Ryzen 7 (8 core)
    • T=8: Ryzen 7 (8 core Phoenix, 12 core Dragon Range)
    • T=9: Ryzen 9 (8 core Phoenix, 16 core Dragon Range)
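
    To make the scheme concrete, here’s a hypothetical decoder for it. The tables are copied from the list above; the function name and suffix handling are my own illustration, and the 7545U exception is deliberately not handled:

    ```python
    FAMILY = {
        "20": "Mendocino (Zen2, RDNA2)",
        "30": "Barcelo-R (Zen3, Vega)",
        "35": "Rembrandt-R (Zen3, RDNA2)",
        "40": "Phoenix (Zen4, RDNA3)",
        "45": "Dragon Range (Zen4, RDNA2)",
    }
    TIER = {"3": "Ryzen 3", "4": "Ryzen 3", "5": "Ryzen 5",
            "6": "Ryzen 5", "7": "Ryzen 7", "8": "Ryzen 7", "9": "Ryzen 9"}

    def decode_sku(sku: str) -> str:
        # 7TFA#: digit 2 is the tier (T), digits 3-4 are the family (FA).
        # Note: doesn't special-case the 7545U, which is Phoenix despite FA=45.
        digits = "".join(c for c in sku if c.isdigit())
        return f"{sku}: {TIER[digits[1]]}, {FAMILY[digits[2:4]]}"

    print(decode_sku("7640U"))   # Ryzen 5, Phoenix (Zen4, RDNA3)
    print(decode_sku("7945HX"))  # Ryzen 9, Dragon Range (Zen4, RDNA2)
    ```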



  • ET3DB to Hardware@hardware.watch · 3090 or 4090 for AI/ML/SD

    I’d say that it depends on whether you’re making money from it and how much you will use it.

    If you’ll be making any money from it, and having a more powerful card will improve your output and therefore make you more money, then that’s the way you should go.

    If you’re not making any money from it, but you’re using it a lot and having a more powerful card will allow you to be more productive, then it’s a reasonable way to go, too.

    Otherwise, if you’re not making money from it and you’re basically using it in the background for hobbyist things, then it might not be worth the extra money.

    I’d have said that if you have the money then the 4090 may be worth it, but since you’re talking about paying in 9 installments, it sounds like it would stretch your funds.





  • What you already see: conversational bots, art generation and manipulation…

    But I think most of that power will go towards language models, and in a few years it will be standard to talk to computers in natural language. The other functionality will be folded into that, though: the computer illustrating what you tell it, teaching you things, creating works of art (songs, pictures, videos) for you, … But at first it will mainly be talk.