x86 came out in 1978.

21 years later, x64 came out in 1999.

We are three years overdue for a shift, and I don’t mean to ARM. Is there just no point to it? 128-bit computing is a thing and has been talked about since 1976, according to Wikipedia. Why hasn’t it been widely adopted by now?

  • ET3DB

    Lots of good responses regarding why 128-bit isn’t a thing, but I’d like to talk about something else.

    Extrapolating from two data points is folly. It simply doesn’t work. You can’t take two events, calculate the time between them, and then assume that the next event will happen after the same interval.

    Besides, your data points are wrong. (Edit: that has also been mentioned in another response.)

    x86 (8086) came out in 1978 as a 16-bit CPU. 32-bit came with the 386 in 1985. x64, although described in 1999, was released in 2003.

    So now you have three data points: 1978 for 16-bit, 1985 for 32-bit and 2003 for 64-bit. Differences are 7 years and 18 years.

    Not that extrapolating from 3 points is good practice, but at least it’s more meaningful. You could, for example, conclude that it took about 2.5 times as long to move from 32-bit to 64-bit as it did from 16-bit to 32-bit. Multiply 18 years by 2.5 and you get 45 years, so the move from 64-bit to 128-bit would be expected around 2003 + 45 = 2048.

    This is nonsense, of course, but at least it’s a calculation backed by some data (which is still rather meaningless data).

  • kakes@sh.itjust.works

    I’ll answer your question with a question: what are you doing that requires 128-bit computations?

    After that, a follow up question: Is it so important you’re willing to cut your effective RAM in half to do it?

    • brian@sh.itjust.works

      Why would it be cutting your effective RAM in half? I know very little about hardware/software architecture and all that.

      • kakes@sh.itjust.works

        Imagine we have an 8-bit (1-byte) architecture, so data is stored/processed in 8-bit chunks.

        If our RAM holds 256 bits, we can store 32 pieces of data in that RAM (256/8).

        If we change to a 16-bit architecture, that same physical RAM now only has the capacity to hold 16 values (256/16). The values can be significantly bigger, but we get fewer of them.

        Bits don’t appear out of nowhere; they take physical space, and there is a cost to creating them. So we have a tradeoff between the number of values we can store and the size of each value.

        For reference, per chunk (or “word”) of data:
        With 8 bits, we can hold 256 values.
        With 64 bits, we can hold 18,446,744,073,709,551,616 values.
        With 128 bits, we can hold 340,282,366,920,938,463,463,374,607,431,768,211,456 values (about 3.4 × 10^38).
        (For X bits, it’s 2^X)

        Maybe one day we’ll get there, but for now, 64 bits seems to be enough for at least consumer-grade computations.
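
        A quick sketch of that tradeoff in Python (illustrative only; the 256-bit RAM is the imaginary one from above, not any real machine):

            RAM_BITS = 256                             # the tiny imaginary RAM from the example

            for word_bits in (8, 16, 32, 64, 128):
                words_in_ram = RAM_BITS // word_bits   # how many values fit in the RAM
                values_per_word = 2 ** word_bits       # how many distinct values one word can hold
                print(f"{word_bits}-bit words: {words_in_ram} fit in RAM, "
                      f"each one of {values_per_word} possible values")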

    • kakes@sh.itjust.works

      Oh for fuck’s sake, I replied to a bot.

      To the dev that’s spamming Lemmy with this garbage: You aren’t making Lemmy better. You’re actively making it a worse experience.

  • Quatro_LechesB

    32-bit gives you only 4 gigabytes of addressable memory, which is easily saturated by a single-socket computer, while 64-bit gives you 16 million terabytes of possible addressable memory. You will never see that number used in a single-socket machine in the history of humanity.
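
    As a quick sanity check of those numbers (illustrative Python, assuming byte-addressable memory):

        GiB = 2 ** 30
        TiB = 2 ** 40

        print(2 ** 32 // GiB, "GiB addressable with 32-bit addresses")   # 4
        print(2 ** 64 // TiB, "TiB addressable with 64-bit addresses")   # 16,777,216 (~16 million)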

  • chx_B

    tl;dr: we could but what for?

    Practically all the comments here are wrong, although a few do mention why they are wrong: the address space has nothing to do with the bitness of the CPU.

    Now, let’s review what’s what.

    Let’s say you want to get the word “GRADIENT” from memory into the CPU. Using an 8-bit instruction set you need to loop over eight instructions. A 16-bit instruction set needs four: GR, AD, IE, NT. A 32-bit CPU needs only two, and a 64-bit CPU can read it in a single step. Most of the time the actual CPU facilities will match the instruction set – in the early days, the Motorola 68000, for example, had a 16-bit internal data bus and a 16-bit ALU but a 32-bit instruction set. This was fixed in the 68020. It “merely” meant the 68000 internally needed twice as much time as the 68020 to do anything.
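
    A toy illustration of that chunking in Python (a stand-in for what the hardware does with registers, not actual machine code):

        data = b"GRADIENT"                       # 8 bytes = 64 bits

        for word_bytes in (1, 2, 4, 8):          # 8-, 16-, 32- and 64-bit words
            chunks = [data[i:i + word_bytes] for i in range(0, len(data), word_bytes)]
            print(f"{word_bytes * 8}-bit words: {len(chunks)} loads ->", chunks)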

    Now, in the past the amount of memory addressable has often been larger than what a single register could address. For example, the famous 8086/8088 CPUs had a 20-bit address space while they were 16-bit CPUs. The Pentium Pro was a 32-bit CPU with a 36-bit address bus. Such tricks keep recurring; as the RISC-V instruction set manual drily notes:

    History suggests that whenever it becomes clear that more than 64 bits of address space is needed, architects will repeat intensive debates about alternatives to extending the address space, including segmentation, 96-bit address spaces, and software workarounds, until, finally, flat 128-bit address spaces will be adopted as the simplest and best solution.

    That manual thinks we might need more than a 64-bit address space before 2030. And to be fair, going to 128 bits has not been a big engineering challenge for a long time now; as early as 1999, even desktop Intel CPUs included some 128-bit registers, although for vector processing only. (A computer with 128-bit general-purpose registers existed in the 70s.)

    Let’s review why we needed 64 bits. Say you want to number the records in a database: if you do that with a 32-bit register, you can have four billion records and then it’s game over. Sure, you can store your number across two machine words, but it’ll be slower. There are more than four billion humans, for example, so this was a very real, down-to-earth limit we needed to move past. Also, as per the note above, it’s much nicer to have one big flat address space than all the tricks, which were running out fast: 64 GB was addressable and even run-of-the-mill servers were reaching 16 GB. 64 bits can address 16 billion billion records or bytes of memory, which seems to be fine for now. Notably, current CPUs can only address 57 bits’ worth of physical memory, so a hundredfold increase is still possible compared to currently existing machines.
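
    A sketch of the record-numbering limit (illustrative Python; real databases handle IDs differently, but the arithmetic is the point):

        MAX_32 = 2 ** 32 - 1              # 4,294,967,295 - the last usable 32-bit record ID
        MAX_64 = 2 ** 64 - 1              # "16 billion billion", enough for now

        next_id = (MAX_32 + 1) & MAX_32   # simulate a 32-bit counter incrementing past its limit
        print(next_id)                    # 0 -> the counter wraps and IDs start colliding
        print(MAX_64)                     # 18,446,744,073,709,551,615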

    Going 128-bit would require defining a whole new instruction set, or at least an extension of an existing one. RISC-V has a draft for RV128I, but even they haven’t bothered to fully flesh it out yet. Widening every register, internal bus and processing unit to 128 bits would consume significant silicon area. The memory usage of anything word- or pointer-sized would at least double (note that Apple was still selling 8 GB laptops at top dollar in 2023). So there are significant drawbacks, and so far we have been fine delegating 128-bit computing to the vector processing units in CPUs and GPUs.

    So:

    1. Addressing has tricks aplenty should a future system need to address more than 16 exabytes.
    2. General-purpose computing works fine with 64 bits for now.

  • SomeKindOfSorbetB

    Is address space the only reason we moved away from 32-bit for high-performance computers though? Does 64-bit have any performance advantages over 32-bit apart from that? What about SIMD performance?

  • fediverser

    This post is an automated archive from a submission made on /r/hardware, powered by Fediverser software running on alien.top. Responses to this submission will not be seen by the original author until they claim ownership of their alien.top account. Please consider reaching out to them to let them know about this post and help them migrate to Lemmy.

    Lemmy users: you are still very much encouraged to participate in the discussion. There are still many other subscribers on !hardware@hardware.watch that can benefit from your contribution and join in the conversation.

    Reddit users: you can also join the fediverse right away by visiting https://portal.alien.top. If you are looking for a Reddit alternative made for and by an independent community, check out Fediverser.