I was recently reading Tracy Kidder’s excellent book The Soul of a New Machine.

The author pointed out what a big deal the transition to 32-bit computing was.

However, in the last 20 years, I don’t really remember a big fuss being made about most computers going 64-bit as the de facto standard. Why is this?

  • 3G6A5W338EB · 1 year ago

    > any time you use ram more than 4 GB that is part of the 64 bit change

    A 64-bit CPU is not needed for that. See PAE (Physical Address Extension).

    The actual limiting factor (on x86 specifically) is that a single process’s view of memory is 32-bit, hence 4 GB. This is specific to the design of the CPU; it is entirely possible to get around it with techniques such as overlays or segmentation, as 16-bit x86 demonstrated very well.
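
    As a minimal C sketch of that distinction (my own illustration, not from the original comment): on a 32-bit build, pointers are 4 bytes, so each process’s virtual address space tops out at 4 GiB no matter how much physical RAM PAE lets the kernel reach.

    ```c
    /* Sketch: per-process address space on a 32-bit build.
     * Compile 32-bit, e.g. `gcc -m32 example.c` (hypothetical filename). */
    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* 4 bytes on a 32-bit build, 8 on a 64-bit build */
        printf("pointer size: %zu bytes\n", sizeof(void *));

        /* Range a single pointer can cover: 4 GiB when UINTPTR_MAX is
         * 2^32 - 1, regardless of how PAE extends physical addressing. */
        printf("addressable range: %llu bytes\n",
               (unsigned long long)UINTPTR_MAX + 1);
        return 0;
    }
    ```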

    Then there are processors like the 68000, which offered a 32-bit ISA with direct 32-bit addressing (although only 24 address bits were exposed on the physical bus, until the 68010 got versions with more address lines and the 68020 brought the full 32 bits), despite its 16-bit ALU.

    Similarly, SERV implements a compliant RISC-V core in a bit-serial manner.

    Of course, having 64-bit GPRs specifically is very convenient once you go past 4 GB.

    > or having a file bigger than 4 GB

    Large offsets are possible on 32-bit too. In Debian Linux, for example, it is common on all architectures other than x86.
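
    A minimal sketch of that in C, assuming a POSIX system with large file support (the filename is a placeholder): compiled as a 32-bit binary with _FILE_OFFSET_BITS=64, off_t is 64 bits wide and seeking past 4 GiB just works.

    ```c
    /* Sketch: large file offsets on a 32-bit build via a 64-bit off_t. */
    #define _FILE_OFFSET_BITS 64      /* must come before any #include */
    #define _POSIX_C_SOURCE 200809L   /* for fseeko() */
    #include <stdio.h>
    #include <sys/types.h>

    int main(void) {
        FILE *f = fopen("big.dat", "wb");   /* placeholder filename */
        if (!f) return 1;

        off_t five_gib = (off_t)5 * 1024 * 1024 * 1024;   /* > 4 GiB */
        fseeko(f, five_gib, SEEK_SET);      /* fine: off_t is 64-bit here */
        fputc('x', f);                      /* leaves a sparse ~5 GiB file */

        printf("sizeof(off_t) = %zu bytes\n", sizeof(off_t));
        fclose(f);
        return 0;
    }
    ```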

    > or a disk partition that is bigger than 32 GB

    32-bit block addressing of 512-byte blocks yields 2 TB.
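
    That figure is just arithmetic: 2^32 blocks × 512 bytes per block = 2^41 bytes = 2 TiB (≈ 2.2 TB).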

    And again, software can handle 64-bit values on 32-bit (even 16- and 8-bit) architectures with no problem. It is just slower and more cumbersome, but the compiler abstracts this away. For disk I/O addressing it is a non-issue, as the latency of the disk makes the cost of these calculations irrelevant.
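
    As a quick sketch (mine, not from the comment): ordinary C code using uint64_t builds and runs unchanged on a 32-bit target; the compiler lowers the 64-bit math to multi-word add/carry sequences and typically a small runtime helper for the divide.

    ```c
    /* Sketch: 64-bit values on a 32-bit (or smaller) architecture.
     * Built 32-bit (e.g. `gcc -m32`), the same source still works; the
     * 64-bit operations are emulated with 32-bit instructions. */
    #include <stdint.h>
    #include <inttypes.h>
    #include <stdio.h>

    int main(void) {
        uint64_t disk_bytes = 4096ULL * 1024 * 1024 * 1024;   /* 4 TiB, far beyond 32 bits */
        uint64_t blocks     = disk_bytes / 512;               /* 64-bit divide, done in software */

        printf("512-byte blocks on a 4 TiB disk: %" PRIu64 "\n", blocks);
        return 0;
    }
    ```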