what sort of math can cpus do that gpus can’t
That’s really restricting the problem space. Obviously (?) math-wise it’s the same thing: both of them can flip bits and arrange this bit flipping in ways useful for mathematics.
But CPUs are not just math. They have always had I/O, almost always had interrupts, they have had protection rings for many decades now, and virtualization is now really common (Intel/AMD released support in 2005/2006). These are all supported in the instruction set.
a right angle adapter
Out of specification, not recommended.
( /r/UsbCHardware is leaking )
Could someone explain to me why this connector was necessary?
Given the size of these cards … just add a third eight-pin and call it a day? Most PSUs in the relevant range ship with six anyway…
it’s organic, it’s much cheaper to cram the LEDs in small pens and grow them there than let them graze freely…
2024 will be amazing if MTL and Z5 duke it out in ThinkPads.
Some folks do this to securely wipe a drive.
Then some smartass controller runs a little RLE on it and then comes the surprise.
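If the wipe pass writes zeros (or any repeating pattern), a toy run-length encoder shows how little the controller actually has to store – a rough sketch, not any real controller’s firmware:

    #include <stdint.h>
    #include <stdio.h>

    /* Toy RLE: emit (count, value) byte pairs, runs capped at 255. */
    static size_t rle_size(const uint8_t *buf, size_t len) {
        size_t out = 0;
        for (size_t i = 0; i < len; ) {
            size_t run = 1;
            while (i + run < len && buf[i + run] == buf[i] && run < 255)
                run++;
            out += 2;           /* one count byte + one value byte */
            i += run;
        }
        return out;
    }

    int main(void) {
        static uint8_t wipe[1 << 20];   /* 1 MiB of zeros, like one pass of a zero-fill "wipe" */
        printf("1 MiB of zeros -> %zu bytes after RLE\n", rle_size(wipe, sizeof wipe));
        return 0;
    }

A megabyte of zeros shrinks to a few kilobytes, so most of the flash never gets touched by the “wipe” at all.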
U and T chips rule this chart https://www.cpubenchmark.net/power_performance.html and that’s what this test shows too: the 35W version obliterates everything on the desktop efficiency-wise. Ironically, it’s even more efficient than the E-core-only N100.
It’s quite interesting to see this in light of https://benchmark.chaos.com/v5/vray?index=1&ordering=desc&by=median&my-scores-only=false – on server platforms AMD is way more performant. Perhaps Intel’s core doesn’t scale up as well as it scales down?
I’d say Illumos as the OS was a given, as Bryan Cantrill is one of the founders. While obviously it’s too early to see whether they succeed, I have some trust in these people because of the roster they assembled and also because they managed to convince Intel to the tune of 44 million.
This is old news, this is a bottom-feeder website, and the article has factual inaccuracies (“dual Thunderbolt 4 connections rated for 20Gbps maximum bandwidth” – QNAP doesn’t mention any such limitation and there are no 20Gbps USB4/TB4 controllers in existence).
tl;dr: we could but what for?
Practically all comments here are wrong, although a few do mention why they are wrong: the address space has nothing to do with the bitness of the CPU.
Now, let’s review what’s what.
Let’s say you want to get the word “GRADIENT” from memory into the CPU. Using an 8-bit instruction set you need to loop through eight instructions. A 16-bit instruction set needs four: GR, AD, IE, NT. A 32-bit CPU needs only two, and a 64-bit instruction set can read it in a single step. Most of the time the actual CPU facilities will match the instruction set – in the early days, the Motorola 68000 for example had a 16-bit internal data bus and a 16-bit ALU but a 32-bit instruction set. This was fixed in the 68020. This “merely” meant the 68000 internally needed twice as much time as the 68020 to do anything.
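To make the counting concrete, here is a minimal C sketch (memcpy stands in for the load instructions; the load counts are the point, not the code):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        const char word[8] = {'G','R','A','D','I','E','N','T'};

        uint64_t dst64;                    /* a 64-bit machine: one load gets the whole word */
        memcpy(&dst64, word, 8);

        uint16_t dst16[4];                 /* a 16-bit machine: four loads, GR AD IE NT */
        for (int i = 0; i < 4; i++)
            memcpy(&dst16[i], word + 2 * i, 2);

        printf("8-bit: 8 loads, 16-bit: 4 loads, 32-bit: 2 loads, 64-bit: 1 load\n");
        return 0;
    }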
Now, in the past the amount of memory addressable has often been larger than what a single register could address. For example, the famous 8086/8088 CPUs had a 20-bit address space while they were 16-bit CPUs. The Pentium Pro was a 32-bit CPU with a 36-bit address bus. These tricks are drily noted in the RISC-V instruction set manual.
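The 8086 trick, for the record, was segment:offset addressing – two 16-bit values combined into a 20-bit physical address. A small sketch of the arithmetic:

    #include <stdint.h>
    #include <stdio.h>

    /* Real-mode 8086: physical address = segment * 16 + offset, yielding 20 bits
       of address space out of two 16-bit registers. */
    static uint32_t phys(uint16_t segment, uint16_t offset) {
        return ((uint32_t)segment << 4) + offset;
    }

    int main(void) {
        /* FFFF:0000, the 8086 reset vector, maps to physical 0xFFFF0;
           the same byte is also reachable as F000:FFF0. */
        printf("0x%05X\n", (unsigned)phys(0xFFFF, 0x0000));
        printf("0x%05X\n", (unsigned)phys(0xF000, 0xFFF0));
        return 0;
    }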
That manual thinks we might need more than a 64-bit address space before 2030. And to be fair, going to 128 bit has not been a big engineering challenge for a long time now; after all, as early as 1999 even desktop Intel CPUs included some 128-bit registers, although for vector processing only. (A computer with 128-bit general processor registers existed in the 70s.)
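Those 1999 registers are the SSE XMM registers; a tiny sketch of what “128 bit, but vectors only” looks like (assumes an x86 compiler with SSE):

    #include <stdio.h>
    #include <xmmintrin.h>   /* SSE intrinsics: 128-bit XMM registers (Pentium III, 1999) */

    int main(void) {
        /* One 128-bit register holds four 32-bit floats; one instruction adds all four. */
        __m128 a = _mm_set_ps(4.0f, 3.0f, 2.0f, 1.0f);
        __m128 b = _mm_set_ps(40.0f, 30.0f, 20.0f, 10.0f);
        __m128 sum = _mm_add_ps(a, b);

        float out[4];
        _mm_storeu_ps(out, sum);
        printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);   /* 11 22 33 44 */
        return 0;
    }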
Let’s review why we needed 64 bit! Say you want to number your records in a database: if you do that with a 32-bit register then you can have four billion records and it’s game over. Sure, you can store your number in two machine words, but it’ll be slower. As an example, there are more than four billion humans, so this was a very real, down-to-earth limit which we needed to move on from. Also, as per the note above, it’s much nicer to have a big single address space than all the tricks, which were running out fast: only 64GB was addressable that way and even run-of-the-mill servers were able to reach 16GB. 64 bits can address 16 billion billion records or bytes of memory, which seems to be fine for now. Notably, current CPUs only implement 57 bits’ worth of address space, so a hundredfold increase is still possible compared to currently existing machines.
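The record-numbering limit in a couple of lines of C – unsigned overflow wraps, so the “next id” silently restarts at zero:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint32_t id32 = UINT32_MAX;   /* 4,294,967,295: the last possible 32-bit record id */
        uint64_t id64 = UINT32_MAX;

        printf("next 32-bit id: %u\n", ++id32);                          /* wraps to 0 */
        printf("next 64-bit id: %llu\n", (unsigned long long)++id64);    /* 4294967296 */
        return 0;
    }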
Going 128 bit would require defining a whole new instruction set, or at least an extension of an existing one. RISC-V has a draft for RV128I but even they haven’t bothered fully fleshing it out yet. Widening each register, internal bus and processing unit to 128 bits would consume significant silicon area. The memory usage of everything would at least double (note Apple was still selling 8GB laptops at top dollar in 2023). So there are significant drawbacks, and so far we have been fine with delegating 128-bit computing to the vector processing units in CPUs and GPUs.
So: we could, but what for?