Comparisons:
M3 base:
CPU: 20% faster than M2 base
GPU: 20% faster than M2 base
M3 Pro:
CPU: Undisclosed performance vs M2 Pro, 20% faster than M1 Pro
GPU: 10% faster than M2 Pro
M3 Max:
CPU: 50% faster than M2 Max
GPU: 20% faster than M2 Max
The biggest improvements seem to be on the M3 Max. The whole M3 family gets an upgraded screen brightness (from 500 to 600 nits), hardware-accelerated ray tracing, and hardware-accelerated mesh shading.
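One thing worth keeping straight when reading those figures: "X% faster" is a throughput multiplier, not a reduction in runtime. A quick sketch (the numbers are just the Apple-quoted gen-over-gen figures from the list above):

```python
# Illustration only: interpreting "X% faster" claims as ratios.
def speedup(pct_faster: float) -> float:
    """Convert 'X% faster' into a throughput multiplier."""
    return 1 + pct_faster / 100

def time_fraction(pct_faster: float) -> float:
    """Fraction of the old runtime a fixed workload now takes."""
    return 1 / speedup(pct_faster)

# M3 Max CPU: "50% faster than M2 Max" -> 1.5x throughput,
# i.e. a fixed job finishes in roughly 2/3 of the old time.
print(f"{speedup(50):.2f}x throughput, {time_fraction(50):.0%} of the old runtime")
```

So the M3 Max's 50% CPU claim cuts a fixed job's runtime by about a third, not by half.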
/u/uzzi38 any indication whether the E-cores are less useless at all, given the renewed focus on gaming and not just pure background work?
What are the overall cache structure changes, especially in the GPU? Enough to compensate for the reduction? Things like cache structure or delta compression can definitely make a difference; we've seen memory performance ratios soar since Kepler. But it definitely seems more tiered than M1/M2.
Obviously this all exists in the shadow of the N3 trainwreck (N3B vs N3E and the like). Any overall picture of the core structure changes here?
It's all just so much less interesting than an ARM HEDT workstation would be right now.
Apple’s E-cores were never useless; they’re easily best in class. They have the best perf/W in the industry by a country mile, the things sip power, and while they’re not the fastest little cores, they’re still extremely potent. I don’t see Apple changing away from that core structure any time soon.
As for the GPU, I don’t know off the top of my head, but the IP is likely similar to the A17’s. I wouldn’t expect much: the main advantage is the addition of hardware RT support, but from what we’ve seen, the RT capabilities aren’t hugely improved over running RT on the shaders. Definitely going to be a more modest improvement than prior iterations.
Didn’t they say there was a big leap in GPU? 20% is tiny.
I am surprised.
Is this data really accurate?