H200? There's a new accelerator?
And that's on a die just slightly bigger than the 4090's. Unless they increased the size compared to the H100?
(With a massive batch size*)
It would be better if they provided single-batch numbers for normal inference on FP8.
People look at this and think it's astonishing, but they'll compare it against single-batch performance, since that's all they've seen before.
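To put rough numbers on that confusion (every value below is made up for illustration, not a published spec): the headline figure is aggregate throughput across the whole batch, so a single user sees roughly that number divided by the concurrent streams.

```python
# Illustrative sketch only; both values are assumptions, not vendor figures.
aggregate_tps = 10_000   # hypothetical batched headline throughput (tokens/s)
batch_size = 128         # hypothetical "massive" batch size

# Under continuous batching, streams decode roughly in lockstep, so each
# user's speed is approximately the aggregate split across the streams
# (ignoring prefill time and scheduling overhead).
per_user_tps = aggregate_tps / batch_size
print(f"each user sees ~{per_user_tps:.0f} t/s, not {aggregate_tps} t/s")
```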
A 70B model with 2048 context and a 128-token reply is about 303 t/s.
That sounds more reasonable, assuming they aren't quantized. The batch size is just a theoretical batch, I think.
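Quick arithmetic on that quoted figure, assuming the 303 t/s is a single decode stream (if it's aggregate over a batch, divide by the batch size first):

```python
# Back-of-envelope check on the number quoted above.
tps = 303            # quoted throughput: 70B, 2048 context, 128-token reply
reply_tokens = 128

# At a single-stream 303 t/s, a full reply finishes in under half a second.
seconds_per_reply = reply_tokens / tps
print(f"~{seconds_per_reply:.2f} s per {reply_tokens}-token reply at {tps} t/s")
```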
my waifu would be super happy if she could speak to me faster
How much do you want for your old H100? - me to AI devs