I think one of us (and some of the other comments linking to similar benchmarks) is misunderstanding what a sequential-write test actually shows.
My question is: if a drive is, say, 90% full, how much slower is it than at 0% full?
The linked test starts with an empty drive and writes data for 60 seconds, which is not enough to fill the drive. Taking the WD numbers as an example, it sustains ~6000 MB/s for ~35 seconds before the speed plummets. That's roughly 210 GB written (~6000 MB/s × 35 s) on a 1000 GB drive, which matches their stated methodology of filling 20% of the drive. The speed drop there is the SLC cache filling up and forcing the drive to write directly to the TLC flash.
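For anyone picturing the methodology, that kind of test is roughly the following. To be clear, this is not their actual tooling (real benchmarks use something like fio with direct I/O, often on a raw device), the path and sizes are made up, and fsync per chunk is only an approximation of the drive's real ingest rate; it's just to show the shape of the test:

```python
# Rough sketch of a "write until the SLC cache falls over" test.
# Assumptions: Linux, a scratch file on the drive under test, incompressible data.
import os, time

TARGET = "/mnt/testdrive/scratch.bin"   # hypothetical path on the drive under test
CHUNK = 64 * 1024 * 1024                # 64 MiB sequential writes
DURATION = 60                           # seconds, same as the linked test

buf = os.urandom(CHUNK)                 # incompressible, so the controller can't cheat
fd = os.open(TARGET, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)

start = time.monotonic()
written = last_written = 0
last_time = start
next_tick = 1.0
while time.monotonic() - start < DURATION:
    os.write(fd, buf)                   # assumes the full chunk lands, true for regular files
    os.fsync(fd)                        # force it to the drive, not the page cache
    written += CHUNK
    elapsed = time.monotonic() - start
    if elapsed >= next_tick:
        now = time.monotonic()
        rate = (written - last_written) / (now - last_time) / 1e6
        print(f"{elapsed:5.1f}s  {rate:8.0f} MB/s")
        last_written, last_time = written, now
        next_tick += 1.0

os.close(fd)
```

The cache cliff shows up as the per-second rate suddenly dropping from ~6000 MB/s to whatever the TLC flash can sustain.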
In my question I'm assuming that when the drive is 90% full and idle, the cache is sitting empty, but I could be wrong about that. If it is, then once I start writing, the cache should work as normal: data lands there temporarily and gets written out to the flash later. The question is how much slower that whole process is when the drive is full but the cache isn't saturated. I don't think the test answers that.
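The test I'm actually asking about would look more like this (again just a sketch with made-up paths and sizes; the 30-minute idle and the 50 GiB burst are my guesses for "long enough for the controller to flush the cache" and "comfortably smaller than the cache"):

```python
# Sketch of the experiment I have in mind: pre-fill the drive, let it idle,
# then time a burst smaller than the SLC cache. Names and sizes are illustrative.
import os, time

MOUNT = "/mnt/testdrive"            # hypothetical mount of the drive under test
CHUNK = 64 * 1024 * 1024
BURST = 50 * 1024**3                # 50 GiB: well under a ~400 GB cache

def write_file(path: str, total_bytes: int) -> float:
    """Sequentially write incompressible data, return end-to-end MB/s."""
    buf = os.urandom(CHUNK)
    start = time.monotonic()
    with open(path, "wb") as f:
        written = 0
        while written < total_bytes:
            f.write(buf)
            written += CHUNK
        f.flush()
        os.fsync(f.fileno())        # don't stop the clock until it's actually on the drive
    return total_bytes / (time.monotonic() - start) / 1e6

# Step 1: pre-fill to ~90% of a 4 TB drive (this is the slow part).
write_file(f"{MOUNT}/filler.bin", int(0.90 * 4_000_000_000_000))

# Step 2: idle so the controller can fold the cached data back into TLC.
time.sleep(30 * 60)

# Step 3: the actual measurement, a burst that should fit in whatever cache remains.
print(f"burst at ~90% full: {write_file(f'{MOUNT}/burst.bin', BURST):.0f} MB/s")
```

Run the same burst at 0%, 50% and 90% fill and compare; that's the number the 60-second test doesn't give you.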
The 990 Pro 4TB has two 16Tb TLC chips (2TB each) and a 442GB SLC cache.
Does that mean the SLC cache is included in that 4TB, or is it separate? If it were separate, that would imply the cache is still available even when the TLC chips are completely full, so the cache write speed wouldn't drop on a full drive; the data would just get flushed to the TLC flash afterwards.
Or is that just how overprovisioning works? The 990 Pro has 370GB of overprovisioning within the TLC flash and a 442GB SLC cache; together they roughly cancel out against the 4TB total capacity, which I guess would explain why the cache runs out when the drive is full.
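If that 442GB is a dynamic cache, i.e. free TLC cells temporarily written at 1 bit per cell instead of 3 (that's my assumption, I haven't confirmed that's how the 990 Pro actually does it), then it isn't a separate pool at all and it has to shrink as the drive fills, roughly like this:

```python
# Back-of-envelope only, under the dynamic pseudo-SLC assumption described above.
TLC_BITS_PER_CELL = 3
MAX_CACHE_GB = 442        # empty-drive figure quoted above
USER_CAPACITY_GB = 4000

def available_cache_gb(fill_fraction: float) -> float:
    """Rough SLC cache available at a given fill level under the dynamic-cache model."""
    free_tlc_gb = USER_CAPACITY_GB * (1 - fill_fraction)
    # 1 GB held in SLC mode occupies cells that could otherwise store 3 GB of TLC data
    return min(MAX_CACHE_GB, free_tlc_gb / TLC_BITS_PER_CELL)

for fill in (0.0, 0.5, 0.9, 1.0):
    print(f"{fill:.0%} full -> ~{available_cache_gb(fill):.0f} GB of cache")
# 0% -> 442, 50% -> 442, 90% -> ~133, 100% -> ~0
```

Which would fit the "cache runs out when the drive is full" picture above, while still leaving some cache at 90% full.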