skywriter wrote: two things:
1) Enterprise SSDs keep as much as 1/3 of flash capacity as a pre-erase buffer to sustain performance. You can buy FC and SAS/SATA drives in the several-hundred-GB range that keep up on write bandwidth, deliver sub-millisecond response times, and as much as 30K IOPS (read). SCSI SSDs are likely to be well engineered too. The little PCI-E ones are dinky.
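(To put that 1/3 in concrete terms, with invented numbers: a drive built from 384GB of raw flash that reserves a third of it for pre-erased blocks would expose roughly 256GB to the host.)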
2) XFS IO traffic has a lot of 4K IOs for data, and small 512-byte writes for metadata. There are small writes for logging as well. I don't remember the numbers off the top of my head, but it can still be substantial. We just put a fix in for XFS on Linux fragmenting large IOs up into smaller ones. What a hack it's become.
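For anyone wanting to see that 4K pattern from the application side, here's a minimal sketch of an aligned direct IO write on Linux; the file path and fill byte are placeholders, the alignment rules are the real O_DIRECT ones:

#define _GNU_SOURCE       /* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define BLK 4096          /* the 4K data IO size mentioned above */

int main(void)
{
    void *buf;
    int fd;

    /* O_DIRECT wants the buffer and file offset aligned to the
     * device's logical sector size; 4K alignment is safe everywhere. */
    if (posix_memalign(&buf, BLK, BLK))
        return 1;
    memset(buf, 0xab, BLK);

    /* "testfile" is just a placeholder path on an XFS mount */
    fd = open("testfile", O_WRONLY | O_CREAT | O_DIRECT, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* one 4K write at a 4K-aligned offset, bypassing the page cache */
    if (pwrite(fd, buf, BLK, 0) != BLK)
        perror("pwrite");

    close(fd);
    free(buf);
    return 0;
}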
The internal architecture of the drive has a heavy influence on read vs. write performance. Multiple flash channels let you keep random IO rates high as long as there are no writes. Once a write occurs, it picks a channel that will then be busy for 3ms during an erase cycle; your read could be stuck behind that erase, and that channel can't be used for anything else. With the little cheap ones that only have a couple of channels, you're unlikely to see this effect.
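To make the channel contention concrete, here's a toy model (everything except the 3ms erase is an invented number) of reads landing on random channels while an occasional write ties one up for an erase:

#include <stdio.h>
#include <stdlib.h>

#define CHANNELS   8        /* high-end drive; the cheap ones have ~2 */
#define ERASE_US   3000     /* the 3ms erase cycle from above */
#define IO_GAP_US  100      /* made-up inter-arrival time of IOs */
#define NIO        100000

int main(void)
{
    long busy_until[CHANNELS] = {0};   /* when each channel's erase ends */
    long now = 0, waited = 0, nreads = 0;

    srand(1);
    for (int i = 0; i < NIO; i++) {
        int c = rand() % CHANNELS;     /* IO lands on a random channel */

        if (i % 100 == 0) {
            /* a write: channel c is now erasing for 3ms */
            busy_until[c] = now + ERASE_US;
        } else {
            /* a read: if its channel is mid-erase, it waits */
            if (busy_until[c] > now)
                waited += busy_until[c] - now;
            nreads++;
        }
        now += IO_GAP_US;
    }
    printf("avg extra read latency: %.1f us over %ld reads\n",
           (double)waited / nreads, nreads);
    return 0;
}

Crude as it is, varying CHANNELS shows how much the channel count changes the picture.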
The IO working set of a heavy DB application can easily be several hundred GB; that couple of GB of write buffer will only shave the very top off your IO skew. I've seen applications with several-TB working sets over a few short hours. On mainframes it's much worse.
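(Rough arithmetic with invented numbers at those scales: a 2GB buffer in front of a 300GB working set covers well under 1% of the blocks, so unless the access skew is extreme, nearly all IO still goes to flash.)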
On your point 1: if the SSD does not know the FS (most don't), how can it make use of the "spare" capacity?
/michael
--
No Microsoft product was used in any way to write or send this text.
If you use a Microsoft product to read it, you're doing so at your own
risk.