I've been testing out a new [home, not work] fileserver put together from spare parts, and after a couple of weeks of constant battering it's passed burn-in and is running nicely. It's nothing special - just a basic x86 box with an Adaptec 2400 RAID card and 4 x 250GB disks in RAID 5. The XFS filesystem was built with a plain 'mkfs.xfs /dev/whatever' command, and although performance is already better than the LVM/ext3 system it's replacing, I'd like to optimize it before this box is promoted from testing to production and I start migrating all my data to it.
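For reference, this is how it was created and how I've been checking the resulting geometry afterwards (the device and mount point names are just placeholders, same as above):

  mkfs.xfs /dev/whatever        # plain defaults, no su/sw given
  xfs_info /mnt/whatever        # shows the sunit/swidth mkfs picked - 0 here, i.e. no stripe alignment hints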
The man pages, the SGI site, Google, etc. have loads of information which I've been wading through: sunit/su, swidth/sw, log and inode options, and so on for the filesystem, and the RAID card obviously supports various stripe widths, block sizes, etc. as well. To save me spending the rest of my life rebuilding arrays and running I/O benchmarks, any advice would be really appreciated - I already spend far too much time doing stuff to computers rather than with them.
The machine runs Debian and is a general-purpose file server, which makes things awkward - it will handle everything from rotating rsync backups of other machines to storing MP3s and having 200GB+ forensic disk images dumped on it, so I can't really optimize in favour of small or large files, but it doesn't have to do anything clever like real-time. Nonetheless, I'm sure I can get better performance than just using the default options.
So, can any XFS experts point me in the right direction? What values should I be setting for the physical RAID stripe/block size and the XFS filesystem variables? Should the values on the hardware and the filesystem be aligned exactly? I'm more used to ext2/3 filesystems and '-R stride=x'. Although I'm already happy with the default configuration, it would be nice to squeeze out whatever extra overhead I can before I build it once and leave it to run for a couple of years.
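To make the question concrete, here's the sort of thing I think I'm supposed to be running - assuming the card is set to a 64k stripe size (that's only an example value, I'd set it to whatever the controller actually uses), and with 4 disks in RAID 5 giving 3 data disks per stripe:

  mkfs.xfs -d su=64k,sw=3 /dev/whatever
  # or the equivalent in 512-byte sectors: sunit = 64k/512 = 128, swidth = sunit * 3 = 384
  mkfs.xfs -d sunit=128,swidth=384 /dev/whatever

Is that the right approach, or am I missing something?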
_________________
hardware/software agnostic sadmin