The plural of "anecdote" isn't "data", but I'll add another anecdote to the pile anyway.
At home I have a 16-disk array consisting of 8x 750GB, 4x 1TB and 4x 1.5TB SATA drives configured in mostly-RAID10 using ZFS. I can't say I've had any more issues with the larger drives than the smaller ones - all 16 seem to be fully functional. However, once I exceeded the PSU's maximum 12V current output, the 1.5TB units were the first to drop offline. I'm not sure why - maybe they need more juice, or are just more sensitive to power fluctuations. That said, after installing a beefier PSU they came back without any permanent damage. To be fair, this server only gets used by 5-7 people and doesn't get hit all that hard.
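If you want to avoid the same surprise, the back-of-the-envelope 12V budget check is simple. Here's a minimal sketch, assuming roughly 2A of 12V spin-up draw per drive and a hypothetical 40A rail - substitute the figures from your drives' datasheets and your PSU's label:

```python
# Rough 12V power-budget check for a multi-drive array.
# ASSUMPTIONS: ~2.0 A per-drive spin-up draw on the 12V rail and a
# hypothetical 40 A combined 12V rating -- replace with your own numbers.

DRIVES = 16                 # total SATA drives in the array
SPINUP_AMPS_12V = 2.0       # assumed worst-case 12V draw per drive at spin-up
PSU_12V_AMPS = 40.0         # assumed combined 12V rating of the PSU

peak_amps = DRIVES * SPINUP_AMPS_12V
headroom = PSU_12V_AMPS - peak_amps

print(f"Estimated peak 12V draw: {peak_amps:.1f} A")
print(f"PSU 12V rating:          {PSU_12V_AMPS:.1f} A")
print(f"Headroom:                {headroom:+.1f} A")

if headroom < 0:
    print("Over budget: expect drives to drop offline under load, "
          "or look into staggered spin-up.")
```

Spin-up is the worst case; steady-state draw is much lower, which is why a marginal PSU can seem fine until the box reboots or the array gets busy.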
As for how to set it up, I'd also advise against RAID5 unless the data is read-mostly, and even then I'd prefer RAID6 over RAID5+hotspare. (RAID5 without a hotspare is, IMHO, too risky unless there are /very/ frequent backups.) Space-wise, RAID6 will get you 10.5TB, which you'd need 14 disks to match with RAID10. As much as I prefer RAID10 for both performance and reliability reasons, RAID6 might make more sense in your case if you can't afford/use 14 disks.
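To show where those numbers come from (I'm assuming nine 1.5TB drives, since that's what makes the 10.5TB figure work out): RAID6 gives you n-2 drives' worth of usable space, RAID10 gives you n/2. A quick sketch:

```python
# Usable capacity for common RAID levels, assuming identical drives.
# The drive size (1.5 TB) and the drive counts below are illustrative
# assumptions, not anyone's actual configuration.

def usable_tb(level: str, drives: int, size_tb: float) -> float:
    """Return usable capacity in TB for a given RAID level."""
    if level == "raid5":
        return (drives - 1) * size_tb   # one drive's worth of parity
    if level == "raid6":
        return (drives - 2) * size_tb   # two drives' worth of parity
    if level == "raid10":
        return (drives // 2) * size_tb  # mirrored pairs, striped
    raise ValueError(f"unknown RAID level: {level}")

print(usable_tb("raid6", 9, 1.5))    # 10.5 TB from 9 drives
print(usable_tb("raid10", 14, 1.5))  # matching 10.5 TB needs 14 drives
print(usable_tb("raid10", 8, 1.5))   # only 6.0 TB if you mirror 8 of them
```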
It may be too late now, but I've heard it suggested that your drives should come from a mix of manufacturers, to reduce the odds of multiple drives puking simultaneously due to the same manufacturing fault/firmware bug/etc. Honestly I don't know how much this helps, but intuitively it makes sense.
As for filesystems, that's always a question. Recently I've been partial to ZFS, but if you're on Linux and can't use FreeBSD or Solaris, XFS is a very good choice. ext4 or JFS might work well too, but I'd have to look at some benchmarks and such to be sure. (Also, ext4 is the least well-tested of the lot. It does seem to deserve the 'stable' label, but that's still something to consider.)
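For what it's worth, the kind of quick-and-dirty check I'd start with before trusting published numbers is just timing a big sequential write on the filesystem in question (real tools like bonnie++ or fio are better, of course). The mount point below is a placeholder:

```python
# Crude sequential-write timing, just to get a first impression of a
# filesystem. The path is a hypothetical mount point for the fs under test.

import os
import time

TESTFILE = "/mnt/testfs/throughput.tmp"   # placeholder: point at the fs to test
BLOCK = b"\0" * (1 << 20)                 # 1 MiB of zeros
BLOCKS = 1024                             # write ~1 GiB total

start = time.time()
with open(TESTFILE, "wb") as f:
    for _ in range(BLOCKS):
        f.write(BLOCK)
    f.flush()
    os.fsync(f.fileno())                  # make sure data actually hits disk
elapsed = time.time() - start

os.remove(TESTFILE)
print(f"~{BLOCKS / elapsed:.0f} MiB/s sequential write")
```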