Cool, I should check jEdit.
Does the latest version work fine with IRIX's legacy Java?
Code:
jason$ mkfile 6g testfile ; diskperf -W -D -r 4k -m 4m testfile
#---------------------------------------------------------
# Disk Performance Test Results Generated By Diskperf V1.2
#
# Test name : Unspecified
# Test date : Sat Oct 6 16:54:54 2012
# Test machine : IRIX64 hydra 6.5 07202013 IP35
# Test type : XFS data subvolume
# Test path : testfile
# Request sizes : min=4096 max=4194304
# Parameters : direct=1 time=10 scale=1.000 delay=0.000
# XFS file size : 6442450944 bytes
#---------------------------------------------------------
# req_size fwd_wt fwd_rd bwd_wt bwd_rd rnd_wt rnd_rd
# (bytes) (MB/s) (MB/s) (MB/s) (MB/s) (MB/s) (MB/s)
#---------------------------------------------------------
4096 19.72 23.50 15.11 11.12 6.04 1.35
8192 37.85 44.73 28.20 14.81 12.25 2.68
16384 65.83 75.17 51.67 16.56 23.96 5.23
32768 107.45 115.58 82.97 10.81 44.62 10.07
65536 151.64 147.51 126.62 21.80 80.12 19.14
131072 239.40 274.34 207.96 35.51 80.77 31.53
262144 278.30 391.11 119.72 66.84 132.85 55.84
524288 321.62 391.24 196.07 105.88 200.64 96.11
1048576 350.83 391.28 201.78 158.65 258.66 157.86
2097152 364.98 390.96 126.92 220.66 122.15 217.32
4194304 373.96 391.04 197.45 284.72 195.51 277.28
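For anyone who wants to repeat these runs on their own volumes, the recipe is just mkfile + diskperf with direct I/O over 4 KB-4 MB requests. A rough sh sketch for testing several mount points in one go (the paths and output names are only placeholders):
Code:
#!/bin/sh
# Rough sketch: rerun the same diskperf test on a few mounted volumes
# so different controller/slot setups can be compared side by side.
# /stripe and /mnt are placeholder mount points -- adjust to taste.
for fs in /stripe /mnt
do
    mkfile 4g $fs/testfile
    sync
    diskperf -W -D -r 4k -m 4m $fs/testfile > `basename $fs`.results
    rm $fs/testfile
done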
mapesdhs wrote: What controller setup was this using?
Code:
Integral SCSI controller 6: Version SAS/SATA LS1068
Disk drive: unit 0 on SCSI controller 6 (XVM Local Disk) (primary path)
Disk drive: unit 1 on SCSI controller 6 (XVM Local Disk) (primary path)
Integral SCSI controller 3: Version SAS/SATA LS1068
Disk drive: unit 0 on SCSI controller 3 (XVM Local Disk) (primary path)
Disk drive: unit 1 on SCSI controller 3 (XVM Local Disk) (primary path)
mapesdhs wrote: And in which slots?
Code:
Bus Slot Stat Power Mode/Speed
[...snip...]
3 2 0x00 0c 7.5W PCIX 133MHz
4 1 0x00 0c 7.5W PCIX 133MHz
Code:
jason$ mkfile 4g testfile ; sync ; diskperf -W -D -r 4k -m 4m testfile
#---------------------------------------------------------
# Disk Performance Test Results Generated By Diskperf V1.2
#
# Test name : Unspecified
# Test date : Sun Oct 7 18:56:56 2012
# Test machine : IRIX64 hydra 6.5 07202013 IP35
# Test type : XFS data subvolume
# Test path : testfile
# Request sizes : min=4096 max=4194304
# Parameters : direct=1 time=10 scale=1.000 delay=0.000
# XFS file size : 4294967296 bytes
#---------------------------------------------------------
# req_size fwd_wt fwd_rd bwd_wt bwd_rd rnd_wt rnd_rd
# (bytes) (MB/s) (MB/s) (MB/s) (MB/s) (MB/s) (MB/s)
#---------------------------------------------------------
4096 26.07 25.16 18.97 14.25 13.54 1.47
8192 47.78 46.60 33.74 20.20 26.53 2.87
16384 82.62 78.85 61.31 25.04 48.55 5.64
32768 124.91 122.39 96.80 20.33 86.26 10.93
65536 166.90 154.36 135.94 20.90 133.95 20.19
131072 331.76 308.49 268.67 34.62 176.46 33.89
262144 475.52 460.05 430.89 78.23 177.99 58.86
524288 539.13 515.28 197.42 126.35 286.29 106.31
1048576 622.51 600.20 398.52 178.15 422.77 184.51
2097152 692.61 660.22 396.16 287.42 538.49 273.79
4194304 727.27 685.00 244.63 387.93 243.50 408.68
What model cards are they? 3442X-R? 3800X?
Any tips on how you set up the RAID in XVM?
Code:
xvm:local> label unlabeled/*
xvm:local> slice -all phys
</dev/lxvm/dks3d0s0> slice/dks3d0s0
</dev/lxvm/dks3d1s0> slice/dks3d1s0
</dev/lxvm/dks6d0s0> slice/dks6d0s0
</dev/lxvm/dks6d1s0> slice/dks6d1s0
xvm:local> stripe -volname stripe0 slice/*
</dev/lxvm/stripe0> stripe/stripe2
~# mkfs_xfs /dev/lxvm/stripe0
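From memory, so treat it as a sketch rather than gospel: once the stripe exists, the only steps left are a sanity check and the mount (/stripe below is just an example mount point):
Code:
xvm:local> show vol/stripe0                 # confirm the volume assembled
~# mkdir /stripe                            # example mount point
~# mount -t xfs /dev/lxvm/stripe0 /stripe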
Code:
#---------------------------------------------------------
# Disk Performance Test Results Generated By Diskperf V1.2
#
# Test name : Unspecified
# Test date : Wed Oct 10 10:03:17 2012
# Test machine : IRIX64 hydra 6.5 07202013 IP35
# Test type : XFS data subvolume
# Test path : testfile
# Request sizes : min=4096 max=4194304
# Parameters : direct=1 time=10 scale=1.000 delay=0.000
# XFS file size : 4294967296 bytes
#---------------------------------------------------------
# req_size fwd_wt fwd_rd bwd_wt bwd_rd rnd_wt rnd_rd
# (bytes) (MB/s) (MB/s) (MB/s) (MB/s) (MB/s) (MB/s)
#---------------------------------------------------------
4096 30.68 34.42 21.38 14.92 3.14 1.36
8192 56.97 62.69 38.80 23.89 6.33 2.69
16384 96.75 101.59 73.10 33.13 12.31 5.26
32768 142.16 144.42 106.07 40.37 23.22 10.39
65536 184.89 169.50 155.02 37.96 41.34 19.42
131072 201.84 202.07 183.06 38.96 67.83 34.29
262144 201.65 202.13 179.71 96.00 104.86 58.67
524288 202.00 201.83 95.59 97.41 93.77 91.01
1048576 201.98 201.89 153.65 153.52 127.25 124.77
2097152 201.89 201.79 154.48 154.61 156.44 152.44
4194304 201.77 201.84 180.16 178.27 175.27 171.43
mapesdhs wrote: True, the Fuel could hold 8. I don't like the idea, though, of them always being powered on; hence the external case.
mapesdhs wrote: For the Fuel, the Startech unit is basically it. The only other often-used SGI I have atm is my gateway O2, and that just has a 2nd disk for cloning every now & then (very little changes on it, just running ipfilter). All my other 'for keeps' SGIs are not really set up properly yet (original Effect O2 with all manuals, general R7K/600 O2, quad-1gig Tezro, Octane2).
Code:
root@plum:/mnt# diskperf -W -D -r 4k -m 4m testfile
#---------------------------------------------------------
# Disk Performance Test Results Generated By Diskperf V1.2
#
# Test name : Unspecified
# Test date : Thu Oct 18 15:44:29 2012
# Test machine : IRIX64 plum 6.5 07202013 IP35
# Test type : XFS data subvolume
# Test path : testfile
# Request sizes : min=4096 max=4194304
# Parameters : direct=1 time=10 scale=1.000 delay=0.000
# XFS file size : 4294967296 bytes
#---------------------------------------------------------
# req_size fwd_wt fwd_rd bwd_wt bwd_rd rnd_wt rnd_rd
# (bytes) (MB/s) (MB/s) (MB/s) (MB/s) (MB/s) (MB/s)
#---------------------------------------------------------
4096 8.31 14.80 8.85 14.75 5.03 0.92
8192 15.24 26.80 15.12 25.85 12.91 1.78
16384 24.20 41.61 23.51 37.39 22.47 3.50
32768 32.82 52.38 33.18 44.34 23.89 5.86
65536 42.26 59.25 42.64 21.11 31.91 9.75
131072 44.83 63.53 46.15 25.68 32.12 15.08
262144 47.30 69.03 46.97 32.19 28.54 20.39
524288 46.78 69.99 39.64 41.26 37.37 22.61
1048576 45.64 70.74 41.75 43.68 39.97 34.29
2097152 41.44 74.54 42.69 49.77 39.57 40.78
4194304 45.52 72.21 38.86 59.10 40.88 48.07
> This is a NetApp SAN filer and not a JBOD or an FC RAID shelf, right?
> I have an SGI TP9100 as well. It's loaded with a grab bag of old disks, some even 1Gb/s models (Cheetah 73LP). But with some striping I got this: [...]
> My experience with the QL2342 is quite good. It will deliver the expected performance. My experience with the LS driver (SAS/SATA, U320 SCSI and 4Gb FC) is more spotty: [...]