The collected works of mia - Page 3

Cool, I should check jedit.
Does the latest version work fine with IRIX's legacy Java?

_________________
:Onyx2:
vishnu wrote:
But if you model something using Maya 6.5 on Irix, you can open the model with Maya 2012 on Windows, yes? Then you can render it with the Windows version of Mental Ray, yes/no?


In which respects is the new Mental Ray better? Do you think a difference in rendering (besides maybe CPU time and I/O) is significant as far as the final product is concerned?

I've never really done any rendering except with POV-Ray, but I'd like to take a look at this.

_________________
:Onyx2:
zmttoxics wrote:
Yeah, it depends on the job at hand of course, but the T1 based servers have a very specific niche where they do "OK". haha


Making Larry richer so he can buy more islands.

_________________
:Onyx2:
THANK YOU!

_________________
:Onyx2:
Okay, I'll start learning German.

_________________
:Onyx2:
Has anyone played with vizserver on "cheap" (sub $2k) sun workstations? Is it usable?

_________________
:Onyx2:
Winnili wrote:
mia wrote:
Has anyone played with vizserver on "cheap" (sub $2k) sun workstations? Is it usable?

My max'ed out Ultra 10 would certainly have adequate hardware-accelerated graphics processing capability ― with an Elite 3D-M6 (UPA) board installed ― for anything requiring or recommending OpenGL. If I had the means to try it (i.e. having the software at my disposal), I would definitely try it. (I haven't used this system much in a while, I'm actually thinking of parting with it. Maybe some interesting software might persuade me otherwise.)


Hm, I'm willing to give this a try, are GL libraries available in OpenSolaris?

_________________
:Onyx2:
Talking about POV-Ray: has anyone managed to compile 3.0-RC6 on IRIX (with or without SMP and/or display options, it doesn't matter)?

_________________
:Onyx2:
vishnu wrote:
mia wrote:
Talking about POV-Ray: has anyone managed to compile 3.0-RC6 on IRIX (with or without SMP and/or display options, it doesn't matter)?


Actually, the current non-Windows version of povray is 3.6.1, and it's in nekoware-current... :mrgreen:

What we should do is port Erlang to Irix so we can use Wings: http://www.wings3d.com/


The beta (I really meant 3.7 earlier, not 3.0) has a lot of great features, which make spectral rendering possible.

http://www.lilysoft.org/CGI/SR/Spectral%20Render.htm

Beautiful.

_________________
:Onyx2:
Not bad, my turn.

Code: Select all

jason$ mkfile 6g testfile ; diskperf -W -D -r 4k -m 4m testfile
#---------------------------------------------------------
# Disk Performance Test Results Generated By Diskperf V1.2
#
# Test name     : Unspecified
# Test date     : Sat Oct  6 16:54:54 2012
# Test machine  : IRIX64 hydra 6.5 07202013 IP35
# Test type     : XFS data subvolume
# Test path     : testfile
# Request sizes : min=4096 max=4194304
# Parameters    : direct=1 time=10 scale=1.000 delay=0.000
# XFS file size : 6442450944 bytes
#---------------------------------------------------------
# req_size  fwd_wt  fwd_rd  bwd_wt  bwd_rd  rnd_wt  rnd_rd
#  (bytes)  (MB/s)  (MB/s)  (MB/s)  (MB/s)  (MB/s)  (MB/s)
#---------------------------------------------------------
4096   19.72   23.50   15.11   11.12    6.04    1.35
8192   37.85   44.73   28.20   14.81   12.25    2.68
16384   65.83   75.17   51.67   16.56   23.96    5.23
32768  107.45  115.58   82.97   10.81   44.62   10.07
65536  151.64  147.51  126.62   21.80   80.12   19.14
131072  239.40  274.34  207.96   35.51   80.77   31.53
262144  278.30  391.11  119.72   66.84  132.85   55.84
524288  321.62  391.24  196.07  105.88  200.64   96.11
1048576  350.83  391.28  201.78  158.65  258.66  157.86
2097152  364.98  390.96  126.92  220.66  122.15  217.32
4194304  373.96  391.04  197.45  284.72  195.51  277.28


[email protected], xvm raid 0+1, 4xST9300653SS.
:Onyx2:
Where are you located?

_________________
:Onyx2:
mapesdhs wrote: What controller setup was this using?

Code: Select all

Integral SCSI controller 6: Version SAS/SATA LS1068
Disk drive: unit 0 on SCSI controller 6 (XVM Local Disk) (primary path)
Disk drive: unit 1 on SCSI controller 6 (XVM Local Disk) (primary path)
Integral SCSI controller 3: Version SAS/SATA LS1068
Disk drive: unit 0 on SCSI controller 3 (XVM Local Disk) (primary path)
Disk drive: unit 1 on SCSI controller 3 (XVM Local Disk) (primary path)


mapesdhs wrote: And in which slots?

Code: Select all

Bus Slot Stat    Power Mode/Speed
[...snip...]
3    2 0x00 0c  7.5W PCIX 133MHz
4    1 0x00 0c  7.5W PCIX 133MHz


Let's take this thing up a notch, with raid0, same hardware configuration:

Code: Select all

jason$ mkfile 4g testfile ; sync ; diskperf -W -D -r 4k -m 4m testfile
#---------------------------------------------------------
# Disk Performance Test Results Generated By Diskperf V1.2
#
# Test name     : Unspecified
# Test date     : Sun Oct  7 18:56:56 2012
# Test machine  : IRIX64 hydra 6.5 07202013 IP35
# Test type     : XFS data subvolume
# Test path     : testfile
# Request sizes : min=4096 max=4194304
# Parameters    : direct=1 time=10 scale=1.000 delay=0.000
# XFS file size : 4294967296 bytes
#---------------------------------------------------------
# req_size  fwd_wt  fwd_rd  bwd_wt  bwd_rd  rnd_wt  rnd_rd
#  (bytes)  (MB/s)  (MB/s)  (MB/s)  (MB/s)  (MB/s)  (MB/s)
#---------------------------------------------------------
4096   26.07   25.16   18.97   14.25   13.54    1.47
8192   47.78   46.60   33.74   20.20   26.53    2.87
16384   82.62   78.85   61.31   25.04   48.55    5.64
32768  124.91  122.39   96.80   20.33   86.26   10.93
65536  166.90  154.36  135.94   20.90  133.95   20.19
131072  331.76  308.49  268.67   34.62  176.46   33.89
262144  475.52  460.05  430.89   78.23  177.99   58.86
524288  539.13  515.28  197.42  126.35  286.29  106.31
1048576  622.51  600.20  398.52  178.15  422.77  184.51
2097152  692.61  660.22  396.16  287.42  538.49  273.79
4194304  727.27  685.00  244.63  387.93  243.50  408.68
:Onyx2:
what model cards are they? 3442X-R? 3800X?


3800 rebranded, but probably identical to the LSI one.

Any tips on how you setup the RAID in XVM?


Code: Select all

xvm:local> label unlabeled/*
xvm:local> slice -all phys
</dev/lxvm/dks3d0s0>  slice/dks3d0s0
</dev/lxvm/dks3d1s0>  slice/dks3d1s0
</dev/lxvm/dks6d0s0>  slice/dks6d0s0
</dev/lxvm/dks6d1s0>  slice/dks6d1s0
xvm:local> stripe -volname stripe0 slice/*
</dev/lxvm/stripe0>  stripe/stripe2
~# mkfs_xfs /dev/lxvm/stripe0


I'm going to try cxfs this weekend.
:Onyx2:
For completeness, I have run an identical diskperf benchmark on a single drive, to understand, perhaps to some extent, the XVM "overhead"; those results are shared below:

Code: Select all

#---------------------------------------------------------
# Disk Performance Test Results Generated By Diskperf V1.2
#
# Test name     : Unspecified
# Test date     : Wed Oct 10 10:03:17 2012
# Test machine  : IRIX64 hydra 6.5 07202013 IP35
# Test type     : XFS data subvolume
# Test path     : testfile
# Request sizes : min=4096 max=4194304
# Parameters    : direct=1 time=10 scale=1.000 delay=0.000
# XFS file size : 4294967296 bytes
#---------------------------------------------------------
# req_size  fwd_wt  fwd_rd  bwd_wt  bwd_rd  rnd_wt  rnd_rd
#  (bytes)  (MB/s)  (MB/s)  (MB/s)  (MB/s)  (MB/s)  (MB/s)
#---------------------------------------------------------
4096   30.68   34.42   21.38   14.92    3.14    1.36
8192   56.97   62.69   38.80   23.89    6.33    2.69
16384   96.75  101.59   73.10   33.13   12.31    5.26
32768  142.16  144.42  106.07   40.37   23.22   10.39
65536  184.89  169.50  155.02   37.96   41.34   19.42
131072  201.84  202.07  183.06   38.96   67.83   34.29
262144  201.65  202.13  179.71   96.00  104.86   58.67
524288  202.00  201.83   95.59   97.41   93.77   91.01
1048576  201.98  201.89  153.65  153.52  127.25  124.77
2097152  201.89  201.79  154.48  154.61  156.44  152.44
4194304  201.77  201.84  180.16  178.27  175.27  171.43
:Onyx2:
Ian,

Impressive results.

I currently use those 4xSAS drives for backups (where streaming matters) with Amanda on IP35.

I'm satisfied with those results, although I might also include some large SATA drives for long-term backups, while the SAS drives will be for the daily backup tasks, which are mostly incremental in nature.
:Onyx2:
me want.

_________________
:Onyx2:
I use this same StarTech in my Tezro, in place of the DVD drive, with those 4xSAS drives I benchmarked above.
Cool setup, but I could use the Fuel to get 8 like that.
:Onyx2:
I broke so many Multias, which is a shame; it was a really great workstation, but with terrible cooling and a case obviously too small.
I ran everything on them: Digital UNIX, NT, NetBSD, Linux, OpenBSD. I loved them, I had many of them, but they're completely ill-designed.

too bad.

_________________
:Onyx2:
I would buy a Loongson if I could find a 3A; I emailed Tekmote, but they don't reply to their mail, and I couldn't find a US reseller that has any.

_________________
:Onyx2:
mapesdhs wrote: True, the Fuel could hold 8. I don't like the idea though that they'd always be powered on, thus the external case.


If you want to have data "always" available to your IRIX box (regardless of what constitutes your dataset: movies, ISOs, documents, pictures, archives, etc.), I suggest that you give DMF (Data Migration Facility) a try (documentation id #007-3681-019). It would allow you to use local or remote destinations to store files, which would be migrated on demand, in a transparent manner.

In a nutshell, on an XFS filesystem with DMF attributes:

mplayer ian.avi

The file ian.avi, if not locally present, will be automagically retrieved from tape/FTP/NFS/XFS/CXFS/CIFS... and, depending on your policy, sent back (if modified) to the remote location, which can be a Windows, OS X, OpenBSD, or Linux server if you want; it doesn't really matter. It can even go to multiple locations (great for online data mirroring). To you it is entirely transparent; if your network is slow, a disk cache (on a separate filesystem/directory) can be added to the mix. While this is not really a backup solution, it's a convenient way to unify "archipelagos of storage", transparently for the user and with no administration overhead.

Policies can be set based on UID, file size, disk space left, etc.; it's very versatile. It's meant to handle a lot of data (think NASA-large datasets), where you obviously can't store everything on a single disk, or even a few of them. If the data is modified, it will be pushed back to the remote store(s) based on a weight factor (assuming you have rights to modify the data). Those remote stores don't have to run IRIX (but they can), so if you need cheap storage readily available on IRIX without the disadvantages of NFS, you should look into this.

It's relatively trivial to set up DMF to automatically fetch a file from tape (a movie?) when someone requests access to it (mplayer ian.avi); the end user will not even know the data is coming from tape (unless you use a DDS3 perhaps, heh).
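For anyone curious, the day-to-day interface is just a handful of commands; the sketch below is from memory, so treat the exact flags and state labels as assumptions and check the dmput/dmget/dmls man pages:

```shell
# Migrate a file to secondary storage and release the on-disk blocks,
# leaving a stub behind (-r frees the space once the copy is safe):
dmput -r ian.avi

# List files together with their DMF state (e.g. REG = regular,
# OFL = offline, DUL = dual-state: valid copies both on- and offline):
dmls -l

# Explicitly stage a file back online; otherwise any read, such as
# "mplayer ian.avi", triggers the recall transparently:
dmget ian.avi
```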

At least, it's worth investigating; I intend to deploy it for data rarely accessed, which I don't want to keep on NFS, simply because NFS, despite its advantages in unix-land, doesn't offer a great way to replicate the data amongst remote stores (unless you use EMC/NetApp/Hitachi asynchronous mirroring).

To store data remotely, DMF supports NFS, FTP, CXFS, XFS, and SMB/CIFS; it's a transparent filesystem migrator. Data can be pushed (and pulled) to offline media as well, assuming it's used in conjunction with OpenVault or the Tape Management Facility; but most commonly, the secondary storage can be any bulk-storage device accessible through NFS or FTP. Data is verified (checksummed) on the fly.

So rather than spending a lot of money to store a lot of data on very fast storage, you can automatically tier it; data will be migrated between fast and slower drives (or stores) in a transparent way.

To sum things up, in your case you could have primary storage on SSD and secondary storage on SATA or SAS; to the end user it's a single filesystem, with absolutely no way to tell it's automatically tiered. Cool, no?
:Onyx2:
Ian,

Understandable, of course. I like to throw those things out to give people new ideas or different viewpoints.
It's cool, however, that DMF is available for IRIX; it used to live mostly on big UNICOS systems (Cray C90 & J90 and such) with some fairly big tape transports (thousands of slots in silos). In other words, it's good to see software that was available on systems worth tens of millions of dollars come to "affordable" workstations.

Even for "standalone" systems, it makes sense to tier between SSD/SAS and SATA (no network connection necessary, all XFS); and it's well supported by xfsdump/xfsrestore.

In any case, I'm still curious to hear what you're doing for backups these days, I imagine one cannot talk storage without mentioning backup strategies.
:Onyx2:
mapesdhs wrote: For the Fuel, the Startech unit is basically it. The only other often-used SGI I have atm is my gateway O2, and that just has a 2nd disk for cloning every now & then (very little changes on it, just running ipfilter). All my other 'for keeps' SGIs are not really setup properly yet (original Effect O2 with all manuals, general R7K/600 O2, quad-1gig Tezro, Octane2).


You seem to prefer using the Fuel as your primary workstation while you own a 4x1GHz IP53; is it just a color preference? :)
:Onyx2:
Makes sense. I think we have a pretty similar setup; I use the O2 as a firewall and shell server (~50W is well worth it), and IP27/30/53 for compute-intensive tasks (compilation and optimization), where a lot of RAM and I/O helps.
:Onyx2:
Winnili wrote:
Not everything, you didn't run VMS. (By far the most tricky one.)


At the time, I was a starving student. I did most of my work on my Indigo2 (with Solid Impact).
The reason why a starving student had an Indigo2 (which was real money back then) explains why I was starving.

_________________
:Onyx2:
I had one of those, but I ended up tossing it away; it was just crippling my system, so I suspect their proprietary drivers are not worth a dime. I think I still have a 4x100 one (which didn't work in IP32).
In the US, unlike in Europe and Asia, the Internet is still pretty slow (for the masses); all things are relative, though: it's still much faster than 10 years ago.

http://www.netindex.com/download/allcountries/

Does your site go through the O2, or is it hosted somewhere else? Just curious.
:Onyx2:
What about a nice vaxstation 4000?

_________________
:Onyx2:
dclough wrote:
http://www.ebay.com/itm/HP-Compaq-XP1000-SN-E2G6W-32-667Mhz-256Mb-memory-9gb-drive-scsi-graphics-/190640608572?pt=US_Thin_Clients&hash=item2c6310d53c
How common are these and what could I expect to pay for one?


This is a complete ripoff; I paid $150 for my [email protected] with 1GB RAM, a CD-ROM, and 2 drives. It's not a bad box per se; it's not *great*, but for $150 it's worth it. I'm not sure which video card it has; I think an Elsa 3D something.

_________________
:Onyx2:
Quote:
Or, what makes you think that the Multia was not intended to be useful as a standalone machine?


I think that the Multia was a great machine, yet physically too fragile (unlike the Sun LX/IPC/IPX, notably). It was very versatile (SCSI, IDE, floppy, network, a PCI slot, and a great video card); and so quiet! I owned 2 by 1996. Unfortunately, they're not so modular, and when they break, they break; there's not much to do about it.

Amazingly, another system I really liked was the DS10L (and NOT the clunky desktop DS10), it was a great server, I'd gladly trade a few DS10 for a DS10L. Running Digital Unix or OpenVMS in 1U of rackspace was neat, and those boxes are pretty snappy and well designed. I wish they had made a DS15L.

I also wish it were possible to run OpenVMS on the Altix 350; that would be... interesting. Unfortunately, there's too much SGI magic in there that would prevent it from working.

_________________
:Onyx2:
Thanks for clarifying; I thought "workstation" meant a computer which cannot run any Microsoft or Apple product, those being rightfully called "playstations".

_________________
:Onyx2:
This is really cool, I have to admit.
HP/Dec/Compaq didn't have to do that, or even renew this program after so many years, but I'm glad they did.

_________________
:Onyx2:
How much is the subscription these days?

_________________
:Onyx2:
Ian,

I'm sorry, this is a dumb question, but have you tried disabling ipfilter on your O2 when using the gigE card? By disabling I mean unloading the kernel module altogether.
:Onyx2:
Lupin, take my advice as you want, but look for another machine, like a DS10 or XP900-ish. I found the Multia to be unreliable over time; give it a try if you want, but from my own experience with the 166MHz models, they don't have appropriate cooling and break easily. The DS10s are dirt cheap and quite capable. I spent a good amount of cash trying to get my Multias (please note: plural) fixed; not a good deal.

Of course if you're a collector, then ignore this. They are by far one of my favorite workstations, just too unreliable IMHO.

_________________
:Onyx2:
Lupin_the_3rd wrote:
mia wrote:
Lupin, take my advice as you want, but look for another machine, like a DS10 or XP900-ish. I found the Multia to be unreliable over time; give it a try if you want, but from my own experience with the 166MHz models, they don't have appropriate cooling and break easily. The DS10s are dirt cheap and quite capable. I spent a good amount of cash trying to get my Multias (please note: plural) fixed; not a good deal.

Yes I already own a DS10, an XP1000, a PWS500au, and a 264DP. All with maximum CPU and memory configuration. You could say I'm an Alpha geek. :lol:

Mostly I'm looking for something with Alpha processor that I can tinker with that doesn't eat lots of power and make lots of heat like these other machines do.


Then I'm totally with you on that. Which OS are you running?

_________________
:Onyx2:
Another one for fun (this one is a work in progress):

I've replicated FC jedi Chris Kalisiak's configuration from http://www.futuretech.blinkenlights.nl/fc.html (Ian's website):

Took a "relatively large Origin server" and a NetApp 14-disk (Seagate 300GB FC) LUN array, and got these laughable results:

Code: Select all

[email protected]:/mnt# diskperf -W -D -r 4k -m 4m testfile
#---------------------------------------------------------
# Disk Performance Test Results Generated By Diskperf V1.2
#
# Test name     : Unspecified
# Test date     : Thu Oct 18 15:44:29 2012
# Test machine  : IRIX64 plum 6.5 07202013 IP35
# Test type     : XFS data subvolume
# Test path     : testfile
# Request sizes : min=4096 max=4194304
# Parameters    : direct=1 time=10 scale=1.000 delay=0.000
# XFS file size : 4294967296 bytes
#---------------------------------------------------------
# req_size  fwd_wt  fwd_rd  bwd_wt  bwd_rd  rnd_wt  rnd_rd
#  (bytes)  (MB/s)  (MB/s)  (MB/s)  (MB/s)  (MB/s)  (MB/s)
#---------------------------------------------------------
4096    8.31   14.80    8.85   14.75    5.03    0.92
8192   15.24   26.80   15.12   25.85   12.91    1.78
16384   24.20   41.61   23.51   37.39   22.47    3.50
32768   32.82   52.38   33.18   44.34   23.89    5.86
65536   42.26   59.25   42.64   21.11   31.91    9.75
131072   44.83   63.53   46.15   25.68   32.12   15.08
262144   47.30   69.03   46.97   32.19   28.54   20.39
524288   46.78   69.99   39.64   41.26   37.37   22.61
1048576   45.64   70.74   41.75   43.68   39.97   34.29
2097152   41.44   74.54   42.69   49.77   39.57   40.78
4194304   45.52   72.21   38.86   59.10   40.88   48.07


At the time of the tests the load was roughly ~40% on the NetApp, so I'm trying to understand where the bottleneck is. Plausible causes:

- qla2342 driver?
- single-channel (2Gbps) FC, not appropriate? (This NetApp, while using a 2Gbps GBIC, is pushing only 1Gbps at most.) I should trunk more ports.
- gremlins
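As a quick sanity check on the single-channel theory, the usual 2GFC numbers (2.125 Gbaud line rate, 8b/10b encoding; both assumptions about this particular link) put the wire ceiling far above what the table shows:

```shell
# Theoretical payload bandwidth of a 2Gbps Fibre Channel link:
# 2.125 Gbaud signalling, 8b/10b encoding (8 data bits per 10 baud).
awk 'BEGIN {
    line_rate    = 2.125e9
    payload_bits = line_rate * 8 / 10
    printf "2GFC usable payload: ~%.1f MB/s\n", payload_bits / 8 / 1e6
}'
```

With roughly 210 MB/s usable on the wire and the best sequential read above sitting near 75 MB/s, the link itself doesn't look like the limiter; the filer-side overhead (or the driver) is the more plausible suspect.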
:Onyx2:
The array is created and managed by the NetApp head, then exposed to the host/switch using either the Fibre Channel protocol or iSCSI.

NetApp encapsulates LUN blocks within those 4k WAFL (NetApp's filesystem) blocks. WAFL itself is striped across all the drives (with redundancy bits, here RAID-DP). WAFL is the underlying filesystem of iSCSI, NFS, CIFS, and FCP on NetApp. Yes, this is not a typo: a block device is chunked onto a filesystem.

There are advantages and (performance) drawbacks to this method, but this overhead probably explains why I can't reach 1Gbps line rate, because of the cost of I/O operations (including parity checks). Add to this that WAFL is laid down on disk blocks which are checksummed: in the case of those FC drives, the disk blocks are formatted at 520 bytes, where 8 bytes are reserved for checksums and 512 bytes are available for data.
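The per-sector checksum cost is easy to put a number on (a small sketch; the 512 + 8 = 520 formatting is my assumption about these FC drives):

```shell
# Overhead of 520-byte-formatted sectors carrying 512 data bytes
# plus 8 checksum bytes, and how many sectors back one 4k WAFL block.
awk 'BEGIN {
    fmt = 520; data = 512
    printf "per-sector checksum overhead: %.1f%%\n", (1 - data / fmt) * 100
    printf "sectors per 4k WAFL block: %d\n", 4096 / data
}'
```

So the checksums themselves only cost about 1.5% of capacity; the real cost is the extra I/O and parity work per operation, not the raw space.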

So there's a lot of overhead, but also a lot of features (mirroring to another array, symmetrically or asymmetrically, snapshots, etc.); it's a tradeoff of features vs. throughput, as is often the case. Finally, this specific array doesn't have a lot of cache available for read-ahead, write-back, etc., but I do appreciate its features. Regardless, I think this is a good test anyway. Please note that this disk shelf is 10 years old; but then again, so is this Origin.
:Onyx2:
JJ,
> This is a NetApp SAN filer and not a JBOD or an FC RAID shelf, right?


Yup.
> I have an SGI TP9100 as well. It's loaded with a grab bag of old disks, some even 1Gb/s models (Cheetah 73LP). But with some striping I got this: [...]


Neat! XVM stripe I assume?
> My experience with the QL2342 is quite good. It will deliver the expected performance. My experience with the LS driver (SAS/SATA, U320 SCSI and 4Gb FC) is more spotty: [...]


I couldn't agree more; there's something fishy with the LS driver. I found that under serious loads (500MB/s and more) the LS driver seems to drop data, likely a bug in either the firmware or the driver; I've had XFS corruptions with very fast 15k drives, regardless of whether I enable/disable read+write cache, CFQ, etc. Of course, maybe I did hit an XFS bug; that's a possibility as well, or simply some interaction between XFS and LS doesn't quite work right. Amazingly, I've looked into this, and those same errors are being seen with the Linux implementation of XFS (and I'm referring to "current" (10-2012) issues; no patch yet). That said, only one person currently supports XFS, so I'm not sure he has all the time necessary to look into this.
:Onyx2:
Out of curiosity, do you run RenderMan on Alpha/Linux?

_________________
:Onyx2:
seriously, no; you'd kill yourself.
:Onyx2:
My Altix 350 rack is in the basement, a direct shot from the street (through the garage), and it took 3 people to move it. It was as empty and dismantled as could be. Next time I'll just build the house around it.
:Onyx2: