The collected works of SAQ - Page 6

josehill wrote:
mattst88 wrote: You can't really think that companies are going to think it's worthwhile to pay for a support contract that gets them access to trivial and sporadic patches for an operating system that is no longer being developed.
Actually, yes. It happens all the time.


There are probably two sides to that pthread patch. A company paying for support probably ran into a problem with the IRIX pthreads library and a piece of their software. Since they had support, they called up SGI and said "we have a problem with your software, can you please fix it", and SGI had their programmers fix it. Oscar45 has, or until recently had, an SGI support contract, and he's said numerous times that they were generally very helpful in fixing what needed to be fixed. That comes at a cost, which SGI recoups through support contracts and limited patch releases.

Keep in mind that things are much better than the early '90s, when you got what you bought, and any patch access required a support contract. For the most part, SGI and other companies make the important patches (security and major functionality issues) available to everyone now.
Damn the torpedoes, full speed ahead!

There are those who say I'm a bit of a curmudgeon. To them I reply: "GET OFF MY LAWN!"

:Indigo: :Octane: :Indigo2: :Indigo2IMP: :Indy: :PI: :O3x0: :ChallengeL: :O2000R: (single-CM)
Rhys wrote:
My complaint isn't that I personally love Windows and care that it doesn't run on my own Itanium hardware, but that this is a win for x86/x64. Maybe I'm stupid and obsolete, but I refuse to believe that that architecture is the future or anything close to it...


Unfortunately it seems to be, at least for the short term, but not for technical merit.

98% of the time the underlying architecture is of no consequence to the end user, and much of the time it's of little consequence to the programmer thanks to high-level languages. Money, at least immediate cash outlay, is very evident to the purchaser, and it's hard to argue with AMD64 from a price/performance standpoint.

_________________
Damn the torpedoes, full speed ahead!

:Indigo: :Octane: :Indigo2: :Indigo2IMP: :Indy: :PI: :O200: :ChallengeL:
Rhys wrote:
eMGee wrote:
With all those high-level programming ‘object-oriented miracles’ one would need all the AMD64 CPUs that money can buy... What a wonderful market the whole ICT industry is, isn't it?

I think I'm going to give up buying anything new if the last RISC, or otherwise solid *N*X or VMS-capable and proven, architecture should at some point die off...


The following is on a minimal amount of sleep, and may not be entirely coherent.

Honestly, I don't think you need to worry. SPARC and IA64 are both seemingly at death's door, but POWER is going strong, and plenty of new chipmakers (Tilera, for example) are coming into the market. I would not be surprised to see a "RISC Renaissance" in the high end as personal computing becomes increasingly based on thin-client type technologies (like "cloud computing"). All the architectural band-aids in the world can't make x86's disadvantages go away. The really cost-sensitive segments (desktops and laptops, netbooks, etc.) are already about as fast as they need to be, and will probably stick with x86 until something better (ARM?) comes along that can run the same software at a reasonable speed. On the other hand, RISC is still the fastest thing around on servers and large-scale workstations, where performance still matters, and I think that will increasingly shift in RISC's favor. x86 just isn't that fast, doesn't comfortably go above about six cores without MCMs, and in general its I/O and memory bandwidth don't come close to RISC solutions, especially POWER. Nehalem goes a long way toward correcting this, but getting relatively close in performance to the POWER6, which is last-gen, really isn't good enough.

I predict RISC workstations will, in fact, come back. I think personal computing processors over the next few years are going to be architecturally closer to embedded processors than to workstation/server processors, and that personal computing will always be where Intel and (probably) AMD do the most R&D. This leaves a hole in workstation and server processors that the fast RISC chips have an excellent chance to fill. I don't think that SiCortex was the last attempt at making RISC workstations; I think it was one of the first of the new generation.


What do you mean by "large scale workstation"? IBM seems to have discontinued their framebuffer-equipped POWER boxes (though I have trouble quickly figuring out exactly what they sell, so I could have missed something).

RISC/VLIW (though they really aren't "RISC" anymore - non-x86 or load/store are better terms) are great technologies, but they are likely to only hold onto the high end of "serious computing" and, possibly, the low-end and client (a la Godson/SunRay/etc.). AMD64 is good enough for what most of the midrange server/commercial machines do (as long as you can put up with the lack of serious RAS facilities) and they're CHEAP! Power-wise they're OK, too - RISC uses lots of power in the gigantic caches usually attached. Now that the high-perf IO and interconnects are filtering down (PCIe/SATA with smarts/HyperTransport and its ilk) AMD64 is just good enough to where it's hard to justify the costs of a POWER system or Itanium, especially when you factor in the costs of a serious OS (OpenVMS, HP-UX or AIX) on the hardware. Linux/xBSD are good, but it's hard to find non-x86 software excluding FOSS.

I'd love to see POWER take over, coupled with a rebirth of Alpha, but I'm not sure that's going to happen anytime soon except for the big boxes (and even then Alpha is going to stay dead).

_________________
Damn the torpedoes, full speed ahead!

:Indigo: :Octane: :Indigo2: :Indigo2IMP: :Indy: :PI: :O200: :ChallengeL:
Rhys wrote:
@SAQ - that's one of those things I just don't understand. PA-RISC was not slow, faster for a lot of things than most or all of its competitors. There's a good reason PA-RISC systems stayed on the top500 until a couple years ago, so why were they so desperate to drop it? Was there a good economic reason or was this just Fiorina-era bullshit?


They wanted someone else to take over a big chunk of the costs associated with processor design/manufacture. It didn't help that HP was generally fabbing their own processors at the time that the decision to go Itanium was made, and fabs were getting more and more expensive. Performance didn't enter into it - Alpha, PA-RISC and MIPS were all at or near the top around the time that the decision to drop them and go with Itanium was made.

Indeed, if SPECmarks were the main criterion, then SPARC (optimized for commercial workloads rather than technical ones) would have been the likely candidate for the axe. It was the cost of design and manufacture that did in the custom processors, especially when you factor in CEOs that don't look more than one year down the road (though it's doubtful that the custom designs could be justified even long-term).

Once gate-array processes became too slow to compete things started going up in price exponentially, and you couldn't inexpensively get a design made. Once clock speeds started skyrocketing you had to shrink so often that it was even worse.

_________________
Damn the torpedoes, full speed ahead!

:Indigo: :Octane: :Indigo2: :Indigo2IMP: :Indy: :PI: :O200: :ChallengeL:
Itanium was (on paper) much better than the original chips turned out to be, and everyone made the switch (i.e. committed to Itanium) when it was still on paper.

One thing that you didn't bring up (people don't think this way much anymore) is the difference between commercial and technical workloads. SPEC and FLOPS numbers reflect technical/HPC workloads, whereas the companies you mentioned tend to run commercial (TPS) workloads. Many of the commercial-oriented machines tend to be rather anemic for their generation in SPECint/SPECfp/FLOPS, but are really good at pushing the data out, which is what a number of customers (i.e. banks) need. Add on hardware accelerators where needed (i.e. crypto), and you wind up with something much faster at what you need than a general-purpose HPC box.

I think Itanium was supposed to be targeting the "balanced" market, such that it could be used in either place (Altix as well as Integrity), but the IBM z is definitely a commercial machine, and for many years the i was as well (RS64 was commercial-optimized).

_________________
Damn the torpedoes, full speed ahead!

:Indigo: :Octane: :Indigo2: :Indigo2IMP: :Indy: :PI: :O200: :ChallengeL:
ginopilotino wrote: Thanks!
But the last row is a little worrying: "Worked for 6.5.22 but did not work with variables changed for 6.5.28 as of 1st April 2009. Your Milage May Vary."


I'm pretty sure I used the scripts to make a .30 overlay with few problems.
Damn the torpedoes, full speed ahead!

Living proof that you can't keep a blithering idiot down.

:Indigo: :Octane: :Indigo2: :Indigo2IMP: :Indy: :PI: :O3x0: :ChallengeL: :O2000R: (single-CM)
bigD wrote:
SAQ wrote:
ginopilotino wrote: I'm pretty sure I used the scripts to make a .30 overlay with few problems.


Stupid question, and I'm pretty sure the "I have to scp these into my linux desktop to do this, your setup might be different." is the answer, but I assume that the .image files created by the script can be burned on any platform?


You need to have a platform that allows you to write arbitrary-format CDs. Most do, including the cdrtools-based UNIX programs. In fact, the only ones I've had problems with are the Adaptec programs (EZ-CD Creator and Toast), which "help" you out by checking for a valid ISO9660/Joliet/UDF/HFS filesystem on the image before burning.

As to getting the images to the machine, I'll leave that as an exercise for you. Scp works, as does NFS, FTP, portable hard drives (official or contrived) or any number of other methods. ;)
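For anyone on a cdrtools platform, a minimal sketch of writing one of the script-generated .image files; the device address and image filename here are placeholders (find yours with `cdrecord -scanbus`), and this is only one way to do it:

```shell
#!/bin/sh
# Hypothetical sketch: burn a raw overlay image with cdrecord (cdrtools).
# DEV and IMAGE are placeholders -- substitute your drive's SCSI address
# (from `cdrecord -scanbus`) and your actual image file.
DEV=${DEV:-1,0,0}
IMAGE=${IMAGE:-overlay1.image}

# -dao (disc-at-once) writes the image as-is; cdrecord does not insist on
# a recognizable filesystem, which is exactly what these images need.
BURN="cdrecord -v dev=$DEV speed=4 -dao $IMAGE"
echo "would run: $BURN"
# $BURN    # uncomment to actually burn
```

The point is simply that the burner must pass the image through untouched, which cdrecord does by default.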
Damn the torpedoes, full speed ahead!

Living proof that you can't keep a blithering idiot down.

:Indigo: :Octane: :Indigo2: :Indigo2IMP: :Indy: :PI: :O3x0: :ChallengeL: :O2000R: (single-CM)
Just noticed a couple-months-old post on the GCC list bringing up the possibility of obsoleting IRIX before 6.5 (and Solaris 7, Tru64 before 5.1).

Towards the end it looks like they're leaning more towards obsoleting the o32 ABI of IRIX, and maybe 5.3, but I don't know where that leaves 6.2.

Since 6.2 doesn't include an SGI compiler (IRIX 5.3 does), supports POSIX, is Y2K clean, and has no current (C99) MIPSpro compiler, perhaps we should lobby for continued support.

FWIW 6.3 and 6.4 are included in the original idea as well - I know some people run 6.3 on O2, haven't met any 6.4 people myself.

We should be able to scrape up equipment for test builds here - I personally have a Challenge R10k that has a 6.2 system disk option.

_________________
Damn the torpedoes, full speed ahead!

:Indigo: :Octane: :Indigo2: :Indigo2IMP: :Indy: :PI: :O200: :ChallengeL:
I brought up 6.2 specifically because his reasons for dropping it seem to be primarily about having equipment to test on (same case with Solaris 7). If that's the concern, it's not a big deal for me to run builds on the Challenge (for that matter I could put Solaris 7 on an Ultra 2 if that would be helpful). The technical difficulty seems to revolve around o32, and since IRIX 5 has other issues with modern standards compliance (no POSIX libpthreads, among others) it's unlikely that programs requiring newer compilers would build anyway.

I was aware of IDO 6.2 and successor versions, but it was never made available as a free download from SGI, rather it was always an extra-cost option. For IRIX 5.3 cc was included in the downloadable IDO5.3, so GCC isn't as big of an issue (it would be for C++ support, but again it's unlikely that a program requiring a modern C++ compiler would even build on IRIX 5.3).

_________________
Damn the torpedoes, full speed ahead!

:Indigo: :Octane: :Indigo2: :Indigo2IMP: :Indy: :PI: :O200: :ChallengeL:
ginopilotino wrote: What about patch 5086?


The miniroot is taken from the downloaded overlays, so it already includes the updated inst. If you're doing a live upgrade (not booting from the miniroot) you'll need to install the patch.

It might be a good idea to put the patch on the CD. The scripts take the files from a certain directory tree structure, so it wouldn't be that hard to have it on one of the Overlay CDs.
Damn the torpedoes, full speed ahead!

Living proof that you can't keep a blithering idiot down.

:Indigo: :Octane: :Indigo2: :Indigo2IMP: :Indy: :PI: :O3x0: :ChallengeL: :O2000R: (single-CM)
neozeed wrote:
I bet there is some pretty pissed off people at HP, as they have basically screwed themselves out of control of their own destiny... So where will they be in 5-10 years from now? Trying to fab their own Itaniums? Or porting to the x64..?


As far as OpenVMS goes, the work that they did porting from Alpha to Itanium essentially removed most of the hardware dependencies of OpenVMS (no more PALcode, even though it really should have been the other way around, with other processors picking up the idea), so provided HP keeps VMS around, the switch to AMD64 should be pretty smooth. The hardest part will be IEST or whatever they call the Itanium->AMD64 version of DECmigrate.

_________________
Damn the torpedoes, full speed ahead!

:Indigo: :Octane: :Indigo2: :Indigo2IMP: :Indy: :PI: :O200: :ChallengeL:
hamei wrote:
jan-jaap wrote:
You cannot expect "GNU" to support systems indefinitely when they've been EOLed by their manufacturer for 10+ years.

I don't think anyone does :D

The older versions of gcc still work on the old hardware so I don't see a problem, myself. New software isn't going to run well on Personal Iris anyhow.


The newer GCC probably doesn't even run well on the older Personal IRISes.

I'm not sure how MIPS optimization is going currently, but I remember that for a while (GCC 3.2-3.3 era) they were making substantial improvements on the MIPS compilers. If that's still going on (substantial speed/generated code improvements) then it would be beneficial. If not I suppose it isn't that big of a deal. GCC 4.x does c99 and all that jazz.

_________________
Damn the torpedoes, full speed ahead!

:Indigo: :Octane: :Indigo2: :Indigo2IMP: :Indy: :PI: :O200: :ChallengeL:
nekonoko wrote:
RetroHacker wrote:
Thing still works flawlessly.


I've had the same experience. Both my Apple III and Apple Lisa had tantalums explode in their power supplies with huge quantities of acrid smoke pouring out the vents, and both machines continue to operate just fine without them.


Check the board to be sure everything's fine. "Blasting caps" can sometimes damage traces and other stuff - they burn hot.

_________________
Damn the torpedoes, full speed ahead!

:Indigo: :Octane: :Indigo2: :Indigo2IMP: :Indy: :PI: :O200: :ChallengeL:
I took my Radius IntelliColor 20e apart to adjust the focus, and while putting it back together a screw escaped, necessitating a complete disassembly to get it out.

Now my convergence is way off - on a text console I get three copies of everything (in red, green and blue) separated by a half-line :? . I've checked to make sure I plugged the proper feeds into the proper guns, and when I tell it to show red, green, and blue it does so, so I'm not sure what I messed up. I was very careful around the convergence rings and the monitor yoke, so I don't think that's it.

Anyone had this happen to a Sony Trinitron before? It makes using the computer difficult...

_________________
Damn the torpedoes, full speed ahead!

:Indigo: :Octane: :Indigo2: :Indigo2IMP: :Indy: :PI: :O200: :ChallengeL:
Do yourself a favor and run 4.3.3. AIX v5.1 is very slow on the older, lower-memory hardware. AIX v3 is very odd. Throughout the v4 series IBM made it much friendlier, though 4.3.3 is the easiest to find software for and the friendliest.

Don't mess with the Java admin tools, either. Go SMIT all the way (until you learn enough about AIX to do it on your own, though even IBM doesn't recommend that).

_________________
Damn the torpedoes, full speed ahead!

Systems available for remote access on request.

:Indigo: :Octane: :Indigo2: :Indigo2IMP: :Indy: :PI: :O3x0: :ChallengeL: :O2000R: (single-CM)
eMGee wrote:
Pontus wrote:
Off the top of my head I have four or five vaxstations, three microvax 3X00 (where X is 4,5,6 I think), a VAX-11/750. A handful of alphastations and an ES45. A bit more than I can handle at the moment :)

Very nice! I only have one system, a surprisingly snappy AlphaServer 1000 (4/266), but I'm very happy with it nonetheless. I hope to get a hold of something more powerful in the future, like the rx2600 or rx2620, like some (lucky) people on the forum own.


Quote:
I know that I don't need to add my own machines to Deathrow, I would join to get a reference system and some help I hope :) But the machines could maybe be added to the cluster using the HECnet bridge.

I assume you mean DECnet? I'm familiar with it, but I've never tried coupling a cluster over longer distances, only in local networks, is what I meant. Also, I think you'd have to run 7.3 or 8.x with ODS-2 (without hard links), for file system backward compatibility.


DECnet Plus is TCP/IP based, so no problem there. HECnet is different - it's a hobbyist DECnet Phase IV network, see http://www.update.uu.se/~bqt/hecnet.html . No word yet if the mascot is Phil from Dilbert.

VAXclusters/VMSclusters can work very nicely over long distances.

_________________
Damn the torpedoes, full speed ahead!

:Indigo: :Octane: :Indigo2: :Indigo2IMP: :Indy: :PI: :O200: :ChallengeL:
eMGee wrote:
I know what you mean now, I forgot; the similarity in names must've confused me. I know they can work nicely, it has been one of VMS' selling points afterall, I was just not sure how it could be done with hobbyist licensed nodes.


The rule of thumb is that you can do pretty much anything with a hobbyist licensed node that you can with a commercially licensed node (provided you can lay your hands on the software distributions), just not do it commercially. The licenses are almost all included in the hobbyist PAKs.

_________________
Damn the torpedoes, full speed ahead!

:Indigo: :Octane: :Indigo2: :Indigo2IMP: :Indy: :PI: :O200: :ChallengeL:
ramq wrote:
Just look at Windows NT/2000 on Alpha, Mickeysofts tryouts in the PPC/MIPS territory, rescent IA64-venture... It all boils down to the volume segment and where they can get enough money for their investments. Just about any company financial director would look at the facts and just axe it, like they've done several times before.


MS had a crazy-sweet deal on Alpha NT. DEC did the porting, DEC did the patches, DEC did the support. MS just took a cut of each copy. It got cancelled because DEC/Compaq realized there was little-to-no point in putting in all that work - people who bought Alphas generally wanted Linux/DUNIX or OVMS.

_________________
Damn the torpedoes, full speed ahead!

:Indigo: :Octane: :Indigo2: :Indigo2IMP: :Indy: :PI: :O200: :ChallengeL:
I thought I remembered them having more systems a few years ago - at least a couple more VAXes, and I though one or two more Alphas.

Didn't they have a VAX 4000/500 or better?

_________________
Damn the torpedoes, full speed ahead!

:Indigo: :Octane: :Indigo2: :Indigo2IMP: :Indy: :PI: :O200: :ChallengeL:
n1mr0d wrote: Also don't forget the PM2 processor module with a R4400-150.


If you really want to do it right you get a PM1 heatsink to put on the PM2 as well.
Damn the torpedoes, full speed ahead!

Living proof that you can't keep a blithering idiot down.

:Indigo: :Octane: :Indigo2: :Indigo2IMP: :Indy: :PI: :O3x0: :ChallengeL: :O2000R: (single-CM)
DEC never understood the computer market once machines started to be sold on price and dipped well below $50k - odd, given the fact that the company started by selling machines that were small and cheap enough to be used by a couple of people. The maintenance of the huge price differential for "real" VMS/UNIX machines vs. Windows boxes was another pointless exercise. DEC is an example of a company that died because of their marketing ineptness (whereas MS could be construed as a company that succeeded solely because of their marketing prowess).

Linux will survive on Itanium for a while yet. Altix commitments will guarantee that. No one said the PC market was doomed when IBM left it, either.

_________________
Damn the torpedoes, full speed ahead!

:Indigo: :Octane: :Indigo2: :Indigo2IMP: :Indy: :PI: :O200: :ChallengeL:
It would depend what it had and what you want. The big problem is that it wouldn't be legal for someone to use a direct copy to insert support for hardware in something - you'd have to do a "dirty room/clean room" setup, and even then you'd possibly fall afoul of trade secret provisions.

It's probably just the kernel and some base support utilities, but there are some things that might (probably not, but depending on where it came from...) be in it that would be interesting, such as SGI's "r" programs (rcp, rsh, rexec, etc.), so xBSD/Linux/Solaris versions that work for SGI netinsts could be built (not super necessary, we have DINA now).

DGL and IRIS GL - especially DGL on other platforms could be very nice.

Xsgi and graphics internal stuff that would help people to port support for X.org to the SGI hardware, also stuff that could help to understand the internals.

Most of the interesting stuff (DGL, Xsgi, graphics microcode, ARCS PROM, other trade-secrety stuff) is almost guaranteed not to be in there. It's probably just a base kernel. Even if it were, the contaminated-code problem would take some doing to get around.

Also, if it's on the net, people who really know what's going on have probably looked at it and taken the interesting stuff away already.

_________________
Damn the torpedoes, full speed ahead!

:Indigo: :Octane: :Indigo2: :Indigo2IMP: :Indy: :PI: :O200: :ChallengeL:
When you have more than one console open they both show the system status messages. They are separate processes/shells, though, so you can be running separate commands in them.

I would recommend one console (to see messages) and do your work in winterms (so you don't get the system messages mucking things up).

_________________
Damn the torpedoes, full speed ahead!

:Indigo: :Octane: :Indigo2: :Indigo2IMP: :Indy: :PI: :O200: :ChallengeL:
When you get "command not found" it's usually a mistype (in your case the :// instead of a space, so the system was looking for a command called "ftp://192.168.1.101" when the command is called "ftp"), or your PATH variable isn't set up the way you think it is, so the system isn't checking the directory where the binary is (or it isn't there, period).
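A toy illustration of what the shell is actually doing (portable sh; the command lines are just strings here, nothing gets executed):

```shell
#!/bin/sh
# The shell takes the word before the first space as the command name and
# searches each directory in PATH for it.
cmdline='ftp://192.168.1.101'
cmd=${cmdline%% *}              # no space, so the whole URL is the "command"
echo "shell searches PATH for: $cmd"

# The fixed version: command name, then the host as an argument.
cmdline='ftp 192.168.1.101'
cmd=${cmdline%% *}
echo "shell searches PATH for: $cmd"

# Checking where (or whether) a command actually lives on PATH:
command -v ftp || echo "ftp is not on PATH"
```

`command -v` (or `which` on older systems) is the quick way to tell a mistype apart from a genuine PATH problem.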

_________________
Damn the torpedoes, full speed ahead!

:Indigo: :Octane: :Indigo2: :Indigo2IMP: :Indy: :PI: :O200: :ChallengeL:
VenomousPinecone wrote:
pilot345 wrote:
...run large mathematical models in excel.


Only Compaq engineers would think that was a good idea.


I'm sure that Microsoft sales would think so as well. Not so sure about MS engineering...

_________________
Damn the torpedoes, full speed ahead!

:Indigo: :Octane: :Indigo2: :Indigo2IMP: :Indy: :PI: :O200: :ChallengeL:
zmttoxics wrote: x86 doesn't have to be a disaster - its just a cpu. It is the platform that surrounds it that matters and the Mac platform is pretty fantastic.


x86 isn't pretty though. Architecturally it's pretty ugly (then again look at VAX - that was so ugly that they started modifying it with the second generation of CPUs - solid but ugly), but it does work, and hey - with compilers we're all Turing machines now, right?

I would like to see a discussion with people who really know on what the architecturally best implementation is - no concerns about price/performance, just a discussion of how things were laid down and planned. Some things are interesting but difficult (AS/400's "everything is an address, including the FS" and the intermediate code level), some things are on paper beautiful but wound up with a number of tweaks in practice (most 1st-gen RISC processors), some have too much baggage (Itanium - the original spec had the x86 units tacked on. Not sure if the second gen would be a runner, as I don't know enough).

Alpha would probably be a contender - pretty clean, and PALcode was neat. What else?
Damn the torpedoes, full speed ahead!

Living proof that you can't keep a blithering idiot down.

:Indigo: :Octane: :Indigo2: :Indigo2IMP: :Indy: :PI: :O3x0: :ChallengeL: :O2000R: (single-CM)
snowolf wrote:
vishnu wrote:
IIRC to cross-compile you can use the -TARG: argument but for mips3 I think it's as simple as cc -mips3 (???)


Right; there are compiler options that will need to be set but my concern was more the MIPS-3 flavor libraries I imagine most software will need to compile against and setting that up on the build machine. I would also like to know people's general experience cross compiling between MIPS-3 and MIPS-4.


For the most part (provided makefiles are done right) there isn't a problem. The libraries are the same as far as the linker goes - it happily links away to whichever version is installed on the target machine (many IRIX 6.5 system libraries are MIPS-3 anyway, so it doesn't matter). I build -mips3 -n32 on an O2k (when it's running, need to get it uncovered but haven't had the time) for my MIPS3 builds, and my only problem with arch diffs so far has been one file where -mips4 was hardcoded in the makefile and had to be patched.
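Concretely, the build boils down to passing the right flags through; a minimal sketch (the -mips3/-n32 flags are from the thread, but the assumption that the makefile honors CFLAGS/LDFLAGS is mine, and the filenames are illustrative):

```shell
#!/bin/sh
# Sketch of forcing a MIPS-3 n32 build with MIPSpro cc.
# Assumes a makefile that respects CFLAGS/LDFLAGS; if -mips4 is hardcoded
# in the makefile, it has to be patched out by hand, as noted above.
CFLAGS="-mips3 -n32 -O2"
LDFLAGS="-n32"
echo "cc $CFLAGS -c foo.c"
echo "cc $LDFLAGS -o foo foo.o"
# On the build host:
#   make CFLAGS="$CFLAGS" LDFLAGS="$LDFLAGS"
```

Binaries built this way run on both MIPS-3 and MIPS-4 machines, which is why building on the MIPS-4 O2k works at all.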

_________________
Damn the torpedoes, full speed ahead!

:Indigo: :Octane: :Indigo2: :Indigo2IMP: :Indy: :PI: :O200: :ChallengeL:
R10k is right. Computing is straight Dewey, and I was trying to twist it into Keats.

If it works and no one has to see the inside, who cares? Save the rhapsodies for something that matters. It's a testament to the quality of Intel engineering that they were able to work around the deficiencies of a processor design that grew far beyond what was originally envisioned (with limited addressing and registers) - compare that to Motorola, who managed to kill an architecture that was in theory better thought out (although it looks like the 020 added too much cruft).

_________________
Damn the torpedoes, full speed ahead!

:Indigo: :Octane: :Indigo2: :Indigo2IMP: :Indy: :PI: :O200: :ChallengeL:
I remember seeing that a few years ago. I can't remember if it was O2-only though - I seem to recall that it was limited in what graphics hw could run the photoshop accelerators.

_________________
Damn the torpedoes, full speed ahead!

:Indigo: :Octane: :Indigo2: :Indigo2IMP: :Indy: :PI: :O200: :ChallengeL:
ChristTrekker wrote:
Put me down for one of those, too. It would be a nice step up from my U5. If local, I'll happily drive over and pick it up from you.


Don't waste your time or money. Look for a Blade 150 or 1k (or over), or even an Ultra 60/80. U10s can have Creator graphics and a bit more memory than a U5, but it doesn't feel like much of a difference. Something with an UltraSPARC II (full II) or III would be much better, and the Blade 150s have improvements in several subsystems over the U5/10.

_________________
Damn the torpedoes, full speed ahead!

:Indigo: :Octane: :Indigo2: :Indigo2IMP: :Indy: :PI: :O200: :ChallengeL:
The "how" has been covered fairly well, but if anyone's interested in the "why" here it is:

The IRIX installation system hasn't been updated in a long while - it's very old-school UNIX, derived from the days when software came on tape. Because of that, there was no way to "boot" off of it, so accommodations had to be made. For older UNIXes the answer was the miniroot, a bootable, generic, small-but-complete-enough filesystem that could be block-copied to a disk in the partition that would later become the swap. This begat other problems, mainly this one: "how do I know what the swap partition is if the disk isn't labeled?"

The answer to this is to have an even smaller UNIX implementation (Sun's MUNIX) or standalone disk tools (SGI's fx or Apollo's SAUs) that can run entirely from memory and label a disk (well, not the only way - you can require that people buy pre-labeled disks from you at horrendous markups, but that fell out of favor real fast).

Since that time, with the introduction of CD-ROMs and machines with crazy amounts of memory (64MB+), many implementations have gone to an install process that's more like a "live CD", where the read-only filesystem is mounted from the CD and stuff is copied over to memory as needed (e.g. modern Solaris, HP-UX, Tru64, AIX, Linux, xBSD). Much easier, but it couldn't be done on 1987 machines with QIC drives.

_________________
Damn the torpedoes, full speed ahead!

:Indigo: :Octane: :Indigo2: :Indigo2IMP: :Indy: :PI: :O200: :ChallengeL:
R4400s use the 32-bit version (sashARCS) instead of the 64-bit version (sash64).

The one exception to this is the Crimson, which uses the older style naming (sash.IP17)

_________________
Damn the torpedoes, full speed ahead!

:Indigo: :Octane: :Indigo2: :Indigo2IMP: :Indy: :PI: :O200: :ChallengeL:
Auroras are very solidly built compared to most other machines. Better than Indy, better than the Ultra series, and the Mbus does a good job considering its age.

Sun was a latecomer to SMP, but their desktop models were pretty good once they finally came on board (PA-RISC, RS/6000, SGI and Alpha didn't have anything comparable in the desktop field until much later, if at all).

Hate to burst your bubble, but there was another revision of the Aurora chassis that's newer than yours (if I'm reading your machine right) - it used a low-profile FDD allowing the use of a standard 1/2 ht CD-ROM drive.

I have two: one 2x150 HS and one 2xSM81 SuperSPARC (can run NeXTSTEP).

_________________
Damn the torpedoes, full speed ahead!

:Indigo: :Octane: :Indigo2: :Indigo2IMP: :Indy: :PI: :O200: :ChallengeL:
NeXTSTEP/OpenStep is SuperSPARC/MicroSPARC on Sun4m only :( . They'd given up by the time HyperSPARC/TurboSPARC/UltraSPARC came out (that would have been nice - a U2 or U80 running on Elite3D or Creator3D with a SMP kernel). They made the wise choice to not mess with trying to work around Sun4c's oddities as well.

Solaris from 2.3 up through 9 will work, but I'd recommend not messing with anything under 2.6 - there's a reason that Sun kept SunOS 4.1 around. If you want the BSD experience you can run SunOS 4.1.4 just fine on SM151s (done it) - plenty fast, but it isn't the best at SMP and it's hard to find software for it.

If you want the *Step experience you might want to try and pick up LuBu OpenMagic for Solaris. It's semi-official, since it's a repackaging (with a few tweaks) of Sun's OpenStep/Solaris product, and will run on modern Solarises. Find it at http://alge.anart.no/projects/openmagic/

_________________
Damn the torpedoes, full speed ahead!

:Indigo: :Octane: :Indigo2: :Indigo2IMP: :Indy: :PI: :O200: :ChallengeL:
mgtremaine wrote:
I too have a soft spot for the SS20. The memory max is 512MB [8x64MB]; you can use the later 128MB sticks from the early Ultras, and the SS20 will see them as 64MB.

-Mike


Some of them I suspect. I tried it and got memory errors with the sticks/SS20 combination I had (worked fine in an Ultra).

_________________
Damn the torpedoes, full speed ahead!

:Indigo: :Octane: :Indigo2: :Indigo2IMP: :Indy: :PI: :O200: :ChallengeL:
ChristTrekker wrote:
bri3d wrote:
Agreed - unless it's got sentimental value like it does for VenomousPinecone (or you think it's particularly cool, although I don't see why you would) there's no point to a U10. There are better Sun hardware deals to be had.

My U5 was a freebie. That's a hard budget to beat when looking for Sparc upgrades. I don't know about where you live, but here, anything that's not x86 fetches a premium. It's very rare indeed to come across good deals on any SGI/Sun/whatever equipment.


Which is why I suggested you make your dollars count and skip the U10 "upgrade". A Blade wouldn't cost much more and would be much better.

_________________
Damn the torpedoes, full speed ahead!

:Indigo: :Octane: :Indigo2: :Indigo2IMP: :Indy: :PI: :O200: :ChallengeL:
ritchan wrote: I'm also interested in finding a cheap copy of XLC.


It doesn't exist. IBM isn't interested in hobbyists.

There's an academic program for professors, graduate research assistants, and high school teachers, but that's about it.
"Brakes??? What Brakes???"

:Indigo: :Octane: :Indigo2: :Indigo2IMP: :Indy: :PI: :O3x0: :ChallengeL: :O2000R: (single-CM)
mgtremaine wrote:
Hmm, I have 4 in mine that I had put in by mistake from a larger pile of memory and they worked. Next time I open the bugger I'll check the part numbers for clues. [Which ROM were you on, by the way? I've only used 2.25 and 2.25r]

-Mike


I just popped one in to see what would happen, noted the errors and took it out again.

_________________
Damn the torpedoes, full speed ahead!

:Indigo: :Octane: :Indigo2: :Indigo2IMP: :Indy: :PI: :O200: :ChallengeL:
dir_marillion wrote:
Just wondering, where are the new mips4 packages... why do we let the platform die?
I would like to see recent browsers/clients/apps being ported to the super Silicon platform.


Because it's a lot of work for the newer programs. Many expect newer libraries/calls than IRIX provides, many have things from GNU/Linux in them that need to be rewritten to work with IRIX, and many Nekochanners don't have the spare time. Kudos to those that do, but I haven't been building as much b/c I don't have the time to learn the programming skills necessary to do the porting.

_________________
Damn the torpedoes, full speed ahead!

:Indigo: :Octane: :Indigo2: :Indigo2IMP: :Indy: :PI: :O200: :ChallengeL:
A few years ago they did some research and found that RAID-5 arrays were prone to a particularly nasty failure mode much more often than other levels: a single disk goes down (OK, that's why I have RAID-5 redundancy, right?), but then a second drive fails while the array is rebuilding. I can't remember the numbers, but the risk was high enough that I do not use RAID-5 any more. The research did not directly cover RAID-3, but I would suspect similar results because the same process is involved (every surviving disk has to be read completely to rebuild the array).

It uses more disk space, but RAID-10 doesn't have this problem nearly as often.
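A back-of-envelope sketch of why full-array reads during rebuild are scary; the array size and URE rate below are my own illustrative numbers (the oft-quoted 1-per-10^14-bits figure for desktop drives of that era), not from the research mentioned above:

```shell
#!/bin/sh
# Rough odds that rebuilding a hypothetical 6 x 500 GB RAID-5 set hits an
# unrecoverable read error (URE), assuming a 1e-14-per-bit URE rate.
P=$(awk 'BEGIN {
    disks = 6
    bits_per_disk = 500e9 * 8                 # 500 GB per disk, in bits
    ure = 1e-14                               # unrecoverable errors per bit
    bits_read = (disks - 1) * bits_per_disk   # every survivor read in full
    printf "%.0f", (1 - exp(-ure * bits_read)) * 100   # Poisson approx.
}')
echo "P(rebuild hits a URE) ~ ${P}%"
```

With these assumptions the rebuild has roughly a one-in-five chance of tripping over a read error, which is why a mirror rebuild (reading only one disk) is so much safer.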

_________________
Damn the torpedoes, full speed ahead!

:Indigo: :Octane: :Indigo2: :Indigo2IMP: :Indy: :PI: :O200: :ChallengeL: