The collected works of R-ten-K - Page 1

Why exactly would IBM buy SGI, and where did you get the notion that Columbia was "donated" by SGI to NASA Ames? If anything, Columbia is what has kept SGI alive, seriously.
swmanager is IMHO one of the best assets of Irix. My only pet peeve is that the installation process of the OS itself is a tad clunky, but once you get it going swmanager rules!

I do not think there is really any equivalent in the free software community, although I am partial to the ports from the *BSDs and Gentoo's emerge when it comes to dealing with free OSes. Apt-get is also rather powerful, but I do not think the dependency hell is a fault of the tool so much as of the software being managed itself. You still have to deal with dependency hell when installing GNU stuff under swmanager... so I think people are confusing the installation tool with the faults of the software being installed :)

On the other hand, apt-get and its ilk can do things that swmanager can't, like installing packages on demand; try updating a GNU software toolchain with swmanager vs. apt-get. Swmanager is a wonderful tool, but seriously, it shows its age. I really do not think it has been updated in 10 yrs or so... IMHO.

I know that hindsight is 20/20, but my feeling is that SGI should have concentrated on furthering their key technologies, NUMAlink and their gfx engines, and adopted a 3rd-party CPU earlier in the game, once they decided (5 or so years ago) not to keep MIPS a competitive platform. At this point, even if SGI releases an $8K workstation it is almost impossible to justify purchasing one unless you have a very specific set of requirements, which significantly reduces the target audience for these machines and puts SGI back at square one. Oh well....
Sincerely, I have encountered plenty of problems when dealing with swmanager, but since everyone here seems to be in "proof by counterexample" mode, should I then infer that swmanager is a POS? Not so; as I said before, a lot of the problems are associated with some of the wares being packaged, not with the packager itself.

I have yet to experience any major pain with Gentoo's emerge, for example, but it sure does have a certain learning curve... same with swmanager. Once you know what you are doing a lot of problems can be solved, and that also applies to swmanager....

The lack of on-demand installation from the net is a major hassle for me; updating the freeware collections on some of my Irii machines would be a pain if I did not have some handy scripts to manage it.
That picture is of the larger visualization system, not the workstation BTW.

So the item in that picture can actually go higher than dual processors :)
That thing is huge! For only 2 processors and 2 HDDs.

Also, am I missing something, or is there no sound HW?

As per the software, SGI claims that it comes with Linux ProPack. Whatever that means....

And lastly, why Gnome? Why don't they just port 4DWM? With the Irix library translator it would be a nice transition system for current Irix users. But maybe that would just make sense, and SGI can not have that.

My take is that they went a tad overboard; maybe a smaller system with fewer slots... cheaper. A sub-$4K entry price would have opened a lot of doors, as in a single-Itanium config with a single gfx board. Oh, well...
Antnee wrote:
Look back at the old Motorola 68000. I never saw an Atari ST running Mac OS! ;)


Evidently you did not look hard enough; there were packages like the Magic Sac which allowed an ST to run MacOS. You needed the original Apple ROMs to make it legal, though...
Dr. Dave wrote:
And at that point, basic compatibility across the entire line makes the $8500 base price pretty palateable, particularly if you use the 'big' iron to run the analysis, and the desksides to build the apps and fine tune them.


Except that SGI uses plain vanilla Intel compilers, so you could actually just use a Xeon machine with ICC and do a cross compile, with no need to waste money on an $8.5K development doorstop really. In fact you could just get an Opteron from Sun, run Linux on it, do cross compiles using ICC, and run the result on your Altix if you feel evil enough...
Numalink came from Cray.


Nope, NUMALink is 100% from SGI; in fact it went from SGI to Cray, ironically enough. The GigaRing interconnect died with the T3, and the CrayLink begat the NUMALink. The Cray name was just a marketing gimmick from SGI to get Cray's name recognition; the CrayLink had zero Cray tech in it....
hamei wrote:
assyrix wrote:
I own SGI stock and am still holding it as I bought it around $15 and now it's worth $0.79.


It occurs to me that for $80 (on the penny-stocks market, soon) one could buy 100 shares then attend an annual meeting to bring up a few management deficiencies :-)


You could actually buy enough to replace Bishop, rumor has it that the only reason he is at the helm is due to the fact that he was the largest shareholder left.
The thing about the Blue Gene is that it offers a density of computation that is very, very hard to beat. Both Columbia and Blue Gene are geared towards similar algorithms, so there is no clear design win from Columbia's more flexible communication/memory hierarchy in each member of the cluster when compared against Blue Gene.

In any case, Blue Gene L provides 100+ Tflops in less space than Columbia :(
The thing is that both Columbia and Blue Gene L are used as large clusters, running algorithms that are fairly embarrassingly parallel, so their algorithmic use is fairly similar. Memory-wise, Blue Gene L actually has 16+ TB of RAM; you also need to account for the I/O nodes.
Diego wrote:
87Porsche wrote:
I don't like the chocolate Pop-Tarts. The strawberry ones are better.


Yup!; pretty much the same! Two kids fighting! :lol: :lol: :lol:


No, it is two PEOPLE having a normal technical discussion... a pretty civilised and normal thing really. People having different points of view, whoah what a concept!

I don't really appreciate being called a "kid" OK?
Well, MIPS never did any silicon themselves; they used 3rd-party foundries.

The only reason SGI bought MIPS to begin with was that in the early 90s MIPS was about to go bankrupt, and SGI needed to ensure their processor supplier stayed around. Maybe it was the acquisition of MIPS that doomed SGI....
MooglyGuy wrote: What annoys me is that there are no true system specs and system details posted anywhere about the Pixar computer. :(


I dunno if you have access to the ACM digital library, but there you can find a paper with the description of the CHAP, the CHAnnel Processor, which was the building block for the PIC (Pixar Image Computer).

The paper is "CHAP: A SIMD Graphics Processor" by A. Levinthal and T. Porter.

http://portal.acm.org/ft_gateway.cfm?id ... EN=6184618

I have some literature/brochures from Pixar and their PIC from the late 80s. It was not really a full-blown computer, but rather a co-processor. They implemented most of RenderMan in HW (or, more specifically, most of RenderMan was implemented in CHAP microcode). They were used as render engines for high-end workstations of the era; they could display the output on their own frame buffer, or overlay the resulting image onto the host workstation's frame buffer. I believe they worked in 48-bit colour. I think they could manage up to 128MB of image data per chassis, which was rather remarkable for the era.


They were also sold by Wavefront as their high-end creative workstation: basically an SGI Iris 3000 with a Pixar Image Computer attached, all tied together using Wavefront's software. The Iris was used for interactive modelling, and the Pixar would render the images.

There were two families, I believe: one had a 1280x1024 frame buffer, and the second generation could do 1024x768 on multiple channels, which was cool beans in 85/86 when they were introduced. I have some original brochures from these systems that I will scan one of these days :-) I think they could do some video I/O too (but mostly for overlaying).
"Was it a dream where you see yourself standing in sort of sun-god robes on a
pyramid with thousand naked women screaming and throwing little pickles at you?"
MooglyGuy wrote: What annoys me is that there are no true system specs and system details posted anywhere about the Pixar computer. :(


Also, check out this paper... may shed some light on the machine (pun intended)

http://accad.osu.edu/~waynec/history/PD ... cessor.pdf
brams wrote:
zahal wrote: Kinda looks like a NeXT monitor...and a NeXT cube!!


It does in fact look identical to an N4006 NeXT (Sony) Megapixel monitor, as both companies where owned by Steve Jobs then it probably is a NeXT monitor rebranded.



No, Pixar had that machine out (86ish) way before NeXT had any colour product (90ish). It is just a standard Sony Trinitron workstation monitor. Basically everyone and their mother used rebranded Sony, Hitachi, or Mitsubishi monitors: Sun, SGI, NeXT, Apollo, HP, IBM, etc.

The same-vintage NeXT and SGI 21-inch monitors were basically the same, except that the SGI's was cream and the NeXT's black.....
*sigh* I used to have a similar machine, a 4D/440 10-span VGXT. I even had the videolan board and breakout box, and even the rare white/cream colour doors for the VGXT...

I had to debug the VGXT boardset through the serial port of the command/control processor (the VGX* boards had their own dedicated 68020, I believe). Man, it was fun!
This was back in the 90s; let me see if I can dig up some of my old log books.

But from what I can remember, the command processor on the VGXT (and VGX for that matter) can actually dump state, and you can set state without needing to push breakpoints from the host. The VGXT is really its own self-contained computer, and there was lots of stuff you could run on the command processor to push down the pipeline. It actually boots, and theoretically you can boot it independently from the host (although then you can not do anything useful really).

Most of the stuff in the VGXT pipeline is hardwired, so you can only play with whatever is in the buffer or modify whatever is microcoded. You need, however, a special loader that talks to the supervisor thing running on the control processor on the VGX, and that was something only SGI techs had. I believe one side was called Burt and the other Ernie (I forgot which side, the VGXT or the remote, was Burt and which was Ernie), but I am not 100% sure. I remember something similar because when you enabled this mode you got a prompt on your console that said: "Hey Burt! What's up Ernie". I guess if you haven't seen Sesame Street this may not make sense :-)

Anyhow, once you get Ernie to talk to Burt, you can download your microcode and modify local memory at will. The VGXT seemed to have a common problem of developing the "pinstripe of death", and it was just stuck addresses, but without going to that level the easiest solution usually is to get a new span board.
Hum.... just a quick question: can I get the phone number of your supplier?

pentium wrote:
The only differencd between the two is like 10 or 12 transistors and a differently named operation command that sounds uber complex (eg, MMX technology and 3D now)


Either you were trying to be funny, or you got a hold of whatever the original poster in this thread is/was smoking.

Because if neither of those is the case, that means that

a) you have never taken a computer architecture class
b) you wouldn't know what a transistor is if it kicked you in the pants
c) your computer knowledge is limited to successfully compiling helloworld.c

VenomousPinecone wrote:

I managed to seperate the flash device from a disposable camera in junior high. Gave it to a friend in shop class, he literally shit his pants when he shorted a capacitor with his bare hand.


What's with American youth and their desire to do these things... :-)

I have seen a big f*ck-off transistor melt a monkey wrench when it shorted its leads. It was from the power distribution system of an old mainframe... that was fun, but nothing matches the shock and awe of an electric pickle!!!

Cauldronborn wrote: Well I thought about VMS but don't know what I'd do with it and just one more funky OS to learn. Since I doubt I can get Tru64 running on it I'm torn between some version of BSD and Linux (probably Debian but now not so sure). Anyone know which has the most current support for these?


Since you provided little in terms of identifying the actual machine model you have, there is no sure way of knowing which OS will be a good fit.

Both CentOS and Debian should support most low- and mid-range Alpha systems. There is a Gentoo branch that builds using the Compaq C compiler, which is far better than gcc for AXP.

NetBSD also runs like a charm.
Ebbi wrote: Is there a sticker somewhere on the case, where you can read something like a model or product number?


There should be an AlphaStation or AlphaServer moniker somewhere; is it DEC or Compaq branded?

Google is your friend too in these cases. Depending on the SRM version, you should be able to use "show config" to get the machine model and configuration info from the SRM prompt.
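From memory, the usual identification commands at the SRM console prompt look something like this (the exact output format varies by model and firmware revision):

```
>>> show config
>>> show device
>>> show version
```

"show config" dumps the system model, CPUs, memory, and bus devices; "show device" lists disks, CD-ROMs, and network adapters with their SRM names; "show version" reports the console firmware revision.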
Again, Google is your friend; see if you can identify the machine here:


http://people.freebsd.org/~wilko/Alpha-gallery/
Cauldronborn wrote: Well my question was more geared to what people's opinions/impressions are of using different OS's on Alpha and so Google is pretty darn useless. Perhaps I worded it badly but I already know I can use Tru64, OpenVMS, NT, Free/Net/OpenBSD, and various versions of Linux.


Google is your friend when it comes to finding out what your machine model is; not really hard if you spend the proverbial 5 minutes :-) . Not knowing which box you own is quite the hurdle when it comes to giving you an opinion.


An AS 1200, depending on the memory configuration, should perform quite nicely with OpenVMS. You can obtain a hobbyist kit really cheap.

Tru64 is pretty much dead, and there is nothing other than AdvFS with the built-in StorageWorks trays that makes it interesting.

I am partial to NetBSD; it is well supported and it flies on 21164-based systems.
Ebbi wrote: One harddisk with Tru64, another with OpenVMS.
In my opinion there is no reason to run an operating system, which also runs on x86 architecture.


Some people are interested in learning the inner workings of non-x86 platforms.
Cauldronborn wrote: Hmm, turns out I have a copy of Tru64 5.1. Not sure its is all here. I only have one OS disk labeled volume 1, and another that says Firmware update v5.8, oh and a Software doc cd.


Tru64 should come with at least 3 CDs: 1 base/installation + 2 Associated Products.

Depending on the package you have, you also need the license PAK(s). Otherwise I believe it only lets you operate in single-user mode, and some services need a license PAK to be enabled.
To each their own, I guess.

Linux is not that well supported on most SGI machines. However, *BSDs/Linux do support AXP boxes fairly well, so the analogy is not 100% accurate. Also, not everyone has access to the media or licenses for the original OS.

Tru64 is for all intents and purposes dead, so unless the box is to be used to reminisce about the good old days, it makes sense to use a somewhat-still-supported OS or OpenVMS.

SCSI disks are cheap, and since it sounds like the system is going to be used to simply dick around, just install whatever...
You need to map your Windows user names into your local Debian user database. You can also make the Samba machine the domain controller if you are going to have many users sharing files between the Debian server and Windows clients, to make administration easier. Basically, the Unix machine freaks out because it sees requests from users that do not have any sort of ID that it knows of.

Probably the same problem is going on with the NFS connections. Basically, your server knows about the machines it is being connected from; it does not, however, know anything about the users on those machines.

Try some of the millions of webpages about debian and samba :-)
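As a minimal sketch of the user-mapping idea (the map file path below is the conventional one, and the user names are made up), the relevant smb.conf pieces would look something like:

```ini
; /etc/samba/smb.conf (excerpt) -- map Windows logins to local Unix accounts
[global]
   workgroup = WORKGROUP
   security = user
   ; plain-text map file, one "unixuser = WindowsUser" entry per line
   username map = /etc/samba/smbusers

; /etc/samba/smbusers would then contain lines like:
;   jdoe = "John Doe"
```

Each Windows user accessing the share also still needs a Samba password entry on the Debian box (smbpasswd -a jdoe).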

The Keeper wrote: whereas the other platforms are relatively stable, is a testament to IRIX's usefulness. I'd like to think that's the case, anyway.


I think the word you were looking for is "proprietary-ad-ness" :mrgreen:
I think a lot of things in the Mac world are affected by the reality distortion field, and none more so than the prices of used Macs. For some reason the asking prices for 2nd-hand Macs are fairly insane, but as long as there are people willing to pay those insane prices, they will stay that way.
The G5s have had a dismal track record regarding reliability. My G5 had its main board replaced 3 times during its first 6 months, and none of the G5s at work were issue-free during their first year after purchasing. I would not recommend a second-hand G5 unless the machine has been under AppleCare for a while, long enough to guarantee all the boards and components were brought up to reliable revs.

Ironically, if you look on eBay for broken G5s, plenty of morons are still buying non-functional machines for $500+. Unbelievable.
SPEC is not multithreaded; throwing more cores at it does not change the SPEC results.
OK, let me expand on my answer.

SPEC CPU (SPECint + SPECfp) is/was designed to test single-CPU performance, not system performance. That sounds like a quaint distinction, but it really is not.

In fact, you can actually compile SPEC with all the system calls and static dataset generation baked in, and basically run that SPEC executable without an OS. For each benchmark in SPEC, after initialization there is no disk access, and usually it is run in single-user mode so as to limit the interference of interrupts etc. That is, SPEC is designed to really point at the performance of your core and its associated memory subsystem. Not I/O, not gfx, not the OS, and not the compiler.

Whenever someone publishes a result with flags like autoparallel et al., a lot of people tend to discard those results. Why? Because when high levels of optimization are reported, there is a belief that there is a lot of interference by the compiler. In that case it is not the CPU you are measuring, but the combination of CPU+compiler. Note that in the cases where you are interested in knowing the effect of compiler optimizations on performance, it makes perfect sense to take aggressive levels of optimization into account.

Now, Intel makes the claim that the Itanium is really a CPU+compiler combination. And that is true: unlike modern out-of-order machines, the IA64 is far more dependent on the compiler, since the quality of static scheduling and compiler hints makes a huge difference in its performance (heck, it is designed that way as a matter of fact).

The problem the architecture community has with very aggressive levels of compiler optimization by people like Intel is that, for the most part, there are limited levels of instruction parallelism in the benchmarks (by design, actually), and code with enough ILP to warrant more than 1 core is usually scheduled by hand. There is a ton of research on automatic parallelization of SPEC that basically concluded it is not worth the effort (at least up to SPEC06, which may change in the next SPEC iteration).

However, Intel et al. actually employ teams whose only job is to manually tweak code for SPEC, the reason being that the publicity that can be harvested from a good SPEC result is well worth the investment. That fact makes some people take results with very aggressive compiler optimizations with a grain of salt: the rest of us mere mortals don't make a living from running SPEC but from running actual code, and most of the actual code we run on our CPUs is not going to be optimized by hand...

And thus, one has to be careful about how to read SPEC results. For the most part, as I said before, the understanding in the architecture community is that SPEC CPU is there to isolate single-core/memory-subsystem performance, and when aggressive optimizations and deviations are allowed, the results tend not to give you the performance of what you are trying to isolate. To isolate the effect of multiple cores/architectural support for multiprocessing, there are things like the SPLASH-like suites, or even SPECrate (which is basically a bunch of SPEC CPU processes launched in parallel); same for I/O, or networking, or even the gfx subsystem. Every benchmark suite is designed to try to provide a good behavioral estimate of the subsystem you are trying to analyze.
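The speed-vs-rate distinction can be sketched numerically. This is a conceptual illustration only, not the real SPEC tooling: the reference time and measured times are invented, and the real metrics are geometric means over a whole suite.

```python
# Conceptual sketch -- contrasts a "speed" metric (one copy, elapsed time
# vs. a reference machine) with a "rate" metric (N independent copies run
# in parallel, aggregate throughput). All numbers here are invented.

REFERENCE_TIME = 100.0  # hypothetical reference-machine seconds for one copy

def speed_ratio(measured_seconds: float) -> float:
    """SPEC-style speed ratio: reference time over measured time."""
    return REFERENCE_TIME / measured_seconds

def rate_metric(copies: int, measured_seconds: float) -> float:
    """SPECrate-style throughput: copies completed, scaled by reference time."""
    return copies * REFERENCE_TIME / measured_seconds

# One copy finishing in 50 s scores the same speed ratio no matter how many
# idle cores the machine has:
print(speed_ratio(50.0))                # -> 2.0
# Four copies finishing together in 55 s expose the multicore throughput win:
print(round(rate_metric(4, 55.0), 2))   # -> 7.27
```

Which is exactly why adding cores moves the rate number but leaves the speed number alone.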

Heck, there are even some groups that speed up some of the kernels in SPECfp with the GPU, but in that case it is easy to understand why that is not useful: when you are interested in knowing the FP performance of your CPU, it doesn't help if part of the benchmark is accelerated by the GPU...

I have not had enough coffee yet, but does this make sense?
nekonoko wrote:
hamei wrote:
That could be on your list next, Neko. Clatter-clatter-bang bang is kind of a friendly sound thunking away in a back room of the house ... plus you have an excuse for ALL CAPS - "sorry, I was posting from my Teletype" :)


Heh, I'm getting there. Just dragged home an IMSAI S-100 system, complete with external wood/metal tower containing dual full height 8" drives. Perfect system for an ADM-3a - terminal output only. Best of all, it was free :)



Next thing we know, you will be running from the law because you just logged into Protovision using "joshua" as the backdoor password and tried to nuke Las Vegas...

nekonoko wrote:
R-ten-K wrote:
Next thing we know, you will be running from the law because you just logged into protovision using "joshua" as the backdoor password and tried to nuke Las Vegas...


Heh - funny you should mention that; picked up another IMSAI yesterday. This one is an 8080 with the front panel LEDs and toggle switches - just like the one in WarGames :)



Just remember, make sure you don't play tic-tac-toe, old computers tend to blow up playing that game.

BTW, is the 8080 IMSAI in full operational order?

Wow, most excellent score. Much respect!


I remember watching Computer Chronicles; that was the dude that did the co-hosting with the CP/M feller, right?

I thought PA-RISC had its caches virtually addressed; isn't that the case? (At least 1.1 and 2.0 did.)

PA-RISC is weird as an architecture for sure though...
Some poor souls at QEMU are trying to get HP-PA emulation going.

Also, didn't the Itanium offer some level of support for emulating HP-PA? Which may be a reason why IA64 is so @#$@#$ up? Trying to offer support for both x86 and HP-PA must have been a nightmare, ugh....

Also, to be fair to HP-PA, some architectures do require physical addressing for certain privileged instructions. Looking at their docs, they are physically tagging the cache, so "equivalent" mapping shouldn't be too bad (probably in some XX-megabyte offset chunks). Alas, multiprocessing must be a PITA, unless they have some very, very clever cache controller going on there.

Anyhow, for those interested in full-system emulation, I recommend taking a look at QEMU: fully open source, and it supports x86, SPARC, MIPS, PPC, and ARM among others. You may be able to extend it to do your own system emulation if you are so inclined. SIMH also supports a ton of old (historical) architectures.