
Growing out of the RISC itch - hopefully

After spending quite a bit of money on an SGI Octane, a Sun Blade 100, a rather powerful yet useless IBM 9114-275, and lastly a PowerMac G5, I've come to the conclusion that x86 can't be beat for desktop use.

In retrospect, that might seem like a really dumb and obvious statement. But I think it's finally gotten me past the "halo" notion that computers with RISCky ISAs will work wonders for me. They don't. They just sit around looking cool, and the SGI Octane in particular is a great conversation piece, although it seems to be losing out to the PowerMac G5. One thing's for sure - they're very reliable considering their age.

First there's the price/performance ratio. Actually, it doesn't matter that much to me, since I still have a soft spot for these babies. But I'll note that I've spent enough on all this hardware to buy a new x64 computer that would run circles around any one of these. That's true for any old computer - the only exception is a used Mac Pro, but I'll get to that in a moment.

My typical use for the machines goes like this - I run Firefox, a terminal, sshd, and uTorrent, play music in several different formats (SPC/PSF, WavPack, FLAC, APE, Musepack, Vorbis, MP3, AAC, SHN, TTA, and even the occasional MP2 file), play anime HDTV rips, and I generally keep all of these open at the same time. I go out of my way to find something for the computer to do. In the case of the Octane, Blender was an excuse to do something cool with it - Maya was just scary. It looked like C4D, but was nowhere near as intuitive. Occasionally I even manage to compile code with MIPSPro and get something useful out of it. Bottom line, I'm still looking for a use for it.

In the case of the Blade 100, it was supposed to be an unobtrusive file server. That fell through because I'd have had to buy a SATA card for it and run Linux, defeating the purpose of getting a SPARC, and I'd have needed a gigabit ethernet card too. A file server that maxes out at 137GB per hard drive is no file server at all. On top of that, the fan it came with was rather loud, and it crashed every now and then. I searched online and found that the error, which only gets reported via the serial port, is some general umbrella term for god knows what's actually happening. I don't have the time or patience to troubleshoot that, plus my Octane's sitting on top of it.

The 9114-275 had enough power to be a desktop machine, but it just didn't have the software. That was really unfortunate, and I think that's why I was so disappointed with it. To use it as a desktop, you need Linux - take it from me, AIX as a desktop sucks. You should think of AIX's X11 as something that exists solely to let CATIA run on it. You also need to pair this machine with '95-era graphics cards if you want good support; everything faster than a Radeon 7000 is a hack. It's terribly loud, too, and the sound card is of the lowest quality possible. As someone who keeps his music playing whenever he's awake, that's a definite no. I think it'd do well as a server, since it's powerful and has plenty of RAM and integrated gigabit, plus LPARs and WPARs are cool. I don't know much about what IBM's customers use their POWER CPUs for, but it's probably not LAMP.

The PowerMac G5 has quickly supplanted my laptop as my desktop. It can never be the main machine, since it can't run foobar2000, but it's quiet, takes large hard drives (I'll definitely set it up as a file server once I get the $ for HDDs), is still RISCky, and as a bonus runs OS X with all the perks of a desktop OS. I've come to ask a lot of it, much more than I expected out of the Octane or the 9114-275, and so far it's handled all that admirably - except for foobar2000's incredible tag management. I still have to use my Windows 7 laptop to convert the codepage in music tags, for instance, and to keep filenames in a certain scheme. It plays 720p well. The real problem is 1080p: with a custom-compiled mplayer+ffmpeg-mt it can play it, but only barely keeps up. I can't stand that - I want it to play smoothly, or not play at all! A dual 2.3-2.7GHz G5 would probably manage 1080p smoothly.

That was when I thought to myself: if I had a thousand bucks, I'd get a used Mac Pro, use OS X for everything, and run foobar2000 under WINE. And then it hit me - I had spent the equivalent of a new computer touring the world of RISC, only to come back to square one: x86. I felt a bit of regret, but then I was glad I had taken this path. It really is a money sink, although not as much as being an Apple fanboy. I know a lot more about computers in general now, and I have a lot more experience with UNIX, which will hopefully land me a job this Wednesday.
Originally Posted by Tommie
Please delete your post. It is an insult to all the hard work society has put into making you an intelligent being.

Like somebody at AMD said about a decade ago: Benchmarking is like sex. Everybody brags about it, everybody loves doing it and nobody can agree on performance.
Nothing is price-competitive with x86 in terms of raw performance and desktop applications - Intel and AMD produce x86-based CPUs and have poured so much R&D money into their microarchitectures that competing with them in their arena (general-purpose, commodity computing) is essentially impossible. ARM has a strong hold on the embedded and mobile market, and most other CPU architectures remain in their own niches, at the high end or in specific embedded devices (Nikon digital cameras oddly seem to be the last bastion of SuperH, for example, and Broadcom MIPS SoCs still own the consumer router market). I reached the same conclusion you did before I'd even really owned one, so I can second that - never buy an "alternative"/"vintage"/RISC desktop or server for home expecting to use it daily. There's just no point when spending the same amount on an x86 system gets you twice the speed.

That conclusion didn't change my systems-collecting hobby, though - I've just always understood that any money I put into it isn't going to much use (which is why I've spent very little, all things considered).

Good luck on the job! Depending on how my current startup goes, I might find myself working on some big-iron UNIX stuff come this fall.
I think I would have reached the same conclusion without spending a cent if only I weren't so good at convincing myself :D It's surprising to hear that SuperH is in Nikon cameras. Great, now I have to have a Nikon camera too.
Apple fanboyism - such a terrible term. At what point does someone become a fanboy? I have 5 Macs - am I a fanboy? Some people here are collectors and have 20 Macs. Are they fanboys?

Investing in a quality product is like investing in a workstation. It's hard to call an iMac workstation-grade, because if a part fails the whole machine has to come offline for repair, but otherwise the machine is just as stable. I have no problem spending big dollars if it means I can safely rely on my equipment. x86 doesn't have to be a disaster - it's just a CPU. It's the platform that surrounds it that matters, and the Mac platform is pretty fantastic.
Stuff.
Have you tried MPlayer OSX Extended? It lets me play modern XviD-encoded videos on my G3 Pismo; it should play everything on your G5 machine.

http://mplayerosx.sttz.ch/
:Indigo2: Extreme R4400 200MHz (1MB L2) 160MB
Yup, MPlayer OSX Extended comes with official binaries that couldn't play 1080p by default, but I compiled the backend myself and stuck it in there, and it does seem smoother. In any case, still not enough for butter-smooth 1080p. The aggravating thing about the SVN ffmpeg-mt binary that came with MPlayer OSX Extended was that the 1080p video was clearly lagging, yet top/htop showed both CPUs only ~80-90% loaded. It might have something to do with the binary being built for a G4, while the G5 has the five-instructions-per-group dispatch scheme it inherited from the POWER4. Although if that's the case, I'm not sure how top/htop would reflect it.

Apple fanboyism is when you see Apple's pricing and start to think it's a pretty good deal, without considering normal market prices for the components. And then you start to spew shit about how Mac hardware is obviously better than everyone else's, clearly not having a clue what you're talking about - like the idiot who asserted that a G5 is basically a souped-up G4 - while you slowly but surely get locked into their applications... then you get an iPod, and you need iTunes for that too - oh look, an iPad; I don't know what it uses to sync with a PC/Mac, but it can't be any better.
I would recommend getting the biggest, nastiest HP (xw9-series or Z800) you can afford and maxing out EVERYTHING. You get your workstation label, killer performance, and sweltering power consumption, but you can still run Windows - and I've found Windows 7 to be far more stable than OS X Snow Leopard or Ubuntu Intrepid.
You eat Cadillacs; Lincolns too... Mercurys and Subarus.
Dang, MPlayer OSX Extended should (theoretically) fly on a G5 processor; I don't know what's keeping it from working properly. I guess it's a platform-related problem, as you think.

A nice G4 vs. G5 comparison from Apple:

http://developer.apple.com/legacy/mac/l ... n2087.html
There were faster-clocked aftermarket G4s (up to 1.8GHz). A 1.8GHz G4 performed almost identically to the 1.8GHz G5.
zmttoxics wrote: There were faster-clocked aftermarket G4s (up to 1.8GHz). A 1.8GHz G4 performed almost identically to the 1.8GHz G5.


In what benchmark? I'd imagine the 1.8GHz G4 would perform similarly to the G5 in synthetic 32-bit integer and possibly float (not sure how Motorola's AltiVec units stack up against IBM's SIMD stuff, but I'm guessing it's similar). However, the G4 had a painfully (almost ridiculously) slow bus and thus incredibly poor memory bandwidth compared to the G5, so I'd assume the G5 still blows it away in the real world.
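If anyone wants to see that bus difference for themselves, a crude STREAM-style copy loop is enough to expose it. This is only a rough sketch of how I'd measure it - the array size, repeat count and gettimeofday() timing are arbitrary choices of mine, not any kind of official benchmark:

Code:
/* bw.c - crude memory bandwidth estimate via a large array copy.
 * Build with something like: gcc -O2 -o bw bw.c
 * Array size and repeat count are arbitrary; treat the result as a rough guide only. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>

#define N    (16 * 1024 * 1024)   /* 16M doubles = 128MB per array */
#define REPS 4

int main(void)
{
    double *src, *dst, secs, mb;
    struct timeval t0, t1;
    size_t i;
    int r;

    src = malloc(N * sizeof *src);
    dst = malloc(N * sizeof *dst);
    if (!src || !dst) { perror("malloc"); return 1; }

    for (i = 0; i < N; i++)          /* touch the pages first */
        src[i] = (double)i;

    gettimeofday(&t0, NULL);
    for (r = 0; r < REPS; r++)
        memcpy(dst, src, N * sizeof *src);
    gettimeofday(&t1, NULL);

    secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    /* bytes read plus bytes written per pass */
    mb = (double)REPS * 2.0 * N * sizeof *src / (1024.0 * 1024.0);
    printf("copied %.0f MB in %.2f s -> %.0f MB/s\n", mb, secs, mb / secs);

    free(src);
    free(dst);
    return 0;
}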
zmttoxics wrote: x86 doesn't have to be a disaster - it's just a CPU. It's the platform that surrounds it that matters, and the Mac platform is pretty fantastic.


x86 isn't pretty though. Architecturally it's pretty ugly (then again, look at VAX - that was so ugly that they started modifying it with the second generation of CPUs - solid but ugly), but it does work, and hey - with compilers we're all Turing machines now, right?

I would like to see a discussion among people who really know their stuff about which implementation is architecturally best - no concerns about price/performance, just a discussion of how things were laid down and planned. Some things are interesting but difficult (the AS/400's "everything is an address, including the FS" and its intermediate code level), some things are beautiful on paper but wound up with a number of tweaks in practice (most first-gen RISC processors), and some have too much baggage (Itanium - the original spec had the x86 units tacked on; I'm not sure whether the second generation would be a contender, as I don't know enough).

Alpha would probably be a contender - pretty clean, and PALcode was neat. What else?
Damn the torpedoes, full speed ahead!

Living proof that you can't keep a blithering idiot down.

:Indigo: :Octane: :Indigo2: :Indigo2IMP: :Indy: :PI: :O3x0: :ChallengeL: :O2000R: (single-CM)
bri3d wrote:
zmttoxics wrote: There were faster-clocked aftermarket G4s (up to 1.8GHz). A 1.8GHz G4 performed almost identically to the 1.8GHz G5.


In what benchmark? I'd imagine the 1.8GHz G4 would perform similarly to the G5 in synthetic 32-bit integer and possibly float (not sure how Motorola's AltiVec units stack up against IBM's SIMD stuff, but I'm guessing it's similar). However, the G4 had a painfully (almost ridiculously) slow bus and thus incredibly poor memory bandwidth compared to the G5, so I'd assume the G5 still blows it away in the real world.


Ya, there are still many, many benefits to a G5 (especially the DDR2/PCIe models). I can't find the article I read, but it was all movie/image editing and rendering, and the difference was something like 5-10 seconds. In the end, though, it's still comparing old with old. The world has moved away from the PPC Mac.
ritchan wrote: You should think of AIX's X11 as something that exists solely to let CATIA run on it.

That's probably a true statement - not just "think of", but fact. UNIX workstations were usually purchased as "workstations" to accomplish a specific task or run specific software, not as general-purpose desktops. Anything else was gravy, SGI's included.
I don't know how correct it is to say "x86 is architecturally ugly" when there have been tens of microarchitectures implementing the x86 ISA - if you're talking ISAs, it'd probably be better to call the x86 instruction set ugly, and I think even that's debatable (it's certainly huge and not very pretty).
I've always disliked the ARM ISA, especially all the edge cases around switching between ARM and Thumb mode, and in terms of ISAs, MIPS without extensions (specialized COPs, VFP, etc.) is definitely the cleanest and simplest.
SAQ wrote:
zmttoxics wrote: x86 doesn't have to be a disaster - it's just a CPU. It's the platform that surrounds it that matters, and the Mac platform is pretty fantastic.


x86 isn't pretty though. Architecturally it's pretty ugly (then again, look at VAX - that was so ugly that they started modifying it with the second generation of CPUs - solid but ugly), but it does work, and hey - with compilers we're all Turing machines now, right?



I would recommend that people not use qualitative arguments when dealing with something as quantitative in nature as computer architecture.

It is not about "ugly" or "pretty"; it is about fast or slow, power consumption, price/performance... or any other sort of meaningful metric - i.e. things that can be objectively measured and demonstrated/modelled.


I would like to see a discussion among people who really know their stuff about which implementation is architecturally best - no concerns about price/performance, just a discussion of how things were laid down and planned. Some things are interesting but difficult (the AS/400's "everything is an address, including the FS" and its intermediate code level), some things are beautiful on paper but wound up with a number of tweaks in practice (most first-gen RISC processors), and some have too much baggage (Itanium - the original spec had the x86 units tacked on; I'm not sure whether the second generation would be a contender, as I don't know enough).

Alpha would probably be a contender - pretty clean, and PALcode was neat. What else?


To me it sounds as if you are still hung up in the 70s/80s and still think of the ISA and the microarchitecture as the same thing. Instruction decoding and programming models for scalar machines stopped being a main limiter a while ago. At this point x86 induces a single-digit overhead in terms of area/power, and it seems that with current microarchitectures CISC decoding no longer imposes any significant performance hit on a well-balanced pipeline.

So when you get almost unlimited backwards compatibility with the largest software catalog on earth, while offering far better price/performance than competing architectures... MIPS, Alpha, et al. stopped having much of a value proposition in the desktop and mid-range segments. The main space where x86 will have a tough time competing is the deeply embedded market, where there are other, more entrenched platforms (especially ARM) whose main value proposition is, ironically, the same as x86's on the desktop: a larger software base and more developer momentum than embedded x86.

Whether we like it or not, scalar processors are now a commodity - i.e. there is little room for truly disruptive developments in that space. The field is well understood, and the price/performance that is achieved is very, very hard to beat.

So we can croon about beauty all we want. But when you can get an Intel/AMD part that costs just a few hundred bucks and gives you 80 GFLOPS, and you can be almost guaranteed that just by waiting you will be able to double that performance at a similar price point, there is little incentive to try to reach the same level of performance with a large SGI system - one that costs orders of magnitude more money, costs more to operate, and can be a PITA to program correctly to extract the parallelism needed for a large number of slower cores to match a small number of very fast ones.

At least for the very exciting high end of things, GPGPU is probably where I would advise people to start looking if they want a challenge that is still being fought. And honestly, when you can get a 1 TFLOP ASIC under your desk for a few hundred bucks, a lot of very challenging software approaches can be investigated.

I think it is a rather exciting time, once you get past the weird psychological hangup some people have about instruction sets they are never likely to program in assembler anyway. There is no denying that nowadays you can get a system with a few very fast, aggressive out-of-order cores, immensely fast data-parallel co-processors, gobs of memory bandwidth, and almost unlimited storage... all at very affordable price points. Imagine having had such a machine a few years back - something that fits neatly under your desk, lets you explore MIMD and SIMD programming models, can run different OSs, etc. I would have creamed my pants then, so there is no reason why I should not be very excited about it now.
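Just to make the MIMD vs. SIMD point concrete, here is a toy sketch in C. The pthreads/auto-vectorization split, the thread count and the array size are purely my own assumptions for illustration, not tied to any particular machine:

Code:
/* models.c - toy illustration of the two programming models mentioned above:
 * MIMD via POSIX threads (each core runs its own control flow over its own
 * slice) and SIMD via a plain loop a vectorizing compiler can map onto the
 * CPU's vector unit (e.g. gcc -O3 with -maltivec or -msse, depending on target).
 * Build with something like: gcc -O3 -o models models.c -lpthread */
#include <pthread.h>
#include <stdio.h>

#define N        (1 << 20)
#define NTHREADS 4

static float a[N], b[N], c[N];

/* MIMD part: four independent instruction streams, one per thread. */
static void *worker(void *arg)
{
    long id = (long)arg;
    long chunk = N / NTHREADS;
    long i;

    for (i = id * chunk; i < (id + 1) * chunk; i++)
        c[i] = 2.0f * a[i] + b[i];
    return NULL;
}

int main(void)
{
    pthread_t threads[NTHREADS];
    long i, t;

    for (i = 0; i < N; i++) { a[i] = (float)i; b[i] = 1.0f; }

    for (t = 0; t < NTHREADS; t++)
        pthread_create(&threads[t], NULL, worker, (void *)t);
    for (t = 0; t < NTHREADS; t++)
        pthread_join(threads[t], NULL);

    /* SIMD part: one data-parallel loop; each group of iterations can be
     * executed as a single vector operation. */
    for (i = 0; i < N; i++)
        c[i] = 0.5f * c[i];

    printf("c[123] = %f\n", c[123]);
    return 0;
}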
"Was it a dream where you see yourself standing in sort of sun-god robes on a
pyramid with thousand naked women screaming and throwing little pickles at you?"
I missed reading your posts, R-ten-K. Welcome back.

-Mike
mgtremaine wrote:
I missed reading your posts, R-ten-K. Welcome back.


Ditto.

_________________
Maverick 3: Athlon X2 7750, 2gb, Windows Vista
Frank Dux: SGI Octane2 R12k 400mhz 1.5gb

"Chief, look! I learned to make fire! Who knows what we could do with this... We should learn to control it!"

"Ridiculous. How can you justify wasting time and effort on this so-called 'fire' when our children are freezing to death at night?"
All of you are heretics! :evil: :twisted:

_________________
iBook G4 1GHz "Kursk" (PPC)
Terian Tiger 4 "Vyasma" (Itanium)
Boring Homebuilt "Voronezh" (Athlon X2)
RS6000 7044-170 333MHz/1GB
Thanks guys, it feels good to have time to get lost on the interwebs again... ;-)

I understand where SAQ is coming from; I assume he is old school, so he has witnessed the growing pains of the computing field. And who can blame anyone for having a romantic ideal of the past? There are obviously, for us geeks at least, certain emotional attachments to some technologies, which defy logic and enter the realm of aesthetic preference.

RISC was born as a response to a very specific challenge, under a very specific state of the technology of the time. The thing is that by now, the open-ended hardware challenges which deeply shaped this field have been overcome by competent commoditized products. What I think happened is that, as far as the desktop/workstation/small server is concerned, RISC and CISC processors ended up offering similar execution performance. However, x86 had huge inertia due to its larger catalog of applications, and drastically lower cost due to its larger production volume. Thus, RISC's original value proposition for the desktop et al. evaporated.

Now we are at a point where personal computers have evolved into very powerful programmable machines, which can execute different programming models in parallel very fast, with very large storage and fast communications...

That the ISA of those machines may be ugly, or that their organization may not be the most elegant, or that their OS may not be the most robust, becomes irrelevant when the machine is fast enough and cheap enough. There will be markets which require higher levels of reliability, or higher levels of speed, than what commodity stuff offers. But there are also new kinds of applications enabled by very powerful, cheap machines. So maybe more people can now concentrate on actual Computer Science rather than on figuring out what the best configuration for a programmable machine should be. ;-)

An old professor of mine used to be adamant about saying that Computer Science is not about computers, in the same sense that Astronomy is not about telescopes.

And if x86 is too ugly, there is cool stuff out there like LLVM. Maybe people can simply create their own ideal processor in software and abstract the x86 out of the equation ;-) Maybe a cool project would be to create an idealized ISA target with all the interesting features from other instruction sets.
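For what it's worth, that is more or less how LLVM already works: you write plain C (or anything with an LLVM front end), the compiler emits largely target-independent IR, and the ugly ISA only shows up when a backend lowers it to machine code. A trivial sketch - the file name is mine, the flags are just the standard clang invocation:

Code:
/* saxpy.c - compile to LLVM IR with: clang -O2 -S -emit-llvm saxpy.c
 * The resulting saxpy.ll contains LLVM IR rather than x86 (or PPC, or ARM)
 * machine code; the concrete ISA only appears when a backend lowers it. */
void saxpy(int n, float a, const float *x, float *y)
{
    int i;

    for (i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}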

R-ten-K wrote:
It is not about "ugly" or "pretty"; it is about fast or slow, power consumption, price/performance... or any other sort of meaningful metric - i.e. things that can be objectively measured and demonstrated/modelled.


I sort of see where SAQ is coming from, though. As you correctly point out, x86/x64 may be the undisputed standard nowadays, and it is good enough for what much of the market wants. But it is also undoubtedly ugly. Look at how much startup code is required on x86 to get to start_kernel() in Linux, compared with, say, Alpha, which SAQ cites as a good design. A lot of it is caused by the continual addition of new features and the need to retain compatibility with the old ones in x86/x64.

Of course, most of the population neither needs to know nor should care about this, but that doesn't stop it from seeming ugly to those who do need to know.

R-ten-K wrote:
Maybe people can simply create their own ideal processor in software and abstract the x86 out of the equation ;-) Maybe a cool project would be to create an idealized ISA target with all the interesting features from other instruction sets.

Virtualisation/emulation and binary translation have been interesting steps forward in this area. The traditional dependence on backward compatibility with a particular ISA is becoming less relevant as these technologies develop further. Apple's transition from PPC to x86 was a good example.