The collected works of kramlq - Page 1

zizban wrote: Heck port Irix to x86.


If what people on here who appear to have seen the IRIX source say is true, the code is quite specific to MIPS/SGI hardware, so an x86 port would involve quite a bit of work. Then consider the additional work needed to update IRIX to what would be considered a modern UNIX desktop OS that can compete for business with Linux - full USB + FireWire support, lots of PCI drivers for x86 hardware, more file systems, Linux/x86 binary compatibility, a more impressive UI, etc.

I think the new CEO would not last long if he tried to finance all this, especially when Linux is now fairly acceptable to companies*, and it already has all this functionality.

* It is quite ironic for me to see how companies now find Linux/Open Source so acceptable. I still remember the long series of high level meetings held at a multinational I worked at a decade ago in order for senior management to determine if SAMBA was ok to use in a production environment.
I am just curious why IRIX used the following directory layout decisions. Anybody know the reasoning behind it?
- Having system admin binaries in /etc/. This makes it a little awkward when you want to grep for something in the configuration files. Is it to ensure that they are always accessible even when only root is mounted? [EDIT: Just remembered, many are symlinks, so mount issues can't be the reason]
- Having root's home directory as "/". As you use editors, and in some cases install applications (e.g. Adobe), you tend to get silly .<something> files and log files left in "/". Why not use "/root"?
- Why are home directories in /usr/people as a default - it makes things like a "find /usr -name <something>" search through even more directories and files than it should have to, compared with a "/home" layout (e.g. when searching to locate a binary or library whose path you have forgotten).

Perhaps I got too used to Linux... but these are some places where I think Linux makes more sense than IRIX.
Will anything break by changing root's homedir to "/root", or users to use "/home"?
Thanks for the replies. I've got some passwd file editing to do... :)
The Keeper wrote: If nothing else, the fact that DEC, Sun, SGI, etc., have all been around for about the same amount of time, and the fact that Linux on SGI MIPS is in such early development, whereas the other platforms are relatively stable, is a testament to IRIX's usefulness. I'd like to think that's the case, anyway.

There is a big difference though. DEC's policy was that their Alpha systems were part of a hardware business, and they wanted as many OSes as possible running on that hardware. Hardware documentation is available for just about everything. They also placed no restrictions on modification/use/redistribution of the PALcode/monitor example source that ships with their early eval boards, meaning Linux didn't have to start from scratch wrt PALcode (would have been a lot of very low level work). Also, DEC themselves did a lot of the early work on the port.

Initially Sun were like SGI, with mixed feelings about open source. They even went as far as taking back a loaned multiprocessor machine once they found out it was being used for Linux development. They wouldn't release hardware documentation, so some reverse engineering had to be done. But even so, things like SPARC, SBus and OpenFirmware are all well documented open specifications. And vendors like Tadpole did release hardware documentation, which helped for Sun's SPARC sun4m systems. Gradually Sun's attitude changed, and by the time of the Ultra series, they were releasing hardware documentation.

SGI released hardware documentation for the Indy, and the initial port of Linux to Indy was done at SGI by an intern.
But only parts of the hardware documentation were released, and it isn't like a normal hardware spec (e.g. in one place, the interrupt controller doc says something like "contact someone from the VINO team to find out the exact addresses used"). So a basic Indy with Newport graphics is documented, but most other SGI configurations are not. Combine that with the fact that almost everything in an SGI MIPS system is very proprietary, and you have a very difficult porting job for everything but the standard Indy. On top of this you have little information about the hardware workarounds needed for things like R10k speculation or the alleged PCI bus issues on O2 that make PCI card support troublesome.
The swap space really needs to be bigger than 256Mb. Here's a little background...

In modern systems swap needs to be big enough to hold the data from all processes that is not easily reobtainable from its original source. So things like code and read-only variables in a program don't need to be stored in the swap file - they can be discarded at any time and reobtained later from the executable file of the related program. But everything else (read/write variables and arrays in programs, writable shared memory, program stacks, program heaps) will need to be stored in the swapfile if it ever gets paged out of memory for some reason. So you have the above items for every program you are running, and also libraries will potentially have a copy of their associated read/write data for every application that links with them. Even if such items never actually get physically swapped out to disk (swapping ideally only happens due to tight memory conditions), many operating systems like to reserve the space in the swap file for them anyway, as it makes handling out of memory situations easier. (I don't know if IRIX does swap file reservation like this as it is a closed source kernel)

Notice all the calls mentioned in the errors are ones that involve allocating/reserving memory in some way - fork clones an existing process, brk extends the heap.

So you should always have a large-ish swap file. It's difficult to say what size is best, but 1-1.5 or even 2 times the size of RAM is a common setting. Contrary to popular belief, this is purely a heuristic - there is no way to calculate the exact amount of swap you need based on the amount of RAM you have installed. I've witnessed certified Microsoft "experts" try to explain otherwise to people right in front of me, purely based on their misunderstanding of what happens at kernel level.
nekonoko wrote: If you're unable to repartition or create a new swap partition on a secondary disk, IRIX does have the ability to use swap files for additional swap space. There are examples for setting them up in the swap man page - here's the relevant section:

Code:

The following adds 50Mb of swap space to the system using a file in the
/swap directory:

/usr/sbin/mkfile 50m /swap/swap1

/sbin/swap -a /swap/swap1

To make this swap area permanent (automatically added at boot time) add
the following line to /etc/fstab:

/swap/swap1 swap swap pri=3 0 0


If at all possible, you should always use a dedicated partition. Using a file for swap means the swapping code needs to go through the filesystem switch every time it needs to read or write a page to disk, so it ends up having to go to other parts of the disk to read and update filesystem metadata first (bitmaps, journal logs, translating file blocks to blocks on disk, etc.). With a raw swap partition on disk, the layout is simpler (typically just a bitmap at the start, followed by the pages) and device access is also simpler (fewer layers of translation).

Considering how easy it is to back up and restore or clone a hard drive under IRIX, I think it is worth taking the time to properly extend the existing swap partition. Especially if you know your system is already actually needing to swap while running (as in the original post).
stuart wrote: Changing the subject slightly, does anyone know how modern multi-platter disks arrange their data? Are they effectively a JBOD-type setup where the first platter is filled and then data starts being written to the second, or is it more like a striped setup, where head movement is minimised by writing as much data to the current cylinder as possible before moving on?


I am mainly familiar with the internals of traditional UNIX style file systems (Sun/BSD/OSF1 UFS, Linux EXTx). The way they do it is to divide the disk up into logical allocation areas called cylinder groups (each consisting of a number of adjacent cylinders, i.e. disk tracks right down through the platters). The idea is that this forms a localised storage area on the disk, as there is a known limit to the amount of disk arm movement needed between reading any two items in that group. Each cylinder group is essentially treated as you might expect the entire disk to be - it stores its own (portion of the) inode table, its own block allocation bitmap, and then it has its own file blocks. Important stuff like partition tables and superblocks may also be replicated in each cylinder group.

The file system is implemented to build upon this locality (whenever possible): e.g. the data blocks of a file will be allocated in the same cylinder group as the file's inode, the inode will be allocated in the same cylinder group as the directory that refers to it, subdirectories will be allocated in the same cylinder group as the parent directory.

The idea is that having read one of the above, you will probably soon want to read the thing that it points to. Having them both in the same cylinder group means they are physically nearby on the disk (i.e. no more than N tracks of arm movement apart, where N = the number of tracks per cylinder group). If you didn't do this, then something as simple as 'ls -l', which involves iterating through a directory and obtaining the inode for each file referenced, could potentially send the disk arm back and forth across the entire disk many times. With a cylinder group, you know the disk arm will have to move (on average) half way across the group (... so N/2 tracks) for each item.

There are various heuristics regarding file placement. Some might try to allocate all data blocks of the same file consecutively. Others used to try to space each block of a file N blocks apart on disk in the same track, where N is based on the time taken to read one block and then transfer it to the host. This area is just guesstimates, and is also influenced by disk controller capabilities and caching. As disk technology has progressed, there is less emphasis on trying to be clever than there used to be.

EDIT: Forgot to mention, the on disk layout of XFS uses a similar scheme, but I am not familiar with its allocation policies.
If something runs on a CPU, most smart schedulers will apply a weight value in the scheduling algorithm to bias it toward that CPU again in the future, as the code/data is likely to still be cache resident (or "warm"), and entries in the TLB are more likely to be useful. Moving it to a "cold" CPU means potential cache misses, cache coherency traffic, and TLB misses.

This is in general. IRIX internals are a mystery, so I am not definite this is what is happening on your Origin.
bjames wrote:
hamei wrote:
bjames wrote: So does this mean I can drop in a RM7000 processor to replace my R10K 200mhz?

Yes, but you need a couple of parts. The R10/R12 cpu card sits farther off the mainboard so you need different connectors and maybe standoffs. There's an R-5 350 around here for a tad over a hundred $$ but maybe that's too much ?

Btw, you have an R10k-200 ? or 195 mhz ? People have overclocked the r5 is why I ask ... otherwise, you're liable to get people confused between the r5-200 if you call the 195 mhz r10k cpu a 200.


Sorry its an R10K 250 mhz :)

The RM5200/RM7000 have a different CPU design to the R10000/R12000. An RM5200 @ 300MHz is about on a par with an R10k at 195MHz, so don't go "downgrading" your R10k 250MHz. I have an RM7000 at 350MHz, and while there are no official performance figures for O2, I think real world tests show it to be close to an R12000 270MHz, so IMHO it would not be worth the cost of upgrading to that either, given the price of the RM7000, and the motherboard/chassis differences.

Capturing is fine on O2, but the best thing is to then transcode and edit on a fast PC/Mac system. If you really want that extra bit of performance on the O2, the R12000 300MHz is the best price/performance ratio for O2 for you. I don't know prices in US/Canada, but here in Europe R12000-300MHz sell for about 85 euros, and the old CPU you are replacing would be worth about 30 euros. Of course if price is no object, go for the R12000 400MHz (if you can find one), because you get the faster CPU speed upgrade and a 2Mb SCache.
hamei wrote:
kramlq wrote: An RM5200 @ 300MHz is about on a par with a R10k at 195MHz, so don't go "downgrading" your R10k 250MHz. I have a RM7000 at 350MHz, and while there are no official performance figures for O2, I think real world tests show it to be close to an R12000 270MHz ...

Mine is a rm7000 @ 350 also but I swear it's faster than the 300 mhz r12 it replaced. Maybe the disk makes that much difference ?

By real world tests I just mean me using the machine, and what I felt it compared with in the R10k/R12k O2 range. Obviously it is affected by things like disk rpm, cache size and utilisation, and how much floating point is being used. I actually had an R10k 250MHz and R12k 300MHz as well, and to me, the RM7000 350MHz seemed to rank between the two. In the end, I kept only the RM7000, as I felt the speed advantage of the R12k 300MHz wasn't really enough to justify keeping it in addition to the RM7k.

Taking price, performance and upgrade hassle into account, I definitely think an R10k user should upgrade to an R12k 300MHz or 400MHz rather than an RM7000 @ 350MHz. Unless you need an O2 with two disks, or a stable O2 machine to run Linux/MIPS.
SGI's peripherals come in granite as well, and they are designed to stack.
Here is my CDROM (the one in my O2 was giving errors), DDS3, Floppy, and another CDROM for an Octane under the desk. They all use Centronics 50-pin SCSI connector.
SGI-SCSI.JPG


Sorry about the quality. I had to use a camera phone, my digital camera batteries need charging.
josehill wrote:
If you do try the 4.3 release on the O2, I'm sure that many Nekochanner's would be interested to hear what you think of it.

Perhaps you should merge this with the existing thread here, which links to an interesting article on the same subject.
I notice the SPARC version has memory barriers. I am not familiar with how these atomic ops are used within the rest of the codebase, but it suggests that the MIPS versions should also do memory ordering if you want to make sure it runs correctly on all MIPS implementations regardless of the memory ordering model in use. The MIPS equivalent to 'membar' is 'sync'. Perhaps the intrinsics already do this?
hamei wrote:
mapesdhs wrote: And of course PayPal have a reputation for simply locking accounts and grabbing the funds when
things go wrong. Worse, one's money is not protected anyway and it's very hard to get it back if
they lock an account (180 days or more). Many people have had their accounts cleaned out. I'm
sure you've read paypalsucks.com..

If Georgie-Porgie puddin' and pie were not the leader of the Free World, PayPal would be controlled by the banking laws. They walk like a bank, they quack like a bank, they eat snails like a bank, in effect they are a bank. So how they get away with their crap ...

In the EU, paypal is actually registered as a bank operating from Luxembourg. AFAIK, only accounts held in the EU are under its jurisdiction.

I fully agree eBay/paypal have gone from bad to worse lately. Some of their latest rules are quite odd:
- Sellers can now only leave positive feedback for buyers - so it is very hard for buyers to get a bad reputation. In fact, I don't really see the point in sellers leaving feedback anymore - even if you encountered a difficult buyer, you can't let others know, so the feedback system has no value in terms of representing a buyer's behaviour.
- I think hiding the identities of bidders is a bad move - in the past, my suspicions once led me to spot and report a shill bidder on an item I won, but now it is more difficult to see that type of thing.
- Paypal is now mandatory for some listings, which I think is somewhat anti-competitive - when available, most buyers will use it over alternative systems as it costs them nothing.
A couple of months ago someone on here pointed out to me that some Altix systems were now shipping with x86 inside. Immediately I wondered if it was the start of a move from Itanic. They are very different chips, and I don't see how a company in SGI's state can justify the expense of engineering systems using both. When skywriter wanted to know if we had any questions for SGI I was going to suggest asking about x86 vs Itanium in the long term, but didn't, as I figured SGI would never give a straight answer.
SAQ wrote: From a technical standpoint, aren't there still potential performance benefits from the VLIW architecture (provided that Intel has a way to deal with the scalability issues - i.e. not wasting units if a future version is wider and attempts to run current software), or has current OoO technology eliminated the edge?

VLIW just moves a lot of complexity into software (compilers and apps), so for those willing to put in the effort of optimising code, there possibly are, but x86 speed advances so fast that you have to ask whether it is worth the effort. Interestingly, it seems a trend is emerging for simpler CPU design as well - UltraSPARC and Cell (PPC) both reduce the amount of OoO technology in favour of adding more parallelism (chip multithreading in SPARC, coprocessors with fast local memory in the case of Cell). The x86 approach is probably the safest bet though :-)
TeeTylerToe wrote: in a perfect world, what I'd like is to capture full res NTSC from my 250MHz R12K O2 in rice, or huffyuv, or one of the other favored lossless video compression formats. what I'm trying to do with the constraints I'm working with, is record in either MV using RLE24, or quicktime animation with full resolution NTSC. what I can't even is just plain record full resolution NTSC with no compression at all.

It looks like the comp option for dmrecord is there because they planned to add that feature later, but where it is now, it only accepts jpeg. so I look at avcapture. when I try anything with avcapture I get packing error, and when I specify -d 3 for vid device 3 (s-video) that fails before the packing has time to fail. I'd love to run avcapture -B which is a disk benchmark because whenever I run diskperf -W I get like 2MB on a 36GB 10K drive.

media recorder's nice, but I can't even get it to capture uncompressed without it dropping so many frames that the result video is choppy as heck.

I'd really like to do this without resorting to mjpeg, but can someone remind me what the max bitrate is for ICE compression?

does anyone have any advice on uncompressed vid cap on an O2?


You must use the "two fields" option, otherwise you will probably get dropped frames and an error during conversion. And I think Ian Mapleson recommends 3Mb constant bit rate, but you must set that option last before starting the record, due to a bug in mediarecorder. And from experience, there are some sources like VHS that the O2 just doesn't like - I have VHS videos I tried to record where the O2 always drops a frame in exactly the same places no matter how many times I try - probably due to a glitch in the original recording.
TeeTylerToe wrote: ok, after further investigation, the O2 works pretty fantastically at capturing uncompressed video. it's more then happy to capture full resolution NTSC to your heart's content.

not only that, it'll encode that captured uncompressed video to mjpeg at just about realtime.

what it absolutely refused unconditionally to do, is play back any kind of uncompressed full frame video at any framerate above 5fps... I should probably try mplayer, and vlc, but right now, I'm going to work on figuring out the best way to capture just the active part of a NTSC input in something like RLE, or huffyuv. hopefully 40 minutes or so on a fast 36GB, and a slow 18GB. I guess my best bet is making a JBOD with all of the 36GB, and only half the 9GB. hopefully that'll be enough.


I'm not terribly familiar with NTSC, but uncompressed PAL is usually 20-25Mb a second. The O2's internal SCSI is something like 20Mb/s, and is shared by all internal disks. The O2 was designed for real time recording and playback of compressed video - using specific formats that can be streamed through the ICE for hardware assistance.

In fact, I just had a look in SGI's sales manual, and it specifically says that the O2 can do real time playback of qt/mv/avi at 30fps only if the video is JPEG encoded. For recording, it says uncompressed capture requires a suitable external disk array (but these comments were based on the lowest possible R5k/R10k CPU configurations, and probably older disk technology).
OK, don't try this if there is anything important on that drive, but....

I got that message with a drive I had connected to my octane via an external case. I found if I switched the disk off immediately upon seeing that message, then turned it on after about a second, IRIX would continue booting (with error messages) and I could mount it ok and get my files off it.
MisterDNA wrote: Glad someone bought them to remove my temptation. The wife would have killed me. :D

Besides, he has the SPROMs--I don't.

But I still have three of the right modules.


@MisterDNA,
Nice to see you back! I had PM'ed you about that O2 RAM last year (and sent paypal payment for it) but you didn't seem to get it. Perhaps you could contact me via PM?
Very nice. Time to update your sig BTW :)

The 600MHz upgrade is on my list of things to do, but those 300MHz modules are hard to find.
Dubhthach wrote:
VenomousPinecone wrote: I thought half the fun was trying to figure out where everybody is!


Especially when their location is in a language other then English -- like mine!

Yay, my 13 years of studying Irish weren't in vain :)
There are various emulators for UNIX/RISC platforms available.

PersonalAlpha at the following site can apparently do Tru64, but I haven't tried it out. AFAIK it is from a DEC research project, so it should be fairly good.
http://www.emulatorsinternational.com/e ... lalpha.htm

BTW, you can also run DEC Ultrix using gxemul.

SimOS/MIPS can run a modified version of IRIX 5.3 on an IRIX/MIPS host, but is really slow when using the Mipsy CPU model (Embra just crashes, but I didn't look into it). SimOS/MIPS also runs on X86, but I couldn't try IRIX in it because the version I use on Linux/X86 has some custom hacks for some MIPS code I was testing. In theory it should work, but there might be some bugs to fix in SimOS first.

For SunOS/Solaris, there were some versions of SimICS that could emulate a SPARCstation. It might be only for SPARC hosts though.

I don't think there is an emulator for HPPA/HP-UX. HPPA isn't a nice architecture, and some of the more unusual features of it would be very slow to emulate.

Also, on the non-x86 front:
SimOS/PPC can do some modified versions of AIX, but requires a PPC host.
SimOS/Alpha can run a modified version of Digital Unix 4, but needs an Alpha host.
hamei wrote:
kramlq wrote: HPPA isn't a nice architecture ...

Really ? In what way ?


I was kind of speaking generally there, rather than purely from an emulation perspective. Stuff like:
- Its rather limited atomic primitives (it can only load and clear a word atomically).
- Disabling virtual addressing and entering physical addressing mode during an interrupt or exception.
- It uses the cache for both virtual and physical modes, which means aliasing issues can occur between the same location accessed via a virtual and physical address.
- Ambiguously defined instructions like cache line operations, which have implementation specific behaviour.
- It has more trap situations than most processors, which means more conditions to check for in an emulator (e.g. trap every time a branch instruction is taken).
- General register shadowing when an interrupt or exception occurs, and live mirroring of certain control registers. That means more overhead for an emulator.

HPPA is certainly not the worst CPU (those would be IA32 and IA64), but as RISCs go, it isn't the nicest either IMHO.
R-ten-K wrote: I thought PA-RISC had its cache virtually addressed, isn't that the case? (at least 1.1 and 2.0 were).

With virtual addressing on, it virtually indexes the cache. In physical mode, it is physically indexed (as that's all it has available). As I said earlier, the CPU enters physical mode for every interrupt or exception, and until the handler turns on virtual mode, any addresses accessed must be "equivalently mapped" (PA-RISC terminology) in virtual and physical mode, or else what you write in physical mode may not be seen later in virtual mode etc. It has other strange effects as well - for example, the handler itself should be equivalently mapped, or else strange things will happen for the instruction after the one where it switches from physical to virtual mode.

To go back to the original point.... does anyone feel like writing an emulator that does all this? That is why there are no emulators.

porter wrote: From a C programmer's perspective, the oddest thing is that in 32bit mode the stack grows upwards, in 64bit mode the stack grows downwards.

Ah yes! How did I forget its most notably weird feature!
R-ten-K wrote: Also, didn't the Itanium support offer some level of support to emulate HP-PA? Which may be a reason why IA64 is so @#[email protected]#$ up? Trying to offer support for x86 and HP-PA must have been a nightmare, ugh....

A lot of HP-PA features were included in Itanium. The memory scheme, protection scheme, interrupts and even the type of control registers present are somewhat similar. I think it also supports the gateway page mechanism for system calls (which was unique to PA-RISC).

R-ten-K wrote: Also to be fair to HP-PA, some architectures do require physical addressing for certain privileged instructions. Looking at their docs they are physically tagging the cache, so "equivalent" mapping shouldn't be too bad (probably in some XX megabyte offset chunks). Alas multiprocessing must be a PITA, unless they have some very very clever cache controller going on there.

Equivalent mappings are done just like the normal way you would handle virtual aliases. But there are other implementation details that make this "design feature" a pain, mainly to do with the fact that it affects exception/interrupt handling. If the active kernel stack is not equivalently mapped, you can't even touch that until VM is enabled, so you have nowhere to save the registers. After enabling VM, any faults might overwrite those registers. Lots of issues like that complicate things for no good reason.

porter wrote: The PlayStation One emulators do R3000 well.

Yeah, MIPS is the easy part, but nothing emulates SGI platform hardware well enough to boot an unmodified IRIX kernel. SimOS defines its own hardware, and IRIX had to be ported to that. The MESS (or MAME?) emulator boots to PROM, but AFAIK is incomplete, so can't boot IRIX.
R-ten-K wrote: LOL we submitted at the same time about SimOS.

From what I read from you about HP-PA it sounds like the architecture is fairly ugly... it would be interesting to find out why the #$#@ they decided to make those privileged ops such a PITA to deal with. The overhead, in instructions, that introduces in those handlers must have offset the simpler HW (I assume that was their goal). The OS people must have shitted some bricks, but then again maybe that is the reason why HP-UX was so "unique." HP sure did have some odd "isms."

I should be fair and balanced like Fox News. It did include a load of things like shadow and scratch registers etc. to give some room to work in - more so than any other architecture until Itanium, but IMHO, the overall design was still pointlessly complicated. Maybe we will see a tell-all "Confessions of a CPU Architect" book some day where they will explain the real reasons why they do these things (they were hungover the day they designed that feature... etc).

Simics is now a commercial product, and supports a whole raft of targets. SPARC is probably the best represented, with emulated SunFire 3800-6800 (US3/4) with multiple processors, and also the Blade 1500. I also seem to remember an emulated target for the T1/Niagara, but don't see the docs around now. Also supported are an EV5 AlphaPC, ARM5, Itanium, various PPC flavors, MIPS Malta, and x86. Although the x86 will run Windows, and the Suns will run Solaris, the other targets only support Linux, and are mostly useful for embedded development.

Yeah, I knew about the commercialised version from Virtutech (and also the cost!) but didn't know they had free student licenses, so I didn't mention it. Though Solaris/SPARC is essentially the same code as Solaris/Intel, so it is as easy to just run it natively, or in an X86 emulator/virtual machine.
R-ten-K wrote: I think the feature I remember was their odd cache strategy: usually they implemented a single very large cache level.... And now that you brought the handler issues, it makes a lot of sense to have a big honking cache when so much overhead is expected when servicing interrupts, exceptions, etc.

Yeah, even today, the last generation PA-RISCs have 32 and 64Mb L2 caches. I'm not sure why they do that.

R-ten-K wrote:
It must have been weird to be motorola in the 80s, everyone leaving you for yet another RISC vendor.

I was at Motorola in the 90s and they were still in the process of moving 68k based UNIX boxes used internally to competitors' CPUs (mostly PA-RISC). All the 68k based Macs were being replaced as well, but at least they had a stake in PowerPC, so it wasn't all bad.

hamei wrote: While we've got you two dissecting the shortcomings in processors, how about the Power lineup ? And where does the Itanic really fit in ? Would it have been the processor to end all processors if it hadn't been such a bust ?

I have never really used POWER systems - I've only read some of the papers on POWER and AIX. It is actually a couple of similar architectures in one (to support old POWER 32 & 64 bit, PowerPC-AS, PowerPC 32 & 64 bit). Plus it has great virtualisation support (in combination with firmware), and they have also started to implement hardware assists for commonly needed stuff at OS level. And it achieves decent clock speeds. All in all, a nice chip, and a shame it isn't being used more widely.

Ahh. Itanium
... take many PA-RISC features (already overly complicated from an OS point of view)
... add things like register window concept from SPARC, some of MIPS CP0 features, the PALcode idea from Alpha,
... add a huge number of registers, so that normal context switching strategies are not feasible. Ensure that lazy strategies are needed, and thus complicated stack unwinding is needed when something needs to access the register file of a suspended thread.
... then implement full IA-32 compatibility in hardware (i.e. one of the most complicated and kludged architectures ever).
... then for the ISA, use an explicitly parallel design that puts a massive burden of work on compiler writers (and also those writing assembly for the kernel and libraries).

I was never impressed with it from the start, and wasn't the least bit surprised that what seemed like the underdog (AMD Hammer, now AMD64) took the role expected of the mighty Itanium in the 64-bit world. It's actually funny - I recall around 1999 or so seeing graphs projecting annual shipments of 30 million Itanium-based systems just a few years later. It's a shame several decent architectures (Alpha, and MIPS to an extent) got killed off for what turned out to be a commercial flop. Admittedly it did do FP well, and scientific sales are probably the only thing that justifies keeping it alive nowadays.
Alver wrote:
kramlq wrote: Ahh. Itanium
... take many PA-RISC features (already overly complicated from an OS point of view)
... add things like register window concept from SPARC, some of MIPS CP0 features, the PALcode idea from Alpha,
... add a huge number of registers, so that normal context switching strategies are not feasible. Ensure that lazy strategies are needed, and thus complicated stack unwinding is needed when something needs to access the register file of a suspended thread.
... then implement full IA-32 compatibility in hardware (i.e. one of the most complicated and kludged architectures ever).
... then for the ISA, use an explicitly parallel design that puts a massive burden of work on compiler writers (and also those writing assembly for the kernel and libraries).


Actually, that feature was dropped completely, and is now handled entirely in the IA32 Execution Layer, which is software. And, from experience, it is never, ever used.

I know. If you read it again, you'll see I was giving my take on the design process for the Itanium architecture. The process of designing Itanium (or IA64 as it was called back then) happened a long time before Itanium2 chips (i.e. the implementation) arrived, and back then, they decided to have full IA-32 compatibility on chip.
ruckusman wrote:
pretty sure Jan-Jaap actually decompiled the PROM, IIRC the output came to 10MB or so. Next issue was, no-one knew what changes to make...that may have changed in the meantime.

Obviously reverse engineering the PROM is the start, but the main problem is that I believe the datasheet/manuals for the chip used in a 600MHz O2 were only available via agreement from PMC-Sierra. I think Chicago-Joe was given them, but said he didn't have the expertise to change the PROM. And PMC-Sierra watermarks PDFs with the account, company and date/time when you download them, so you won't just find them leaked on an FTP server somewhere.
ruckusman wrote:
I remember when he was in touch with PMC-Sierra, they said all the PROM needed was a couple of lines changed for the 900MHz chips. Apparently the only difference between the RM5200 & RM7000(A, B, C), and the RM7900 is the TLB entries, they are pin compatible otherwise.

My (possibly incorrect?) recollection of events was that PMC-Sierra originally claimed the chip was 100% software and pin compatible (just like the RM7000/600 is). So Joe created a test module, and it didn't work. He then said it was a software incompatibility, and there was something in the PROM that needed to be changed to get it to boot, and that he knew the values needed (due to having access to the datasheet). I expect you would need it to definitively find out things like the PRid values and TLB entries, and to confirm things like cache instruction encodings and CP0 hazards are identical to the existing RM7000/350 (which would be the most obvious candidate for modification). And most importantly, MIPS chips often have model or even revision specific errata that might need workarounds.
eMGee wrote:
Is it true that the R5K O2s somehow perform better than some R10Ks and R12Ks? That's what I heard from some people, something I've picked up here on the forum as well in some threads. By the way, you should try to get an A/V-module (if you haven't acquired one already, in the meanwhile). That's undoubtedly one of the most interesting, and unique, features of the O2 among SGI visual workstations...


The 300MHz R5k CPU is beaten by a 195MHz R10k in most tasks. The R10k has a lower clock rate but the internal architecture is more advanced, so it typically has the edge.

On the other hand, the 350MHz RM7000 (the last official SGI CPU available for an R5k style O2) seems to have had architectural improvements. It is faster than many CPUs in the R10k/R12k series. I found it to be roughly equivalent to my 300MHz R12k O2 for some tasks. It really depends on what test is being run - the RM7k might be better in some integer tasks, but R12k is usually better in any floating point tasks. Note that the RM7k has a three level cache hierarchy, whereas the R12k 300 only has two levels, so working data set size can also make a difference.

The R12k 400MHz is the fastest official CPU, and it also has a 2Mb L2 cache, so that is faster than any official CPU in the R5k series.

My experiences are only from running applications/compilers etc. on the machine. I never ran benchmarks. Ian Mapleson has, but as you will see, results are often task dependent. And just to be clear, this only applies to the above CPUs in an O2 - the same CPU in an Octane will be much better, as Octane has a better architecture.
theodric wrote:
Nekochan n00b greetings from a longtime Irix-o-phile (Indigo2 SI for the win), and recent (yesterday) O2 owner!

jan-jaap, or anyone else who knows: can you please recommend to me a place in the NL/BE/DE/FR area that is able to perform the necessary chip-swap work on an RM5200 board? I've found a PN# 030-1493-001 for OK-ish money online, but I want to be able to line everything up with a certain degree of surety first, rather than blowing money and hoping.

Oh and incidentally, my girlfriend is from Wijchen :-)

'edefault' (see a few posts up) is the guy who has the parts to do the conversions on suitable RM5200 modules. He is based in Germany. Perhaps send him a PM.
porter wrote: What plonker came up with this dim-witted idea?

pip wrote: Wow, I'd like to see some code that makes use of this (and then smack the person who wrote it).

It's typically used to get the value of the PC at the point (or just after the point) at which the statement containing it executes. In kernels, this is sometimes necessary in relation to threads, and it is perhaps also useful in implementing debuggers (and as stated above, emulators/execution engines). AFAIK there is no way to do this in standard C/C++, so you either use a GNU extension, resort to inline assembly (if available), or use a compiler intrinsic (if available).
porter wrote:
kramlq wrote: In kernels, this is sometimes necessary in relation to threads


Actually, setjmp/longjmp do this!

Fine to have things for kernels but not supposedly portable user-land code.


Setjmp/longjmp are similar in principle, but may not always be flexible enough. They store context in an opaque jmp_buf structure, and thus may not give access to the low level values you want. Also, they may not give enough control over what is saved/restored (for example, a longjmp blindly reinstating the saved stack pointer can cause problems and undefined behaviour if it is called from outside the function the corresponding setjmp was used in). And this type of requirement can occur outside of the kernel as well - in user level thread libraries for instance. It all depends on the specifics of what you are trying to do really, but setjmp/longjmp are not always suitable.
winchester wrote:
IIRC, SimOs was specifically for IRIX 5.3 and it could very well be that the kernel had to be modified as well. I have the SimOS stuff on my computer as a leftover from a research project a couple years ago, and I should dig out the research notes from that time to be exact.


Yeah, the "hardware" is absolutely nothing like any SGI. It implements a MAGIC (Memory and General Interconnect Chip) chipset which was designed at Stanford. The only devices it has are ethernet, SCSI disk, RTC and character console (no graphics at all). Even the MIPS chip is strange in places (I remember the code that uses the config register and does FPU detection needing SimOS-specific hacks), and the FPU itself is more like that on an R3k model. And on the 32-bit SimOS version, special instructions were added to access the MAGIC chipset (which is 64-bit only) using 64-bit reads and writes - these take the source/destination as two 32-bit registers, which is very strange for a RISC. So SimOS can only be used for the IRIX 5.3 kernel that was ported and distributed by Stanford, or a version of 6.4 that was ported but which they never had permission to release.

It's also painfully slow to run IRIX 5.3 using SimOS (on an O2 350MHz 1GB machine). I forget which simulated CPU model I used though, perhaps it wasn't the fastest one available.
mapesdhs wrote: I'm getting a bit sick of seeing the Itanic label. IA64 is actually a decent CPU these days. The orig release was
poor, but ... not now ...

You have to look at Itanium in the context it was originally portrayed in. Back around 1999, someone from Intel gave a presentation that described how this amazing wunder-CPU would eventually completely replace 32-bit x86 in PC server and home systems. The graph projected something like 29 million units per annum just a few years later. And just about every major OS at the time would run on it (Windows, Solaris, Tru64, AIX/SCO/Sequent, IRIX, etc). The reality is that close to a decade later, x86 is still being designed/manufactured and is selling, x64 won the home 64-bit market and a significant part of the server market, and the number of units shipped per year has never come anywhere near what was originally hoped for. "Mission Accomplished" eh :)

EDIT: fixed typos
...where in Edinburgh do you live ... any summer holidays planned ... do you have an alarm system on your house...

On a serious note - an amazing system, but why only 4Gb RAM in such an otherwise highly spec'ed machine?
When calling XtAppInitialize(), you can supply a set of fallback resources as one of the arguments. You can also do it manually by calling XtAppSetFallbackResources(), but in that case it has to be done before certain other calls that create the window. I don't remember which ones.

You can declare fallbacks like this:

#define APP_CLASS "XMyApp"

String fbRes[] = {
    APP_CLASS "*background: white",  /* concatenates to "XMyApp*background: white" */
    NULL                             /* the list must be NULL-terminated */
};

The resource and value part can be very generic or more specific. For example
- "*.background: white" would make every widget background white, or
- "*XmBlah*.background: white", would make every instance of XmBlah's background white.

These will be used if no resources file for the app was found. I can't really remember which gets priority with regard to general resource file locations (which can be defined in loads of places). The Xdefaults files in your home directory override everything else I think.
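As a sketch of what such a per-user override might look like (the app class and widget name here are hypothetical, following the APP_CLASS example above), entries in ~/.Xdefaults take the same pattern form and win over coded fallbacks:

```
! Hypothetical ~/.Xdefaults entries for an app with class name XMyApp
XMyApp*background:            lightsteelblue
XMyApp*quitButton.background: red
```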

It's been a long, long time, so double check all this before doing it. I think all the O'Reilly Motif books were made freely available. They will be much more useful at Motif than I am nowadays.
Martin Steen wrote:
I still wonder why the default-color is "light blue".

It's just a default used in Motif. In fact, seeing your dialog in that colour immediately brought back memories of an industrial laser etch machine I once did software for. I can almost smell it now :-)

It's the same colour for custom apps in Motif on Linux and HP-UX. It is probably defined in one of the dozen or so configuration files X Windows uses for resources, or as a coded fallback. I won't even try to hazard a guess as to where exactly. One of the things about X Windows is it is almost too configurable. I gave two resource examples, but it essentially works like pattern matching, so you can easily configure the colour of every single widget in the system, or a single widget in a single app. It is extremely powerful. I think there is also a tool to change resources in a live app, which may be useful to you for experimenting. Perhaps somebody else remembers the name of it.

Quote:
But if I can manage to change the colors with some Motif function calls, everything is great.

Fallbacks are useful, because you can code in some sane values to use if nothing else was found, but they can still easily be overridden on a per-user basis.


BTW, I'm inclined to agree with the suggestion that Qt or something else might be more applicable nowadays. Motif might be a common denominator on many commercial UNIX variants, but despite being free now, it is not necessarily preinstalled on open source UNIX systems. Qt usually is, and those systems are in the majority these days.