The collected works of kramlq - Page 2

R-ten-K wrote:
IBM's terminology has been all over the place regarding multicore systems. In its current iteration, IBM parlance seems to refer to n-way as the number of processors, not cores. Processor now seems to imply "physical" chip.

Back in the early Power4 days, I think "Processor Core" was the more common term for a logically independent CPU in a physical chip, but sometimes they were referred to as "processors" in a "chip". I've never noticed any consistent usage of such terminology.
I'd been considering getting something like that as my first mac, but I am shocked at how well the PowerPC laptops seem to hold their value. On eBay and secondhand sales forums, they tend to go for much more than a new x86 laptop would cost.

People must still consider them to be useful/worthwhile machines, or perhaps long time mac owners never really got into the Wintel style hardware cycle where you upgrade just because something is newer.
R-ten-K wrote:
ajerimez wrote: I wish some enterprising company would come up with an alternative to Flash player that's actually optimized for the underlying hardware. It can't be that hard!

... Media-oriented ISA extensions have been around for over a decade in full force, yet there are still few tools/people who can exploit them properly.

I think it's only going to get worse. Assembly isn't really considered important enough to teach anymore, and universities often teach virtual machine-based languages like Java from day one of the course now. So the correlation between the code written and what happens at lower levels is not something future programmers are very aware of anymore.

Having said that, I had always assumed the specific problem with flash playback was that it was so highly compressed compared with other formats we are used to. Half an hour of watchable quality video in less than 100Mb is quite impressive if you consider what the size of an uncompressed avi would be.
:roll:
They seem to be blissfully unaware of the fact that Minix (and hence the "open" Linux that SGI are touting these days) wouldn't even exist if the APIs and internals of "proprietary UNIX" systems weren't so widely documented, understood (and in some cases formally standardised), that people could create those UNIX clones in the first place.
bri3d wrote: How so? All they're saying is that proprietary UNIX has been "antiquated" by newer solutions, possibly ones based on the older UNIX!

I disagree with their blanket assessment of proprietary UNIX systems as "antiquated" and open systems as "higher performance" (mostly because it's a dumb blanket assumption to make, not because it's completely false) but I don't see where they're saying that UNIX wasn't necessary for newer open systems - just that open systems have surpassed it.


Well, to first rephrase the quote and highlight the fundamental message I see:
"natural shift ... moving from antiquated RISC/proprietary Unix to an open architecture"


1) Proprietary UNIX already was/is (essentially) an open architecture. The fact that cleanroom clones such as Linux exist at all, and can run most software originally written for "Proprietary UNIX" is testament to this fact. And so is the POSIX specification.
2) The fact that they feel they are "shifting" or "moving" to an open architecture suggests they don't recognise that their previous platform ("Proprietary UNIX") was actually an open architecture. It wasn't open source, but it was definitely an open architecture, which is what they are speaking of.

I don't see where they're saying that UNIX wasn't necessary for newer open systems - just that open systems have surpassed it.

If you reread what I wrote, I am not claiming that either. Just that they are oblivious to the fact that "proprietary UNIX" was actually so open that despite its complexity, people had enough info (documentation, standards etc) to create a working multi-million line reimplementation of it without access to any original source code. Does that sound like a closed architecture?
It's also possible that Microsoft indirectly did some of the SGI PROM development, as they were a major member of the ARC development consortium when originally targeting MIPS boxes with NT (and ARCS seems to be a derivative of ARC). Also, Microsoft were involved closely with MIPS (the company) because the Jazz design used by MIPS Comp. was licensed from them.

There is nothing simple about disassembling and modifying a PROM image either. Cache management is one of the trickiest pieces of code to write normally, let alone disassemble and make fundamental alterations to. And I don't expect many people here have the necessary spec sheet and errata info to write code for these 900MHz+ MIPS CPUs anyway.
mapesdhs wrote:
kramlq wrote:
... And I don't expect many people here have the necessary spec sheet and errata info to write code for these 900MHz+ MIPS CPUs anyway.


Joe said he could do it but only if he had the original PROM source. He was offered kind help from PMC and Sandcraft in this regard should the opportunity arise, and IBM gave him some tech help as well on cache issues. It's perfectly doable, but for the PROM source. Ah well, plenty of other SGI-related things to occupy our time...

Ian.


Yes, but what I mean is that anyone who decides to disassemble and alter the PROM would also have to get the CPU spec sheets via ahem... unofficial channels. If you were to get official access to the CPU specs (e.g. signing an NDA or whatever), and then a few weeks/months later support for that CPU mysteriously gets added into the O2 PROM via reverse engineering .... well it kind of narrows down the field a little if SGI wanted to defend their code/intellectual property rights.
rumble wrote:
The problem is that the kernel is likely not to work for the exact same reasons the PROM won't. Why should we expect the IRIX IP32 kernel to know how to deal with the peculiarities of the new core if the PROM doesn't? In fact, I imagine there's more room for concern as the PROM uses far less CPU functionality than the kernel does.


Actually the PROM is likely to be far trickier than the kernel. When the PROM on a MIPS CPU gets control at the boot vector, the cache is in a completely undefined state (as is the TLB). So even things like cache line address tags, state and check bits, and config registers may hold garbage values. The PROM has to completely initialise all of this before the cache can even be enabled without causing errors, so it works with the cache hardware at a much closer level. Much of that initialisation is already done by the time a kernel gets control; the idea is that the kernel can then just query the PROM config tree to find out about the cache, and handle normal synchronisation during page mapping changes, DMA etc. While the kernel isn't immune to changes in cache hardware (depending on how it is written, obviously), it is less tied to it than the PROM is.
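To make that concrete, here is a rough sketch (in C with inline assembly, and not taken from any real SGI PROM) of the kind of primary D-cache tag initialisation a MIPS PROM has to do before the cache can safely be enabled. The cache size, line size and CACHE op encoding are placeholder assumptions and vary per CPU, which is exactly why a new core needs new PROM code:

Code:
/* prom_dcache_init - illustrative sketch only; needs a MIPS cross-compiler.
 * Zero TagLo/TagHi, then walk every line of the primary D-cache with the
 * CACHE "Index Store Tag" operation so every tag/state bit holds a known
 * (invalid) value before the cache is ever enabled. DCACHE_SIZE,
 * DCACHE_LINE and the op encoding below are assumed values - check the
 * specific processor's user manual and errata sheets.
 */
#define KSEG0_BASE      0x80000000UL
#define DCACHE_SIZE     (16 * 1024)   /* assumed primary D-cache size    */
#define DCACHE_LINE     32            /* assumed line size in bytes      */
#define IDX_STORE_TAG_D 0x09          /* CACHE op: Index Store Tag, D-cache */

static void prom_dcache_init(void)
{
    unsigned long addr;
    unsigned long end = KSEG0_BASE + DCACHE_SIZE;

    /* TagLo (CP0 reg 28) and TagHi (CP0 reg 29) supply the tag value
     * written by Index Store Tag; zero marks each line invalid. */
    __asm__ __volatile__(
        "mtc0   $0, $28\n\t"
        "mtc0   $0, $29\n\t"
        "nop; nop; nop");             /* crude CP0 hazard padding */

    /* Walk the cache by index through KSEG0 (unmapped, so the equally
     * uninitialised TLB is never touched), forcing every line's tag and
     * state bits to the value held in TagLo/TagHi. */
    for (addr = KSEG0_BASE; addr < end; addr += DCACHE_LINE)
        __asm__ __volatile__("cache %0, 0(%1)"
                             : /* no outputs */
                             : "i"(IDX_STORE_TAG_D), "r"(addr));
}

A real PROM also has to cover the I-cache, any secondary/tertiary caches, parity/ECC check bits and the per-CPU config registers, and that is where most of the per-CPU variation (and pain) lives.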
rumble wrote:
I see what you mean, but I still don't believe that it would be so difficult. Setting up such state isn't magical (assuming there are docs for the CPU), it's just tedious.

MIPS isn't the worst of CPUs thankfully, but cache code is quite intricate stuff at times (I've written some). Bugs can be very tough to pinpoint and track down. Of course it mainly depends on how ambitious people may get trying to shoehorn in new CPUs with specs that vary a lot from those originally supported in O2 + IRIX.

And sometimes it does involve magic :lol: Look in Linux from a few years ago and you might find a comment warning people "not to even breathe" on the cache initialisation code (because it was so hard to get right).

rumble wrote:
Heck, this topic is covered for the R3k in about 20 pages in The MIPS Programmer's Handbook.

Things have evolved a little since the R3k :) I don't have any R3k book anymore, but I think it had physical indexing and tagging, direct mapping, and was write-through, which was about as simple as cache hardware gets. The RM7k series supports a split onboard L1 cache, a joint onboard L2 cache, and an off-chip L3 cache.

Quote:
My point is simply that the IRIX kernel is very unlikely to support this new core unmodified and that this could well prove a more substantial task than getting around the PROM limitation. Though who knows? Perhaps we'd only need to patch up the Processor ID check somewhere if the cpu is binary compatible with some already existing cache/tlb/etc configuration. Plus, symmon makes debugging much easier.

Yes. As long as things are kept similar to existing models, then it could be doable. That is probably why the 600MHz model worked so well. But AFAIK the faster CPUs Joe tried were supposed to be binary compatible as well, yet they didn't work.
SAQ wrote:
Rumor has it that if you know who to go to you might be able to get an Enthusiast Pack still, but the last time I heard that was over a year ago. HP isn't pushing it, since they no longer sell Alpha hardware.

I've contacted my local HP office twice in the last year trying to get a hobbyist Tru64 kit with licence, but they never got back to me. And I think the Irish office was the one that ran the entire Tru64 hobbyist program :( Perhaps local reps in other countries may still be able to pull some strings though.
japes wrote:
pierocks wrote: I always got a chuckle out of the huge foam blocks inside with enormous warnings about it being "FUNCTIONALLY REQUIRED".


Popular for air flow management. HP did it on a number of x86 servers too. Now they got smart and made clear plastic duct work, that probably still gets left out by some techs.

I think it was introduced originally on the HP 712 workstation, and they got a patent for it. One of the papers on the HP 712 includes a bit on why they use foam (or "HP-PAC").
http://www.hpl.hp.com/hpjournal/95apr/toc-04-95.htm
ritchan wrote:
To be fair, I haven't seen any architecture papers on the R12K-R16K.

The R10k docs by NEC also cover the R12000. Searching for 'VR12000 pdf' on the NEC site should get back some datasheet and user manual links.

I have never seen much on the R14k or R16k, but here is some info on the R18k:
http://www.hotchips.org/archives/hc13/2_Mon/01sgi.pdf
It was of course ultimately cancelled, but it may be of interest to see what might have been.
compuman86 wrote: according to this : http://www.donlinrecano.com/cases/caseinfo/sgi the old SGI is now known as Graphics Properties Holdings, Inc. does this mean something is brewing behind the curtains?


I believe there was some unresolved litigation the old SGI was in the middle of, so it was probably easier to keep that with the old company. A member here (ajimerez I think) was involved in some part of the bankruptcy process - perhaps he might know more.
Alver wrote:
Tru64 hasn't been ported to anything new, and is heading the IRIX way. Companies pay big bucks just to be able to run those fancy homegrown apps they lost the sourcecode for, ages ago.

A colleague used to have one of those 255s in his office, and I'm pretty sure it could run OpenVMS as well. Apps on OpenVMS would be more difficult to migrate, as you are stuck with OpenVMS, which has only been ported to Itanium since the Alpha was killed. I don't imagine commercial app developers are queuing up to port to the OpenVMS/Itanium combination. Perhaps it is easier to try and keep that ol' 255 running. I've noticed that XP900s are another Alpha model that seem to go for way above what you would think they are worth.
mattst88 wrote:
The fact of the matter is that there are plenty of newer, faster, and cheaper alphas that run VMS and Tru64 for significantly cheaper than these 255s. Now, combine that with the fact almost all of them are 'sold' by one seller in private auctions, and it becomes pretty clear that it's some kind of scam.

Another fact of the matter is that some people who have a running system that works as needed/expected sometimes prefer to try and keep it running for as long as reasonably possible. One way to do this is to buy up systems for spares, so you can simply swap parts or switch disks when something goes wrong. In an old job I had, our department used to buy every HP 9000/300 that came on the market in Ireland or the UK, because we had to keep some ancient systems - running software that was too costly to port or redevelop - going 24/7. An hour's downtime cost more than the price of 2-3 spare systems.

The XP900s aren't all being sold by the same seller, yet I've sometimes seen these 433/600MHz Alpha models go for four figures on eBay. I don't believe CPU speed is the reason some Alphas sell better than others.

It also happens with SGIs - Ian Mapleson has provided countless examples on here of companies who buy parts or entire systems at prices much higher than any hobbyist would believe that hardware to be worth. This fact alone isn't enough evidence to simply conclude that anyone buying/selling hardware at prices you consider to be high is involved in a scam.

BTW, it's Nekonoko who has to deal with any legal threats that arise when you single out and make accusations about specific eBay sellers, so out of courtesy to him it might be wise to edit some of your posts.
leaknoil wrote:
kramlq wrote:
BTW, it's Nekonoko who has to deal with any legal threats that arise when you single out and make accusations about specific eBay sellers, so out of courtesy to him it might be wise to edit some of your posts.


I know this is the policy here and its a private board so, the rules are the rules. Not anything to argue about but, I just want to point out that the entire internet would come to a grinding halt, Yelp would shut down, craigslist would shut down, amazon would shut down, ebay would shut down itself, if lawsuits like you describe had a chance in hell of going to court. Because someone threatens to sue or has a hack lawyer threaten to sue doesn't mean they will or can. Defamation is just about impossible to win in court. Yelp and Angie's list couldn't exist if there was any legal grounds for such nonsense. This board is no different. People threaten to sue all the time over just about anything. Its popular with a certain kind of hick crowd in the US. Nothing ever comes of it.

AFAIK Nekonoko has been contacted by at least one eBay seller in the past about comments made on here. I know if I funded and ran a board for no reward, even responding by email to such a threat would be more hassle than I would want. That's why I suggested editing the posts 'out of courtesy'. But of course I'm not the owner/admin or a mod. Feel free to ignore me :D

BTW, defamation/libel cases are much easier to win in other countries. For example, if you were taken to court in London for these accusations, the onus is on you to prove they are true, or you lose your case. Google the term 'libel tourism'.
leaknoil wrote:
It also costs money to sue and if they lose they get to pay all your court costs and they will lose. Those cases never stand up in court. Next to comparing something to nazi's or hitler "I'm going to sue you" is probably the next most common insult or threat on the internet.

Actually, I really can't see how you could possibly win such a case if it were filed in a jurisdiction such as the UK. And someone regularly making a thousand dollars or more by selling a really old computer has a little more incentive than many others to defend his/her reputation by sending out a warning letter (at the very least).
leaknoil wrote:
kramlq wrote:
Actually, I really can't see how you could possibly win such a case if it were filed in a jurisdiction such as the UK. And someone regularly making a thousand dollars or more by selling a really old computer has a little more incentive than many others to defend his/her reputation by sending out a warning letter (at the very least).


It is actually a lot harder to make defamation stick in the US then in the UK from everything I've read. In the US it is almost impossible to prove libel unless its super crazy made up stuff. You would basically have to make up stuff and resort to nasty personal insults about their family.

... exactly, which is why US companies prefer to file cases about stuff posted on the internet in a jurisdiction such as the UK. Anyway, write what you want about who you want. Ignorance is bliss.
I was using my HP 712 at the weekend, and it occurred to me that it still has the original battery in it after all these years. Does anyone know if there are any issues like on a Sun when this battery dies? Is there anything in NVRAM that should be backed up?
leaknoil wrote:
kramlq wrote:
... exactly, which is why US companies prefer to file cases about stuff posted on the internet in a jurisdiction such as the UK. Anyway, write what you want about who you want. Ignorance is bliss.


So you have any examples of what you're claiming or you just make that completely up ?

As I said on the previous page, google 'libel tourism' for examples. If you'd done that earlier you might have saved yourself from making several more ill-informed posts. But you do what you want. I've made my point several times now and can't be bothered discussing it any more.
leaknoil wrote:
kramlq wrote:
As I said on the previous page, google 'libel tourism' for examples. If you'd done that earlier you might have saved yourself from making several more ill-informed posts. But you do what you want. I've made my point several times now and can't be bothered discussing it any more.


That has no relevance about what we were talking about. That is all about the wealthy, claimed terrorists, and tabloid journalism. Actually show a single case of libel involving someone talking bad about a seller. You can't. Where Paris Hilton files her latest libel suit doesn't really concern me.

Show me one case of libel decided against someone complaining about a seller in the UK on a forum. Seems simple enough for me. That is all I am asking. Go for it.

I swore I wouldn't respond anymore, but I'm now starting to believe you are not just trolling and genuinely don't understand. So one last try:
1) People here made accusatory comments. Bad comments about anybody are potentially libellous, whether they be about celebrity lifestyles, professional incompetence, terrorism, accusations of criminality etc.
2) This opens the potential for legal threats. Neko has had to deal with such threats in the past from eBay sellers, and said he would prefer not to have to do this for comments made on a forum he voluntarily funds.
3) You seem to feel that because such a case in the US would be difficult to win, that gives you the right to say what you want about who you want on here. First, that is against the rules. Secondly, a case would be much easier to win in other jurisdictions, and this is possible for comments made on the internet.
4) In many jurisdictions, to win you would have to prove comments about a seller scamming are 100% accurate. How would you do that?

Quote:
Actually show a single case of libel involving someone talking bad about a seller. You can't.

http://www.timesonline.co.uk/tol/news/u ... 009293.ece

Actually, libel threats concerning people talking bad about sellers in feedback led eBay to come up with this for their own site:
http://pages.ebay.co.uk/help/community/defam-form.html

Regardless of whether a libel case actually happens or not, even a threat of legal action takes time and potentially money to sort out (in this case, Neko's time and money). Keeping your opinions about sellers you've never even dealt with to yourself can eliminate these threats. But again, you do as you wish. At least now you know some facts.
I've picked up a few new systems as well since I last posted. I currently have:
- Sun Ultra 5: UltraSPARC IIi 366MHz, 512Mb, 20Gb EIDE Disk, SunVideoPlus, SunPCi IIpro.
- Sun Ultra 1 (200E I think, but sadly it was severely damaged due to the worst packaging effort I'd ever seen. Hopefully good for parts if I ever find another)
- Sun SPARCstation 20MP: 2 x Ross hyperSPARC 125MHz, 448Mb, SX + 4Mb VSIMM, 2 x 9Gb, SunSwift + WideSCSI, SunVideo, Sun PCMCIA.
- Sun SPARCstation 5: 170MHz TurboSPARC, 256Mb, 2 x 9Gb, TGX, SunSwift + WideSCSI, SunPC 5x86 (Missing the SPARCstation 5 badge sadly, but otherwise in great condition).
- Tadpole SPARCbook 3GX (110MHz microSPARC II) 128Mb, 520Mb HDD
- Tadpole SPARCbook 3XP: 85MHz microSPARC II, 32Mb, 520Mb HDD
I bought these two 'spares or repair' systems from eBay. A bit of a gamble, as I didn't know if they had disks, RAM or even if they still worked. To my surprise, after finding a suitable AC adapter recently, both systems work. The 3XP had SunOS 4.1.4 installed, but its case took a real battering during transit. The 3GX is in good condition, has Solaris 2.5.1 installed, and was maxed out with 128Mb RAM. They have no batteries (or covers), but still, it's nice to finally own portable SPARC machines. OpenWindows seems so primitive.
- Sun SPARCstation IPX: 40MHz, 48Mb, SunSwift + WideSCSI, Sun PCMCIA.
- Sun JavaStation-NC: 100MHz MicroSPARC IIep, 64Mb RAM, 8Mb Flash

I still haven't managed to find a Ross SPARCplug or SPARCplug Solo, a JavaStation brick, or to justify the cost of a nice Tadpole Viper laptop :)

@emGee - I had thought 64Mb was the official max RAM for a JavaStation. Does 128Mb work ok?
bhtooefr wrote: kramlq: The 2.5.1 install set comes with a CD with CDE and WABI.

Yes, I bought 2.5.1 media on eBay, and it has a disk with CDE and WABI for Solaris X86, SPARC (and also PPC :o ). But the Tadpoles have some custom connectors for SCSI and Ethernet, so I have no way to get anything on or off it for the moment. My 3GX also seems to have had SoftWin on it, but it was deleted.
My favourite UNIX machine! It's compact and quiet, and quite nice to use when it has the right OS.
Make sure the drive you choose doesn't get too hot, as the 712 is basically passively cooled.
11i runs, but is a bit much for these. I've found 11.00 and 10.20 run nicely on it, so it's worth getting a copy.
NeXTSTEP is also really good on this machine. If you are feeling brave then an experimental release of Mach/mklinux may also run on it.

Before upgrading the 712/60, bear in mind that if you just get a 712/100 instead, then as well as the faster CPU, it can have as much as 192Mb RAM installed. I've also got the quite rare 2nd Ethernet and VGA card in mine 8-)
eMGee wrote:
I remember that during that time (when more and more was announced) I wasn't paying enough attention and I thought that the ‘64-bit’ that was announced by Intel was going to be ‘the’ “Itanium,” with only one 64-bit version of Windoze. But then when AMD64 came along, I became somewhat confused. At first I thought — that is, before I ever owned any AMD64/x86-64 systems — it was basically ‘Intel technology adopted by AMD’ and only a bit later I learned it were actually two different things.

Yeah, in the late 90s, AMD couldn't just copy the wondrous and soon-to-be-ubiquitous IA64 architecture without licensing it from arch rival Intel, so the only other option was to come up with their own 64-bit x86-compatible solution. It's a little ironic that AMD64 won the 64-bit market, and Intel ended up licensing it for their x86 range (Intel 64 mode).

I was teaching OS internals for a while, and had to look at both IA64 and AMD64, as I was initially unsure which would take over from 32-bit x86. IA64 was overengineered and needlessly complex, which hinders adoption by OS and compiler writers. As ugly as 32-bit x86 was, a 64-bit extension of it was more elegant IMO (and far easier to quickly learn and understand). Better the devil you know.

Rhys wrote:
PA-RISC was not slow, faster for a lot of things than most or all of its competitors. There's a good reason PA-RISC systems stayed on the top500 until a couple years ago, so why were they so desperate to drop it? Was there a good economic reason or was this just Fiorina-era bullshit?

Similar arguments could be made about Alpha. I guess they just had to consolidate resources and decide on one future CPU strategy. At the time, CPU architects such as John Mashey (MIPS) felt that RISC just wouldn't be able to keep up with EPIC. And also, the strategy of teaming up with a chip giant such as Intel for your future CPU needs is probably hard to argue against when you are at your next board meeting.
neozeed wrote:
And frankly Virtual PC & Qemu run OS/2 pretty darn good, why on earth would someone go through all that hell to port something to 'kind of work' when you can just emulate the whole thing?

It's probably not that much work, considering the resources IBM has, and if there really are customers actually wanting this type of thing.

An OS/2 personality for text-only apps on NT was developed by MS/IBM using the native NT API and included in the initial NT releases. And IBM later developed a full OS/2 personality (including DOS emulation) to run above the IBM Workplace OS (i.e. a Mach variant) - this was released as the aforementioned PowerPC port. Creating an OS/2 personality above the more functional POSIX API would arguably be easier than targeting either of those two previous microkernel-style APIs.

Running OS/2 apps on "proper" OS/2 in an x86 virtual machine still leaves you at the mercy of a 10-year-old OS. Compared with OS/2 apps running natively as processes, it is more resource intensive, and it hinders you from taking full advantage of modern features such as threads (though it all depends on how the virtual machine is designed). Also, apps are only as reliable as the underlying OS, and nowadays some might be more comfortable with that being Linux on raw hardware, rather than the OS/2 kernel running on emulated hardware above another real kernel on real hardware.
Alver wrote:
Bwahahahahaha :lol: okay, hold it - are you seriously suggesting that they would consider porting UX and VMS to AMD64?

Most OSes outlive the hardware they run on. HP have had to port HP-UX from M68k to PA-RISC and Itanium in the past, and VMS from VAX to Alpha and then Itanium. I'd be very surprised if internally they haven't at least analysed what would be involved in a move to x64, in case it some day becomes necessary, just as it has in the past.

Alver wrote:
No, seriously. They won't. The big customers of enterprise platforms need levels of error correction that wintel hardware cannot give, and won't be able to give in a long, long time - if ever at all. If they were to move to wintel, the OSes themselves would have little to no added value anymore.

:?
Going back to the original post of this thread, that's actually the reason MS give for killing the IA64 port - the reliability and scalability of x64 is evolving to the mission-critical levels needed by industry, therefore making the main selling point of Windows on IA64 somewhat redundant.
porter wrote:
The structured exception handling would be "fun" to provide, very similar and a fore runner of Win32 SEH. Also OS/2 has a unique solution to thread local variables, they all share the same address which is switched to different real memory based on current thread.

Yeah, perhaps some kernel module trickery would be needed for those; especially for that thread scheme - it's a strange way to do things, as designing it like that means more potential TLB/cache work when switching between threads in the same address space, which kind of defeats the main benefit of threads.
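For contrast, here is a small pthreads example (hypothetical demo code, nothing to do with OS/2 itself) showing the more usual TLS model, where each thread's copy of a thread-local variable lives at a different virtual address, so nothing has to be remapped when the scheduler switches between threads in the same address space:

Code:
/* tls_demo.c - illustrates the common ELF/pthreads TLS model, for contrast
 * with OS/2's "same virtual address, different backing memory per thread"
 * scheme mentioned above. Build: cc tls_demo.c -o tls_demo -lpthread
 */
#include <pthread.h>
#include <stdio.h>

static __thread int tls_counter;      /* one instance per thread */

static void *worker(void *arg)
{
    tls_counter = (int)(long)arg;
    /* Each thread prints a different address for "the same" variable,
     * so a switch between threads needs no TLB or cache gymnastics. */
    printf("thread %ld: &tls_counter = %p, value = %d\n",
           (long)arg, (void *)&tls_counter, tls_counter);
    return NULL;
}

int main(void)
{
    pthread_t t[2];

    for (long i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);
    return 0;
}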
neozeed wrote:
Well don't forget MS got the DEC team after DEC killed prisim/mica..


My apologies for being completely off-topic for a moment ... some internal memos and other docs from the Prism project are available on bitsavers. It's interesting to see what was happening in DEC at that time:
http://www.bitsavers.org/pdf/dec/prism/

Look at the last page of this memo, and then look at the original NT design team
http://www.bitsavers.org/pdf/dec/prism/ ... nation.pdf

Microsoft owe a lot to the guy at DEC who cancelled the Prism project. They basically managed to assemble an experienced OS team for NT as a direct result of it.
:lol:
R-ten-K wrote:
It is not about "ugly" or "pretty" it is about fast or slow, or power consumption, or price/performance... or any other sort of meaningful metric. I.e. things that can be easily objectively measured and demonstrated/modelled.


I sort of see where SAQ is coming from though. As you correctly point out, x86/x64 may be the undisputed standard nowadays, and it is good enough for what much of the market wants. But it is also undoubtedly ugly. For example, look at how much startup code is required on the x86 architecture to get to start_kernel() in Linux (compared with Alpha, which SAQ cites as a good design). A lot of it is caused by the continual addition of new features and the need to retain compatibility with old ones in x86/x64.

Of course, most of the population neither need to know nor should care about this, but that doesn't stop it from seeming ugly to those who do need to know.

R-ten-K wrote:
Maybe people can simply create their own ideal processor in software and abstract the x86 out of the equation ;-) Maybe a cool project would be to create an idealized ISA target with all the interesting features from other instruction sets.

Virtualisation/emulation and binary translation have been an interesting step forward in this area. The traditional dependence on backward compatibility and ISA is becoming less relevant as these technologies develop further. Apple's transition from PPC to x86 was a good example.
I have a maxed out SS5 TurboSPARC 170 in aurora2 chassis running Solaris 7. It also has the PCMCIA and SunPC 5x86 SBus options, but I haven't yet put an OS and software on it that will take advantage of them. I've picked up a Sun DVD drive for it, but haven't tried installing it yet.
BTW, has anyone with a parts SS5 machine got a nice 'SPARCstation 5' badge they could donate? Mine has none.

I also have an SS20 in aurora1 chassis, with 2 x 125MHz HyperSPARCs, 448Mb RAM and 4Mb VRAM. It's a bit of a work in progress - the case is badly damaged (held together with tape!). The maximum RAM and CPU limits on this box are very impressive considering the SPARCstation 20 was designed and introduced back in the early nineties.

@ajerimez
Have you put extra cooling in your SS20 or something? I'm sure that combination was listed on the MBus guide as 'deadly' in terms of the potential for failure due to heat issues.
Here in Ireland we say 'anyone but France' :D

Why? Here is the handball it took to get the win: http://www.youtube.com/watch?v=jSWi2WiQiUE
WolvesOfTheNight wrote:
But doing it in Java is an issue. I often end up thinking that there should be a better way to do something, but I can't figure out how. And I hate saying "yea, this interface bug is stupid and it shouldn't be that way, but Java made me do it."


I use apps written in Java quite a bit at work, and aside from being painfully slow to load, they have annoying UI issues like mysteriously and randomly changing the focus from the current foreground window to one in the background, inconsistent response to cut-and-paste key sequences, and at times, astonishingly slow responses to buttons being selected with the mouse. All in all, a horrible user experience. I had always assumed it was due to stupid programming by the app developers, so it's interesting to see it might not be all their fault.
SAQ wrote:
inca wrote:
Yeah, a year has passed from the last post, but still, perhaps, it'll be helpful for somebody:

ftp://ftp.akaedu.org/ 嵌入式硬件设计资源_Hardware/嵌入式微处理器_CPU/龙芯资料_Godson/SimOS/original/


For IRIX it required a customized kernel which Stanford couldn't distribute, so it never was useful unless you had an IRIX source license and access to the SimOS modifications.

The license on the Stanford page just made you click some button to indicate you agreed to Stanford & SGI's distribution terms, and that you had IRIX 5.3 already, and then it led you to the download links (including the modified IRIX 5.3 kernel). You needed the CD to build a filesystem disk image to use (in conjunction with the modified kernel distributed by Stanford). Only the IRIX 6.4 kernel was never released publicly, because that was the current version of IRIX at the time.

EDIT: Here is the main part of the agreement. The registration form it mentions in clause (ii) was the web page you had to fill out and agree to. I guess since that is gone, there is no official and legal way to get IRIX 5.3 for SimOS any more.
Quote:
SGI and its licensors retain exclusive ownership of the Licensed Software. SGI hereby grants to you ("you") a non-exclusive, non-transferable, royalty-free and restricted license to use the Licensed Software internally for research purposes only, provided that: (i) you have a valid license to IRIX 5.3 currently in effect with SGI, (ii) you have returned the attached registration form using true and correct information. No license is granted to you for any other purpose.


It was a similar story with Digital UNIX 4 for SimOS - DEC/Compaq just made you request it, and then sent you ftp links for the distribution, and also for a prebuilt 1Gb file system image. There was an AIX version available for download as well (for the SimOS-PPC fork) at some University in Texas.
If your program is capable of running on IRIX 5.3 as made available for SimOS then perhaps try using that. It had fairly sophisticated annotation (i.e. run a custom script when an instruction/reference/event occurs) and tracing capabilities. Or if you are lazy, you can just hack the instruction decode loop to print the instruction being processed. Or if you are really lazy, print just the program counter, and write a script to convert this list of addresses by grepping in an objdump disassembly of the binary (but this might not be practical for massive instruction traces, obviously).
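As a rough illustration of that last (really lazy) approach, here is a small hypothetical helper - the name pcmap.c and the file formats are my own assumptions, not anything from SimOS - which takes a file of program counter values and an objdump disassembly listing, and prints the matching instruction lines. It rescans the listing for every address, so it only demonstrates the idea and won't cope with massive traces:

Code:
/* pcmap.c - map a list of PC values onto an objdump disassembly.
 * Usage:   objdump -d prog > prog.dis
 *          ./pcmap trace.txt prog.dis
 * Assumes trace.txt holds one bare hex address per line, written in the
 * same form objdump uses (no leading zeros).
 */
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s trace.txt prog.dis\n", argv[0]);
        return 1;
    }

    FILE *trace = fopen(argv[1], "r");
    FILE *dis   = fopen(argv[2], "r");
    if (!trace || !dis) {
        perror("fopen");
        return 1;
    }

    unsigned long pc;
    char line[512];

    /* For each PC in the trace, find the disassembly line that starts
     * with that address ("  400610:<tab>..." in objdump output). */
    while (fscanf(trace, "%lx", &pc) == 1) {
        char want[32];
        snprintf(want, sizeof(want), "%lx:", pc);

        rewind(dis);
        while (fgets(line, sizeof(line), dis)) {
            char *p = line;
            while (*p == ' ' || *p == '\t')
                p++;
            if (strncmp(p, want, strlen(want)) == 0) {
                fputs(line, stdout);
                break;
            }
        }
    }

    fclose(trace);
    fclose(dis);
    return 0;
}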

The downside is that SimOS is old, and therefore awkward to build, and won't support the 'modern' IRIX 6.5 release. It is also difficult or impossible to obtain officially now that the site is gone.

Seeing as you are ultimately going to run the trace in a MIPS CPU model, is there actually any IRIX dependency here - do you actually need a trace from an *IRIX* executable? Perhaps a trace of some code in any other MIPS emulator (see http://www.linux-mips.org/wiki/Emulators ) would be sufficient. Perhaps one of the full system simulators already has an option to generate suitable instruction traces. You could just extract the user-level instructions based on virtual address if that is all you are interested in. I recall GXemul definitely has something like this for generating instruction, register and PC traces.
edikat wrote:
Here in the UK vendors on eBay have totally unrealistic price expectations.

Did you try eBay Germany? You could certainly get a V480 on there within your budget. Possibly some of the others as well.
Also, there is a computer recycler based in Belgium that has been offloading a huge load of Sun gear over the past year (from Blade workstations to rackmount stuff). Some of it went quite cheaply, so it's worth a look to see if they are still listing it. And buying from within the EU, you have no customs charges (unlike buying from the US).
I work as a manager at an oil company. Lots of looking at contracts and spreadsheets, or finding solutions to sometimes large and quite open-ended business problems. I am also responsible for all international banking transactions. I wrote much of the code used to interface with the wire transfer platforms we use (SWIFT formats etc), but I don't really have the time for development anymore. In the past I've mainly worked in teaching, and software development combined with some sysadmin.
jan-jaap wrote: Personally, I bought a 13" MacBook Pro as my first Mac. I was looking for something for light browsing duty around the house, portable, but with a keyboard. I also take it with me to offload my Nikon when we're on holidays (I wouldn't be the first who had his camera stolen or memory card die on the last day of my holidays). I'm quite happy with it. Not only does it work very well as a computer, it has something that makes you want to use it. It also makes the Dell laptop it replaced look incredibly clunky.

I bought the 13" MBP as my first Mac recently as well. It's a good middle ground between a netbook and a 15" laptop for travelling, and it's nice to have a modern commercial UNIX to use. But as regards the hardware, I've found the keyboard is lacking some fairly useful buttons, and the trackpad takes a bit of getting used to. And as laptop hardware goes, it is obviously seriously overpriced, with some strange design quirks (only two USB ports, positioned closer together than makes sense IMO). Plus the wifi reception is quite weak/flaky compared with the PCs I use in the same locations - it seems to be an issue for many people, yet never fixed by Apple.
If you consider the percentage of systems and devices out there that are:
- Running an OS with heritage that goes back to the original UNIX
- Running an OS that is an independent reimplementation of the UNIX design
- Running some software written in C
Ritchie clearly had a far greater impact on modern computing than Jobs and most others.

So far, the only reference in mainstream media I've seen is the BBC: http://www.bbc.co.uk/news/technology-15287391
Oskar45 wrote: Actually, originally Thompson set out to create a FORTRAN compiler for "First Edition" Unix but then instead created B [with the help of Ritchie; the "Second Edition" Unix kernel was written in it] which was only later developed into C by Ritchie.

B was interpreted, so I'm not sure it would have been suitable to develop any kernels back then, considering the hardware they were working with. I think the paper R-10-k linked to suggests this:
"On the PDP-7 Unix system, only a few things were written in B except B itself, because the machine was too small and too slow to do more than experiment; rewriting the operating system and the utilities wholly into B was too expensive a step to seem feasible."

I think ultimately, the desire for B-like syntax with assembly-like performance motivated the development of a system-oriented language like C. Despite its flaws, it certainly succeeded as both a systems development and a general programming language.