The collected works of bri3d - Page 2

A "complete" computer with no processors... nice price too!

Best of luck to the seller :roll:
Fuel support - nice! I wonder if it actually supports VPro, even as a framebuffer - probably not.
kramlq wrote: :roll:
They seem to be blissfully unaware of the fact that Minix (and hence the "open" Linux that SGI are touting these days) wouldn't even exist if the APIs and internals of "proprietary UNIX" systems weren't so widely documented, understood (and in some cases formally standardised), that people could create those UNIX clones in the first place.


How so? All they're saying is that proprietary UNIX has been "antiquated" by newer solutions, possibly ones based on the older UNIX!

I disagree with their blanket assessment of proprietary UNIX systems as "antiquated" and open systems as "higher performance" (mostly because it's a dumb blanket assumption to make, not because it's completely false) but I don't see where they're saying that UNIX wasn't necessary for newer open systems - just that open systems have surpassed it.
* Origin 350, Onyx 350, Onyx 4 and Tezro have not been tested, and are not supported yet due to lack of support for their PCI-X controller.


Sadly this is apparently pretty difficult - that's where the Linux port trailed off too.
Yeah - that's why it's almost always a lot cheaper to buy a shared hosting, VPS, or dedicated account at a datacenter.

I definitely admire running Nekochan on SGI hardware, though, and my gratitude goes to Neko for footing the bill for it!

_________________
:0300: <> :0300: :Indy: :1600SW: :1600SW:
I actually quite like Windows 7's new taskbar, Aero Peek, and a lot of the other tweaks that have been made in 7. On the other hand I find the new Start menu layout from Vista to be really annoying.

The fact that you can't really change the frontend highlights my main frustration with Windows - it forcibly hides too much. Sure, a "home user" doesn't want to see a lot of verbose debug messages, or tweak every aspect of their OS, but the functions are usually already there, hidden deep in some logging or debug tool, or a secret registry setting. It's very frustrating to *know* it's possible to do a lot of what I want to do and see a lot of what I want to see, just hidden away so well it's not worth the time to find it. I hate seeing a progress bar spin forever and having no idea what's going on in the background.
Yeah, that's libtiff, our very favorite PSP and iPhone exploit vector!

There are a lot of other fun copyrights in the PSP manual - they threw a lot of open- (and closed-) source libs in there.
miod wrote:
As for non-framebuffer hardware support, the Origin 350 has been working as of a few hours ago :!:, and there is ongoing SMP work targeting the Octane and Origin 200 systems.


O350 "working" meaning what?

Is PCI probing supported? Does NUMA work? Linux has been "working" on O350 for quite a long time now - in a single-CPU configuration with no PCI support (and therefore no way to access any storage - making it completely and totally useless).

_________________
:0300: <> :0300: :Indy: :1600SW: :1600SW:
miod wrote:
bri3d wrote:
Does NUMA work?

Multiple-o350 configurations could not be tested for lack of hardware (the only multiple-node system which has been tested, to the best of my knowledge, is a dual Origin 200 system).

I'll pull some snapshots later on and test, as I happen to own two dual-R16k O350s NUMAlinked.
Let me know if there's an IRC somewhere I should join to collaborate with you...

_________________
:0300: <> :0300: :Indy: :1600SW: :1600SW:
Quick crash course:
"real" time is also commonly called "wallclock" - it's how much actual, measurable time the program spent from when you invoked it to when it finished. Generally measured using a real-time clock in the system, if it's available.
"user" time is an estimate of how much CPU time was spent in user code (i.e. the program itself). It's usually measured by counting instruction cycles.
"system" time is an estimate of how much CPU time was spent in system code (i.e. the kernel and syscalls). Again usually measured by counting instruction cycles.

This is a grossly simplified explanation, but user time is higher on your Origin 2400 because it has slower CPUs - it had to do about 4 hours of CPU "work" to match your Octane's 3 hours. But it could slice that work into 8 pieces across its CPUs, so the job took less real time.
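
If it helps make that concrete, here's a quick C sketch - nothing SGI-specific, just POSIX gettimeofday() and getrusage() - that reports the same three numbers time(1) does for a toy workload:

Code:
/* Toy demo of "real" vs. "user" vs. "system" time, along the lines of
 * what time(1) reports. Build with: cc -o timedemo timedemo.c */
#include <stdio.h>
#include <unistd.h>
#include <sys/time.h>
#include <sys/resource.h>

int main(void)
{
    struct timeval start, end;
    struct rusage ru;
    volatile double x = 0.0;
    long i;

    gettimeofday(&start, NULL);              /* wall-clock start */

    for (i = 0; i < 50000000L; i++)          /* user time: pure computation */
        x += i * 0.5;

    for (i = 0; i < 20000; i++)              /* system time: cheap syscalls */
        (void)getpid();

    gettimeofday(&end, NULL);                /* wall-clock end */
    getrusage(RUSAGE_SELF, &ru);             /* CPU time charged to this process */

    printf("real %.3f s\n", (end.tv_sec - start.tv_sec)
                            + (end.tv_usec - start.tv_usec) / 1e6);
    printf("user %.3f s\n", ru.ru_utime.tv_sec + ru.ru_utime.tv_usec / 1e6);
    printf("sys  %.3f s\n", ru.ru_stime.tv_sec + ru.ru_stime.tv_usec / 1e6);
    return 0;
}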
uridium wrote: It has a full MMU unlike PSP.


Sounds neat - I take your full MMU and raise you a rudimentary raster engine, a custom non-MDMX vector unit, a boatload of crypto hardware, and an entire extra CPU core though!
Quite a bit of the functionality of a full MMU could probably be emulated on the PSP - it has custom memory protection, so just add relocatable binaries (which Sony apps are by default)!

Honestly the Dingoo sounds a lot more useful, but goofing around with the PSP was a lot of fun.
Every PSP released to date can run unsigned, arbitrary user-mode code (which can't access kernel RAM, the crypto hardware, or some privileged functions), and we have 100% bare-metal access to the hardware (including the ability to do bringup(!)) on everything prior to the PSP-3000.
Sony have done their damnedest but just like every other console it's a cat-and-mouse game that the little guys usually win.

The PSP Go is a little more complex right now in that it involves a buffer overrun in a gamesave, and the code hasn't been publicly released yet. That should change soon.

Code is pretty fragmented around the homebrew community but pspdev is a good place to start if anyone wants to get into PSP development.

Real devkits are quite a bargain for a recent console, too, although with the PSP's lukewarm reception that's not too surprising.
It's interesting that you found so many 8-bit systems in production use - I've noticed this trend with used Indy systems I run into as well.

The premium must have been quite extreme on non-academic systems (it looks like it was about $1000 more for 24-bit than 8-bit on academic pricing), or SGI's sales reps were very bad at demonstrating 24-bit vs. 8-bit - 8-bit is not only ugly but slower in most apps due to dithering.
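
For anyone wondering why 8-bit ends up slower: every pixel has to go through a dither/quantize step on its way to the framebuffer instead of being written straight out as 24-bit RGB. Here's a rough sketch of the idea - generic 4x4 ordered dithering down to a hypothetical 3-3-2 palette, not SGI's actual code:

Code:
#include <stdio.h>
#include <stdint.h>

/* Classic 4x4 Bayer threshold matrix (values 0..255). */
static const uint8_t bayer4[4][4] = {
    {   0, 128,  32, 160 },
    { 192,  64, 224,  96 },
    {  48, 176,  16, 144 },
    { 240, 112, 208,  80 },
};

/* Quantize one 24-bit RGB pixel to a hypothetical 3-3-2 colormap index.
 * This per-pixel lookup + clamp + shift is the extra work a true-color
 * visual never has to do. */
static uint8_t dither_332(uint8_t r, uint8_t g, uint8_t b, int x, int y)
{
    int t = bayer4[y & 3][x & 3];
    int rd = r + t / 8, gd = g + t / 8, bd = b + t / 4;

    int rq = (rd > 255 ? 255 : rd) >> 5;   /* 3 bits of red   */
    int gq = (gd > 255 ? 255 : gd) >> 5;   /* 3 bits of green */
    int bq = (bd > 255 ? 255 : bd) >> 6;   /* 2 bits of blue  */

    return (uint8_t)((rq << 5) | (gq << 2) | bq);
}

int main(void)
{
    /* Dither the same color at a few screen positions to show the pattern. */
    int x;
    for (x = 0; x < 4; x++)
        printf("index at (%d,0) = 0x%02x\n", x, dither_332(100, 150, 200, x, 0));
    return 0;
}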
I'm in the frustrating position of having not one but two 1600sws... with only #9 cards.

MLAs seem to go for more on their own :/
rumble wrote:
mapesdhs wrote:
Alas, without the PROM source, not gonna happen.


Lack of PROM source probably isn't the problem so much as the inability to alter the IRIX kernel is.

With a combination of disassembly, IRIX/Linux/OpenBSD/NetBSD headers, and examining the existing PROM as it runs through gxemul, I think it'd be reasonably straightforward to create a workable replacement that could load a Linux or BSD kernel.

So, if you have hardware you know (or are very confident) works, I don't think it would be too difficult, but you can kiss running IRIX goodbye.

The first step would be figuring out how to rescue a botched PROM. I'm quite certain there's an emergency means of flashing that thing via the serial port, though when I explored this a few years ago, I got the impression that although the flash chip could have portions permanently write-protected, it didn't appear that SGI used that functionality. They seemed to make cache assumptions in the very lowest-level code (which would presumably incorporate the rescue feature), making it necessary to flash the whole thing with the introduction of a new CPU.


An IRIX kernel wouldn't need alteration to run with a custom bootloader - once you've got the hard part (bringup) done I am fairly sure loading sash is a matter of read and jump.

CPU/bus/co-processor bringup (the part we'd need to change) would be the challenge - to the best of my knowledge, no open-source code can do it. We know how to bit-bang the hardware once it's up, thanks to the Linux and NetBSD folks, but I don't think anyone's worked out how to bring it up at the low level. Considering the wide variety of goofy hardware inside the O2, I think bringup might get pretty involved.
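
Just to be clear about the easy half: the "read and jump" part of a replacement PROM loading sash or a kernel really is conceptually as small as the sketch below. Everything in it - the load address, entry point, image size, and the read_blocks() helper - is hypothetical and assumes bringup (the hard part) has already happened.

Code:
#include <stdint.h>
#include <stddef.h>

/* Hypothetical layout: where the image lands and where it starts. */
#define KERNEL_LOAD_ADDR  0x88002000UL   /* assumed KSEG0 load address */
#define KERNEL_ENTRY      0x88002000UL   /* assumed entry point        */
#define KERNEL_BLOCKS     4096           /* assumed size in 512-byte blocks */

/* Hypothetical board-specific helper: read `count` 512-byte blocks
 * starting at `lba` into `dst`; returns 0 on success. */
extern int read_blocks(uint64_t lba, size_t count, void *dst);

void boot_kernel(void)
{
    void (*entry)(void) = (void (*)(void))KERNEL_ENTRY;

    /* Pull the kernel (or sash) image off disk into memory... */
    if (read_blocks(0, KERNEL_BLOCKS, (void *)KERNEL_LOAD_ADDR) != 0)
        return;                          /* a real loader would report the error */

    /* ...and jump to it. A real ARCS PROM would also hand the kernel
     * argc/argv and the ARCS vector, which this sketch skips. */
    entry();
}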

_________________
:0300: <> :0300: :Indy: :1600SW: :1600SW:
Your primary issues are going to be with:
1) Licensing/support - IBM charge a very hefty fee to keep their mainframes licensed and supported. You won't be able to afford it, so you'll lose any kind of support.
2) Cooling. They're designed to run in air-conditioned datacenters with raised floors. Even if you can power one, keeping it cool is a whole new problem.
3) Storage. Depending on how the system is configured you'll need an external storage director and some kind of array - you're correct in that you'll need DASD but there are a lot of ways to get DASD attached these days, and most of them are very expensive.

Linux will IPL in an LPAR (Logical PARtition, essentially a virtualized section/container of system resources set up via that OS/2 system, which is the system management controller), but if you've never done it before you'll probably need a lot of help. Docs are sometimes available and sometimes hidden behind IBM's licensing gate, but if you've got a contact who runs mainframes he can probably help you out.

Cool as it would be to own a z9, you're probably not going to get it into a working state - personally, I'd pass.

_________________
:0300: <> :0300: :Indy: :1600SW: :1600SW:
I am not sure about Flame 8.5 but I know for a fact Flame 9.5 has entirely different license names for different hardware configurations (even after the serial number lock), so I'm going with probably not.
Was initially bug-tastic for me - beachballs all the time, even after deleting caches etc. (clean install), lots of inconsistencies (especially with third-party and dev software).

After the last few point releases it seems to have gotten better.

So basically like any other Mac OS X release.

I think it's pretty much ready for prime time now - I haven't had any issues for a few weeks. My roommate has had sketchy network experiences (lots of random connection droppage and latency), though - probably related to the fact that he roams between networks a lot, but definitely a bug nonetheless.
Deleting ioconfig has never, ever done me wrong, and it's my first step these days when debugging any kind of input issue.

The only mouse I've had not work so far is a Logitech notebook mouse, which is recognized but spits polling errors out on the console, jerks around, and is worthless.

Keyboards are 100% for me, even one on a composite keyboard/mouse receiver.
It's possible to change the endianness, yes, but to the best of my knowledge no PROM was ever publicly released supporting it, so it would be pretty hard to do...
If they're scratched on the clear side, you might want to try one of the 3000 methods people have come up with to polish CDs (everything from ghetto elbow grease and toothpaste methods that never worked for me to mechanized polishers). It's gonna be a lot easier than trying to get them off SGI.
D-EJ915 wrote:
4.2.2.2 is one I use when I don't want to use somebody's dns


4.2.2.2 still is somebody's DNS - it's just GTEI/Level 3 and they're just running dumb DNS servers, not collecting lookup habits like Google are most certainly doing.

I use 4.2.2.2 as well when I can't think of something else off the top of my head.

_________________
:0300: <> :0300: :Indy: :1600SW: :1600SW:
D-EJ915 wrote:
They're probably used in applications that haven't been ported over to x86 yet.


I doubt that's why. There are a lot of cheaper, faster Alphas out there. It could be a model-specific app like the MRI control O2s, but I've never heard of one for AlphaStations...

_________________
:0300: <> :0300: :Indy: :1600SW: :1600SW:
I really doubt the old Origin hardware actually cost $81.46 to run - as a matter of fact, that seems entirely impossible given the PSU specs.

On the other hand, I'm sure avoiding the noise is really nice :)

_________________
:0300: <> :0300: :Indy: :1600SW: :1600SW:
TWC used O2s for their regional weather displays up until the mid-to-late 2000s. I think they're all gone now though.

_________________
:0300: <> :0300: :Indy: :1600SW: :1600SW:
I have no idea why you would want to do this.

Since GCC works well enough now to produce and link running code, and you can replace init, there's absolutely no technical reason you couldn't, as far as I can tell. Mixing swmgr and chkconfig in would probably not work, though, as they depend on non-GNU-style utilities to run.
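
The "replace init" part really is a small job in principle - init just has to run as PID 1, spawn something, and reap zombies. A minimal stand-in might look roughly like this (the /bin/sh path is just an assumption for illustration; a real replacement would read an inittab-style config):

Code:
/* Minimal init sketch: run a shell on the console and reap zombies. */
#include <errno.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    for (;;) {
        pid_t shell = fork();

        if (shell == 0) {
            /* Child: hand the console over to an interactive shell. */
            execl("/bin/sh", "sh", (char *)NULL);
            _exit(127);                       /* exec failed */
        }

        /* Parent (PID 1): reap any children until the shell itself
         * exits, then loop around and respawn it. */
        for (;;) {
            pid_t dead = wait(NULL);
            if (dead == shell)
                break;
            if (dead == -1 && errno != EINTR)
                break;
        }
    }
}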

_________________
:0300: <> :0300: :Indy: :1600SW: :1600SW:
Enough is known about VPro to do a driver if anyone wants to mess with XOrg drivers. XOrg drivers are absolutely awful though, so I doubt anyone would go to that kind of effort. VPro also takes a GL command stream as its input, and since the ever-brilliant X developers pretty much abandoned XGL, it's going to be awfully hard to get that out of X.

_________________
:0300: <> :0300: :Indy: :1600SW: :1600SW:
foetz wrote:
josehill wrote:
Me, too! While I use the CS3 version on my workhorse Mac, I still look at Photoshop 3 as one of the best pieces of commercial software ever released. A terrific balance of power, performance, and reliability.


especially the speed is nuts. already looking forward to seeing that running on the 1GHz R16K :D


It's damn snappy - the lack of deep undo is awful, though.

I got a copy of Photoshop 5.5 via some promotion on PC and never bothered to upgrade, and it hasn't bothered me. I've tried out all the newer versions and a lot of their features are really cool (perspective warp, especially) but not something I'd ever use.

_________________
:0300: <> :0300: :Indy: :1600SW: :1600SW:
Nice looking skins on all those systems - I've never seen that Presenter either, that's one hell of a rarity.

_________________
:0300: <> :0300: :Indy: :1600SW: :1600SW:
eMGee wrote:
Titox wrote:
It has very high specifications (576 GB max RAM, wow!!), but it lacks a beautiful rack like the Origin 3000!!!

Or the beautiful MIPS RISC instruction set and NUMAcc based architecture, to name a few things... (Also, didn't the Origin 3900 support several terabytes of RAM? The Altix line certainly does).


I completely agree with you that it looks boring compared to a real SGI system (as any mass-produced x86 server will - competing in that market means cutting cost and not doing anything flashy). On the other hand, in the interest of pedantry and being a pain, I will point out that the Nehalem-microarchitecture CPUs with QPI, and every AMD CPU since the K8 (HyperTransport), use a ccNUMA architecture - they have CPU-local memory and are linked via an interconnect topology that introduces variable latency when accessing a remote processor's memory. Not many people have exploited it to do single-system-image beyond one enclosure like old/real SGI systems, but the possibility is there and the fundamental idea is similar.
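
For what it's worth, Linux exposes that local/remote distinction to applications through libnuma - here's a small sketch (assuming a Linux box with libnuma installed; link with -lnuma) that allocates a buffer pinned to one particular node, which is the same locality game the NUMAlinked Origins play:

Code:
/* Sketch: node-local allocation with libnuma (Linux, link with -lnuma). */
#include <stdio.h>
#include <string.h>
#include <numa.h>

int main(void)
{
    size_t len = 64UL * 1024 * 1024;          /* 64 MB test buffer */

    if (numa_available() < 0) {
        fprintf(stderr, "no NUMA support on this kernel/machine\n");
        return 1;
    }

    /* Ask for memory on the highest-numbered node, just to make the
     * point that local and remote memory are distinct resources. */
    int node = numa_max_node();
    void *buf = numa_alloc_onnode(len, node);
    if (buf == NULL) {
        fprintf(stderr, "allocation on node %d failed\n", node);
        return 1;
    }

    memset(buf, 0, len);                      /* fault the pages in on that node */
    printf("64 MB allocated on node %d of %d\n", node, numa_max_node() + 1);

    numa_free(buf, len);
    return 0;
}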

_________________
:0300: <> :0300: :Indy: :1600SW: :1600SW:
Once you factor in the time and effort (and possible cost, depending on area / recyclers) to recycle or trash the pieces you end up with after a part-out, it's usually better if you can find someone to take it whole.

On the other hand, if you're looking at raw cash value, you'll get a lot more parting out, because you can ship parts to people without shipping negating any financial gain from the deal (plus your audience is much wider and you can find someone willing to pay).

I'd rather see a system go whole, too, but that's just personal feelings - it's nice to see working hardware go to a good home where it gets used, rather than half of it going to the recyclers. Some (probably wiser and better) people aren't sentimentally attached to computers at all though, so it's just a preference.

Personally, I'd try for whole and use parting-out as a second option.

_________________
:0300: <> :0300: :Indy: :1600SW: :1600SW:
Compared to the Onyx, the CRM graphics in the O2 are a bad joke, so I doubt SGI really had to worry about competing with themselves there.
Wow - O2 has quite decent performance/watt - too bad the total performance was always so low :D

A 12W+ standby drain out of the I2 is pretty annoying (I'd be flipping switches on a power strip for that, for sure), but it's nothing like other systems of its vintage - my AlphaServer 4/266 tends to leak over 80(!!!!!)W when off (somehow).
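
Back-of-the-envelope, assuming a $0.10/kWh rate (swap in your own utility's number), the gap between those two standby draws over a year works out like this:

Code:
/* Standby-drain arithmetic: kWh and cost per year at a given idle wattage.
 * The $0.10/kWh rate is just an assumed example figure. */
#include <stdio.h>

int main(void)
{
    const double rate = 0.10;                 /* assumed $/kWh */
    const double watts[] = { 12.0, 80.0 };    /* I2 vs. AlphaServer standby draw */
    int i;

    for (i = 0; i < 2; i++) {
        double kwh_year = watts[i] * 24.0 * 365.0 / 1000.0;
        printf("%5.1f W standby = %6.1f kWh/year = $%.2f/year\n",
               watts[i], kwh_year, kwh_year * rate);
    }
    return 0;   /* prints roughly 105 kWh ($10.51) vs. 701 kWh ($70.08) */
}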
Getting a new laptop and seeing what OS 4.0 brings to the table before I think about one of my own (still not 100% sold). Might camp out and flip some on eBay though, can't decide... I camped out and did that for the iPhone and it was an entertaining experience. I'm not sure the iPad will go for quite the insane amounts iPhones were though.

_________________
:0300: <> :0300: :Indy: :1600SW: :1600SW:
I think the Itanium animation frontend / workstation market was limited (thus the quick death of the Prism and most other Itanium-based "workstations"). The Altix was designed as a backend compute system, so in terms of the animation workflow I'm pretty sure RenderMan ran on it.
Most Altix installations are in the HPC world, at research facilities and universities running simulations and other scientific workloads, just like the Origins before them - SGI pretty much discarded the Onyx idea of a powerful visualization system for CAD, compositing, and animation work as they became less able to compete.

Oh, and I'd love an Altix :)

_________________
:0300: <> :0300: :Indy: :1600SW: :1600SW:
The Prism's a really pretty system - one of SGI's best, imo. Too bad it was hamstrung by a lack of market and decent drivers - from what I hear UltimateVision (similar graphics for IRIX) had equally awful drivers, so I'm inclined to blame ATI: their non-Windows drivers were never really passable until very recently, and even that's still debatable. I feel like XFree86/Xorg probably played a role as well.

Kudos on picking one up!

_________________
:0300: <> :0300: :Indy: :1600SW: :1600SW:
pierocks wrote:
D-EJ915 wrote:
Nvidia's last IA64 driver is ancient and requires a quite old kernel (I think 2.4) so you might be able to use it.


I'm pretty sure I remember being able to compile the nVidia IA64 driver for a 2.6 kernel on an IBM Intellistation...but I could be completely wrong...


You're right, it looks like it worked years ago around 2.6.6: http://www.mail-archive.com/debian-ia64 ... 01974.html

I wouldn't expect it to work now - since it's Linux there's no dedicated development tree, it's been several years, and breaking this sort of thing is intentional. I'm sure most everything has changed in the kernel.

_________________
:0300: <> :0300: :Indy: :1600SW: :1600SW:
XOrg 3D acceleration is really complicated, and since VPro hardware commands are basically the same as GL, it'd be nice if XGL hadn't gotten abandoned so quickly. 2D might be even worse on VPro if you went the standard XAA route, since you'd have to reimplement everything in GL anyway. I wouldn't get your hopes up for anything.

_________________
:0300: <> :0300: :Indy: :1600SW: :1600SW:
nekonoko wrote:
indyman007 wrote:
I know that I am reviving an old topic, but do you guys think it could be used as a home theatre PC?


Sure, but you'll probably want to add a Broadcom Crystal HD mini-PCIe card for h.264/Blu-ray acceleration.


Seconded, and I wouldn't recommend it if you use Netflix or YouTube heavily (especially Netflix, since you can play YouTube through third-party H.264 players now), as Flash and Silverlight both have poor (albeit improving) hardware video acceleration support and the Atom is really not up to decoding high-quality video.

_________________
:0300: <> :0300: :Indy: :1600SW: :1600SW:
And the PC cards for almost every early minicomputer, and so on :)

I've never seen a Unisys mainframe system - could be a rare bit of fun.

_________________
:0300: <> :0300: :Indy: :1600SW: :1600SW: