The collected works of robespierre - Page 3

Some LCDs can only effectively display 18 bits of color, but none of them, as far as I know, would look like 8-bit.
To the OP: I'm not sure why you'd expect the icons to look different? The Indigo Magic desktop icons are neat because they are scalable, certainly not because they are colorful. To see the difference between 8-bit and 24-bit root windows you need to display a picture or run an animation program in it.
Try giving an SGI .rgb file to bgpaste(6D). There are several other ways.

_________________
:PI: :O2: :Indigo2IMP: :Indigo2IMP:
See http://mbus.sunhelp.org/modules/index.htm
It appears to be a Ross HyperSPARC 2x100 MHz module (Sun name HS12; the FCC ID is granted to Sun).

_________________
:PI: :O2: :Indigo2IMP: :Indigo2IMP:
I use the 3c597 in my R4400 IMPACT Indigo2. It works fine, but there are certain limitations (you cannot netboot with it, and getting it to transition to "UP" after connecting the cable can sometimes be a bit baffling).

_________________
:PI: :O2: :Indigo2IMP: :Indigo2IMP:
Are you sure that the minimized Console icon is related to the depth of the root window and not the default visual?
I may be wrong on this, but my understanding is that minimized icons display in their own separate windows.

_________________
:PI: :O2: :Indigo2IMP: :Indigo2IMP:
Hamei, your rants remind me, sometimes, of some of my favorite essays, like Lewis Lapham's 'Waiting for the Barbarians' or Robert Hughes's 'Culture of Complaint'.

_________________
:PI: :O2: :Indigo2IMP: :Indigo2IMP:
I set mine up with four 128 MB SIMMs and eight 64 MB ones. It worked without any issues.
PROM Monitor SGI Version 6.2 Rev A Aug 26,1996

One thing to consider is that larger DRAMs, ceteris paribus, use more power and the power budget is limited.
SIMMs using 3V DRAMs and level converters seem to be safer.

_________________
:PI: :O2: :Indigo2IMP: :Indigo2IMP:
Older still, there was a spreadsheet program called Prophet, written in the '90s. I don't remember if it used Motif or Athena.

_________________
:PI: :O2: :Indigo2IMP: :Indigo2IMP:
Oh, also: if you only use 8 SIMMs you need to put them in the right places. The lowest-numbered group of slots needs to have memory in it or the machine can't boot. I think in the Indigo2 that's the group farthest from the CPU.

_________________
:PI: :O2: :Indigo2IMP: :Indigo2IMP:
OK, we have the same PROM version. At this point I don't know if it's an issue with the SIMMs, or if the IP28 board is a revision that just doesn't work with them (which is possible, as only 64 MB SIMMs, for 768 MB total, were officially supported).

At some point I can take out the SIMMs I have and tell you the part numbers on them, if that would help.

_________________
:PI: :O2: :Indigo2IMP: :Indigo2IMP:
I thought you meant "sick" as in "awesome"!
I have a 3/60, mono only though. I've never actually seen a cgfour before (it looks like there are two different types in the auction, so people should take a look).

_________________
:PI: :O2: :Indigo2IMP: :Indigo2IMP:
Errors in the users' manuals aren't unknown. From a cooling standpoint it was widely recommended on comp.sys.sgi and elsewhere to use the slots farthest from the CPU first, because the processor module interferes with airflow over the RAM.

I have my 128 MB SIMMs in Bank A, and the 64 MB ones in B and C. The Bank A SIMMs are labeled both "IBM FRU 76H4896 OPT 94G6682" and "Samsung KMM53632000BK-6".

_________________
:PI: :O2: :Indigo2IMP: :Indigo2IMP:
Actually, both the books I cited quote Cavafy's poem, and Hughes quotes the Herod section of W. H. Auden's Christmas Oratorio.

_________________
:PI: :O2: :Indigo2IMP: :Indigo2IMP:
Somewhat of a side note, but the dissimilar-metal corrosion problem with tin-lead SIMMs in gold sockets (or vice versa) should be preventable by using a polyphenyl-ether-based contact rejuvenator.

_________________
:PI: :O2: :Indigo2IMP: :Indigo2IMP:
All O2 main boards (name: "IP32" or "Moosehead") are labeled with multiple SKUs: 030-x for the bare circuit board assembly, and 013-x for the whole assembly including the PCI riser and the I/O port shield. 013-1486-x was used for the narrow, single-slot assembly with an R5000 and the short PCI riser; 013-1664-x for the wide assembly with an R10K processor and the tall PCI riser.

Since changing the processors, PCI risers, and I/O shields only requires a screwdriver, the 013-x sticker on the board might no longer describe its configuration.

_________________
:PI: :O2: :Indigo2IMP: :Indigo2IMP:
It is still available, at premium prices: http://www.memoryx.com/huc32h.html
Otherwise, it appears scarce. One (not very urgent) project of mine is to document the operation of the memory ASIC on the SIMMs, which could allow for more than 96 MB of RAM in an R3K Indigo. As it is, the 2, 4, and 8 MB SIMMs all use the same PCB, with pull-up resistors to signal the size, so reworking 2 MB SIMMs to larger sizes is achievable.

_________________
:PI: :O2: :Indigo2IMP: :Indigo2IMP:
The DECstation series is interesting in that it was a RISC adaptation of the VAXstation 4000 system architecture. I seem to remember that it was developed and released in a very short timeframe. TurboChannel was open, like SBus, in that its documentation and licenses were easily available, but unlike SBus it was not widely used by third parties since there were no TC clone machines. (Kubota Graphics and maybe E+S did develop 3D subsystems for the TC.) The VAX influence can be seen everywhere, but most saliently by the use of VAX keyboards and mice. How ironic, then, that they did not support the architectural features needed to run VMS.

There was some interesting work on computer music and teleconferencing supported by DEC WRL, leading to the release of the 'LoFi' base-rate audio I/O TurboChannel card and its associated external amplifier box. This was contemporaneous with other DSP experiments at NeXT and SGI, as were its ISDN features, similar to the ports on the SS10 or the Indy. Quite a number of researchers on the MBone, or using real-time kernel extensions, used them. The Open Group used DECstations extensively, and an early release of OSF/1 exists on them. Other OS researchers used them too, like Ousterhout's Sprite team and the developers of the Chorus microkernel that was the basis for Cellular IRIX.

The Personal DECstation was also a rare machine for the era in that it used one serial bus for all its input peripherals. At the time, only ADB (used by Apple and NeXT) had that architecture; other workstations used separate RS-232 or RS-423 lines for each device, or, like PCs, a dedicated port per device. The Access.Bus interface was later refined into VESA DDC, losing the ability to connect input devices. The system bus was flexible enough to accommodate 64-bit R4000s when they became available, unlike the situation with the Indigo, which required a new IP20 design.

3D on Ultrix and OSF/1 was an afterthought, though, so the systems are not as interesting as SGIs.
:PI: :O2: :Indigo2IMP: :Indigo2IMP:
Thanks for pointing out the HP-HIL counterexample. I don't want to quibble, but isn't it more of a ring?
HP seems to have dropped HIL sometime between the C180 and C200 workstations, in 1997.
(For comparison, the Personal DECstation had been discontinued for three years by then.)
And of course daisy-chaining wasn't a new idea; HP and Commodore had been using it since the '70s.
:PI: :O2: :Indigo2IMP: :Indigo2IMP:
Oskar45 wrote:
Ah - the Indigo. Once upon a time it was hailed the "RISC PC of the 90's".


In fact, the Indigo's model number is "4D/RPC" in the series of IRIS 4D machines.

_________________
:PI: :O2: :Indigo2IMP: :Indigo2IMP:
pentium: you could try Texas Instruments PC-Scheme; it should work on a 386. It had a pretty nice IDE for the time (mid '80s).

I had access to a Symbolics machine during school and kept some of the manuals. Like some other systems that thrived and perished before the WWW, much of the information available today is incomplete or wrong. For a while I archived information about several obscure lisp machines designed in France, Japan, and Norway, but bit rot ate the linked pages and it was too time consuming to fix everything.

_________________
:PI: :O2: :Indigo2IMP: :Indigo2IMP:
IRIX has NCSC B2 features under its MAC controls, so simply being root does not grant access to everything.
See clearance(4) and dominance(5).
:PI: :O2: :Indigo2IMP: :Indigo2IMP:
On eBay a couple of months ago, a MacIvory II (a NuBus card that runs at half the speed of the latest and best hardware Symbolics released) sold for $360. That's a sight better than the last one to appear, four years ago, which was being offered at $1200 with a IIfx.

In 2005 I learned that an XL1201 had been sold via an email exchange for $5000. That is the desktop version that cannot support color graphics; by comparison, I have heard of an XL1200 selling for $10,000. The only machine to appear on eBay that year, an XL1201 bundled with a UX400 VME board inside a Sun 3/140, went for the paltry price of $510.

The machines attract high prices because of their rarity. Although roughly 20,000 computers (including NuBus and VME boards) were manufactured by Symbolics, very few survive. A more economical option is to use the hardware emulator they developed for Alpha AXP workstations after their Chapter 11. Since it uses the same embedded computer architecture as the MacIvory and UX systems, the user experience is similar.

_________________
:PI: :O2: :Indigo2IMP: :Indigo2IMP:
Wow, that's neat. I forget exactly why I thought that, but it was clear to me that this modified system wasn't for video production use, yet it seemed to have some kind of video interface. So optical recognition was an alternative that came to mind.

This forum has pictures of a potentially cooler special system; IIRC it was also attached to an Indy.
viewtopic.php?f=4&t=16723913

_________________
:PI: :O2: :Indigo2IMP: :Indigo2IMP:
I will note that lisp machines (like Forth and some other topics) are a magnet for dreamers and crackpots.
Often these projects get started with only negative goals: they know emphatically what they do not want, but have only a vague idea of the steps needed to achieve a coherent vision. Backwards dependencies are also a symptom: for instance, declaring that "conventional architectures" are "hostile" to high-level languages, then proposing to implement a new architecture for which no software exists. This is a type of tunnel vision. Another is the conviction of originality: surely my ideas are iconoclastic and new, there is nothing to learn from academic papers!

_________________
:PI: :O2: :Indigo2IMP: :Indigo2IMP:
I'm simply giving a warning. Do you really mean that you've never come across that type on the internet?
I suppose I may be unfortunate in having long subscribed to certain Usenet groups and seen the assortment of mental defectives that plague them periodically. I don't mean that dreaming is necessarily bad, but you can tell that for some, the idea of "dream machines" (pace Ted Nelson) is rather more a poetic feeling than a technical concept. Ideals, if that's what they become, have a way of being unshakeable by facts, and then no real progress can be made.

I'm not entirely convinced that there is such a thing as a "C machine", since C has been made to work on Crays and on Symbolics machines too, as long as the code is truly portable. It's the operating systems (that old research area Rob Pike declared dead) that create dependencies on C compilers and their ills. At any rate, the architectures of the world vary in too many dimensions to pretend that they break down along the same lines as HLLs.

_________________
:PI: :O2: :Indigo2IMP: :Indigo2IMP:
I think hamei's comment was apropos "intellectual masturbation". (Sorry for explaining the joke.) There are a lot of pie-in-the-sky hardware and OS projects, but some exotic stuff is actually very interesting.

To expand on it a little: computer architecture as it's been practiced has focused on executing instructions as fast as possible from a single stream. The area on a modern CPU that actually does computation is a very small part, about 10-20%. All the rest of the gates perform ancillary functions to keep that pipeline busy (caches, prefetch, decoder, dispatcher, speculative execution, multiport register file, TLB, rename registers, branch prediction, etc.).

The problem is that all that complexity is what has kept core speeds stalled at the 3 GHz line for the past decade, while the amount of parallelism exposed by the CPU is also stalled at around 8 threads. Putting more parallel units on a die is hard, both because the cores are so large with so much extra machinery, and because each additional core creates more contention for shared resources like memory and generates more cache-coherency traffic. We call it the "von Neumann machine model", but the concept is actually at least as old as Charles Babbage, with his separate "Mill" and "Store". Anyway, this is getting long, but basically we should all be using transputers or something else that scales CPU and memory together without shared-resource limits.

geo: it's a little misleading to talk about "lisp machines" as a descriptive category since there were different kinds of designs. all of the extant designs take your approach #2. Really, #1 and #2 are the same strategy because they involve defining a virtual machine. You could start with the virtual machine used by an existing compiler, like CLISP, and make adjustments based on the ease or difficulty of implementing all of its features. But you need a VM that can run "without a net": there cannot be any support routines that run underneath the VM, so it needs to grow additional abilities, like virtual memory management and process control. There would then be hooks from the hardware side back into service code running on the VM to handle those tasks.
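To make the "virtual machine first" idea concrete, here is a minimal sketch in Python rather than Lisp. The opcode names and layout are invented for illustration; this is not CLISP's actual bytecode.

```python
# A toy stack VM: the kind of starting point approach #2 implies.
# Opcodes and instruction layout are invented, not any real Lisp bytecode.

def run(code, consts):
    """Execute a list of (op, arg) pairs; return the top of the stack."""
    stack = []
    pc = 0
    while pc < len(code):
        op, arg = code[pc]
        pc += 1
        if op == "PUSH":                 # push a constant from the pool
            stack.append(consts[arg])
        elif op == "ADD":                # pop two values, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "HALT":
            break
    return stack[-1]

# (+ 1 2) compiled by hand for the toy VM:
program = [("PUSH", 0), ("PUSH", 1), ("ADD", None), ("HALT", None)]
print(run(program, consts=[1, 2]))   # 3
```

A real "without a net" VM would of course need far more: memory management, process control, and the hooks back into service code mentioned above.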

_________________
:PI: :O2: :Indigo2IMP: :Indigo2IMP:
geo: lisp interpreters need a lot of temporary storage for maintaining the 'environment' and evaluating subexpressions. The #3 approach is doable (I believe SCHEME79 from MIT did this), but you miss out on a lot of optimizations that a compiled order code can give you, like turning access to a local variable, which is a linear search in an interpreter (ASSOC), into an indexed load from a base register.
There's also something to be said for not locking the semantics of the language too deeply into the hardware design, because languages evolve with new features (like new number types, logic variables, new array types, etc).
Of course you can still have an interpreter, but it will run faster when compiled than interpreting itself :)
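The ASSOC-versus-indexed-load contrast can be sketched in a few lines of Python (the function names are invented for illustration):

```python
# Interpreter-style lookup: scan an association list for the name, which
# is what Lisp's ASSOC does -- O(n) work on every variable reference.
def lookup(name, env):
    for n, v in env:
        if n == name:
            return v
    raise NameError(name)

# Compiled-style lookup: the compiler resolved 'y' to frame slot 1 at
# compile time, so at run time it's a single indexed load.
def slot_ref(frame, index):
    return frame[index]

env = [("x", 10), ("y", 20), ("z", 30)]   # the interpreter's environment
frame = [10, 20, 30]                      # the compiled code's frame

print(lookup("y", env))    # 20, found by linear search
print(slot_ref(frame, 1))  # 20, found by direct indexing
```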

_________________
:PI: :O2: :Indigo2IMP: :Indigo2IMP:
Some generate native code only, some generate a bytecode, others can do both. If you define a function and then (DISASSEMBLE 'my-function) you can see the instructions that the compiler generated for it (or for any predefined function).

For example: CLISP doesn't have a native code compiler; it always generates bytecode. This makes it one of the most portable implementations. ECL generates chunks of C code, runs them through the C compiler, and 'dl_load's them into the environment. There is a whole family of implementations that work that way, beginning with Kyoto Common Lisp in the mid 1980s.

Allegro CL, LispWorks, Clozure, and SBCL only generate native code. CMUCL has both bytecode and native compilers.
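If you don't have a Lisp handy, Python's standard dis module gives a rough feel for the same exercise, analogous to calling (DISASSEMBLE 'my-function) in a listener:

```python
import dis

def my_function(x):
    # the kind of small function you might disassemble to inspect
    return x * 2 + 1

# Print the bytecode the CPython compiler generated for the function,
# roughly analogous to (DISASSEMBLE 'my-function) in Common Lisp.
dis.dis(my_function)
```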

_________________
:PI: :O2: :Indigo2IMP: :Indigo2IMP:
I don't think the PPC64 port ever worked.

geo: while there can be some overlap, bytecode is usually designed so it will be easy for software to decode, whereas lisp machine instructions were meant to be decoded by hardware (*) and those criteria lead to differing instruction sets. For example, a hardware decoder is easier to build if the fields of different instruction types are aligned, since the same paths of the mux can be used. Symbolics used a fixed instruction width of 16 or 17 bits with essentially a single instruction field format, and around 10 special addressing modes. They had properties in common with both RISC and CISC designs; the instruction fields were fixed length, and their addressing modes were fixed per opcode; but after the hardware decoded the fields, it jumped to a microstore address where microcode performed the instruction. Most instructions used 1 to 2 microinstructions.

* Actually the MIT CONS/CADR machines decoded their instructions using microcode, and had writable microstore. So the machine was originally designed to be a flexible microengine that could run PDP11, Data General NOVA, or other kinds of instructions. The microcode even had the ability to modify itself.
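The "aligned fields" point can be sketched with masks and shifts. The 17-bit layout below is invented purely for illustration; it is not the actual Symbolics encoding.

```python
# Hypothetical fixed-field instruction word (NOT the real Symbolics format):
#   bits 16-9: opcode  (8 bits)
#   bits  8-0: operand (9 bits)
OPCODE_SHIFT = 9
OPERAND_MASK = (1 << 9) - 1

def decode(word):
    """Split a 17-bit instruction into (opcode, operand).
    Because the fields sit at fixed positions for every instruction,
    one set of wires (here: one shift and one mask) decodes them all."""
    return word >> OPCODE_SHIFT, word & OPERAND_MASK

word = (0x2A << OPCODE_SHIFT) | 0x1FF
print(decode(word))   # (42, 511)
```

Variable-length encodings, by contrast, can't know where the second field starts until the first is decoded, which is exactly the serialization problem CISC decoders fight.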

Oskar45: The 3600 is a real beast; it makes a Predator rack look cute. Total weight, including the 14" disks, is up to 800 lbs... There is a turbine inside for cooling that could probably amputate a whole hand.
About a decade ago I was visiting MIT and saw one that had been left in a courtyard for the scrappers. Leaving aside the lisp machine pixie dust, it is a very strange machine. (The L-machine series have a service processor called a FEP, a mc68000 that has access to the data paths of the main processor. One of the 18" [hex Unibus sized] boards has the FEP and its memory and bus logic. But the 3600 has an ADDITIONAL controller called a nanoFEP, made from an 8080 and squirreled away inside the top of the chassis. There is an LED character-matrix display, and buttons marked "NO" and "YES" connected to the nanoFEP control the power-on process. The machine could also be controlled entirely by a remote engineer dialed in to its built-in modem, when the key was in the "REMOTE" position.)

_________________
:PI: :O2: :Indigo2IMP: :Indigo2IMP:
geo: You're quick. In fact, MIT Lisp Machine Lisp does have a microcode compiler, that can compile lisp functions to microcode, where the hope is that they will run faster. Problems: microstore is very expensive, think L1 instruction cache, except in a dedicated memory that can't be reused. So there typically isn't much space there (MIT CADR was unusual in that it had a huge microstore, something like 16Kwords IIRC). Also, microcode is where traps (exceptions) are handled, so any function compiled to microcode cannot trap. This is a particular problem in a lisp machine, because in addition to things like page faults, memory references trap in many ways (type tags and invisible forwarding pointers). It must also be very careful about subroutine calls, because since the microstore is inside the CPU, any call stack is a dedicated CPU resource with a fixed depth. If microcode calls nest too deeply the machine crashes. So much for dynamic programming...

With the 3600, Symbolics moved hundreds of low-level functions out of microcode into regular macrocode, where they ran faster, for all the above reasons. I don't think a microcode compiler was written for the 3600, but of course there was a Lisp-syntax macro assembler that made writing the microcode simpler. A single line of microcode could pop arguments from the stack, check their tags, perform an arithmetic or logical operation, shift the result, check overflow, store or push it into various places, mark flags, and return: anything that the hardware can do in parallel in one cycle.

Writable microcode (WCS, writable control store) was a feature of some minicomputers in the '70s and '80s: the MIT CONS/CADR, Symbolics 3600, Three Rivers PERQ, and a number of DEC machines including the KL10, VAX-11/780, and 8800, and even the IBM S/370.
http://en.wikipedia.org/wiki/Control_store
While an architecture is young it can be valuable to fix problems in the field. At the end of the '80s the WCS fell out of style. (Today a whole machine can be compiled to FPGA, so there is an opportunity for patches even without microcode.)

_________________
:PI: :O2: :Indigo2IMP: :Indigo2IMP:
The people you listed are not "addicted to lisp machines". There is a difference between inventing something and idolizing it.

_________________
:PI: :O2: :Indigo2IMP: :Indigo2IMP:
geo wrote:
hmm all this sounds CISC like right? When i try to search a RISC LISP CPU, i saw someone implemented it but not sure it got done hmm


In some ways yes, in some ways no. "Classic CISC" like the Motorola 68020, NS32016, or VAX have variable-length instructions with "operand descriptors": After loading and decoding the first instruction word containing the opcode, the CPU may need to fetch and decode several more words of the instruction to know how to supply the operands. So a single opcode may need several microcode branches to resolve all the operand possibilities. For example, a 68020 instruction can have multiple operands, each of which employs a series of operand descriptors, with memory addresses calculated from pointers which are themselves loaded from memory. 68k instructions can be as short as 16 bits or as long as 176 bits.

VAX is more complex, with many 4- and 6-operand instructions and very complicated addressing modes. It is "orthogonal", which means that every operand can use any address mode, including the deferred addressing modes that indirect through memory. The longest VAX instruction is 448 bits long, except for a class of instructions like CASEW that are actually of unlimited length (limited by the machine's 4GB address space). Just the ADD opcode has over 30,000 addressing mode combinations.

A closer comparison could be made between a lisp machine like the 3600 and the PDP-10. They both have fixed-length instruction encodings with very few addressing modes. Both are word-addressed, with all operations done on 36-bit words. Both have at most one operand in memory, like many RISC machines. But both also have the property that an operand loaded from main memory may immediately redirect to somewhere else in memory, or not: invisible forwarding pointers. The PDP-10 has been called the "first lisp machine" because of this property and some properties of its operating systems.
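A software sketch of invisible forwarding pointers: on the real machines the memory hardware chases these transparently on every load, so compiled code never sees them, but the chase can be made explicit. The Forward class here is invented for illustration.

```python
# Software model of invisible forwarding pointers (illustrative only).
class Forward:
    """A cell whose contents have moved: it points at the new location."""
    def __init__(self, target):
        self.target = target

def mem_read(cell):
    """Read a 'memory cell', following forwarding pointers until a real
    datum is reached -- the step lisp machine hardware did on every load."""
    while isinstance(cell, Forward):
        cell = cell.target
    return cell

old_location = Forward(Forward(42))   # the datum moved twice, e.g. by a GC
print(mem_read(old_location))         # 42 -- the indirection is invisible
```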

Overall, I think that the instruction sets of lisp machines are not Complex, they are just unfamiliar. They have a lot in common with old Burroughs mainframes like the B5000: most operations take place on a stack, the top of which the CPU keeps in registers for fast access. They also are object-oriented: compiled code basically never addresses memory directly, instead relying on the CPU to supply subsidiary objects to it. So it has instructions that look like Lisp functions: CDR, RPLACA, AREF. The benefit of that is that even if the code has bugs (it always does), it is impossible for it to reach or modify objects inaccessible to it through the supplied primitives. For example, array access is always bounds-checked, and the size of the datum returned is under the control of the array, not the compiled function. So no matter what, when we access an array we can only get out of it the elements that are its proper members.

Quote:
btw, just finished watch the video on Kalman Reti's talk at Boston last year? wow!!! watching the demo using VLM, i cannot believe what i saw! all those cool features already done on an 80's machine? my God.. and i think even some of those features are not available on todays OS right? why so? is it because of the C language barrier? or are just developers lazy to implement such cool feature? esp this one: you can change the code even the program is running, then just recompile it, tada!! how i wish Lisp machine will return and taking advantage on todays hardware techs..


You're right again: the real valuable stuff is/was the software environment and what it made possible. One of the reasons Symbolics went bankrupt [another is that they made bad real estate investments before Black Monday in 1987...] is that too many resources were put into special hardware products, which didn't have the economies of scale of their competitors' parts. The VLM shows that the system could have run on "commodity hardware" (is AXP really a commodity? I don't know) using a virtual machine layer. VLM was faster than all Symbolics hardware even when run on slow 200 MHz Alphas.

Today's Common Lisp environments do have debugging and recompiling during runtime, but some advanced Genera features (like its transparent file access, editor, and error recovery) are hard to reimplement in a way that is compatible with the underlying operating systems.

_________________
:PI: :O2: :Indigo2IMP: :Indigo2IMP:
I used to use Altsys's Metamorphosis Pro on my classic Mac. It works very well.
http://vimeo.com/14758980

_________________
:PI: :O2: :Indigo2IMP: :Indigo2IMP:
Hi Geo, good choice of learning material :)

Are you talking about this video?
http://ocw.mit.edu/courses/electrical-e ... n-to-lisp/

Hal Abelson is an excellent lecturer, and he grounds the course in very solid ideas. He introduces his subject as Lisp, but it's important to know that Common Lisp differs in several important ways from Scheme, the language he uses in the course. Still, there is no introduction of such high pedagogical quality for Common Lisp, so it's a fine course as long as you know those differences.

In the slide where he shows what happens with SQRT/TRY etc., there is an error: it shouldn't say 1.3333.
[If you use rationals you can see that (* 4/3 3/2) is 2, so 1.3333 isn't any closer an estimate to (sqrt 2) than 1.5 is.] Your figure is correct.
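The bracketed claim is easy to check with exact rationals, e.g. in Python:

```python
from fractions import Fraction

x = Fraction(2)
guess = Fraction(3, 2)              # the 1.5 estimate from the lecture
quotient = x / guess                # 4/3 = 1.3333..., what the slide shows
improved = (guess + quotient) / 2   # the actual next (Heron) estimate

print(quotient * guess)  # 2: so 4/3 and 3/2 sit symmetrically about sqrt(2)
print(improved)          # 17/12 = 1.41666..., genuinely closer to sqrt(2)
```

So 1.3333 is just the quotient x/guess, not an improved estimate; the improvement only happens after averaging.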

_________________
:PI: :O2: :Indigo2IMP: :Indigo2IMP:
Geo: I have heard that a functioning workstation based on one of the Scheme-chips was made at MIT, but it wasn't used to teach the intro course. The editor screen shots show a mode-line with the word "Edwin", which is a giveaway that they are using C-Scheme. It ran on Unix, VMS, and DOS, so I can't say for sure which they would have used. (Smart professors probably didn't use DOS if they could help it.)

So to understand why they didn't use Lisp Machines for their course, it's probably enough to say that Lisp Machines were quite expensive ($30-50K in 1986, when the videos might have been produced). But there are other ways that Lisp (including Common Lisp and the dialect named Zetalisp) differs from Scheme, and the most important is what Will Clinger called the safe-for-space property in a 1998 paper.

When Abelson says that you really don't need control structures like FOR loops, he is right in the sense that they can all be expressed by clever use of functions (he calls them procedures). But to write an infinite loop with recursive functions, it needs to be safe to call functions infinitely deeply. The only way to do this safely is to use a different method of temporary storage from the conventional stack, since stacks grow each time a new function is called, and eventually the address space (or the virtual memory device) would become exhausted. Scheme implementations instead put their "frames" on the heap and make them garbage-collectable like other data. So as we keep calling functions infinitely deeply, the "frames" that we will never return to become orphaned and get swept away by the collector, keeping memory from running out.

Lisp machines that ran systems like Genera were designed differently, with the stack as a fundamental part. So if your function is infinitely recursive, what will happen is that eventually the stack will exceed its limit and a SERIOUS-CONDITION will be signaled. There are ways to compile Scheme code on lisp machines and other CL systems, but they don't have the important safe-for-space property.
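Python makes a convenient stand-in for the stack-based case, since it also pushes a frame per call and performs no tail-call elimination (the analogy is loose, but the failure mode is the same):

```python
import sys

# Iteration via function calls, as Abelson writes it: a tail-recursive loop.
def countdown(n):
    if n == 0:
        return "done"
    return countdown(n - 1)   # a tail call -- but Python still pushes a frame

# Stack-based implementations (CPython here, like Genera's deep stacks)
# eventually exhaust the stack:
sys.setrecursionlimit(1000)
try:
    countdown(10_000)
except RecursionError:
    print("stack exhausted")  # the SERIOUS-CONDITION analogue

# A safe-for-space implementation would reclaim each dead frame as garbage;
# in Python we can only fake that with an explicit loop (hand-made
# tail-call elimination), which runs in constant space:
def countdown_iter(n):
    while n:
        n -= 1
    return "done"

print(countdown_iter(10_000))  # done
```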

_________________
:PI: :O2: :Indigo2IMP: :Indigo2IMP:
Most likely it is a 62-minute DAT cassette, as HHB is an audio company. You need a DDS-1 compatible drive that can be made to work with cassettes that lack the DDS leader marks (the absence of those marks is what normally prevents audio-grade DATs from being used in data applications).

_________________
:PI: :O2: :Indigo2IMP: :Indigo2IMP:
Hi. All SSDs have garbage collection: they know which blocks have ever been written, and which have been erased. This information is necessary to provide wear leveling.

The problem with hosts that do not provide "trim" or "discard" signals is that blocks the filesystem considers free cannot be freed on the SSD side. There is nothing magic about a trim command; it is just shorter and faster than overwriting the block with zeros. So as writes accumulate over time, the wear leveler has a smaller set of free blocks to use.

The other major SSD problem, write amplification, also does more damage when the wear leveler has a smaller set of free blocks to work with.
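A toy model of the effect (the class and all the numbers are invented purely for illustration): without trim, blocks the filesystem freed stay "live" as far as the drive knows, and the wear leveler's pool shrinks.

```python
# Toy model of an SSD's free-block pool, to illustrate why a missing trim
# command starves the wear leveler. Not a model of any real drive.
class ToySSD:
    def __init__(self, nblocks):
        self.free = set(range(nblocks))   # never-written or erased blocks
        self.live = set()                 # blocks the drive must preserve

    def write(self, block):
        self.free.discard(block)
        self.live.add(block)

    def trim(self, block):
        # the host tells the drive the block's contents are garbage,
        # so the wear leveler may erase and reuse it
        self.live.discard(block)
        self.free.add(block)

ssd_no_trim = ToySSD(100)
ssd_trim = ToySSD(100)
for b in range(80):                # the filesystem writes 80 blocks...
    ssd_no_trim.write(b)
    ssd_trim.write(b)
for b in range(80):                # ...then deletes the files, but only
    ssd_trim.trim(b)               # one drive is told about it

print(len(ssd_no_trim.free))  # 20  -- wear leveler starved
print(len(ssd_trim.free))     # 100 -- the full pool is available again
```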

_________________
:PI: :O2: :Indigo2IMP: :Indigo2IMP:
I recall another poster had problems like this, you might find it by searching the forum. The O2's AV hardware has a memory access limitation that can cause DMA to stop if there is not enough physical memory available within a certain range. I think the problem is worse on some IRIX versions later than 6.5.15.

_________________
:PI: :O2: :Indigo2IMP: :Indigo2IMP:
I know that Ciprico made GIO64 Ultra SCSI cards. Did they also make Fibre Channel HBAs?

_________________
:PI: :O2: :Indigo2IMP: :Indigo2IMP:
It looks like a broken PIMM. You should try blowing out/cleaning the connectors between the processor module and the mainboard, but it's likely toast. Indigo2 processor modules are still relatively common on the used market.

_________________
:PI: :O2: :Indigo2IMP: :Indigo2IMP:
I'd hesitate to hop on this without some kind of scientific HUI study... There are six-degree-of-freedom devices that detect force, like the SpacePilot, and just from first principles a tool that doesn't move would seem to be more desirable.

_________________
:PI: :O2: :Indigo2IMP: :Indigo2IMP: