
I'm going fiber - Page 2

nekonoko wrote:
pentium wrote: Oh dear, this is getting a bit out of hand.
I just assumed that so long as I stuck to the ethernet protocol while using fiber I would be fine.


FDDI and optical fibre fast ethernet are not the same thing.

To quote a wise man on the first page of this thread:

jan-jaap wrote: I hope you're not mixing up ethernet over fiber (using transceivers), FDDI and ATM. They all use more or less the same medium (fiber) but the protocol layer is of course totally incompatible so you can't mix them.


If memory serves correctly:
-Ethernet over fiber is really no different than regular ethernet.
-FDDI works in the same way token ring does.
-ATM will make your head explode if you fail to grasp the concept of packet switching.

That being said, this would mean that I would require at the least a conversion from the FDDI protocol to the Ethernet over fiber protocol. If I wanted to use my two ATM cards in the Onyx and Crimson, I would then have to convert both ATM and FDDI to Ethernet over fiber. Then we have to remember that the FDDI and ATM converters, as well as the Cisco gear, will give off quite a bit of heat.
:P
Why does networking have to suck so much?
:Crimson: :Onyx: :O2000: :O200: :O200: :PI: :PI: :Indigo: :Indigo: :Indigo: :Octane: :O2: :1600SW: :Indigo2: :Indigo2: :Indigo2IMP: :Indigo2IMP: :Indy: :Indy: :Indy: :Cube:

Image <-------- A very happy forum member.
pentium wrote: If memory serves correctly:
-Ethernet over fiber is really no different than regular ethernet.

Correct. Same protocol, different media layer. Allows you to cover bigger distances. Also referred to as 10BASE-FL, 100BASE-FX or (for gigabit) 1000BASE-SX/LX.

-FDDI works in the same way token ring does.

FDDI allows you to build ring topologies, like token ring. But at the protocol level they're totally different, so an optical transceiver won't allow you to mix token ring and FDDI. The downside of a ring topology is that you don't have a ring unless all systems are running. A concentrator allows you to treat FDDI like a star topology which is more convenient. That's all. Technically, you don't even need one.
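The ring-versus-concentrator point above can be captured in a toy model. This is just an illustration of the topology argument, not real FDDI/SMT behaviour, and the station names are made up:

```python
# Toy model: a plain ring only carries traffic when every station
# is up, while a concentrator can bypass a dead station's port.
def ring_ok(stations, concentrator=False):
    """Return True if the ring can carry traffic.

    stations: dict mapping station name -> True (up) / False (down)
    """
    if concentrator:
        # The concentrator bypasses dead ports, so any live station
        # keeps the ring intact.
        return any(stations.values())
    # Without one, every station must be up to forward traffic.
    return all(stations.values())

stations = {"crimson": True, "onyx": True, "indigo": False}
print(ring_ok(stations))                     # -> False (ring broken)
print(ring_ok(stations, concentrator=True))  # -> True  (dead port bypassed)
```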

-ATM will make your head explode if you fail to grasp the concept of packet switching.

I have only one machine with an ATM interface, so I can't comment. But somehow I expect the details of the implementation (packet switching) to be hidden by the tcp/ip stack.
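That expectation is exactly what the socket API delivers: application code never sees the link layer. A minimal Python sketch of a loopback round trip shows the point; the same code runs unchanged whether the packets underneath ride Ethernet, FDDI or ATM:

```python
import socket

# A loopback echo: the application only sees TCP/IP. Whatever
# medium carries the packets is invisible at this layer.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
conn, _ = server.accept()

client.sendall(b"hello over any medium")
print(conn.recv(64).decode())          # -> hello over any medium

conn.close(); client.close(); server.close()
```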

That being said, this would mean that I would require at the least a conversion from the FDDI protocol to the Ethernet over fiber protocol. If I wanted to use my two ATM cards in the Onyx and Crimson, I would then have to convert both ATM and FDDI to Ethernet over fiber.

You're getting it.

Then we have to remember that the FDDI and ATM converters, as well as the Cisco gear, will give off quite a bit of heat.

Don't forget the noise.

Why does networking have to suck so much?

No, you called this on yourself. You're enjoying the pain. Admit it :)

Schematically, this is what my network looks like:
network.gif (7.54 KiB)

To me, network infrastructure is not a goal in itself; it is there to support something else. So I simply use what everybody else is using. It has to be cheap (to buy, but also to operate), and as simple as possible. I use regular 10/100/1000Base-TX copper wiring where possible, convert everything that has a different media interface (AUI etc.) to RJ45 right away, and use consumer-grade 1000Base-TX switches. You can find fanless 16-port switches that consume only a couple of watts.

Only where fast ethernet is not available do I use FDDI. Nothing else: no ATM, no tcp/ip over Fibre Channel, etc. As you have found out by now, each new flavour requires a router to connect it to the rest of the world, and adds a new subnet to your local network, which is a pain to administrate. As you can see, I have three subnets to administer, which is two too many as far as I'm concerned.

My FDDI concentrator is an IBM 8244, which is a 12-port device in 1U 19" form factor. It is fairly quiet and not too power hungry, as far as FDDI concentrators go.

I've got some pictures of (SGI) FDDI cards and the concentrator here: http://www.vdheijden-messerli.net/sgist ... 2.27-fddi/

If I were you, I would keep things as simple as possible for now. Establish a working baseline. Have you even tried to connect two systems via FDDI? Just a straight connection, and make them talk to each other?

That's about what I know of FDDI. One last thing: some of my SGI systems are dual homed (the Crimson and Onyx in this picture). In normal operation, they are configured to have their ethernet interface(s) disabled and use only FDDI. But the cable is there, so I can netboot them and perform an installation from the install server. Actually, if everything is configured properly, it only downloads the miniroot over ethernet. The actual software installation goes via FDDI. Makes a big difference if you want to install IRIX 6.5.22 + compilers + software on an Onyx1 :)
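Sketched as commands, a dual-homed setup like that might look something like the following. This is a hedged sketch only: the interface names (ec0 for ethernet, xpi0 for FDDI) are typical IRIX names but may differ on your hardware, so check `netstat -i` first.

```shell
# Daily traffic goes over FDDI; the ethernet interface is parked
# but the cable stays plugged in so the PROM can still netboot
# the machine and fetch the miniroot from the install server.
ifconfig ec0 down
ifconfig xpi0 192.168.2.10 netmask 255.255.255.0 up
```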
To accentuate the special identity of the IRIS 4D/70, Silicon Graphics' designers selected a new color palette. The machine's coating blends dark grey, raspberry and beige colors into a pleasing harmony. ( IRIS 4D/70 Superworkstation Technical Report )
jan-jaap wrote:
-ATM will make your head explode if you fail to grasp the concept of packet switching.

I have only one machine with an ATM interface, so I can't comment. But somehow I expect the details of the implementation (packet switching) to be hidden by the tcp/ip stack.

Once an ATM network is set up, all the complexity is indeed generally hidden behind the TCP/IP stack, but hosts on ATM nets usually do require some driver tweaking and tuning.

However, getting an ATM network up and running can be much more complex and challenging than getting FDDI or ethernet running, regardless of the protocol which is going to run on it.

To complicate things further, getting good drivers for IRIX systems is a challenge, even with SGI-made network interfaces (the old Challenge L/XL ATM cards, for example). You'll often find cards that only have IRIX 6.2 drivers, or 6.3 drivers, or 6.4 drivers, or even 5.3 drivers, and for cards that have 6.5 drivers, they often have serious glitches.

For any hobbyist who wants to experiment with ATM, I highly recommend doing a lot of product research first and sticking as much as possible to a single hardware manufacturer for NICs and switches.
To complicate things further, getting good drivers for IRIX systems is a challenge, even with SGI-made network interfaces (the old Challenge L/XL ATM cards, for example). You'll often find cards that only have IRIX 6.2 drivers, or 6.3 drivers, or 6.4 drivers, or even 5.3 drivers, and for cards that have 6.5 drivers, they often have serious glitches.

Luckily I do have the IRIX 6.2 and 6.5 drivers for my ATM cards; however, even before you said they were a complicated thing to set up, I had my doubts about using them anyway. My alternative for the two desksides, should ATM networking not work out, was to just plug in an AUI transceiver and live with a 10Base fiber ethernet connection. Speaking of that, the majority of the network was to be made from either fiber ethernet PCI cards or AUI ethernet-over-fiber transceivers, with 100Base speed required for only three systems (at the least) and the rest running at 10Base, which is okay with me unless I have to start moving big files around.
The only system I own that would have a problem with ethernet over fiber is my Indigo: it has an FDDI card installed.

Speaking of FDDI, look what popped up on eBay. Too bad it only has a CDDI board, though. Is something like this what you recommend?
:Crimson: :Onyx: :O2000: :O200: :O200: :PI: :PI: :Indigo: :Indigo: :Indigo: :Octane: :O2: :1600SW: :Indigo2: :Indigo2: :Indigo2IMP: :Indigo2IMP: :Indy: :Indy: :Indy: :Cube:

Image <-------- A very happy forum member.
pentium wrote: Oh dear, this is getting a bit out of hand.
I just assumed that so long as I stuck to the ethernet protocol while using fiber I would be fine.
I was wrong...very wrong.


Heh, it's not like we didn't try to warn you.

If you're still going to try to pursue the vain attempt to install a FDDI network to replace some of your Ethernet infrastructure, be prepared to invest a bit of money, and, more to the point, potentially hundreds of hours.

I see all of the suggestions made so far are about Cisco gear. I know nothing about Cisco FDDI gear whatsoever, but I'm sure it will be a little less impossible to scare up the management software for the Cisco stuff.

The FDDI network that I set up was all Digital Equipment Corporation gear, from end-to-end. DEChub 900 backplane, four power supplies, FDDI DECconcentrator 900MX, CDDI DECconcentrator 900TH, and a couple 32-port Ethernet DECrepeater 900TMs.

None of it had anything to do with Cisco, though, so I can't really comment on any of its use.

Chris
:O2000R: (<-EMXI/IO6G) :O200: :O200: :O200: (<- quad R12k O200 w/GIGAchannel and ESI+Tex) plus a bunch of assorted standalone workstations...
The Keeper wrote: The FDDI network that I set up was all Digital Equipment Corporation gear, from end-to-end. DEChub 900 backplane, four power supplies, FDDI DECconcentrator 900MX, CDDI DECconcentrator 900TH, and a couple 32-port Ethernet DECrepeater 900TMs.

Yep - we had good success with DEC-based FDDI concentrators back in the day, too.
Oh, and I forgot the DECbridge 900MX to do the bridging between CDDI/FDDI and Ethernet.

Even then, the software wasn't generally available. I had to pull strings to get a CD and a demo license key, back in the day. I don't even know if the software is available at all at this point.

It worked pretty well. There's a reason why it worked -- the original acronym for Ethernet was "DIX", with Digital being the first letter. Digital was a leader in networking.

But 1U 10/100 switches are a lot smaller, a lot quieter, a lot less expensive to feed, don't need a babysitter, and are practically free at this point. Speed is on the same order of magnitude. GigE is one order of magnitude faster, and not a whole lot more expensive than the shipping cost on the FDDI beasts.


If you really want to get into fiber, and want to do something useful with it, forget FDDI and start looking at Fibre Channel. It's a lot easier to work with than FDDI, and you can learn something from it that's very applicable in today's technology market. 1-Gbit is cheap, and drivers are readily available.


Chris
:O2000R: (<-EMXI/IO6G) :O200: :O200: :O200: (<- quad R12k O200 w/GIGAchannel and ESI+Tex) plus a bunch of assorted standalone workstations...
You know what, I think I'll stay with what I've got for a little longer.
:Crimson: :Onyx: :O2000: :O200: :O200: :PI: :PI: :Indigo: :Indigo: :Indigo: :Octane: :O2: :1600SW: :Indigo2: :Indigo2: :Indigo2IMP: :Indigo2IMP: :Indy: :Indy: :Indy: :Cube:

Image <-------- A very happy forum member.
pentium wrote: Since hubs, transceivers and cards are getting cheaper (and since almost all my computers are in the same part of the house) I thought it might be nice to finally upgrade from my tangle of 10Mbit BNC ThinNet, 10Mbit CAT 5e, 100Mbit CAT 5e, AppleTalk and Token Ring to just Token ring (so I can network my old AIX 1.3 box) and Fiber.
The cables I am getting should cover most of my computer room however I'm not going to get all the systems. So far, it will look like I will need seven PCI fiber cards and seven AUI fiber transceivers. If I get a fiber card for the Onyx and the Indy we can take out two transceivers. If I can find two Sbus Fiber cards for the SUN systems, I can take out two more transceivers.
The only systems that I don't think I can network are my macs (both PCI and NuBus) and my Intel (white box) NeXTStep 3.3 system.

The only items I currently have that are related to fiber are two FORE ATM cards for the Crimson and Onyx and a card installed in my Indigo. What do you think?

I apologize for the length of this post. You seem to really want to do something with that fiber; that's great, but don't waste too much money pushing light through the orange stuff.

Well, by now you've figured out the different protocols you might pass through your fiber cables. I've messed with fiber for Ethernet a little, and quite honestly it's boring. Once you understand it's the same idea as any other Ethernet, it becomes, well, the same as any other Ethernet. Figure out that Tx goes to Rx and Rx to Tx, and that if it doesn't work you need to swap one side, and it's just like plugging in RJ45 cables. Well, it's twice as much work.

But not to discount the experience... I've worked with many people who just didn't know/realize that Ethernet over fiber isn't much different from Ethernet over copper, and were either confused, scared or impressed by it. Sometimes they think it's "fiber" and somehow different from Ethernet. I've even seen people think "it's fiber, so it must be fast"... never mind that it was 10Mbit Ethernet (a mixed 10Base-T and 10Base-FL network with 200+ nodes on a single collision domain, all hubs).

While switches might exist, I'm not sure I've ever seen anything other than hubs in the 10Mbit fiber ethernet world. 10Mbit fiber Ethernet isn't much better than the 10Base-2 topology: not any faster, just a little less fragile. Make the jump to 10Base-T and you'll have less to worry about.

The 3Com CoreBuilder 3500 came up; it's a solid piece of equipment, a little noisy but no worse than that ML370 I saw in your picture. The CoreBuilder is also a layer-3 switch, so you could make your network nice and complicated if you like. I have one if you want it, but it's full of 100Base-T cards, no fiber. 100Base-FX cards come up on eBay often, but it sounds like you want to connect old stuff, so you're stuck at 10Mbit and that doesn't help you.

Unlike a $40 gigabit ethernet switch, fiber gear is one speed. It won't autonegotiate to 10/100/1000Mbit like a 10/100/1000Base-T switch; it is the speed it is. You get to choose full duplex if it's a switch, you get to swap the strands for a crossover, and it's 10Mbit or 100Mbit or 1000Mbit when you buy it.

Getting the ATM gear working with an inexpensive switch from eBay, while risking head explosion, might be worthwhile for the speed and experience. I have no experience with it myself. You'll want a switch that can convert to Ethernet to get to the rest of your network.

Fiber has a place...just not on machines 20 feet apart. Use it between wiring closets and buildings. Use it for high speed connections (10gigabit Ethernet!). Use it for protocols that don't have a copper media layer (Fibre Channel storage...the faster ones). Set it up in your lab for experience, but don't bother making everything fiber.

10Base-FL, 100Base-FX and 1000Base-SX are all similar. 10Base-FL is often ST connectors, 100Base-FX is often SC, and 1000Base-SX is usually MT-RJ (with some older stuff using SC). But at the end of the day you plug two fiber strands in on each end and check for link lights (on both ends).

I think you could get twisted-pair (copper RJ45) AUI transceivers and enough patch cables for less than the fiber AUI transceivers, and it would be better in the long run: get the 10Base-2 network out. Patch cables of 25 feet and under can be found cheap if you look around, and you can probably get them free from friends who stole them from work.

While you're dreaming of fiber, consider media converters, and consider using fiber on just a couple of devices for the novelty, experience or whatever. Check out IMC Networks, for example: they make/made modular chassis that accept different converters for 10 or 100Mbit ethernet and different fiber connectors. You could use one with a couple of modules in it to front-end a 10/100 switch, to run your Mac with the AAUI converter you already grabbed, plus something else.
The Keeper wrote: If you really want to get into fiber, and want to do something useful with it, forget FDDI and start looking at Fibre Channel.

Apples and oranges. Or maybe steam engines and diesel :mrgreen:

For the first couple of generations of SGIs you have two choices: 10Mbit/s ethernet and 100Mbit/s FDDI. And maybe ATM, but certainly not Fibre Channel.

Later generations of SGIs might take Fibre Channel, but isn't it true that only the Prisa adapters for the Onyx do tcp/ip over Fibre Channel (which was the whole aim in this case)? These same systems all have 100Mbit/s ethernet, so FDDI is irrelevant for them, and for most of them gig ethernet is also available. Still, I admit that gigabit tcp/ip over Fibre Channel in an Onyx1 has a certain geek appeal 8-)

If you want to improve your resume, everything is better than FDDI, which is just another niche technology that went out of fashion a decade ago.

I have no experience with DEC or Cisco concentrators, but my IBM has a built in configuration menu on a serial console so I don't need anything special for it. I guess the default settings were sensible because I never had to change anything to make it work. Most systems (Indigo/Indy/Indigo2/Onyx/PC) were 'plug and play'. The only exception was the VME FDDI card for 4D systems. It's an Interphase 4211 V/FDDI board, which was modified by SGI, and they changed the fiber connectors around so I wired it up incorrectly. The fact that one of my V/FDDI boards was dead didn't make that one any easier to diagnose.
To accentuate the special identity of the IRIS 4D/70, Silicon Graphics' designers selected a new color palette. The machine's coating blends dark grey, raspberry and beige colors into a pleasing harmony. ( IRIS 4D/70 Superworkstation Technical Report )
I admit that I still need to get things set up, but I use Cabletron concentrators, and those should work fine.
The MMAC-3/5/8 models are basically a chassis with a backplane consisting of multiple busses, a power supply, and a fan tray.
You insert cards according to your communication needs; these cards have their own CPU, firmware, and console port, and basically run independently; they communicate with each other through the busses.
All parts are hot-swappable.

My setup consists of a FDMMIM-04 card (4 master FDDI, 2 pri/sec FDDI, bridges to the IRM-2), and an IRM-2 (ethernet module, 1 FOIRL port, 1 AUI port).
I could also add a FDCMIM-08 (8 FDDI master ports) to expand FDDI connectivity.
Workstations connect with their FDDI card to the concentrator, which converts FDDI to Ethernet protocol via the IRM-2 card, which connects to the rest of my (Ethernet based) network.
-= I reject reality, and substitute my own =-

1 Indigo R3k-33 32MB XS24-Z;
1 Indy R5k-180 256MB XZ;
1 Indy R4k-175 64MB XL;
2 Indigo2 R10k-195 512MB MaxImpact;
2 Indigo2 R4k-200 256MB (XL+Extreme);
2 Octane Dual R12K-300 1024MB (MXI+V6).
jan-jaap wrote:
The Keeper wrote: If you really want to get into fiber, and want to do something useful with it, forget FDDI and start looking at Fibre Channel.

Apples and oranges. Or maybe steam engines and diesel :mrgreen:


Yes, steam engine and diesel engine is a very appropriate analogy. Fibre Channel is basically the next generation of FDDI technology.

Both engines use roughly the same infrastructure.

However, with steam engines, you can't get the parts, there is no support from the original manufacturers, and there are very few people that you can turn to for help.

Diesel engines are still in use today. Older ones are easy to come by, are inexpensive, and they are still compatible with the models of diesels that are manufactured today.

The question that has to be asked at this point is -- what was the original intent? I think the original intent was to have orange wire running around the room. If that's what it was about, then FDDI or FC would be equally viable.

jan-jaap wrote: Later generations of SGI's might take fibre channel, but isn't it true that only the Prisa adapters for Onyx do tcp/ip over fibre channel (which was the whole aim in this case)?


Correct, but only as long as you don't use a Silkworm switch. For some reason, the Brocade switches don't like to work with the Prisa. Assuming you can find a Prisa for less than the price of a used car. They're rare as hen's teeth at this point.

There was also an Indigo2 variant with the GIO interface.

jan-jaap wrote: If you want to improve your resume, everything is better than FDDI, which is just another niche technology that went out of fashion a decade ago.


Not just improve your resume, but also perform a general purpose task. FDDI is not capable of being anything more than just another 100Mbit IP-like network transport.

Fibre Channel has been supported on all major workstation and PC platforms for the past 15 years, including SGI (I have many), Sun (Ultra 10), HP (C240), IBM (43p), DEC (PC164LX-based), Mac (PPC Mac and G4), and PC (I have many), and continues to be supported today.

With my Fibre Channel SAN, I share a 200-CD jukebox, DVD-ROM drive, CD-RW burner, 60-tape/6-drive DLT library, and a fast FC disk drive for occasional PC backups using Norton Ghost, to every workstation/PC in the house.


I guess that's enough FC evangelizing for now.

I'm not trying to say that getting into FDDI at this stage of the game is stupid. I've just been trying to make sure that anyone that wants to start playing with FDDI now, knows what they're getting into.

Just as the "wanted Origin 2000 or Onyx2 to start a business for under $250" thread played out, this thread is similarly playing out. The poster of that thread wants a purple fridge, and the poster of this thread wants to use orange wire. Sometimes more thought has to be given to enterprise-level technology than just colors, and the question "what do you want to do with it" has to be answered.


Chris
:O2000R: (<-EMXI/IO6G) :O200: :O200: :O200: (<- quad R12k O200 w/GIGAchannel and ESI+Tex) plus a bunch of assorted standalone workstations...
The Keeper wrote: I guess that's enough FC evangelizing for now.

Nah, you're the resident fibre channel guru, we wouldn't expect anything else ;)
The Keeper wrote: [...] the poster of this thread wants to use orange wire. Sometimes more thought has to be given to enterprise-level technology than just colors, and the question "what do you want to do with it" has to be answered.

Hehe, somehow 'pentium's threads always get very long. Back to page 1:
pentium wrote: I thought it might be nice to finally upgrade from my tangle of 10Mbit BNC ThinNet, 10Mbit CAT 5e, 100Mbit CAT 5e, AppleTalk and Token Ring to just Token ring (so I can network my old AIX 1.3 box) and Fiber.

I think he wants tcp/ip networking, not a SAN. I wouldn't mind a SAN though :D Maybe once my KVM project is finished ...
To accentuate the special identity of the IRIS 4D/70, Silicon Graphics' designers selected a new color palette. The machine's coating blends dark grey, raspberry and beige colors into a pleasing harmony. ( IRIS 4D/70 Superworkstation Technical Report )
The Keeper wrote:
jan-jaap wrote: Later generations of SGI's might take fibre channel, but isn't it true that only the Prisa adapters for Onyx do tcp/ip over fibre channel (which was the whole aim in this case)?


Correct, but only as long as you don't use a Silkworm switch. For some reason, the Brocade switches don't like to work with the Prisa. Assuming you can find a Prisa for less than the price of a used car. They're rare as hen's teeth at this point.

Chris

Really??
I guess I was lucky then, because I just bought a couple of them on eBay in December.
Paid $30 for them.
Only one of the boards has the optical adapter.
According to the seller, the boards were working pulls.
I just got them home and was thinking of installing them this weekend.
Guess I was lucky to find them :)

I bought it to start fiddling with Fibre Channel and to get faster disk access in the Onyx, which is limited to 20MB/s on the SCSI channel. I'll PM you, The Keeper, when I have it show up in hinv; maybe you can recommend a starter kit for me?

But are you saying I can use it as a network card as well???
I thought Fibre Channel was only a storage protocol.
So there are fibre switches that can convert it to Gigabit Ethernet then?
And IRIX supports running the Prisa as a NIC?

//debug

Mein Führer, I can walk!
deBug wrote: But are you saying I can use it as a network card as well ???


Apparently so - here's a quote from the manual:

/etc/hosts

This is the file used for configuring NetFX Fibre Channel ports as IP interfaces. Each local Fibre Channel port that we wish to use as an IP interface must have an entry in /etc/hosts. The IP interface name must match the port's npname in /etc/NLPorts. Each remote IP port can be defined either in /etc/hosts or by using a name service protocol such as NIS or DNS. However, keep in mind that the name in the NIS or DNS maps must still match the entry for that remote port in /etc/NLPorts.
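Concretely, the pairing the manual describes might look like this. The hostnames, addresses and the npname `fcip0` here are made up for illustration, not taken from the manual:

```
# /etc/hosts -- one entry per local FC port used as an IP interface
192.168.2.10   fcip0       # name must match this port's npname in /etc/NLPorts
192.168.2.11   onyx-fc     # remote FC port (could also come from NIS or DNS)
```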


BTW, you can download the PDFs here:

http://futuretech.blinkenlights.nl/prisa/

I have the drivers on my FTP if you don't already have them.
Twitter: @neko_no_ko
IRIX Release 4.0.5 IP12 Version 06151813 System V
Copyright 1987-1992 Silicon Graphics, Inc.
All Rights Reserved.
deBug wrote: I guess I was lucky then cause I just bought a couple of them on eBay in December.
Paid 30$ for them.


Guess so!

deBug wrote: Only one of the boards has the optical adapter.


What you'd be looking for to complete the rest would either be called a "GLM", or Gigabit Loadable Module, or more likely at this point, Emulex LP6000 or LP7000 PCI cards with GLMs attached to them. Just yank the GLM off the LPx000 and drop it onto your cards. If you need more ports, that is.

deBug wrote: But are you saying I can use it as a network card as well ???
I thought fibre channel was only a storage protocol.
So there are Fibre switches that can convert it to Giga bit Ethernet then ?
And IRIX supports running the Prisa as a NIC?


Fibre Channel is a transport protocol, and whatever you layer on top of it is your choice. You can do SCSI-FCP or IP-FCP, as examples.

There are no switches that convert from FC to Ethernet. If your drivers support a Fibre Channel HBA as a network device, then you can talk to other Fibre Channel HBAs that also act as network devices, but you would need to set up a pc/workstation to route between the two.
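On IRIX, that routing box could be sketched roughly like this. This is a hedged sketch only: `ec0` is a typical IRIX ethernet interface name, `fcip0` is a made-up name for the FC IP interface, and you should check the actual names and tunables on your system.

```shell
# Dual-homed box routing between the FC-IP segment and the LAN.
ifconfig ec0   192.168.1.1 netmask 255.255.255.0 up   # ethernet side
ifconfig fcip0 192.168.2.1 netmask 255.255.255.0 up   # FC side (hypothetical name)
# Turn on IP forwarding via the kernel tunables (systune on IRIX),
# then point hosts on each segment at this box as their gateway.
```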

Chris
:O2000R: (<-EMXI/IO6G) :O200: :O200: :O200: (<- quad R12k O200 w/GIGAchannel and ESI+Tex) plus a bunch of assorted standalone workstations...
The Keeper wrote: What you'd be looking for to complete the rest would either be called a "GLM", or Gigabit Loadable Module, or more likely at this point, Emulex LP6000 or LP7000 PCI cards with GLMs attached to them. Just yank the GLM off the LPx000 and drop it onto your cards. If you need more ports, that is.
Chris


Thanks Chris, great info!
Much appreciated!
I'll grab a few LP7000s on eBay and get the other card going.

//deBug
Mein Führer, I can walk!
Interestingly, I just found a bag with a few GLMs in it. If you can't find any LP6000 or LP7000 cards in Europe, let me know. I'd be willing to sell you the lot for $20 including shipping.

Chris
:O2000R: (<-EMXI/IO6G) :O200: :O200: :O200: (<- quad R12k O200 w/GIGAchannel and ESI+Tex) plus a bunch of assorted standalone workstations...
Pentium, when you are finished, you need to bridge an RFC 1149 network in amongst that too. ;)

Regan
:Onyx2R: :Onyx2R: :0300: :0300: :0300: :O200: :Octane: :Octane: :O2: :O2: :Indigo2IMP: :Indy: :Indy: :Indy: :Indy: :Indy: :Indy: :Indy: :Indy:
:hpserv: J5600, 2 x SUN, 2 x Mac, 3 x Alpha, 2 x RS/6000
The Keeper wrote:
deBug wrote: Only one of the boards has the optical adapter.

What you'd be looking for to complete the rest would either be called a "GLM", or Gigabit Loadable Module, or more likely at this point, Emulex LP6000 or LP7000 PCI cards with GLMs attached to them. Just yank the GLM off the LPx000 and drop it onto your cards. If you need more ports, that is.

What about bulkheads, do you have those? Or do you want to run fiber through a hole in the front panel of the Onyx straight to the card?

For me, that was a reason not to bid on that auction...
To accentuate the special identity of the IRIS 4D/70, Silicon Graphics' designers selected a new color palette. The machine's coating blends dark grey, raspberry and beige colors into a pleasing harmony. ( IRIS 4D/70 Superworkstation Technical Report )