The collected works of jwp - Page 1

I've been thinking a lot about CDE since it was open sourced. I've even had dreams about it several times in the last month! One of these dreams even included a new CDE theme used by IBM, which was darker than the default colors, and had the dock mostly hidden at the bottom of the screen except for the arrows at the top of each icon. Of course, such an arrangement would be cumbersome and ridiculous since there would be no labeling or icons to indicate what the arrows were for. Dreams are strange like that, though...

Around 10 years ago when I was in high school, I basically coveted IBM and HP Unix workstations, but of course I didn't have one myself, as they were far too expensive. I had never used any genuine SVR4 Unix system, but I knew that it must be more awesome than anything I could possibly imagine. At that time, I built my own Linux box, but it sadly still wasn't the same as the "real thing," and CDE was basically the symbol of everything I was missing out on. Of all the major pieces of standard Unix software, CDE was the only one that had no equivalent or replacement in Linux or the BSD's.

When I finally got to college as a CS major, I worked in a lab of Apple G4 systems, along with a small row of Sun workstations. I bugged the local admin for an account on the Sun machines, but he basically just ignored me (he was a big Apple fanboy). Each time I worked there, I saw the CDE login screens, but even after a few years of working there, I never saw more than that -- I never saw anybody even log in to the Sun workstations, although they probably cost a few thousand apiece. I can, however, remember using those stupid G4's to look up screenshots of CDE running on AIX and HP-UX.

Even after all these years, CDE has still been at the back of my mind, and I was stunned when it went open source. It still seems almost unbelievable! I downloaded the source code and created a new VM on my local machine just for running CDE on Debian. After following the instructions exactly, I ran "dtlogin", and was startled to see a full CDE desktop in front of me, on my own machine! It's still almost unbelievable, and the novelty has definitely not worn off. To me, nothing looks better than the default CDE (although sadly, many people say that it is ugly).

So yeah, I am definitely a CDE lover. Now I just need to learn how to actually use it.....
Debian GNU/Linux on a ThinkPad, running a simple setup with FVWM.
sgi_mark wrote:
jwp wrote:
Around 10 years ago when I was in high school, I basically coveted IBM and HP Unix workstations, but of course I didn't have one myself, as they were far too expensive. I had never used any genuine SVR4 Unix system, but I knew that it must be more awesome than anything I could possibly imagine. At that time, I built my own Linux box, but it sadly still wasn't the same as the "real thing," and CDE was basically the symbol of everything I was missing out on.


That's so funny - that almost exactly mirrors my experience, although I did actually get a login that worked on the Sun workstations at our Uni... I really wanted my Linux desktop to more closely look like a "real Unix" and spent ages playing with enlightenment themes, running XFCE (back when it actually did look like a CDE clone) and so on. Eventually I managed to get Solaris x86 installed (after having to purchase a commercial X Server from Xi graphics to support my Voodoo card) and basked in the glory of a real dtlogin and CDE :)


Ah, Voodoo cards -- now I know the era. Yeah, XFCE was probably the closest thing to CDE before this. Although some of the look and feel has been modernized since the early releases, it's pretty easy to customize the environment to have a similar type of dock at the bottom. What I ended up doing instead, though, is moving to FVWM with a simple Motif type look. It is more minimalist, but it has that basic Motif / CDE look and feel, and is easy to configure.

To this day, the only SVR4 type Unix system I've used is Illumian (just to try it out). It seemed so similar to Linux with Gnome 2, and even its "vi" was vim, hassling me about children in Uganda on an SVR4 system! Oh, how the mighty have fallen.... I guess for a modern open source Unix system that still seems like Unix, it's necessary to either go the BSD route, or Linux with a certain software set (e.g. nvi instead of vim, and FVWM / XFCE / CDE / twm in X11).

_________________
Debian GNU/Linux on a Thinkpad, running a simple setup with FVWM.
Alver wrote:
I assume the reason behind the question was: "will platforms that have CDE now benefit from the changes made by the open source community that manages it now".

The answer there would probably be "yes", since it's not GPL. But I'm not an expert in license law. :)

I believe that if HP, IBM, and others wanted to include any new work done on the open-sourced CDE, they might have to drop their own CDE codebase or re-license it to be compatible with the LGPL. The LGPL allows linking to the work as a library, but not incorporating it into a proprietary derivative work. For example, the CDE code can be linked against by third-party programs written for CDE, like a CAD program written for Unix. However, that is linking rather than creating a derivative set of programs. Improvements made for the open source CDE project could probably not be used by the Unix vendors under their current licensing arrangements, because that would result in a proprietary derivative work (which the LGPL protects against). That is my current understanding of the situation (I'm not a lawyer).

There are still many bugs in the current Linux build of open source CDE, and many rough edges. I think the first phase was just to get it up and working. Now they are starting to clean up the code base, basically resolving a few thousand compiler warnings (everything is compiled with ANSI and pedantic flags). Some things still don't work, though, and there are some bugs that need to be worked out. For example, the dtexec program can start using 100% of the CPU under certain conditions if the desktop has been running for a long time. Essentially, the code is still alpha quality on Linux.

I think there is some work under way to also get the CDE code working under the BSD's, and some interest as well from Solaris people. I hope that work can continue to clean everything up and work out the bugs and compatibility issues. There are also some security flaws that are documented, and will probably be fixed at some point.

_________________
Debian GNU/Linux on a Thinkpad, running a simple setup with FVWM.
The first computer my family ever owned was a PC running DOS, back in the early 1990's. As a kid, I had no idea whatsoever that it had QBasic on it, or even that such a thing could help me to build my own programs. I would have loved learning all about it, if I had even known that such a thing existed. Microsoft has always kept this sort of tool out of sight, maybe because the company's view is that people should buy software written by professionals instead. I admire that other types of personal computers had BASIC in plain sight, like the Commodore 64, which even booted into a BASIC interpreter (very cool!).

For learning programming, high level languages like Python and Ruby can provide nice programming environments with interactive REPL's. You can type in code and see it evaluate in front of you. That type of immediate feedback is really great for learning the basics.

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
Some others are great too, but the ones below stick out in my mind as some of my favorites.

  • Eden of the East
  • Ghost in the Shell
  • Ghost in the Shell 2: Innocence
  • Haibane Renmei
  • Kiki's Delivery Service
  • Serial Experiments Lain
  • Whisper of the Heart
Serial Experiments Lain is pretty cool, and would probably be appreciated by some members of this forum. The interesting views on technology, the nods to computing culture (the Knights of the Lambda Calculus, Lisp and C programming, and obscure Apple stuff), and even the room full of crazy computer systems all seem to fit really well with the Nekochan thing (although unfortunately there isn't much in terms of Unix references).

Other than the animes which are kind of about technology and philosophy, I gravitate toward "slice of life" anime. I like good characters and depth, rather than just fast action and stuff like that. Haibane Renmei is an anime that is a little slow moving and quiet, but deep and thoroughly enjoyable. That anime is really special, and has a lot in common with Serial Experiments Lain, despite having a totally different setting and subject matter.
Debian GNU/Linux on a ThinkPad, running a simple setup with FVWM.
I believe that Richard Stallman (rms) uses one of these, because it is more free than other machines (i.e. it comes with a free BIOS). I'm interested in this stuff, and I hope they do really well. It's difficult to break into the market since Intel and x86 seem to have a stranglehold on PC's and laptops these days.

It seems like there is currently a gap in the market for inexpensive computers that are not only environmentally friendly, but also reliable. Many users don't need to have the highest performance, but they would at least want something that will be durable and last for a long time without hardware and software failure.

_________________
Debian GNU/Linux on a Thinkpad, running a simple setup with FVWM.
hamei wrote:
jwp wrote:
... they would at least want something that will be durable and last for a long time without hardware and software failure.

In that case, forget about anything that comes out of China.

Sorry, just the facts, ma'am.

Practically everything comes out of China; it's just a matter of the engineering and quality standards. Most businesses in other countries use China for its cheap labor, so they are already willing to cut corners (e.g. Walmart, Dell, etc.). However, things manufactured in China for the Chinese market can vary anywhere from very low quality to very high quality. I'm not sure about these machines in particular, but the cases look tougher than those of typical home market PC's. It also seems to be the project of a university, with different purposes than just undercutting the competition. For example, rather than advertising speed and flashy features, they are emphasizing low power consumption, security, and reliability. Basically, they are marketing these more as appliances than as typical PC's.

This could be very good for many Chinese who don't know much about computers, but deal with viruses a lot, and otherwise don't need the latest technology. If they are reliable as well, they would suit people who are close to the elements (i.e. rural farmers and villagers), or who simply can't afford PC repairs for random problems.

_________________
Debian GNU/Linux on a Thinkpad, running a simple setup with FVWM.
hamei wrote:
jwp wrote:
However, things manufactured in China for the Chinese market can vary anywhere from very low quality to very high quality.

Name one.

Most "carrier grade" and high end servers these days, including those for the big names, are likely manufactured in China. As another example, when IBM sold their PC division to Lenovo, the quality basically stayed the same, as did the traditional designs for their business PC's and workstations. Lenovo is a Chinese company, yet they manufacture IBM products in a manner very similar to the way that IBM did. I would consider these high quality products, to the extent that they are built to be reliable and rugged (e.g. they have to go through military grade endurance testing). By contrast, companies like eMachines and Dell are using Chinese manufacturing as well, but they aren't willing to pay more for high quality standards, and the designs are meant to lure customers with the latest whiz-bang, while being housed in overheated flimsy plastic cases.

As for high quality products for the Chinese market, clothes come to mind easily. I lived in China for a little while, and I saw that the price you pay for clothes is closely related to their quality. For example, a $15 pair of pants in China will often have noticeably better and more durable materials and stitching than a $30 pair of pants in the U.S. However, the $4 pair of pants is of low quality and will not last, just like low quality clothes tend to not last in other countries. On the Chinese market, there is a wide range of quality, and you often get what you pay for.

As another example, tea in China, as sold to the Chinese, ranges from very low quality to very high quality. The very low quality tea is often still better than what can be found in supermarkets in the West, and the premium tea is of incomparably good quality; it is simply not exported en masse to other countries, but rather kept for the Chinese market, where people will pay big money for it. http://teaguardian.com/all_about/whycheaptea.html

_________________
Debian GNU/Linux on a Thinkpad, running a simple setup with FVWM.
Hamei, endlessly shouting out your own opinions is more appropriate for a Fox News opinion show than for a forum about computers. If you actually want to talk the issues, then I'd be happy to do so, but only after you've wiped the foam away from your mouth and calmed down.

guardian452 wrote:
jwp wrote:
It seems like there is currently a gap in the market for inexpensive computers that are not only environmentally friendly, but also reliable. Many users don't need to have the highest performance, but they would at least want something that will be durable and last for a long time without hardware and software failure.

It's called an ipad.

iPads are not really built to be rugged or reliable, and are not general purpose computers. After a few years, they will no doubt be ineligible for software updates. I have a netbook which is more general purpose than an iPad, but I've gotten the impression that its hardware is a little iffy as well, and netbooks too are just meant for home users -- not for business-class reliability or a long lifespan. Eventually all these devices will end up in a landfill after the hardware flakes out.

_________________
Debian GNU/Linux on a Thinkpad, running a simple setup with FVWM.
bluecode wrote:

Quote:
Weiwu Hu, chief architect of the Godson processors developed by the Institute of Computing Technology (ICT) at the Chinese Academy of Sciences, is coming back to ISSCC to talk about the line of MIPS chips that the Chinese government is funding for handhelds, PCs, servers, and supercomputers. Hu gave a presentation about the Godson lineup at the ISSCC 2010 conference.

Very interesting! They are planning on using these chips for servers and supercomputers too, then. The article also mentions 16-core chips coming out in the future, but apparently little information is available on these so far.

Some pics of an older model are at: http://www.cyrius.com/debian/loongson/fulong/gallery.html

It looks like this came with a customized Debian install using a GNOME 2 desktop.

_________________
Debian GNU/Linux on a Thinkpad, running a simple setup with FVWM.
I came across this material about some special Xeon coprocessor boards that SGI now offers as an option in some of their servers (even in 1U systems).

http://www.sgi.com/products/servers/accelerators/phi.html

Apparently these boards each have 60 Xeon cores running at over 1 GHz, with 8 GB of RAM, and the coprocessor can actually run independently as a Linux server if you want to use it that way (you can even SSH into it). Times are strange. Hardware technology is moving so incredibly fast!

I was watching a video recently of a data center from the early 1990's, which featured some old mini-fridge-sized Sun boxes. I was curious how much actual processing power a classic SVR4 Unix box like that would have had in the early 1990's, and found out that the CPU on this model was rated at less than 24 MIPS! Then I started looking through old SPEC CPU2000 integer benchmarks to see how some classic Unix systems from the late 90's and early 2000's fared against PC architecture machines. Funnily enough, the $8000 Power5 workstations were comparable in CPU power to the $1000 x86 workstations of their time. The old Alpha chips, likewise, look to be no more powerful than the early line of Pentium 4 chips.

http://www.spec.org/cpu2000/results/cint2000.html

It's kind of depressing to look at data that puts a number on an older piece of technology. Many of these systems were novel and well engineered, and the large models could definitely scale to a high degree. A lot of thought was put into building these different platforms. It makes me realize, though, that by the late 1990's or early 2000's, at least the smaller Unix machines were already starting to look old and behind the times. For example, a review of an RS/6000 workstation from the late 1990's is still up. (Note that the price of the test model they received was over $80,000!)

http://www.drdobbs.com/ibms-rs6000-43p-model-260/199200782

For some comparisons to other machines in the workstation market:

Quote:
Our runs of SPECfp95 on the 260 resulted in a score of 30.1, which was somewhat higher than the posted scores for other high-end workstations from HP (26.3), SGI (26.6), and Sun (29.5), and over twice the score of Dell's high-end Intel-based workstation (14.7). Integer performance, however, is another story. The Model 260's score of 13.1 on SPECint95, while certainly respectable, is lower than the integer scores of those same competitive machines: 18.9 for Dell's Precision 610 running a 450MHz Intel Pentium II Xeon, 17.4 for the HP 9000 Model J2240 equipped with a 236MHz PA-8200, 13.6 for SGI's Octane powered by a 250MHz Mips R10000, and 16.1 for Sun's Ultra 60 Model 2360 with a 360MHz UltraSPARC II CPU.

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
mia wrote:
Now, the real question isn't how fast it is, but how you use it, if it's to run World of Warcraft or watch movies, then I think it might be somewhat overkill, but if it's to run a clustered filesystem, it might be worth it.

Very true; I guess what matters most is whether the machine fulfills its purpose. When I was a teenager, I didn't have much money, but I managed to get some used 486 business PC's for free. For what I needed them for -- learning how to use Unix -- ten-year-old PC's were completely sufficient.

Later, when I upgraded to a Pentium 2 and a 14.4 kbps external modem, it felt like a huge step up. I could even use X11 and the Mozilla suite -- living in luxury! The big change came a few years later when I wanted to use Firefox, play music and videos, and multi-task more. At that point, even 512 MB of RAM and an Athlon 2500 seemed like they were just "okay." :?

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
Commercial Unix used to own the server and workstation market, BSD hadn't gained much traction outside academia, and Linux was almost exclusively for home users running PC's. For a long time, SGI and other workstations were viewed as exotic and expensive machines that few people had access to. These days, two processors don't seem that impressive since everyone has dual- and quad-core systems, but at one point the situation was very different.

http://news.cnet.com/SGI-to-slash-workstation-prices/2100-1001_3-213534.html
Quote:
The Octane line's entry-level product, which comes with a 225-MHz R10000 MIPS processor, 128MB of memory, a 4GB hard drive, and a 20-inch monitor, will fall to $17,995 from $19,995. The pricing action comes two months after the company introduced it. An Octane system featuring 250-MHz R10000 processor, meanwhile, will drop from $38,995 to $24,995.

I wonder how many people had such a workstation in their homes in 1998?

Like the dwarves in Lord of the Rings, the big Unix players delved too greedily and too deep.

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
mia wrote:
Quote:
I wonder how many people had such a workstation in their homes in 1998?


me! (indigo 2@250Mhz SI)

:o Lucky guy!
hamei wrote:
It's wrong because it's based on a lie. The whole point to Linux was "the evil corporations won't let us have Unix at a decent price so we'll make our own. This is for the commmmuuuunity." 90% of the work was done by others for free, based on these claims. I seriously doubt that any of those people would have lifted a finger if the point of the exercise was to create another cash cow for IBM and Linus Torvalds.

I don't hate the man or anything but if you're going to be honest, his entire career and stature is based on a lie. Stallman may be a goofball but at least he walks the walk. Torvalds is a hypocrite.

By the time the Linux kernel was developed, GNU tools had already been popular on commercial Unixes for quite some time. For example, if someone wanted a "grep" that was faster and without arbitrary limits, they could use the GNU version. That was their selling point -- more options, fewer limitations, portable, and.... emacs. Then in the early 90's, Linus came along and wrote a kernel for his 386, compiled the GNU userland along with it, put in some glue, and then everyone started calling the whole system "Linux." The interesting thing is that GNU code far outweighs kernel code when looking at the whole system. Without the GNU project, "Linux" would have to import all the tools from BSD to even be usable.

Later, Stallman contacted Linus about working together toward a common goal of "software freedom," but of course Torvalds didn't really care about that stuff, and he still doesn't to this day. He just considers the GPL to be a tool that helps ensure derivative code flows back into the official kernel. There has always been this conflict between the "free software" movement and others who don't care at all about those principles. Eric Raymond is another person like this. Coining the term "open source" was all about weakening the stance on free software, and sweeping that set of ideas under the carpet.

The Linux kernel is not so indispensable, though. The "Debian GNU/kFreeBSD" project shows that a distribution can even swap kernels while keeping almost all the other software the same. They could probably also create a "Debian GNU/Illumian" or a "Debian GNU/Minix3" if they wanted to.

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
In case anyone is interested, there is an unofficial live CD of Debian with the latest version of CDE, available at:

https://andarazoroflove.org/code/cdebian

... A short video of the installation and some basic usage is here:

http://www.youtube.com/watch?v=26ijJIu7lFU&vq=large

_________________
Debian GNU/Linux on a Thinkpad, running a simple setup with FVWM.
vishnu wrote:
A lot of people argue quite cogently that one of the main reasons for the success of Linux was the Unix System Laboratories vs. BSDi lawsuit in the early nineties...

I think that must have been a major part of it -- both the lawsuit and their lateness in porting BSD to the PC platform. Andy Tanenbaum said that he didn't take Linux seriously, or try to develop MINIX into a serious OS earlier, because he thought that "BSD was going to take over the world." The old Usenet debate shows that Linus was thinking something totally different, though -- that the future would be GNU/HURD, and so Linux would just be a hobby project in the meantime. As it turned out, BSD was late to the game, and GNU HURD never amounted to much beyond vaporware.

Since that time, though, BSD systems have lagged behind on hardware support. Having a Unix system like BSD that purports to be more stable and carefully audited is only relevant if it actually works with your hardware. If you can't use hard disks or networking, or if there is no video acceleration and you want to do 3D modeling, then who cares about the rest? The BSD's haven't taken drivers very seriously in the past; a lot of the effort seems to go toward developing new server security features instead. I will say, though, that crucial things like hardware support are far more important and fundamental than the next new ZFS / containers / jails / VM / clustering whatever crap. It seems that they are satisfied with their server OS niche, and aren't willing to take the necessary steps to turn the BSD's into good general purpose Unix systems.

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
mia wrote:
jwp wrote:
I will say, though, that crucial things like hardware support are far more important and fundamental than the next new ZFS / containers / jails / VM / clustering whatever crap.


Trust me, that may not be the case for everybody.

Sure, not for everyone, but basically everyone except a few server admins and basement-dwellers -- assuming the OS even has the necessary device drivers for their own hardware. If the goal of the BSD developers is to make the best server operating system (FreeBSD seems to be promoting this niche), then they haven't succeeded in winning much of the field over the last 20 years. If instead their goal is to build a general purpose Unix OS, then they've also been unable to do that sufficiently, since their hardware support is often lacking. This is my point -- that the BSD's have not really succeeded as a general "Unix." They fell far behind the rest of the industry, and instead of pursuing the changes necessary to build a modern system, they just grew more conservative and focused on the server niche.

The commercial Unixes like IRIX, Solaris, AIX, and HP-UX were all much more general purpose than the BSD's are these days. They came with all the necessary drivers for utilizing their video cards and other hardware, and the CDE desktop was the industry standard. The BSD's have no standard desktop, or even a preferred one. They lack hardware support for any real desktop use, so in some ways they are even behind the commercial Unix systems of the 1990's (which is sad to think about).

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
Support for these platforms is often dropped because the developers have no access to such systems, and there is little indication that anyone uses them. For example, I think the Ruby developers are unaware of whether Ruby even compiles on AIX, because they never hear about it. Most of the developers just use Linux, and even the MS Windows port is left to others. If someone were to volunteer to verify that emacs works on these older platforms, the developers would probably keep the code for them in the source tree.

Actually, it's really amazing to me that emacs was popular on so many platforms. Most of these operating systems are very old and definitely obsolete, though. I wonder how many people will be upset that they can't use the latest emacs on 4.1 BSD?

_________________
Debian GNU/Linux on a Thinkpad, running a simple setup with FVWM.
I just kind of wandered in here, never having used an SGI workstation. I've been trying to understand the whole SGI and Nekochan thing, but I don't quite "get" it yet. When I was younger, I didn't quite have a broad view of which Unix systems were out there, and so I was mostly aware of Sun, IBM, HP, Compaq, and DEC as some of the main commercial Unix vendors. I saw some SGI workstations, but the ones I saw looked kind of ugly to me, and I assumed they were nothing special. I was kind of surprised then when I found a whole Internet community of people who are SGI workstation enthusiasts, even though SGI hasn't been making those machines for years.

So what gives? Why isn't there a similar following for AIX, HP-UX, or Solaris? Is the attraction more in the hardware itself and collecting these boxes as a hobby? Or is the graphics software available the big draw? And do more people come to IRIX for the graphics software or for the Unix aspect? Do many people use IRIX only for traditional Unix type work rather than for 3D modeling, CAD, animation, video, etc.? For example, would many people here write shell scripts on a regular basis, or schedule cron jobs?

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
Pontus wrote:
SGI has wonderful hardware design (both aesthetic and functional) and quite a bit of lore around it. Many movies and TV series used SGI hardware to produce effects.

Those are two things that put SGI ahead, otherwise it's just another Unix.

Interesting.... I have to say, the SGI Fuel is a pretty awesome looking workstation, regardless of what it's used for.

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
sgtprobe wrote:
I almost find the questions offensive, isn't it obvious? :lol: Nah, just kidding.

What's so special about Silicon Graphics? Well, in its day, for graphics work, basically nothing came close, and that kind of thing sticks with you even today. Even if it's just for the occasional retro computing, having fun and remembering the nice days. But I've got to say that the Octane2 and especially the Fuel are still nice to work with despite their age. Besides, Irix is really nice, and I mean really nice.

But if you know what Sun, IBM, HP, Compaq, and DEC were, I find it somewhat strange that you don't know what Silicon Graphics is, or what makes them so special.

But I'm not gonna fool myself, or anyone else for that matter, and try to compare them with a modern computer and pretend they are more powerful or better for a particular job. Time has moved on.

Heck, right now writing this, I'm working with a 3D scene with over 15 million polygons combined with several GBs of texture data, together weighing in at 10.5 GB. And all in realtime.

But the funny thing is, once I hit render and all cores start to work, the computer feels sluggish, making it hard to do other things at the same time. I never experienced that on any of my SGI's.

Oops, gotta go, my render just completed.

Cheers! :)


Ah, I know what SGI is, and about the general history, but I never fully understood their niche. To me, Unix is closely associated with working in a terminal, as well as networking, multi-user environments, system automation, etc. It still seems a strange match to me that a company would specialize in Unix workstations geared so heavily toward monolithic GUI applications. This is probably a generational thing too -- I was in high school when SGI was really starting to go downhill, and I was never really exposed to their advertisements.

As an anecdote, an interview with Brian Kernighan mentions using SGI workstations at Bell Labs. I'm not sure how pervasive they were, or how long they were used for, though. I can see why the company would buy versatile workstations like that for their research group -- all the flexibility of Unix, with graphics capabilities on top. In another interview, he mentions that FreeBSD is pretty popular in that group, but Linux also has quite a few supporters throughout the company. Maybe there was a shift at some point?

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
http://arstechnica.com/information-technology/2012/07/hp-says-itanium-hp-ux-not-dead-yet/

I guess Oracle is pulling the plug on their support for Itanium and HP-UX platforms. HP can't really be secretive anymore about moving away from HP-UX and Itanium. Their big project for bringing the "mission critical" reliability features into Linux is an effort hilariously titled "Project Dragon Hawk." Like the others, they will be pushing RHEL 6, even though RHEL 6 kind of sucks and has a tasteless design..... At least HP-UX doesn't have services for Bluetooth, ISDN, and PC Smart Cards enabled by default.... :?

The ksh88 scripts on HP-UX would port alright to ksh93 on Linux, but some might have to be tweaked a little. Fortunately ksh93 is now open source, because the old pdksh (Public Domain Korn Shell) had some subtle but serious incompatibilities with ksh88.
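
As a small illustration of the kind of tweak involved (the function names below are made up, and which differences actually bite depends on the script), ksh93 only gives "typeset" variables local scope in functions defined with the "function" keyword, whereas ksh88 treated the POSIX name() form the same way:

Code:
#!/bin/ksh
# Under ksh88, "typeset" creates a local variable in either function form.
# Under ksh93, only the "function name" form does; in the name() form the
# assignment below would quietly clobber a global with the same name.

function save_path {            # ksh-style definition: PATH_COPY is local in both shells
    typeset PATH_COPY="$PATH"
    print "saved: $PATH_COPY"
}

save_path_posix() {             # POSIX-style definition: local in ksh88,
    typeset PATH_COPY="$PATH"   # but global in ksh93 -- a subtle porting hazard
    print "saved: $PATH_COPY"
}

save_path
save_path_posix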

This pushes HP even further into the commodity computing market. It's kind of scary to see a classic and powerful SVR4 Unix being phased out in favor of Linux. About ten years ago, Linux was basically regarded as a hobbyist operating system for white boxes at home, not scaling well beyond 1 or 2 CPU's. Things have certainly changed, and it will be interesting to see if HP contributes in a significant way, or if their mission critical extensions will just be proprietary.

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
Very cool, and wonderful pictures! It's great to see people taking such care to restore an older system like that (and one that had been so neglected). I had the pleasure of working with an AlphaServer 2000 (4/233) at one time. We nicknamed it "The Beast," because it was built like a tank ("/etc/motd" contained a suitable ASCII art graphic as well). They are definitely sturdy and high quality systems.

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
smj wrote:
jwp wrote:
Sure, not for everyone, but basically everyone except a few server admins and basement-dwellers

Well, thank you - I assume you've promoted me from basement dwelling Morlock to server admin...

BTW, I'm reading this rather pointless thread on a machine running PC-BSD. Installation was no worse than a recent Ubuntu - no, it was better, because it was easy to avoid GNOME 3. And guess what - PC-BSD seems to be using my nVIDIA GTX550 Ti just fine, and the controllers on my MSI 990FX motherboard seem well-supported...

And the important files are being served from a nice rackmount machine in another room - above ground - using FreeBSD and ZFS. A combination that makes efficient use of my drives and actively detects and reports any problems long before any data is at risk. And that all depends on good hardware support for the controllers interfacing those drives, so it's got that and "ZFS / containers / jails / VM / clustering whatever crap." All of which have been important features driving the use of Linux and (what's left of) commercial Unix for the past 10 years...

Well, I'm assuming you must be an admin, because you don't seem like a Morlock....

Part of the issue I see, though, is the tendency to create a "desktop" distribution and a "server" distribution. FreeBSD, for example, practically advertises itself as a server OS. Meanwhile, PC-BSD is based on the end user experience and having a graphical desktop. If we look back to the commercial Unixes like AIX, HP-UX, Solaris, etc., though, they had no such clear distinction, except for the software sets that would be installed on each. And rather than having 30 different window managers available, and 3 different mammoth desktop environments, they basically focused on one standard desktop: CDE. The BSD's currently have no analogue to this, and the situation for Linux is not much better. All the responsibility for software management is dumped onto the end user, at a time when it is a serious matter because of software vulnerabilities.

There are currently too many complexities and choices, and the tools for managing software are typically not sophisticated or high level enough to keep up with the massive pool of software available. For example, you should be able to install GNOME or KDE in a few short commands, uninstall them just as easily, and expect to be left with a clean system. It should be possible to update the entire system, including the core operating system across major releases, in a few short commands. I will say, though, that if pkgng becomes a common way to handle packages, then I think that will be a big technical step forward.
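
To make "a few short commands" concrete, here is roughly the bar I have in mind (the metapackage names are approximate and vary by release), with pkgng offering similar verbs on FreeBSD:

Code:
# Debian: install a whole desktop environment, then remove it cleanly
apt-get install gnome
apt-get purge gnome && apt-get autoremove --purge

# Debian: update everything, including the core OS across a major release
# (after pointing sources.list at the new release)
apt-get update && apt-get dist-upgrade

# FreeBSD with pkgng: roughly the same verbs
pkg install xfce
pkg delete xfce && pkg autoremove
pkg upgrade

Whether the system is actually left clean afterward is exactly where the current tools tend to fall short.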

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
Thanks for all the good points here. It's always interesting to hear about what perceptions were at the time, especially from those who were into Unix even back in the 80's. I appreciate that type of perspective, and it's something that I missed out on at the time. It sounds like the first experiences with SGI workstations were quite similar for many people. :)

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
R-ten-K wrote:
jwp wrote:
About ten years ago, Linux was basically regarded as a hobbyist operating system for white boxes at home, not scaling well beyond 1 or 2 CPU's.


10 years ago was 2002 not the 90s.

Right, but that reputation persisted for quite a long time after the 90s. It was only when Linux started being used heavily for Beowulf clusters, and IBM started getting behind it, that the tide really began to turn.

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
I have worked with these things quite a bit in the past, and I think we had one of this particular model that died. If I'm not mistaken, this is basically a small NAS-type MyBook (rather than one of the simple USB external drives). The biggest problem with these devices is that the hardware is not very good. After a while, they just die. I've seen quite a number of MyBook devices die mysterious deaths, never because their internal hard drives went bad, but always due to other hardware failures. The second major issue with this type of device is that the actual capabilities of the hardware are very low. If I remember correctly, the NAS models had only 32 MB of RAM. They definitely can't handle two or more connections with any competence (and will fail the transfers quite readily when they are not up to the task).

In one ridiculous episode, we had about two dozen Oracle databases sending their nightly backups to this puny little thing, and the cron jobs had to be staggered very carefully to avoid two machines connecting at the same time. If there was significant overlap, the transfers would fail. The poor person who had to maintain this system (but who of course did not set it up!) was me. There are too many stories I could tell about having to deal with these things, and how unsuitable they are for even the smallest business..... At some point you will probably end up prying open the case and attempting to retrieve the hard drive after the other hardware fails. The endgame was that I finally got sick of just waiting for these things to die. I put two 500 GB drives in an old IBM PC, installed Linux with software RAID-1, and called it a day. Since then, the little IBM PC has saved us more than once when other storage schemes failed. It can handle any number of connections I give it without breaking a sweat.
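
For anyone curious, setting up that kind of software RAID-1 under Linux only takes a handful of commands (the device names and mount point here are just examples, not the ones I used):

Code:
# Create a two-disk software RAID-1 mirror (example device names)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

# Put a filesystem on it and mount it
mkfs.ext3 /dev/md0
mount /dev/md0 /srv/backups

# Keep an eye on the array afterward
cat /proc/mdstat
mdadm --detail /dev/md0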

We currently have a similar WD 2TB NAS device that is now collecting dust because I don't trust it with anything. Actually, we have a number of large drives just sitting around because they were pulled from dead MyBook devices. But I digress... The best thing about these little boxes is that they can be opened up, and their hard drives can be removed. You may have to use a screwdriver, bend some metal, and use your strength in the process (depending on the model), but it's possible. For your own sanity and wellbeing, I would highly recommend putting the hard drive in another machine, and using it in that capacity. You will thank yourself later.

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
Ooo.... I just saw Nausicaä of the Valley of the Wind, and that was awesome. I'm definitely putting that on my list. It was a truly epic movie, and it kept me interested the whole time. I love movies that seem to create an entire world to learn about as they tell their story -- a little like the way that Lord of the Rings does. Nausicaä was a very likeable character too. It's amazing that it was created back in 1984, yet is so vast in scope and so well executed. Very highly recommended.
Debian GNU/Linux on a ThinkPad, running a simple setup with FVWM.
Since drives have become so large, I pretty much only look to RAID-1 these days, and often with software RAID rather than hardware RAID. Besides the usual concerns, some other things to consider:

  • What happens when a hard drive fails? How are you notified? If you have software notification through email or some other mechanism, can you really trust it?
  • How are these backups being made, and can you trust the backup programs / scripts to never fail? If they do fail, how are you notified?
To me, the difference between RAID-5 and RAID-6 is not very big compared to these other matters, which are often neglected. Each part of the chain needs to be strong, or it won't hold up. Each time I set up a new system that will be used for backups, there are two types of scripts I end up writing: [1] scripts that notify me if something fails, and [2] scripts that send me regular reports about the status of the backup files. These are scheduled to run regularly with cron, and the scripts are always carefully written and tested. For example, all scripts are organized into shell functions, and all functions use local variables and explicitly return an exit status. Commands like tar, cd, etc., should always have their exit status checked. Critical errors caused by any command failing should be caught, and relevant information should be emailed to the appropriate people automatically.
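
To give a rough idea of the shape I mean (the paths, address, and helper names below are invented for the example), the skeleton of such a script looks something like this:

Code:
#!/bin/ksh
# Minimal sketch of a nightly backup script (example paths and address).

MAILTO="admin@example.com"      # hypothetical recipient
SRC_DIR="/srv/data"             # hypothetical source directory
DEST_DIR="/backups/nightly"     # hypothetical destination directory

function die {
    typeset msg="$1"
    print "backup failed: $msg" | mailx -s "BACKUP FAILURE on $(hostname)" "$MAILTO"
    exit 1
}

function make_archive {
    typeset stamp dest
    stamp=$(date +%Y%m%d) || return 1
    dest="$DEST_DIR/data-$stamp.tar.gz"

    cd "$SRC_DIR" || return 1           # check cd explicitly
    tar -czf "$dest" . || return 1      # check tar explicitly
    return 0
}

make_archive || die "archive step returned non-zero"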

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
SAQ wrote:
jwp wrote:
To me, the difference between RAID-5 and RAID-6 is not very big compared to these other matters, which are often neglected. Each part of the chain needs to be strong, or it won't hold up. Each time I set up a new system that will be used for backups, there are two types of scripts I end up writing: [1] scripts that notify me if something fails, and [2] scripts that send me regular reports about the status of the backup files. These are scheduled to run regularly with cron, and the scripts are always carefully written and tested. For example, all scripts are organized into shell functions, and all functions use local variables and explicitly return an exit status. Commands like tar, cd, etc., should always have their exit status checked. Critical errors caused by any command failing should be caught, and relevant information should be emailed to the appropriate people automatically.


You're forgetting the most important part - get in there and check it out "by hand" regularly (every week or two).

Ah, that's a good point, but I generally try to relegate a lot of that to the daily reports. For example: which filesystems are mounted, the amount of disk usage for each, long listings of any backup files ordered by date (so I can check for stragglers), RAID array statuses, etc. Basically I try to put into the reports anything that I would type into a shell when checking manually, so I can just skim through an email report and get the gist of the situation. I do log in from time to time for applying system software updates, though.
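
In shell terms, the report script is little more than the same commands I'd run by hand, bundled up and mailed out by cron (the paths and address here are just placeholders):

Code:
#!/bin/sh
# Daily backup status report (example paths and address), run from cron.
{
    echo "== Mounted filesystems and disk usage =="
    df -h

    echo "== Backup files, newest first =="
    ls -lt /backups/nightly

    echo "== Software RAID status =="
    cat /proc/mdstat
} | mailx -s "Backup report: $(hostname)" admin@example.com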

Random anecdote.... At one time, a guy I worked with had a backup program running on a Windows server that he never really checked, even though it was responsible for backing up PCs at the company, as well as a fairly important file server. Two of the external hard drives had died without anyone knowing about it, and the last external drive only had backup files from two years before. He only found out about the situation after the machine itself died. When I found out about the system and told him about the multiple levels of failure, he looked pretty embarrassed, so I didn't say anything to make the situation worse. The moral of the story is to keep a close eye on your backup servers.

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
ClassicHasClass wrote:
It's 8.0. It seems very happy with that. Some pr0n of Homer (it even came with a Homer Simpson squeeze doll):

http://www.floodgap.com/iv/1572 (full enclosure)
http://www.floodgap.com/iv/1573 ("hinv")
http://www.floodgap.com/iv/1574 (old school X11)
http://www.floodgap.com/iv/1575 ("dmesg" from HP-UX)

Very cool X11 setup! I like that color scheme -- I assume it is the default? Would you mind posting the X resources for mwm, or maybe a screenshot from the machine itself (if that is possible)?

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
Are you working with a fixed data set? If so, is it necessary to constantly regenerate it? How large is the data set, and what type of operations take so long?

Also, have you used a profiler to find out whether there are any bottlenecks in your program? Have you used the "htop" utility to watch your CPU core usage?
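
If it's a compiled program and you haven't profiled yet, something along these lines is usually enough to find the hot spots (this assumes gcc and the usual Linux tools, and "myprog" is just a stand-in for your program):

Code:
# Build with profiling support, run the workload, then inspect the profile
gcc -O2 -pg -o myprog myprog.c
./myprog
gprof ./myprog gmon.out | less

# Or sample the program without rebuilding it (Linux perf)
perf record ./myprog
perf report

# Watch per-core CPU usage while it runs
htop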

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
After the big commotion surrounding the open-sourcing of CDE, there was another open-sourcing, this time of Motif, which was likewise released under the LGPL.

http://sourceforge.net/projects/motif/

Unfortunately, not many people paid attention to this event. Motif was one of the missing pieces of the software ecosystem -- something that commercial Unix systems have had for a long time, but which was never available under an open-source license on Linux or BSD.

I've always really loved the look of the Motif window manager, and despised X11's default TWM. Recently, I've been using FVWM, but it is configured to look and act much like MWM. I prefer FVWM these days because it has support for virtual desktops and Unicode fonts.

Since Motif has been around for a long time, and used in a variety of ways on commercial Unix systems, I'm guessing that some members of Nekochan have workstations or servers running MWM. Would anyone like to share screenshots of MWM from their systems? :)

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
hamei wrote:
Just a small but important distinction: Motif was released as open source, but only for non-commercial operating systems. So you are not "allowed" to use it on Solaris, Irix, AIX, HP-UX, etc. etc.

Now, as a descendant of people who walked across a continent, faced down Indians, malaria, distemper, freezing winters and blazing hot summers, no telephones and no KFC, I say they can take their "allowed" and shove it up their rosy red behinds. But that's a personal decision ...

Before that, there was "Open Motif," which was not truly open source and was never really part of the free software ecosystem because of those restrictions. However, a few months after CDE was released under the LGPL, Motif was also released under the LGPL. These days it is as much free software as anything else, which seems strange.

vishnu wrote:
Motif is, and probably always will be, the user interface toolkit of choice for the very highest end of the spectrum, for example Pro/ENGINEER uses it, Maya uses it, and then the brain dead morons who run the X foundation go and declare that the X Toolkit Intrinsics is deprecated. WTF!? Unix in particular and X in general have been wriggling on the stake of the mechanism-not-policy non-decision for 25 years and the end is nowhere in sight.

So anyway, rant aside (I could go on for hours) I've been using mwm on my Linux computers since Linux came out, mwm is not an exciting window manager so this is not an exciting screenshot but here it is:

Wow, you've been using it since 1995 on Linux! That's amazing. I love that type of consistent software experience. I was reading Slashdot months ago, and one person said that he used TWM for many years, and never saw any great reason to change. Then sometime in the 1990's, he switched over to FVWM, and has been using that ever since. He was totally uninterested in new GUI environments, and said that he doesn't feel that he is missing out on anything.

Well, I've attached my FVWM setup, but it basically looks similar to MWM. It extends to a second larger monitor, but that wasn't included in the screenshot. The font being used is GNU Unifont, which covers basically every language (e.g. window titles will show Chinese or Japanese characters, even). The emulator is bsnes ( http://byuu.org/bsnes ).

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
foetz wrote:
jwp wrote:
He was totally uninterested in new GUI environments, and said that he doesn't feel that he is missing out on anything.

exactly that's the point. at some point a product is good. everything that comes after makes it worse but the majority always look for "new" and "modern" no matter what actually has been changed.
if people could overcome the "hey look i got the new version" ego needs we could make some actual progress ...

Indeed, it seems like every GUI environment is going through some "revolution" or another. Between Unity, GNOME 3, and Windows 8, people have been complaining nonstop over the last year or two about their environments changing. For Unix command line tools, there are guidelines and examples of what we know works (and has worked for decades), but with GUI applications, it seems like everyone has a different philosophy about design, and a lot of it comes down to what is currently fashionable. It's like there are no clear engineering principles behind this mess. :(

On a positive note, we can still tune most of it out and stick with some consistent environments like MWM and FVWM...

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
With both CDE and Motif now available under the LGPL as truly free software, it looks like 2013 is gearing up to be the year of Linux on the desktop!

I can't wait to see everyone's grandma customizing their ~/.Xdefaults file to get that perfect shade of lavender for their MWM window handles and titlebars.
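
And for any grandmas who want to get started, it really only takes a couple of lines -- these are standard mwm appearance resources, with the colors being purely a matter of taste:

Code:
# Append the crucial resources and reload them (restart mwm afterward)
cat >> ~/.Xdefaults <<'EOF'
Mwm*activeBackground:   lavender
Mwm*activeForeground:   black
Mwm*background:         gray70
Mwm*foreground:         black
EOF
xrdb -merge ~/.Xdefaults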

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
vishnu wrote:
I'm afraid the Linux desktop is pretty much owned by Gnome and KDE... :cry:

Neither desktop has really "gotten it right" and won over the majority of the community yet. Gnome 2 came really close, but then threw it all away. KDE has never really won over that many people, and has remained the "other" desktop environment. "Unity" ironically fractured the community even further, as have forks of Gnome 3 like Cinnamon. The Linux desktop is in shambles these days, and there is plenty of room for alternative environments like XFCE to pick up the slack. For those who want a modern full-featured window manager, for example, Openbox is an outstanding choice.

For my own part, I'm starting to value stability and consistency from year to year, so I can't commit myself to a big desktop environment like Gnome 3. My guess is that many others are the same way. Even as the big desktop environments go through their revolutions every few years, the smaller window managers will keep working like they always have.

Unfortunately, the GUI is just something that Unix has never done all that well. GUI components have not had the same consistency and modularity that have been characteristic of Unix command line utilities. Using an ordinary shell, it's possible to use pipes, redirect output, and script everything that could be done by a human being. Graphical applications for Unix stand in clear contrast to the elegance of command line tools. The right set of GUI primitives was never developed, and part of the problem is the X11 software itself, which does not adhere to the Unix philosophy.
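
Just as a trivial example of the kind of composition the command line gets for free, and which has no real GUI equivalent:

Code:
# Count processes per user, sorted -- every intermediate step can be
# inspected, redirected, or dropped into a script
ps aux | awk 'NR > 1 {print $1}' | sort | uniq -c | sort -rn | head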

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
tingo wrote:
vishnu wrote:
I'm afraid the Linux desktop is pretty much owned by Gnome and KDE... :cry:


But not mine (luckily). I prefer Xfce, use it on both Linux and FreeBSD.

XFCE is a nice desktop environment that is more evolution than revolution. Perhaps the coolest feature is being able to change the window manager decorations to the Motif style. :)

Part of me wishes, though, that they had continued XFCE as a CDE clone... From what I've seen in the newer versions, it's not possible to create the same type of "drawers" that CDE uses in its dock.

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
A new CDE version was released today: 2.2.0d. :)

They are considering CDE to be "beta" quality now. Apparently it is known to work well on at least Linux, FreeBSD, and OpenBSD these days. From the release notes, it looks like they have also added some basic Xinerama support for multiple monitors.

Code:
# 2.2.0d (beta) 05/30/2013

- We are being bold this time, and promoting CDE to Beta.

- More work on dtinfo.  It now mostly compiles but is not quite ready for prime time.  It is not built by default.

- dtksh now builds on linux systems.

- We do not build Motif man pages anymore.

- X11 screensaver extension support now works in dtsession on Linux.

- Some screen locking issues on the BSD's have been fixed.

- /usr/sbin/sendmail is now the default mailer on OpenBSD

- Basic support for Xinerama has been added to dtlogin and dtsession using a new DtXinerama library.

- Resolve many more compiler warnings

Unfortunately I haven't been able to get it working properly...? Upon starting an X session, I just get the copyright screen and an xterm. Sad face... :(

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
So recently I was visiting a gay bar with several friends (I'm not gay myself), and was hit on by a drag queen over the course of the night. She was very cute and playful, and I found myself unusually attracted to her. But still, she has a pee pee... Should I care about that?

Nekochan, I'm so confused... :|

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.