The collected works of jwp - Page 2

commodorejohn wrote:
Well, that gives us a good few years to crack the license format and steal the source code...

Bastards. Like we need to be moving closer to a world of indistinguishable Unix derivatives.

I'm surprised that VMS wasn't discontinued earlier. HP never marketed it, treating it as just a legacy product, and it has been that way for many years now. Eventually HP-UX will also be discontinued, and they are apparently adding "high-availability" code to Linux to ease the transition. Solaris will probably be discontinued as well (although Oracle stays quiet on that subject), and then IBM will be the only major Unix vendor left.

Moves like this show the weakness and vulnerability that companies open themselves up to when they rely on proprietary software that can be discontinued at any time. The situation is the same with IRIX: people can still find the media, but the company basically abandoned it, and it's unlikely ever to go open source. It's mostly of interest as a legacy platform for hobbyists, not a viable operating system for the future (which is unfortunate).

If HP had wanted VMS to succeed, they should have open sourced it and let the community take over the bulk of new development. My guess, though, is that they just wanted to shed the engineers and infrastructure costs associated with VMS. HP has slowly been becoming another Dell, and everything is probably made by Foxconn anyhow. Eventually even the management will be "outsourced," once the Chinese figure out that these companies are just "management shells" rather than manufacturers, and decide that they can do that part too.

Honestly, though, I don't mind the "world of indistinguishable Unix derivatives." Unix is a fine platform with many excellent qualities. I would much rather have standard open source Unix systems than a world beholden to Microsoft, or another proliferation of commercial Unix (with each system costing a small fortune).

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
Hmm... I think I've returned to my senses. That was a close call.

The picture of the "drag race" may have helped. ;)

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
Good points about open source vs. proprietary. Judging by IBM's example of supporting its mainframe platforms, it's obvious that HP could have kept VMS as a major long-term OS choice. IBM still does active development on z/OS, from what I understand, while HP has not been investing money in keeping VMS up to date with the latest hardware. They could if they wanted to, but the long-term vision just isn't there.

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
I never quite understood the great rationale for sudo. The extra keystrokes are annoying...

I have to disagree with the article about the emacs thing, though. Emacs is one of the two major Unix editors, and many great programmers and other Unix people love using it; it's been that way for a few decades now. Comparing emacs to MS Word is unfair, because at its core emacs is a Lisp interpreter with editing functions built in. It's Unix's answer to the IDE, without the IDE. Some really cool things are possible with it, like SLIME:

http://common-lisp.net/project/slime/

There is also the "mg" editor which is very tiny, and has most basic emacs functionality, but without a Lisp interpreter. mg is part of the OpenBSD base system, from what I understand.

Personally I prefer "nvi", the BSD reimplementation of Bill Joy's original vi -- no special features, just vi.

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
I think the point being made is that the quality of software hasn't matched the great increases in hardware. For example, the Web is progress in some way, but it's a pretty ugly solution, made from poorly conceived "standards" all held together with duct tape. Alan Kay also has this view:

http://www.drdobbs.com/architecture-and-design/interview-with-alan-kay/240003442

Quote:
The Internet was done so well that most people think of it as a natural resource like the Pacific Ocean, rather than something that was man-made. When was the last time a technology with a scale like that was so error-free? The Web, in comparison, is a joke. The Web was done by amateurs.

I think, though, that there has undeniably been some progress (consider the Internet and networking in general). We can also look at the state of Unix systems. Back in the 80's, scripting would have been done with Bourne shell and awk. These days we also have very popular interpreted languages like Python and Ruby. These can fill in the big gap between C and simple shell scripts, and help us avoid the ugliness of Perl.

Languages like Go also show some real progress made in concurrent programming, and the same might be said of Erlang as well. There is some substantial progress, but it happens fairly slowly, and most people are still happy to reinvent the wheel for the Nth time.

The big failing, in my opinion, has been in mainstream graphical programs like web browsers. It seems that there is still no elegant way to create a GUI application, no clear set of primitives from which they can be composed in a modular manner. GUI applications are still ugly, and little progress has been made on these.

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
smj wrote:
jwp wrote:
Personally I prefer "nvi", the BSD reimplementation of Bill Joy's original vi -- no special features, just vi.

We call that "vi" - vim is vim, and I don't care for replacing vi with not-vi. But I acknowledge this is a personal preference.

What vi really has going for it is universality. I prefer to use emacs for serious editing, like coding/scripting. But as a sysadmin I needed to be able to edit files on any random UNIX-like system, and that meant being able to get the job done with vi (and sometimes ed! Thank you Ultrix installer/miniroot...).

If I understand the situation correctly, as part of the BSD code cleanup following the lawsuits from USL, the BSD people needed to reimplement vi, so they made nvi ("new vi") and released it in 4.4BSD-Lite. The code for Bill Joy's original vi is now open source, but it's not in wide circulation and isn't used by default on any open source Unix, so the original implementation is most likely to be found on commercial Unix systems.

The original vi looks like it hasn't been touched in quite awhile: http://ex-vi.cvs.sourceforge.net/viewvc/ex-vi/ex-vi/

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
Years ago, I read that the green-phosphor terminals were actually easier on people's eyes than modern PCs, because the color green causes less eye strain. That piqued my interest, and I started paying more attention to terminal colors and eye strain, and to color schemes that emulate classic terminals. Below are a few settings, developed through basic experimentation, that are generally very close to the original terminal colors.

Code:
DEC VT100 light gray:     #dddddd
DEC VT100 light blue:     #99ddff
DEC VT100 white:          #ffffff
Green phosphor terminal:  #33ff66
Amber phosphor terminal:  #ffff33

I'll explain each one of these:

  • DEC VT100 light gray: A basic light gray that is closer to the actual light gray of a real terminal than the gray-on-black presets in many terminal emulators.
  • DEC VT100 light blue: In many pictures and videos, VT100 screens show a light blue tint; this is the light gray with that tint.
  • DEC VT100 white: Due to backlighting, VT100 characters could look almost white, so a plain white may sometimes be preferable.
  • Green phosphor terminal: Very easy on the eyes and close to the old green screens. This is the setting I use every day.
  • Amber phosphor terminal: The amber variant -- not quite as easy on the eyes, but some people may prefer this style.

Of course, invocation is as simple as...

Code:
uxterm -bg black -fg '#33ff66' &

If you hate ANSI colors in your terminals (they are ugly and make everything unreadable!), then you can mostly turn them off with a few lines of shell:

Code:
# Claim to be an old DEC terminal; most programs won't emit color
# escape sequences for a terminal type that never supported them.
TERM=vt220 && export TERM
# Print and then drop any "ls" alias (e.g. a distro's "ls --color").
alias ls 2>/dev/null && unalias ls

For anyone who thinks it's pointless or silly: if you use a computer often, you should pay attention to readability and eye strain. I've found the green setting very comfortable and easier on my eyes than other schemes. As for the colors themselves, they are closer to the "real deal" than the presets that come with terminal emulators like Gnome Terminal or Konsole. Those presets are naive and don't actually match the historical colors. For example, gray-on-black does not match the light gray of a classic terminal, and green-on-black is often just any old green, chosen without any thought about whether it even looks like the old terminal it is presumably trying to emulate.

Attachment:
terminal_colors.png (Terminal colors)

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
It's true that some things are subjective, but certain matters of readability have an objective component. Some color schemes can cause eye strain, and certain color combinations have poor visibility.

For example, some "ls" listings may use navy blue on black, which is terribly unreadable. However, another entry shown in yellow on black does not have the same problem. Studies on eye strain have also shown that green is the most soothing color for the eyes.

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
bluecode wrote:
A real 3278 is a dream to code from. The visibility is tops and so is the keyboard.

One time I saw somebody writing COBOL in an emulator with colors set on and I almost needed to heave. I don't understand how people can tolerate the visual noise level, but I guess the new generation is used to colors and flashing lights etc. When I code in x3270 it is always set to work as much as possible in green screen mode. It's a nice emulator and works very well.

It looks very well made... You could probably bludgeon someone pretty easily with a keyboard like that. Nice green screen to boot. :)

Attachment:
ibm-3278.jpeg

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
Who cares what the brand name is? If it works, then it works. If it proves to be reliable, then great. I've had cheap D-Link wireless routers for the last 8 years at least, and they never fail or let me down. I never do any maintenance on them or need to reset them.

The same goes for a lot of PC hardware. I've seen many servers that are nothing but old business PCs running Linux, and they may have uptimes measured in years. They just keep working, so there is nothing to change. But I guess maybe the people who are interested in "big iron" need to justify it somehow. "Yes, I need hot-swap redundant power supplies in case one goes out on me and I can't post to Nekochan anymore!"

p.s. Nobody needs to "go away." We just need to be more civil with one another.
Debian GNU/Linux on a ThinkPad, running a simple setup with FVWM.
How many processors / cores do you have on your systems, and does this script successfully detect all of them?

Basically, the idea is a single cross-platform snippet that runs fast and works across Unixes, even AIX, HP-UX, or Solaris machines. The script below should be vanilla Bourne shell, portable to all of them. I would be very interested in whether it works on these platforms, and in what types of machines you guys are able to try it on. :)

Update: added basic Darwin support with hwprefs
Update: plain old psrinfo for Solaris processor detection (logical CPUs)
Update: added (maybe) better AIX support by checking for pmcycles and using that before attempting lsdev
Update: fixed IRIX processor detection, so it will get "Processor" in addition to "Processors"
Update: switched IRIX processor detection to use sysconf, for more reliable detection on mixed processor systems

Code:
#!/bin/sh

uname -a

# First, a sanity check that this is something Unix-like at all.
if [ -f /bin/cp ] && [ -f /bin/sh ]; then
    if [ -r /proc/cpuinfo ]; then
        # Linux (and anything else with a Linux-style procfs)
        grep -c ^processor /proc/cpuinfo
    elif [ -x /usr/bin/hwprefs ]; then
        # Darwin / Mac OS X
        /usr/bin/hwprefs cpu_count
    elif [ -x /usr/sbin/psrinfo ]; then
        # Solaris: count the logical CPUs that are online
        /usr/sbin/psrinfo | grep -c on-line
    elif [ -x /usr/sbin/ioscan ]; then
        # HP-UX
        /usr/sbin/ioscan -kC processor | grep -c processor
    elif [ -x /usr/sbin/pmcycles ]; then
        # AIX: one output line per logical processor
        /usr/sbin/pmcycles -m | grep -c .
    elif [ -x /usr/sbin/lsdev ]; then
        # AIX fallback: count available processor devices
        /usr/sbin/lsdev -Cc processor -S 1 | grep -c .
    elif [ -f /sbin/hinv ] && [ -x /usr/sbin/sysconf ]; then
        # IRIX: processors currently online
        /usr/sbin/sysconf NPROC_ONLN
    elif [ -x /usr/sbin/sysctl ]; then
        # BSD variants
        /usr/sbin/sysctl -n hw.ncpu
    elif [ -x /sbin/sysctl ]; then
        # BSD variants, alternate location
        /sbin/sysctl -n hw.ncpu
    else
        echo 'Error: unknown platform!' 1>&2
        exit 1
    fi
else
    echo 'Unix without the Unix?' 1>&2
    exit 1
fi

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
Cool, many thanks. I've updated the code to include detection for Darwin. :)

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
ClassicHasClass wrote:
Code:
% sh testcpu
AIX uppsala 1 6 000C3N50R3D
2


This is, technically, correct -- there are two physical cores. However, AIX recognizes four logical CPUs because of SMT:

Code:
% iostat

System configuration: lcpu=4 drives=2 paths=1 vdisks=0

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait
          0.1         45.0                1.5   0.4   97.9      0.3

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk0           0.9      85.5       3.5   366793833  542291156
cd1              0.1       0.0       0.0          0         0
% vmstat

System configuration: lcpu=4 mem=7712MB

kthr    memory              page              faults        cpu
----- ----------- ------------------------ ------------ -----------
r  b   avm   fre  re  pi  po  fr   sr  cy  in   sy  cs us sy id wa
1  1 649184 49871   0   0   0   8   17   0  40  580 185  1  0 98  0


As for Tiger on PowerPC (quad processor ist demonstrandum),

Code:
% sh testcpu
Darwin bruce 8.11.0 Darwin Kernel Version 8.11.0: Wed Oct 10 18:26:00 PDT 2007; root:xnu-792.24.17~1/RELEASE_PPC Power Macintosh powerpc
4


I see kokoboi beat me to NetBSD, but it worked on my mac68k install also.


Very cool, it's nice to see that it runs fine on AIX. I'm just trying to detect the multiprocessing capability: how many processes can run simultaneously on the system? So if I have a big CPU-intensive task that needs to be split up, how many processes would be optimal? If too many processes are created, that wastes memory and can be slower; if too few, the task isn't taking advantage of all the processor cores available.

I guess a similar script could be made to detect SMT capability. Some CPUs have specific technologies for running more threads at a time, and some operating systems can allocate a certain number of "virtual processors" for executing threads.
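Just to sketch the idea: on Linux, one could compare the logical CPU count against the number of unique physical cores listed in /proc/cpuinfo (on Solaris, "psrinfo -p" reports the physical count). This is only a rough, untested sketch that assumes the "physical id" and "core id" fields are present:

Code:
#!/bin/sh
# Rough sketch: estimate the SMT factor on Linux by comparing logical
# CPUs with unique (physical id, core id) pairs in /proc/cpuinfo.
logical=`grep -c '^processor' /proc/cpuinfo`
physical=`awk -F: '/^physical id/ {p=$2} /^core id/ {print p, $2}' /proc/cpuinfo | sort -u | wc -l`
echo "logical CPUs: $logical, physical cores: $physical"
[ "$physical" -gt 0 ] && echo "SMT factor: `expr $logical / $physical`"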

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
thegoldbug wrote:
FYI

On my Fuel all I'm getting is the output from the uname command.

Yes, I know my Fuel only has 1 cpu but I have a couple of dual cpus in my Octanes.

Hmm... even if it only has 1 CPU, it should still report the 1 CPU...

Would you be able to post the output of the "hinv" and "hinv -c processor" commands?

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
ClassicHasClass wrote:
Quote:
Any particular reason you named your machine "uppsala"? (my hometown)


I name my servers after Scandinavian cities and there's a whole long story behind that, but tl;dr my AIX boxen are generally all "from Sweden" and uppsala was the successor to stockholm, my Apple Network Server 500, which is now a backup box and will eventually be turned into an overgrown workstation in a place of honour for its many years of service to me.

Quote:
Very cool, it's nice to see that it runs fine on AIX. I'm just trying to detect the multiprocessing capability. For example, how many processes can run simultaneously on the system? So if I have a big CPU-intensive task that needs to be split up, how many processes would be optimal?


On this two-way POWER6, it can run four execution threads simultaneously. This will be important to account for as IBM doubles down on this strategy for POWER8.


Good point, I guess I hadn't considered chip-level technologies where the OS actually sees two or more logical processors per physical core. I've switched the Solaris code to plain psrinfo, so it reports virtual processors rather than physical cores. I've also added an entry for AIX that uses the pmcycles command, if available, to find the number of logical processors.

Unfortunately I only have an x86-64 laptop with Linux here, along with VMs for FreeBSD, OpenBSD, NetBSD, DragonFly, and Illumian. The script seems to run fine on those platforms, but I'm more concerned with basic support for platforms that I don't have access to (HP-UX, AIX, IRIX, etc.).

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
thegoldbug wrote:
the output from the "hinv -c processor" command

IRIS 46# hinv -c processor
1 600 MHZ IP35 Processor
CPU: MIPS R14000 Processor Chip Revision: 2.4
FPU: MIPS R14010 Floating Point Chip Revision: 2.4
IRIS 47#


Ah, I was foolishly looking for only the plural form "Processors" rather than the singular "Processor" ... Fixed. :)

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
hamei wrote:
jwp wrote:
Ah, I was foolishly looking for only the plural form "Processors" rather than the singular "Processor" ... Fixed. :)

You realize this entire exercise is totally wrong, I hope ? It is not the application's job to determine how many processors are available or how to assign resources. That's what the operating system's scheduler is for.

Stand back and let the o.s. do its job. All you are doing with this shit is making a mess.


Your comments presume that one program only ever needs one process. Most programs do need only one, but for things like CPU-intensive programs, build tools, server programs, and libraries, knowing the resources available is important for multiprocessing. Otherwise, how many processes should the task be divided into? There are basically three choices: (1) do one thing at a time, (2) use some magic number conjured out of thin air, or (3) spawn a very large number of processes (potentially wasting a lot of time and memory).

There are many very useful programs that do need to know how to divide up work so they can fork the right number of processes. For example, pbzip2 (parallel bzip2) and pigz (parallel gzip) can run an order of magnitude faster than plain bzip2 and gzip, respectively. Likewise, GNU Make can take advantage of parallelism when building a project, and the same goes for GNU Parallel, which splits work into subprocesses and runs tasks in parallel. There are also libraries meant specifically for parallelism, and they too need to detect the number of processors.
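As a concrete (hypothetical) illustration of how the number gets used, suppose the detection script from earlier in the thread were saved as "ncpus" -- the count it prints is exactly the number these tools want:

Code:
N=`./ncpus | tail -1`      # last line of output, after the uname line
make -j "$N"               # run that many compile jobs at once
tar cf - src | pigz -p "$N" > src.tar.gz   # that many compression threads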

I was reading through the code for GNU Parallel, and its code for detecting the number of processors was lacking, to say the least. If I remember correctly, it basically just works for Linux, FreeBSD, Solaris, AIX, and Darwin. Libraries and utilities specifically meant to execute tasks in parallel (and therefore to take the low-level detection out of the application itself) should have better processor detection that also covers IRIX, HP-UX, NetBSD, OpenBSD, etc.

I want to create a small reference implementation so that developers writing this kind of software have something to refer to; the script itself may not be so useful on its own. Without some reference, few people have access to many of these commercial Unix systems, and they may even leave out support for OpenBSD, NetBSD, etc.

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
hamei wrote:
Quote:
knowing about the resources available is important for multiprocessing.

For general computing and desktop use, no it is not

Quote:
Otherwise, how many processes should the task be divided into? There are basically a few choices: (1) do one thing at a time, (2) use some magic number conjured out of thin air, or (3) make a very large number of processes (potentially wasting a lot of time and memory).

Jesus. The task should be divided up according to what needs to get done.

Using Bluecode's example of an email client :

Open the app with one thread, immediately spin off another thread to actually draw the windows, another to do the work to fill them. Thread One is listener thread for user input. User decides to collect mail, clicks [email protected] and one new thread created to collect those emails. User has two more accounts, clicks [email protected] and tommy@the_grill.org . One worker thread each. Three sets of mails being collected (seemingly) simultaneously, user interface still responsive. Mr User can even browse the main window and open last week's mail (another thread) or if he likes, open another window and write a happy birthday letter to his Mom (another thread.)

These threads and/or processes are all dependent on what he needs to do , not some peculiar calculation based on how many cores are available.

Right, but this is a trivial example in which the number of tasks to be done is something small like three, and the whole thing is I/O-bound.

hamei wrote:
For general use, the number of cores available is not a factor.

Multiprocessing within a program is often for things that are not simple "general use."

For example, I have a SQLite database that is several gigabytes in size. When I want to run the backup, I use pbzip2 (parallel bzip2) which works several times faster than normal bzip2 on my computer. The database dump is a single text stream sent through a pipe, so the alternative is just to use normal bzip2 and wait for the whole task to complete, using a fraction of the computer's power. Logically it is only one task, but splitting it up into more tasks makes sense because it is CPU-bound.
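The whole backup is just a pipeline, something along these lines (the database name is made up, and pbzip2's -c keeps it reading stdin and writing stdout):

Code:
sqlite3 mydb.sqlite .dump | pbzip2 -c -9 > backup.sql.bz2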

Another example: I have a CPU-bound program that does a lot of string operations (some 100 million, across around 10,000 files). If I simply started one process per unit of work, that would mean 10,000 processes sitting in memory, which would drive performance into the ground, if it were even possible at all. If I just guessed and divided the work into 4 processes, it would waste time on a machine with 16 processors.

This is somewhat alleviated by GNU Parallel, like:

Code:
$ find ~/some_files -type f | parallel -t bzip2 -9 {}

But if the cost of starting the program (just an example) is a significant part of the total execution time, then it may be more efficient to handle the multiprocessing inside the program itself, as long as that is convenient to do. Languages like Python and Ruby can make this more convenient than it has historically been in C.

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
hamei wrote:
Quote:
For example, I have a SQLite database that is several gigabytes in size. When I want to run the backup, I use pbzip2 (parallel bzip2) which works several times faster than normal bzip2 on my computer. The database dump is a single text stream sent through a pipe, so the alternative is just to use normal bzip2 and wait for the whole task to complete, using a fraction of the computer's power. Logically it is only one task, but splitting it up into more tasks makes sense because it is CPU-bound.

jwp, you're scary. Did you know that you don't have a clue ? I hope you are not in the "IT" world but fear that you probably are ?

I care about performance because a lot of what I do is CPU-bound. It's just like waiting for some big rendering job because your video card isn't powerful enough. I'm not happy using just 25% of the available CPU power, needlessly waiting for some big task to complete (and the same goes for many other people). If you just write documents and surf the Web, then of course it's silly and useless to talk about these things, but not everyone is like that. And for the rest of us, what's the use of a computer if you feel like you have your hands tied behind your back?

Unix doesn't have to be crippleware. We can actually use the multiprocessing capabilities that are a native part of the operating system. People have been doing things like this for decades using fork(2), wait(2), exec(2), etc. Internally, all of these multiprocessing libraries and utilities are pretty much just calling fork and managing the child processes, which is a normal part of Unix programming. The only new thing is that libraries and utilities now provide convenient interfaces for these features.
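Even plain shell can do a crude version of this with "&" (which forks) and the wait builtin. A minimal sketch, assuming a POSIX shell, with a hard-coded CPU count standing in for the detection script:

Code:
#!/bin/sh
NCPUS=4                        # in practice, from the detection script
i=0
for f in *.log; do
    bzip2 -9 "$f" &            # fork one worker per file
    i=`expr $i + 1`
    if [ "$i" -ge "$NCPUS" ]; then
        wait                   # let the current batch of workers finish
        i=0
    fi
done
wait                           # catch any stragglers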

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
ShadeOfBlue wrote:
jwp : on IRIX you can also use "sysconf NPROC_ONLN" to get the number of CPUs currently online. This way you don't have to parse the hinv output, so it's a little bit faster and will now also work on mixed-CPU Origins, where hinv reports processors as:
Code:
Processor 0: 500 MHZ IP35
CPU: MIPS R14000 Processor Chip Revision: 1.4
FPU: MIPS R14010 Floating Point Chip Revision: 1.4
Processor 1: 500 MHZ IP35
CPU: MIPS R14000 Processor Chip Revision: 1.4
FPU: MIPS R14010 Floating Point Chip Revision: 1.4
[...]
Processor 14: 400 MHZ IP35
CPU: MIPS R12000 Processor Chip Revision: 3.5
FPU: MIPS R12010 Floating Point Chip Revision: 3.5
Processor 15: 400 MHZ IP35
CPU: MIPS R12000 Processor Chip Revision: 3.5
FPU: MIPS R12010 Floating Point Chip Revision: 3.5
[...]

Thank you, ShadeOfBlue, I have updated the script accordingly. I trust that this sysconf approach will work for older systems as well? I would guess so, since it's something from the System V heritage...

chicaneuk wrote:
Output from my Intel Core i5 based Hackintosh running OSX 10.8.4:

Code:
Darwin octane.local 12.4.0 Darwin Kernel Version 12.4.0: Wed May  1 17:57:12 PDT 2013; root:xnu-2050.24.15~1/RELEASE_X86_64 x86_64

Ah, interesting. Would you be able to post the output of the following script?

Code:
#!/bin/bash -v
whereis hwprefs
whereis sysctl
hwprefs thread_count
hwprefs cpu_count
sysctl -n hw.ncpu

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
chicaneuk wrote:
Yep :)

Code:
Darwin octane.local 12.4.0 Darwin Kernel Version 12.4.0: Wed May  1 17:57:12 PDT 2013; root:xnu-2050.24.15~1/RELEASE_X86_64 x86_64
4


Forgot to say that the '4' on the second line did also appear on my previous output - I just didn't include it for some reason!

Ah, great, just a copy-and-paste issue.

It seems that the heuristic approach has proved fruitful. I just tried the script in Minix 3, and it worked perfectly (another "/proc/cpuinfo" instance).

From my end, I've tested it on Linux, DragonFly BSD, FreeBSD, NetBSD, OpenBSD, Illumian, OpenIndiana, and a few others. :)

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
ajw99uk wrote:
Adding RISC iX to your list: check for "RISC iX" or "RISCiX" in the output of uname, and return one core (which would be an ARM2 or ARM3), should do the trick - but would be pretty pointless!

That said, there are now some dual/quad core ARMs around (but I expect your Linux method would apply there), while the mid-90's RiscPC Hydra system could have five ARM610s or ARM710s on a daughterboard (and run NetBSD).

How complete would you like this list to be?!?


Wow, I didn't know about RISC iX before -- it's very interesting to see an mwm look and feel on a green screen! Fascinating how many variants Unix has spawned over the decades... :)

http://en.wikipedia.org/wiki/RISC_iX

As for how complete the list should be, I'm mostly interested in the Unix and Unix-like systems that have seen some degree of widespread use in the last 10-15 years. I think we currently have the bases covered with the big ones (but someone correct me if I'm wrong).

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
Attachment:
aix_mwm_003.png

Frightening that my X11 setup has the same basic look & feel as AIX did 20+ years ago. :?

Still, it looks better than OS X and whatever glass crap Windows looks like these days.

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
An RS/6000 workstation running Unix (probably $20,000 originally), reduced to a PC running Windows NT? :cry:
Debian GNU/Linux on a ThinkPad, running a simple setup with FVWM.
tingo wrote:
jwp wrote:
Part of me wishes, though, that they had continued XFCE as a CDE clone... From what I've seen in the newer versions, it's not possible to create the same type of "drawers" that CDE uses in its dock.

Well, CDE is free software since August 6th, 2012...

Yes, but by now the CDE code base is very dated and lacks things like Unicode support. It will be a fine addition, especially once it makes its way into distro repositories (I've compiled and installed it from source), but it's still about 10-15 years behind the curve. Don't get me wrong, CDE is still a great desktop... I can't deny my love for that industrial look with its orange, gray, and teal. It will become even better once the bugs are hammered out and Xinerama is fully supported.

XFCE could have been a modern version of CDE, but instead it became more like lightweight GNOME.

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
hamei wrote:
jwp wrote:
Yes, but by now, the code base of CDE is very dated and lacks things like Unicode support.

Who cares ? Unicode unicode unicode ... who gives a shit ? How many people run more than one language on their computer ? (Besides me.) What good is it except for clueless web developers who take one look at your ip then "helpfully" change the language on you ? Fucking nitwits.

Thanks so much, morons. I really wanted my page to show up in Assyrian cuneiform. I swear to God, if I could lure all the world's web developers into one room, I'd turn on the fire sprinklers and drown them all.

Well, I care, because I'm constantly doing multilingual work (yes, every day, in terminals, text editors, and browser windows). In fact, most of the work I do on a computer is multilingual. For me, not having a Unicode-capable terminal is unacceptable, and not having a Unicode-aware text editor is even more so. When setting up CDE, the first things I have to do are install replacements for the terminal and text editor, along with fonts for them to use. Unicode just provides a standard multilingual encoding framework (a set of code points for every language and glyph).

The world has moved on since 1995, and supporting "code pages" that take about 10 convoluted steps to set up is crude, primitive, and low-tech -- a remnant of the bad old days. Supporting Unicode is actually much simpler than supporting a pile of language-specific, mutually incompatible encodings.
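On a modern Unix, the whole "setup" is basically one environment variable (assuming a UTF-8 locale is installed on the system):

Code:
$ locale                            # inspect the current settings
$ LANG=en_US.UTF-8; export LANG     # switch everything to UTF-8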

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
hamei wrote:
jwp wrote:
When setting up CDE, the first things that I have to do are to install replacements for the terminal and text editor, and fonts for them to use.

That really is a terrible imposition, I agree ... If I'd only known you had to install your own editor and terminal, I'd have realized that CDE was worthless.

Quote:
The world has moved on since 1995 ...

Absolutely. We used to be men ...
Attachment:
user_interface.jpg

And what's the purpose of using a desktop environment if the utilities that come with it are useless? In that case, I might as well use a simple window manager instead. As I've said, Unix doesn't need to be crippleware. Yet here you are again, advocating inferior technology while conceding that workarounds are needed.

Yeah, the world was really great back in the 90s when people relied on dozens of incompatible encodings, nearly all of which were inadequate for their respective languages, right? Or maybe we should just go back to ASCII and hide our heads in the sand.

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
hamei wrote:
Finally had a minute ...
ShadeOfBlue wrote:
This is partially true. It works great for IO-bound tasks, but for CPU-bound tasks it's different.

Been thinking about this a while and came to the conclusion that you have been misled, no, it is not different.

You're looking at this from the wrong point of view. I am only speaking of the situation as it pertains to workstations .

In the case of a workstation, the top priority is user responsiveness. Not ultimate performance. Not least power consumption, not the best benchmarks on a single type of task; the overriding primary requirement is responsiveness to user input. I guess this point is somewhat lost on people who have only used Unix or Windows or OS X. But if you had ever used a system that truly prioritized the user, you would never be happy with the rest of this crap again.

Irix is not very good at this. I am quite disappointed in the performance of a dual processor O350. If you keep gr_osview open on the desktop you will see that one p is almost always more heavily loaded than the other. This is poor quality software. OS/2 does not do that. BeOS does not do that.

In practice, it does make a difference. With the Fuel, I would get situations when (for example) a swiped bit of text would not middle mouse down into nedit instantly. The processor was busy with something else. I was looking forward to going two-up ... but what do I find ? 2 p does the same thing. Irix is not well-designed (from a workstation point of view). This should never ever ever happen. I am god here and the damned computer better understand that.

Your example was also not so great for the following reason - a computer cannot spawn thirty threads simultaneously. If it spawns thirty threads, they will be spaced out at some time interval - let's say 3 milliseconds just for fun. During those three milliseconds a lot of other events will be occurring as well. And then the next thread, another three milliseconds. The rendering program does not own the entire computer, ever, and you can't spawn multiple threads simultaneously ever, either. So your example doesn't really hold true in practice.

But more to the point, it ignores the prime consideration for a workstation : PAY ATTENTION TO THE USER. If I am writing an email while some app is rendering a 3840x2400 graphic in the background, I really don't give a shit if it takes .25 seconds longer or even 25 seconds longer. What I care about is that the edits in my email do not get hung up even for 1/10th of a second. What I am currently doing is what counts, not the stupid background render job.

[...]

Which brings me back to this project ... I am all for people writing software. Really. Even if it's something I could care less about, it's still good. Without software computers are not useful. And this is kind of a fun little utility, it could be nifty to know how many cores are in a box or how many hyper-threaded pseudo-processors or whatever. But as a means to make multi-threaded software work better, it's tits on a boar. Developing more poorly-thought-out, badly coded untested crappy gnu libraries is the wrong approach to getting the most out of multiple processors. The programs have to be written with the task in mind, not just have some cool modern 'parallel library' tacked on to a junky badly-structured application. Junky single-threaded, junky multi-threaded, who cares ? Junk is junk.

Hi Hamei, I'm not saying that knowing the number of processors is useful for the average application. I'm saying it's one piece of information that may be necessary for someone writing a special kind of library or application that uses multiprocessing. It's not for the typical program, but for a very CPU-intensive task that takes a long time to complete. Most people don't have these types of problems; their programs take a second to complete and are typically I/O-bound or network-bound.

The reason for using all available processors is not that the program would complete in 1 second instead of 2 -- the idea is that the job is significant enough that one processor can't get it done in a reasonable amount of time. For example, a job that might take 90 minutes with one process can potentially complete in 30 minutes with four processes. Of course, not every concurrency problem can be solved with simple approaches like this, but for some it's an elegant solution. It's important to use the right tool for the job, and to know when optimization at this level is and isn't necessary.

Without multiprocessing, most of the machine would sit idle and my time would be wasted (to me, that's the truly bad user experience -- waiting around for an extra hour). As it is, even with the program pushing the CPU four ways to 100%, the system is still responsive. If there were a problem, I would just "nice" the program down to a low scheduler priority. I've found, though, that the Linux scheduler handles everything fine, and the only difference I notice is the noise of the fans when they kick in. Basically, the processors are there to be used, not to sit idle, and the kernel's scheduler should decide how to divide CPU time between the processes.
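For reference, demoting a job is a one-liner either way (the PID here is made up):

Code:
$ nice -n 19 pbzip2 -9 dump.sql    # start it at the lowest priority
$ renice 19 -p 12345               # or demote one that's already running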

In the case of IRIX, it sounds like the scheduler isn't being very fair about how much CPU time each process gets. In that case, the process scheduling algorithm is basically the problem. In the Linux 2.6 era, there was a scheduler rewrite to fix similar clashes between background processes and user applications: http://en.wikipedia.org/wiki/Completely_Fair_Scheduler . As an anecdote, it appears there was a big stink about it from some guy at Red Hat, to which Linus replied, "Numbers talk, bullshit walks. The numbers have been quoted. The clear interactive behavior has been seen."

Attachment:
htop.png

As for the relative strengths and weaknesses of open source, there are good things and bad things. Certainly some development efforts like the Linux kernel, Perl, Python, Ruby, MySQL, Apache, etc. have been very successful. Some of it was famous and widely used before Linux was even around, like Perl, Emacs, the GNU toolchain, BSD Unix, etc. There's also a lot of crap code out there that is unprofessional, poorly conceived, and poorly tested. Taken as a whole, it's a big mixed bag, but that's the nature of having thousands of open development projects.

I do think it's much healthier than the era of 1990s commercial workstations, though, which were enjoyed by the few and privileged (while the vast majority were stuck on toy operating systems like DOS and Windows). For me, the earlier era of timesharing and community development at Bell Labs and UCB is like an ideal, and I think that open source projects are closer to that ideal than the approaches taken later at HP, Sun, IBM, etc.

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
Hamei, if you want to install Linux on your SGI workstations, I may be able to help. ;)

Unix is primitive, to be sure (it was designed from the bottom up), but its architecture has been proven and refined over a few decades now. It's familiar, flexible, and scalable. Part of its consistency is that it doesn't care what type of process or file it is dealing with: a process is just a process, and a file is just a bag of bytes. The mere fact that it can run on phones or supercomputers, and that it is widely used in the "big" systems sold by IBM, HP, Oracle, etc., is proof enough that it is still widely regarded as a "serious" operating system.

If I want a nice workstation experience, I will just install XFCE and be done with it. It's not perfect, but no desktop experience ever is. I have no problem trading the extra desktop polish for nice software management (APT) and all the command line goodies.

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
A simple kludge to fix the problem for Hamei would be a cron job, running as root, that renices any processes owned by user "hamei" still at priority 0 to a negative nice value like -10. That would give them higher priority than the system processes.

Maybe a line like this in /etc/crontab ...

Code:
* * * * * root /etc/renice -10 -u hamei >/dev/null 2>&1

That should renice all processes owned by user "hamei" to a high priority level, once a minute. For any significant interactive work, his processes would then be guaranteed to take priority over other things on the system. The command takes essentially no time to complete, so it should never hurt system performance.

An alternative would be a script that identifies just certain GUI programs running as hamei and renices those -- that is, if he wants to treat GUI work differently from terminal stuff. Writing a script like that shouldn't take more than a few minutes. Here's an idea....

Code:
#!/bin/sh

# Bump selected interactive programs to a higher scheduler priority.
bring_hamei_sanity () {
    for prog in "$@"; do
        pgrep "$prog" | while read pid; do
            renice -10 "$pid" >/dev/null 2>&1
        done
    done
}

bring_hamei_sanity firefox nedit maya gimp

Of course, there may be more elegant or efficient ways to do this than the example code... but that's the idea. It would have to run as root, because only root is allowed to renice programs to a higher priority (a negative number). It could be scheduled through cron in the same way.
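Assuming the script were saved somewhere like /usr/local/sbin/renice_gui (a made-up path), the crontab entry would look just like the earlier one:

Code:
* * * * * root /usr/local/sbin/renice_gui >/dev/null 2>&1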

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
ShadeOfBlue wrote:
jwp wrote:
That should renice all the processes owned by user "hamei" every minute to a high priority level. That means that for any significant interactive work, the process is guaranteed to take higher priority than other things on the system. The command basically takes no time to complete, so it should never affect system performance in any negative way.

This will also renice any compile jobs and other background tasks that are running under that username, so it will be worse than before :)

I think it's still best to use "nice -n 20 [command]" (or "nice +20 [command]" in csh/tcsh) to run any jobs which are known to cause heavy loads. They will then run only when the system is idle and should greatly improve interactive performance.

It depends on the most reliable way to select the processes to modify. If it's a "Hamei vs. system processes" issue, then doing it by user ID will work fine. But if it's Hamei himself running processes that hog resources from his own work, that's a different issue. If he knows that some program will do intensive work, he can just run it with nice, and that's a pretty good fix. If he runs the program from an icon or shortcut, he could even modify the invocation to include "nice". I'm not sure what the exact situation is, though.

If he wants to run certain programs at a higher priority, though, normal users can't do that themselves, so root would have to make the adjustment. That's what the second example does with the little script.

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.
On the subject of response time, I just want to add that I can't imagine Ken Thompson and Dennis Ritchie working on Unix with ed as the text editor, printing out to a teletype. Yet they wrote the basic operating system this way, and ed remained more or less the "standard editor" until vi came along (and even longer at Bell Labs). It seems they were working on a totally different level. These days everyone uses big LCD monitors and multiple terminal windows, but who would claim to be more productive than Ken Thompson was with ed?

_________________
Debian GNU/Linux on a ThinkPad, running a simple setup with Fvwm.

Code:

$ uname -a
Linux ux004 3.2.0-4-amd64 #1 SMP Debian 3.2.51-1 x86_64 GNU/Linux
$ uptime
21:33:10 up 84 days,  5:38,  8 users,  load average: 0.27, 0.32, 0.34

It's not very long, but it's not bad for a laptop used every day. The last time it went down was when I forgot to plug the power cable back in. :)
Debian GNU/Linux on a ThinkPad, running a simple setup with FVWM.

Code:

$ alias uptime='uptime|sed -e "s/ days/000 days/"'
$ uptime
21:53:01 up 84000 days,  5:58,  7 users,  load average: 0.06, 0.18, 0.29

8-)
Debian GNU/Linux on a ThinkPad, running a simple setup with FVWM.
ClassicHasClass wrote: mens526? Sounds like a total sausage fest.

Hey vishnu, I ssh'd into your Sun machine, and it's pretty cool. When I logged in, though, the motd was....

Code:

\\\\\\\
MENS     / \\\\\     MENS
ONLY    /##-- \\     ONLY
|_   ) |
(_     |
______\    \______
/ 526 ` ,     526  \
/                    \
'    /   MENS          \
^    | o       o \     |
|   |\            |    |
|  |  \           |  '|
| |    \         /|  |
| |     \  .    / | |
|__\     \_,,,_/  |__\
/ (   \
/ `._.-'\
/   /\   /
/   / /  /
/   / /  /
--'   /  p   ---,
`-----''   -------4
Debian GNU/Linux on a ThinkPad, running a simple setup with FVWM.
No problems here. Microsoft should drop support more often.

I'm waiting in amusement to see what happens in China where pirated XP is still a huge thing.
Debian GNU/Linux on a ThinkPad, running a simple setup with FVWM.
If you are going to assume it's systemd that is slowing down your system, you might as well make your decisions with dice or by sacrificing chickens. If you don't want to base your choices on superstition, then examine and monitor the system's performance and see for yourself what is causing the problems.

Use htop and iotop to see what is gobbling up CPU time, memory, and I/O. Is there swapping going on? How much? Do you have enough swap space, or too much? If you are swapping, how is your hard drive performing? Try installing and running FVWM, and see if the sluggishness persists. Try to get some perspective on where the performance is going, and narrow down the possibilities.
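A few starting points, assuming the usual htop/iotop/procps tools are installed:

Code:
$ htop          # per-process CPU and memory, interactively
# iotop -o      # only the processes actually doing I/O (needs root)
$ vmstat 5      # watch the si/so columns for swap activity
$ free -m       # total memory and swap usage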

The basic thing, though, is that it shouldn't be a matter of ripping out an entire operating system just because your computer seems "slow." Look into the matter and see where the problem is coming from.
Debian GNU/Linux on a ThinkPad, running a simple setup with FVWM.
Web browser: Firefox
Email: Thunderbird
Music player: Audacious
Video player: VLC
Image editor: GIMP
Office: LibreOffice

These are the normal everyday GUI applications that most people would use. If someone needs to do CAD, 3D modeling, etc., then their requirements may be different. Since I use my system for normal desktop stuff, programming, and TeX, I'm fine... For me, the "Year of the Linux Desktop" happened over 10 years ago. There will never be a moment when everything is perfect and everyone can switch over without a single hitch. Nobody is going to roll out the red carpet with perfect binary compatibility, so whether to make the leap is a personal decision.

I do think that for most normal tasks, the basic applications needed are already there. Besides those, most applications that people use are on the Web. When my parents were switched over to Ubuntu, they barely noticed the difference. "How do we get to Google? Oh, it's still the Firefox icon..."
Debian GNU/Linux on a ThinkPad, running a simple setup with FVWM.
You can play with this sort of thing using Ettercap, I believe. As a young lad, I used that program to do ARP poisoning on my dormitory network, in both directions, so that everything going to or from the gateway was routed through my own machine. Then I could turn services on or off by blocking certain ports, or inspect any of the network traffic. Occasionally I would try blocking ports, and then hear complaints from down the hall that AIM was blocked, or email was blocked. It was amazing how simple it was to reroute all this traffic, although I definitely don't remember the details of exactly how to use the program.

ARP poisoning basically works by broadcasting forged ARP information: "Hey, everyone, 192.168.1.15 is at MAC address such-and-such." For DNS, the man-in-the-middle would simply rewrite the DNS responses, I believe. These are pretty simple protocols, so they are probably easy for the GFW to fake and manipulate.
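If memory serves, the classic text-mode Ettercap invocation was something like the line below -- but I'm reciting the flags and target syntax from memory, and they vary between versions, so treat it as a sketch and check the man page (the addresses are made up):

Code:
# ettercap -T -M arp:remote /192.168.1.1// /192.168.1.15//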
Debian GNU/Linux on a ThinkPad, running a simple setup with FVWM.
Everyone has criticized the Bourne syntax and its ambiguity for the last 30 years, and now I guess the chickens are coming home to roost. It doesn't help that Bash is more complex and adds numerous features (it's basically a superset of ksh88). Fortunately, BSD and Debian-derived systems are mostly safe, since "/bin/sh" is not Bash on those systems.
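For anyone who wants to check a system, the test one-liner that has been making the rounds is simple enough; a patched Bash prints only "hello":

Code:
$ env x='() { :;}; echo vulnerable' bash -c 'echo hello'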

Updating is easy and only takes a few seconds, but it's unfortunate that it has to happen at all. I wouldn't be sad if Linux distros just replaced Bash with mksh as the standard shell (or upgraded to "rc"?). Really, the features of ksh88 were always good enough; we don't need SSH host autocompletion and other silly things. Unfortunately, part of the GNU strategy in the 1980s was to extend the Unix programs with more features so that everyone would want the "super" versions. Some of the improvements were good, like removing artificial limits and using more efficient algorithms, but piling on features led to bloat.

Edsger Dijkstra wrote: How do we convince people that in programming simplicity and clarity —in short: what mathematicians call "elegance"— are not a dispensable luxury, but a crucial matter that decides between success and failure?

Edsger Dijkstra wrote: Simplicity is a great virtue but it requires hard work to achieve it and education to appreciate it. And to make matters worse: complexity sells better.

On Debian 7:

Code:

$ ls -l /bin/{bash,dash,ksh93,mksh} /usr/bin/rc
-rwxr-xr-x 1 root root  975488 Sep 25 14:49 /bin/bash
-rwxr-xr-x 1 root root  106920 Mar  1  2012 /bin/dash
-rwxr-xr-x 1 root root 1489008 Jan  2  2013 /bin/ksh93
-rwxr-xr-x 1 root root  293648 Feb 15  2013 /bin/mksh
-rwxr-xr-x 1 root root   89720 Feb 24  2012 /usr/bin/rc
$ ls -l /bin/sh
lrwxrwxrwx 1 root root 4 Mar  1  2012 /bin/sh -> dash
Debian GNU/Linux on a ThinkPad, running a simple setup with FVWM.