The collected works of jwp - Page 3

porter wrote: I am absolutely astounded that the authors of bash thought it a neat idea to

(a) export functions via environment variables
(b) execute contents of any environment variable with the script parser/handler

It's like somebody shooting themselves in the head with every revolver they find to see if they are loaded.

Plonkers!

Part of the problem is that Bash is just too complex. The design of the Bourne shell was convoluted enough, and then they added on so many "special features." I'm glad that my "/bin/sh" is "/bin/dash"; I will use Bash only for custom shell scripts that use Bash features.

Actually, some of the extra features in Bash are useful, like in-process testing with "[[ ]]" and in-process arithmetic with "let". By switching over to Bash features, some of the programs I've written have become much more efficient. These are all available in ksh88 and mksh, though.
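
For example, a quick sketch of both features (the variable names are just for illustration). Because "[[ ]]" and "let" are shell builtins, no external test(1) or expr(1) process gets forked:

```shell
#!/usr/bin/env bash
# In-process test and arithmetic (Bash; also in ksh88 and mksh).

name="report.txt"    # hypothetical example value

# Pattern-matching test, done entirely inside the shell:
if [[ $name == *.txt ]]; then
    echo "text file"
fi

# In-process arithmetic with let:
let count=2+3
echo $count        # prints 5

let count+=5
echo $count        # prints 10
```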

When a system relies on one component so much, that component has to be simple, safe, and sturdy. Even aside from this Shellshock vulnerability, Bash is very questionable for the role of "/bin/sh". It's too complex.
Debian GNU/Linux on a ThinkPad, running a simple setup with FVWM.
robespierre wrote: Yes, we already know that you favor a "See Figure 1" approach to system usability. You really don't need to say it in every post.

I must say it in every post! :shock:
Debian GNU/Linux on a ThinkPad, running a simple setup with FVWM.
theinonen wrote: Real question is if there is anything on Linux that is not available for Windows first? Otherwise there is no real reason to use Linux at all and might just as well use Windows as Windows versions are most likely better in every way anyway.

Linux is good for browsing the web, and a lot of things need to happen first before it actually gets more useful for general purpose computing.

Most servers, supercomputers, phones, and tablets run either some version of Linux or some version of BSD / Darwin. There are so many engineers, scientists, animators, developers, and mathematicians using Linux that it's huge for that crowd too. Lots of software support in those areas. Big business software has mostly moved to Java, or to the Web, and big companies like IBM and Oracle are pushing Unix and Linux (Oracle Linux, Solaris, AIX). Flashy new Apple computers? Just BSD derivatives. Chromebooks? Just Linux derivatives. Almost everything has been quietly moving toward Unix and Linux over the last 15 years.

Basically, MS Windows is around for compatibility reasons on x86 PCs (MS Office). At some point in the future, computing will change enough that Microsoft will become irrelevant. Then everyone will sail away into the future with Plan 9...
8-)
Debian GNU/Linux on a ThinkPad, running a simple setup with FVWM.
Pretty early on I read the essay "Csh Programming Considered Harmful," so I never bothered learning csh or tcsh. Since I was using Linux, bash was typically the default shell, and it seemed to handle interactive editing and scripting pretty well.

Later when I was doing a lot of shell scripting on HP-UX, I used ksh88, and I really liked that as well. It has the important things that bash has, but without the bloat. When I was using it, though, there were some really annoying compatibility issues between ksh88 and pdksh. With ksh88, the following script prints "1", and on pdksh and mksh, it prints "0". Bash also prints "0".

Code:

x=0
echo onetime | while read line; do
    x=1
done
echo $x

It's something stupid related to pipes and processes. The programmer needs to know that anything happening in the body of a loop is happening in another context -- but only when something is being piped into the loop. Why this behavior is reasonable, I have no idea. ksh88 handles it just fine, and did so decades ago. I don't know why these other shells like pdksh, mksh, and bash put the burden of remembering arcane details like this onto the programmer.
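
As an aside, newer versions of Bash (4.2 and later, if I remember right) can at least opt back into the ksh88 behavior with the "lastpipe" option, which runs the last stage of a pipeline in the current shell instead of a subshell. A sketch:

```shell
#!/usr/bin/env bash
# With "lastpipe" set (and job control off, as it is in any
# non-interactive script), the while loop runs in the current shell,
# so the assignment to x survives the loop -- the behavior that
# ksh88 gives you by default.
shopt -s lastpipe

x=0
echo onetime | while read line; do
    x=1
done
echo $x    # prints 1 here; default bash prints 0
```

Of course, that does nothing for scripts that have to run on older shells.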

When AT&T opened up ksh93, I wish they had also released ksh88. My impression has been that ksh88 is a good all-around shell. It doesn't hurt that it's a long-time standard on commercial Unix systems either.
Debian GNU/Linux on a ThinkPad, running a simple setup with FVWM.
armanox wrote: That's an easy one - variable scope. [...] The variable inside of the loop expires when the loop ends, and is in a different scope than the variable outside of the loop, despite having the same name. My guess is the developers of the other shells felt that variable scope was important, whereas ksh88 doesn't have the concept.

Ah, but this only happens when piping to a loop. Otherwise, all the shells act the same way with variables, loops, and scope. The inconsistency is with pdksh, mksh, and bash. If the script were rewritten this way, it would work exactly the same way on all shells:

Code:

x=0
echo onetime > /tmp/onetime
while read line; do
    x=1
done < /tmp/onetime
echo $x

This sort of incompatibility means that pdksh and mksh cannot be used as serious replacements for ksh. The only real path for developing or running ksh88 scripts with open-source software is to try them under ksh93, which is more compatible (and follows ksh88 behavior for this pipe / loop stuff).

Most people happily use pdksh and mksh as ksh replacements, because they don't do a lot of shell scripting, or otherwise don't have to worry about shell scripting compatibility. Sadly, it seems that many shells follow the same behavior as pdksh and mksh, despite there being no rationale other than the implementation details of the shell.
Debian GNU/Linux on a ThinkPad, running a simple setup with FVWM.
robespierre wrote: this is a classic "led down the garden path" situation, which you do need to have some programming ability to notice.

you concluded that "It's something stupid related to pipes and processes"; in other words, that the statement "x=1" was not affecting the value of x, because it (surprisingly) executes in a different process. this is an unwarranted assumption, since it might not have affected x simply by never executing at all.

It might be an unwarranted assumption, except for the fact that I tested this pretty carefully years ago, and then again recently.

robespierre wrote:

Code:

x=0
echo onetime | while read line; do
touch quux
done
ls quux

ls: quux: No such file or directory

for those who "do a lot of shell scripting" and are led astray by such basic mistakes, the greater danger may not be their choice of shell, but letting them near computers to begin with.

Sorry, but your script results are wrong using all the shells I'm talking about. All of them create the file quux, and then list it successfully, as you can see here:

Code:

$ cat > quux.sh
echo onetime | while read line; do
touch quux
done
ls quux
$ dash quux.sh
quux
$ ksh93 quux.sh
quux
$ mksh quux.sh
quux
$ bash quux.sh
quux

So you posted bogus results, and then sneered that I shouldn't even be allowed near a computer? Okay... if you want an even simpler example:

Code:

$ cat > inloop.sh
echo onetime | while read line; do echo inloop; done
$ for shell in dash ksh93 mksh bash; do $shell inloop.sh; done
inloop
inloop
inloop
inloop

The shell loops work just fine.

robespierre wrote:

Code:

echo foo bar | read line; echo $line

is a newline, because line is empty.

Yeah, that's why the example scripts I gave never piped to "read", but rather to "while read", like this:

Code:

$ echo foo bar | while read line; do echo $line; done
foo bar

So you didn't bother to notice the difference, yet you're the one with the "real programming ability" who is able to spot these things. Yeah, okay... :roll:

One last script to illustrate the incompatibility:

Code:

$ cat > inloop2.sh
x=1
echo onetime | while read line; do
x=2
echo "inloop: $x"
done
echo "endloop: $x"
$ ksh93 inloop2.sh
inloop: 2
endloop: 2
$ mksh inloop2.sh
inloop: 2
endloop: 1

I'm not some big shell scripting guru, but I'm also not a bumbling idiot who just makes this stuff up because I just lack "real programming ability." I ran into this particular problem because real shell scripts I was writing for HP-UX servers would fail in Cygwin because pdksh and mksh are not faithful ksh88 clones. This sort of incompatibility is dangerous because there is no indication of it other than getting the wrong (old) variable values. Other people have run into it as well, usually when they try to migrate their ksh88 scripts to Linux, and then run into all sorts of errors.
Debian GNU/Linux on a ThinkPad, running a simple setup with FVWM.
That looks fantastic. I love those classic PCs (yeah, 1990s is classic for me...). A 250 MB tape drive is pretty big considering the size of hard drives in that era. Do you have tapes to go along with it? Any plans for the system?

IMHO, these PCs look nicer and cleaner without the Intel / Windows stickers.
Debian GNU/Linux on a ThinkPad, running a simple setup with FVWM.
ledzep wrote: Agreed. I gave one of the versions (DR3?) a try but it was missing some basic stuff (multiple desktops) that made it a no-go for me. Assuming I didn't simply install it wrong, of course. But if he's going forward with it that is fantastic as my Plan B is using Fvwm with a 4Dwm theme. Not bad, but not nearly as complete.

Ah, would you mind telling us which Fvwm theme?
Debian GNU/Linux on a ThinkPad, running a simple setup with FVWM.
I don't know why someone would need a specially-designed "NAS" solution rather than standard server hardware, unless they were doing really high-performance stuff and needed something tuned for that. Anyway, a lot of it depends on what you want a NAS for. Software features (e.g. filesystem tools), hardware features (SCSI support), super reliability (redundant everything)? One thing I've learned is that companies use RAID to try to convince everyone of redundancy. If anything other than a hard drive fails, you're screwed, and all that "redundancy" goes out the window. I just assume systems will fail at some point, so the question is how to retain everything and keep going if the system completely goes up in smoke.

If I were setting up a NAS (without knowing any further info about the situation), I would have two systems for complete redundancy, and keep one as a full backup synchronized periodically with rsync. If possible, I would have two drives per system in RAID-1, just whatever "big" SATA drives are being sold these days. If I needed SCSI support, I would just throw in an old SCSI card. Unless you are in a corporate datacenter, that's a simple and practical plan. The good points about this type of commodity system are: (1) no proprietary HDD crap, (2) easy to replace components, (3) fully redundant -- not just HDDs. You could also set up the systems to send out regular email reports including disk usage, hardware status, etc.

For software, I really like the simple and sturdy tools like rsync and rsnapshot.
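
For rsnapshot, the setup is basically one config file. An abridged sketch (the paths are just examples, and note that rsnapshot insists on tabs, not spaces, between fields):

```
# rsnapshot.conf (abridged) -- fields MUST be tab-separated.
# Newer rsnapshot versions use "retain"; older ones call it "interval".
config_version	1.2
snapshot_root	/backup/snapshots/

retain	daily	7
retain	weekly	4

backup	/home/	localhost/
```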
Debian GNU/Linux on a ThinkPad, running a simple setup with FVWM.
HDDs can fail suddenly too, so it's a bad idea to count on the HDD failing slowly enough for you to make a backup.

Any data that is important should be backed up regularly. If your system's HDD or SSD dies, just replace it and restore the backups.

Backups can be as simple as an external USB drive and a shell script calling rsync. That's what I do.

Crucial actually gives the following lifespan for one of their ordinary SSDs:

Endurance: 72TB total bytes written (TBW), equal to 40GB per day for 5 years
Debian GNU/Linux on a ThinkPad, running a simple setup with FVWM.
foetz wrote: serious backups shouldn't be running with the target machine. hook the medium up, run the backup, disconnect, put away. at least as far away so that any sort of accident of the box can't affect the backup.

Yeah, this is easy with external drives: (1) connect the drive, (2) run a shell script, (3) put it away for later. It's not completely automatic, but it's safe, cheap, and simple.
Debian GNU/Linux on a ThinkPad, running a simple setup with FVWM.
2 GB for the hard drive is not too big for a Pentium 1... Hard drive sizes were highly variable in the mid-90's...

For Pentium 1 systems in 1995, hard drives between 420 MB and 1.6 GB were common. RAM at 8 MB and 16 MB was common. CD-ROM at 2x or 4x. Modems at 14.4k or 28.8k.

For Pentium 1 systems in 1996, hard drives between 1.0 GB and 2.5 GB were common. RAM at 8 MB to 32 MB was common. CD-ROM at 4x to 8x. Modems at 14.4k, 28.8k, or 33.6k.

These values are very typical of retail PCs of that era. Of course, if you were building a PC workstation or server, you would use higher-end components... But for something sold in the stores, these are really the specs machines shipped with, and for good money.

At that time, 128 MB of RAM would have been exorbitant.

And for both these years, most home PC's shipped with Windows 95.
Debian GNU/Linux on a ThinkPad, running a simple setup with FVWM.
hamei wrote:
jwp wrote: And for both these years, most home PC's shipped with Windows 95.

Three cheers for Stanley Sporkin ! Hip hip, hooray ! hip hip, hooray ! hip hip, hooray !

Five or six more real judges like him and maybe there would still be a credible US.

Alas :(

I've missed you too.
Debian GNU/Linux on a ThinkPad, running a simple setup with FVWM.
There shouldn't have to be any service or daemon running to use XFS filesystems. I don't know about CentOS specifically, but I know that vanilla Debian and Debian live CD's have XFS filesystem support. There is a kernel module already built for it and included in the default kernel package. You should be able to do:

Code:

$ find /lib/modules -name xfs.ko
/lib/modules/3.16.0-4-amd64/kernel/fs/xfs/xfs.ko

If you run your mount command (which is correct), then the Linux kernel should automatically load that module. If you still get errors, you can try running:

Code:

# modprobe xfs
# lsmod | grep xfs

That forces the module to load, and the grep shows whether it loaded. Again, I don't know if CentOS specifically has the XFS module built for its live CD's, but I know that the Debian KDE live CD includes the XFS module for sure, as do normal installs of Debian. Maybe Red Hat / CentOS just didn't have enough room on their live CD? Quite strange.
Debian GNU/Linux on a ThinkPad, running a simple setup with FVWM.
You have run into the dilemma of Unix hobbyists everywhere. A lot of interesting old hardware is slow as shit, loud, rare, and almost too old to be useful.

New hardware is boring but fast and featureful. Any little Intel Atom box running Linux or whatever BSD will smoke any old hardware X terminal in every conceivable way: performance, power consumption, image quality, X11 features, etc. It will also do X11+SSH, and maybe give you a rimjob too.

Recently I was watching an old HP 9000 workstation boot up. It took over 5 minutes to get from power switch to X11 CDE. These days, you can run CDE on any processor, and it will be much faster and also boot in a few seconds -- and not use hundreds of watts and sound like a jet engine. So really, which system is better at running CDE and other applications? In the 1990s, everyone would have thought that today's puny Atom was some super workstation of the future.

So the question is whether this hardware and these operating systems are so compelling that you want the big loud heavy hardware and slow performance that goes along with them. If you're a hobbyist, maybe you really do because you want a piece of history. But at this point, there is almost no sweet spot where you can get something that is genuinely useful in the modern world and yet still novel and cool from a historical perspective.
Debian GNU/Linux on a ThinkPad, running a simple setup with FVWM.