The collected works of spiroyster - Page 1

Files and directories (drawers in Workbench terminology) will only be displayed in Workbench if there is an associated .info file. The icon, location, file type etc. are all stored in this file, and Workbench won't know what to do unless this .info file is present, i.e. file.aif needs a file.aif.info to be displayed in Workbench. Most games on the Amiga simply worked by booting from the game disk itself... in fact now that I think about it I can't remember a single game that had to be run from Workbench? (If not installed to a HDD, in which case what I said previously doesn't apply, different rules with installed games.) Some games were 'kicked', which means they could boot from the Amiga 'kick screen' into a pseudo Workbench environment (a minimal Workbench with hardly any modules loaded... certainly none of the 'utilities' you mention would work if you didn't boot from Workbench!), but most simply booted into the game's main menu (or intro etc). Any bootable (kick) disk could be viewed as a 'disk' in the Workbench GUI without an associated .info file (kick disks iirc are the only exception and use the default disk icon), but that's pretty much the extent of what you can do in Workbench with these kick disks. You might be able to read the files on the disk by firing up a console (Amiga Shell) and traversing it that way, but you probably won't be able to do much other than standard file operations, excluding execution.

Yes the floppy drives had issues, but since it can read a floppy I don't think this is your issue... try simply turning your Amiga on, waiting for the kick screen (1-2 secs) and then inserting game disk 1! Now sit back and enjoy the multitude of games you probably have (50+ games should be the minimum with second-hand Amigas purchased these days :lol: )

Hope this helps.

P.S. Not been here for a while! Good to see Amigas on nekochan! ... albeit in the Apple thread :shock:
Glad it's working for you sgifanatic. Now that I have reopened that part of the brain I can remember quite a few games that could be run from Workbench, so forget what I said in my previous post. Also probably best to keep away from the phrase 'kicked', which I used incorrectly; they are just 'kickstart'/'kick' disks! (Amiga fans might enter Guru Meditation on hearing the phrase 'kicked' :lol: )

Without a HDD, Amigas are only really useful for games unless you can put up with masses of disk swapping/juggling (although I survived using emacs without a HDD for about 1.5 years!). With a HDD there are quite a few applications for the Amiga which become practical/usable... AmosPro, HiSoft C++, StormC, Deluxe Paint (DP3 was shipped with most Amigas from the 500+/600 onwards) and even Lightwave 3D! I also remember a 3D package called Imagine 2.0 which was my first real foray into 3D modelling (I am now a CAD developer!)... quite amazing what you could do with ~7 MHz and a home TV!

You don't need a HDD for most of them, but you may end up with piles of ordered disks lying around your desk/floor for repeated insertion and ejection! StormC was 6 disks, HiSoft C++ was 7 disks... still, a game called 'Beneath a Steel Sky' for the AGA Amigas was 15 disks! So you saved a lot of time by installing it on a HDD. I'm sure there were many others.

P.S. You have an OCS chipset in the 500, for future reference when getting any more games/programs. A 1MB RAM upgrade would be recommended if you can find one cheap enough, but the A600 has 1MB anyway and is the ECS chipset; the A1200 is the AGA chipset and comes with a HDD iirc (20MB or so)... the A1200 was definitely the best for the money.

Have fun!
Global variables are considered evil in C++, not C (in C they are pretty much an absolute!) for a number of reasons, mainly that they imply bad/non-existent OO design: no access control (any object can access them at any time), no thread safety, namespace pollution... there is quite a list! I will reiterate though, this is from a C++ perspective only.

In the C++ world, globals shouldn't really exist, as they should live within an object... even if it is a static object called GlobalVariables or something (context objects, singletons... there are a few patterns that can be used). This allows them to be properly and safely managed as desired and not just left out on the pavement for anyone to piss on as they walk by.
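
Something like this Meyers-style singleton is the usual sketch (names here are hypothetical, not from anyone's actual code):

Code: Select all

class GlobalVariables
{
public:
    // single access point; constructed on first use, destroyed at exit
    static GlobalVariables& instance()
    {
        static GlobalVariables s;
        return s;
    }
    int verbosity;                       // an example 'global', now behind a front door
private:
    GlobalVariables() : verbosity(0) {}  // nobody else can construct one
};

// usage: GlobalVariables::instance().verbosity = 3;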

However, this is in theory.... in practice I have used globals many a time!

In regards to your code, you have to define statics in a source file somewhere... you have only declared FTP::_state, and not defined it anywhere.

In FTP.C, at global scope (i.e. above the FTP constructor implementation):

Code: Select all

int FTP::_state = 0;

On another note...

FTP.h

Code: Select all

~FTP(){};
does not need the trailing semi-colon... it can become

Code: Select all

~FTP() {}


Not too sure about your typedef struct declaration... in C++ 'typedef' is not required, you only need to do this...

Code: Select all

struct TransferInfo
{
...
};


and while we are on the subject of statics, 'const' objects should be declared static to stop duplication across contexts, e.g.

Code: Select all

const int var

should become

Code: Select all

static const int var


using namespace is frowned upon, as all it does is allow you to use std stuff without typing 'std::' in front of it. However it pollutes the namespace and brings in that extra bit of implicit headache when debugging by eye. Explicit usage of std:: is preferred, or alternatively you can typedef at the top and call your std object whatever you want, which also allows another object of yours to be called 'string' if desired... although with small source files it's not the end of the world... but NEVER use using namespace in a header!
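
e.g. instead of a blanket using namespace std; you could do something like this (illustrative names):

Code: Select all

#include <string>

using std::string;          // pull in just the one name you want...
typedef std::string str;    // ...or alias it to whatever you like

// with the typedef route, 'string' remains free for a type of your own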

Also, I only say this as at work we had a massive argument about it a few years ago... be careful when using leading underscores for names, since this can lead to confusion with certain compilers: the implementation and preprocessor reserve names with leading underscores (mainly double leading underscores) for themselves.

FTP.C
Should this not be FTP.cpp? :)

Hope this helps. Rest of the code LGTM.
Ha yes! Sorry, didn't mean to come across as a bit of a code nazi :( . I saw the keywords tips/comments/suggestions in jimmer's last post and I went into code review mode. Replace the words 'should' and 'NEVER' with 'could' and 'AVOID if possible' respectively in my previous post. Also please note that I didn't compile it, just eyeballed it and noticed that your static wasn't defined.

Evil isn't always a bad thing; as I said, I have used global variables, and will continue to do so! void* is my poison: I find them extremely useful but extremely dangerous, use them all the time in my own code but would get lynched at work... and for a very good reason. As foet says, pretty much all C++ features and the language itself are there to aid design, implementation and maintainability (the latter of which I do think is lacking in the industry, or at least the places I've worked!)... otherwise we might as well all just write in assembly and be done with it. Just because you can use non-safe types and directly access low-level stuff in C++, does that mean you should, if there is another way which fits more in line with the way the tools are designed to be used? However, if it starts to hinder readability, then it should be questioned... why is something designed to aid work, hindering it? Right tool for the right job, I believe, is the phrase (this should also be applied when I see those boring arguments about which is better, C or C++... they are different tools, to be used differently and within different environments/contexts, probably for different reasons) :)

On using namespace: yes, for smaller personal projects (in which you are the architect, code monkey, tester... and probably user) it's perfectly fine and should be used as a tool to help/speed coding (it could be argued that the same goes for typedefs). This is definitely the case in source files, but in a header on a large project it can be horrible to deal with... well, the person who wrote it probably wouldn't think so, but later maintainers may well cry (especially if they have to write implementations/drivers which use pretty much the same types, naming-wise)... same with void*: if I came across code with void* I would not be happy (other than my own of course!). With you in control of your own project this is all moot.

As has been said, at the end of the day use whatever convention you are most comfortable with, as this will probably allow you to be most productive. There are very few conventions/idioms that I would recommend against tbh...

</rant>

jimmer wrote:
spiroyster wrote: global variables are considered evil in c++, not c (In c there are pretty much an absolute!) for a number of reasons, mainly they imply bad/non-existant oo design.

Took this part of your comments to heart and pulled 'The Gang of Four' off my shelf for the first time since uni (the bill says I bought it in 1997...). Have now re-implemented the FTP class as a Singleton. As you suggested, using a more OO idiom fixed the static/non-static linker thing in one fell swoop. Will refactor the rest of my code to be more OO in design.

Thanks again for taking time to give me your feedback and suggestions. Much appreciated :)


No problem, glad it's working for you! It makes a nice change to look at something else. Yes, singletons are great... singletons, stack objects and the Cheshire Cat are probably my most used patterns (Cheshire Cat might seem detrimental, but on larger code bases it can reduce compilation times drastically, which on some of the systems I work with can be the difference between an hour and 20 min for an entire rebuild). They are all there to help, and to avoid biting you in the XXXX (<- well put!)
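
For anyone unfamiliar with the Cheshire Cat (pimpl), a minimal sketch, reusing the FTP name from this thread (members hypothetical)... the header exposes only a pointer, so the implementation can change without recompiling every client:

Code: Select all

// FTP.h -- all clients ever see
class FTP
{
public:
    FTP();
    ~FTP();
private:
    class Impl;     // forward declaration; the body lives in the source file
    Impl* m_impl;   // the 'grin' that remains visible
};

// FTP.cpp -- change freely, clients don't recompile
class FTP::Impl { /* sockets, buffers, state... */ };
FTP::FTP() : m_impl(new Impl) {}
FTP::~FTP() { delete m_impl; }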

Good luck!
If not too late...

To create a bootable MSDOS 6/7 floppy with CD-ROM support you need mscdex.exe (messy decks ... MicroSoftCompactDiscEXtensions).

iirc... in DOS6 and Win9X (not 2000, XP or NT!), you created a bootable floppy by typing

Code: Select all

c:\> format a: /s


This created a bootable system floppy disk which could boot into DOS; then, to get the CD drive working, you had to copy over and start mscdex.exe with the correct arguments from the floppy. This is done by adding the correct lines to config.sys and autoexec.bat on the floppy and then booting it... really sorry, but this was a while ago and I can't remember what the exact syntax was (I don't think I even understood it at the time, I just copied it from the config.sys/autoexec.bat of a working system), so maybe look in those if you have a current working DOS/Win9X system. I think they were pretty generic settings; other vendors also had their own versions, but mscdex worked for all my CD drives at the time.
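
From memory, the general shape of it was something like the below (the driver filename and the /D: device name varied by vendor... OAKCDROM.SYS here is the generic Oak driver off a Win98 boot disk, so treat this as a sketch rather than gospel):

Code: Select all

REM --- config.sys: load the vendor's CD-ROM device driver ---
DEVICE=A:\OAKCDROM.SYS /D:MSCD001

REM --- autoexec.bat: bind mscdex to that driver and assign a drive letter ---
A:\MSCDEX.EXE /D:MSCD001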

I did this regularly because I didn't have an original boot disk (and tended to reuse my boot disk for general data stuff once I had a working system), and my system install cd wasn't an original bootable one (I believe it was a special version which had Memorex 750MB CDR written on it).

I may have an old one in my pile of useless yet sentimental crap. I will have a look, in which case I can tell you exactly what syntax to add to config.sys and autoexec.bat.

Hope this helps.
Mistake Edition sticker you say? My archaeological deductive skills place that at around 2000, in which case it is a later 586 (P2/P3 era). I had a K6/233 with 32MB RAM and an SGI 320 then (still got the 320). Windows 2000 would be a good choice for functionality, but maybe not games... the compatibility modes in 98/2K were awful, but DOSBox was great and ran all my old DOS games. I used DOSBox a lot on XP, with no problems that I can remember.

USB and ISA on the same mobo? Nice... My memory is vague on this (and certainly not extended :) ), but tbh I don't remember ISA and USB being overlapping technologies on mobos? I could be very wrong though, as I do remember ISA slots being present on mobos for legacy reasons a while after the market was saturated with AGP and PCI. USB was crap back then... 'plug and pray'!

uunix wrote: Video S2 something
S2 gfx? Any more info on this?... I used to have an S3 ViRGE. The S3 tbh is old for 2000-era (more 95-97 ish). Given the era of your system I would expect there to be an AGP slot on the mobo... in which case imho the ATi Rage 128, or one of the later Voodoos, were excellent (without going to high-end FireGL/Quadro)... certainly better than the S3... The ATi Rage played Quake (1, 2 & 3) better than my SGI 320 at the time... the 320 left it for dust when doing 2D work though.

uunix wrote: Also has an ISA network card.
Hopefully I will have a £3 Sound blaster (if I win today).
Since you have an ISA slot... imho the best sound card you could get was the AWE64 Gold (there was even a 5.25" bay external interface with different audio connectors and a remote that controlled the mouse... all gold-plated connectors). It was the only card (that I knew of) at the time which gave absolutely no 'microphonics'... when you get your SoundBlaster, put some headphones in and then do various tasks on the computer... you'll hear clicking and static in the background, which bleeds into audio playback (during quiet sections of a DVD film; once you hear it you can't not hear it... if you know what I mean)... not with the 64 Gold... in fact I used this old system to play DVD/CD (as a media centre, with minimal MP3s and an even smaller number of AVIs)... hooked into a Quad 306/34 with ESS AMTs (the AMT transducer picked out every little sound in a recording, but it was 'clear as light' with the AWE64 Gold). It was the last 'internal' sound card I had before going external, and I haven't looked back since... no particular reason though, so I'm not saying internal sound cards are crap. I just remember it fondly, and nothing else at the time impressed me enough.

uunix wrote: Question, The Colorado drive is IDE, but the BIOS is complaining it's not ATAPI compatible. Would this actually register in the BIOS? I pop a tape a tape in and it makes hard assed super noises, but not sure how I'm going to get this recognized. The Master/Slave settings are correct.
ATA (or more specifically PATA) was another name for (E)IDE... IDE itself was phased out mid-90s I think????, and it became (Enhanced)IDE. IDE was designed for a storage device that is attached at all times, I believe... like a HDD... not like a CD-ROM or tape drive with ejectable media... this is why ATAPI was used. ATAPI allowed ejectable-media devices to be attached and used; under the hood it was essentially SCSI commands over an IDE cable! Your drive should NOT be using ATAPI, so that is a problem. It sounds like a BIOS issue; perhaps it cannot read the drive and is thus falling back on ATAPI, which won't work with HDDs. Does your BIOS give any other information about the attached device, such as sectors or anything?

uunix wrote: BIOS update has caused a FDC error, and I can't revert back as I can't boot from floppy.
Is the light/LED stuck on on the floppy drive? Or is the whole thing not working at all?

sgifanatic wrote: though the voice recognition capabilities require a better spec
For dictation software on the 586 (I think a 486 might struggle)... I used to use something called 'Dragon Dictate' for speech recognition on Windows 98 (on the K6/233 I had with 32MB RAM!) with no problems at all... you had to train it by talking to it, which I can only assume built up a better phonetic signature for the program to understand... it did get a few words wrong, but I do remember it working quite well overall... one had to speak in one's proper English like :) ... 'goto sleep'

ivelegacy wrote: With MSDOS you can't handle more than one cable a time because DOS is not multitasking
Terminate and Stay Resident programs (TSRs) are how DOS performed pseudo multitasking... this is how some device drivers (and viruses) worked in DOS (otherwise, unless the executable being run contained its own device drivers, hardware usage would be pretty limited). Pre-386, expect to clog up the 640KB base memory... post-386 you can load them into extended/expanded memory!
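
For the curious, the bare bones of one in Borland Turbo C looked roughly like this (from memory, so very much a sketch... the resident-size arithmetic in keep() especially is back-of-envelope):

Code: Select all

#include <dos.h>

void interrupt (*old_tick)(void);    /* previous INT 1Ch timer-tick handler */

void interrupt tick(void)
{
    /* do something tiny and quick on every timer tick here... */
    old_tick();                      /* ...then chain to the old handler */
}

int main(void)
{
    old_tick = getvect(0x1C);        /* save the old interrupt vector */
    setvect(0x1C, tick);             /* install ours in its place */
    keep(0, (_SS + (_SP / 16) - _psp) + 1);   /* terminate, stay resident */
    return 0;
}
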
ivelegacy wrote:
spiroyster wrote: Terminate Stay Resident (TSR's), is how DOS performed pseudo multitasking


the BDM control program crashes if I try the TSR, I have already tried under MSDOS v6.22, and PCDOS v7. May be the TSR model is OK only with simple applications, also I need to have two BDM control programs running at the same time, I can't switch from one to the other: debug purpose of two concurrent boards.
Yes, reading back, I see your requirements, I think you have hit the nail on the head here...

ivelegacy wrote: multitask in real sandbox
TSRs are not real multitasking, they are an afterthought :) . I think with some clever programming you could achieve what you need, but it may be far too much effort for too little gain getting TSRs to work. Good luck with your project ivelegacy.

sgifanatic wrote:
spiroyster wrote:
sgifanatic wrote: though the voice recognition capabilities require a better spec


For dictation software on the 586 (I think a 486 might struggle)... I used to use something called 'Dragon Dictate' for speech recognition on windows98 (it was on the K6/233 I had with 32Mb RAM!) with no problems at all.. you had to train it by talking to it which I can only assume built up a better phonetic signature for the program to understand... it did get a few words wrong but I do remember it working quite well overall... one had to speak in ones proper english like :) .... 'goto sleep'


Very familiar with Dragon. Pre-deep-learning neural networks exhibit a restricted ability to learn, with the network getting stuck in local error minima during training; this is what caused 90s-era ANNs (Artificial Neural Networks) to never really achieve a high-enough recognition rate. Alternative techniques such as HMMs and GMMs don't rival the accuracies achieved in current DNN (Deep Neural Network) systems... MSFT, Google and FB all use Deep Learning extensively now.

This passage from a Nuance publication specifically addresses how DNNs address the shortcomings of older techniques.

"Another poignant example of an innovation which is not only “hidden” but also based on a rather old concept is Deep Belief Networks which have revolutionized the architecture of ASR (and related systems) over the last few years; This topic was highlighted at the conference by a number of talks and posters on everything from acoustic and language modeling to text-to-speech (TTS) and voice biometrics. As the more recent term “Deep Neural Networks” already indicates, this is the renaissance of the Neural Networks (NNs) of the 1990s. Back then these networks were hugely appealing to AI researchers as they, modeled after the human brain, seemed to have a superior explanatory and modeling power over the Hidden Markov Models/ Gaussian Mixture Models (HMMs/GMMs) combination, which had dominated acoustic modeling in ASR since long before. But yet again it turned out that when applied to “real problems” training methods for NNs were either vulnerable to get stuck in local minima or computationally too consuming to be of practical value. And so HMMs/GMMs continued to rule, bringing error rates down and accuracy up by piling many smaller optimization and (yes) innovations within this framework. Again, the combination of several factors – modern, more powerful hardware including GPUs (chips originally developed for graphical processing, think: computer games) and new ideas about how to train NNs – changed this picture radically over the last few years. Once researchers knew how to make them work, “deep” NNs helped researchers to bring error rates down by 25% or even more with just this one [at its core, rather old] innovation. After this first revolutionary wave swept over ASR and its relatives of TTS and voice biometrics a couple of years ago it was followed again by a phase in which many smaller innovations saw the day of light day, now within the framework of DNNs."

http://whatsnext.nuance.com/in-the-labs ... ence-2014/


Uh oh... looks like a night of heavy reading ahead of me...that, and rummaging in me box of old software to find this program again!.... I never gave it much thought at the time, but now I am extremely intrigued.

Thank you very much for this information sgifanatic!
Never heard of that Cyrix. Just reading an old review ( http://www.anandtech.com/show/170/2 ), and it doesn't seem like it would be 'great' for EF2000 :( . Although having said that, it would probably run fine since EF2000 came out in '95 (486-ish). That Cyrix apparently supports MMX (x86's first SIMD extensions), but its FPU performance is not rated highly, so I'm unsure how much benefit it would give in everyday performance. Shouldn't be any problems with Lotus.

I suspect this will be one of those cases where running the software renderer will be better than using 3D acceleration with the ViRGE... not only was S3 one of the first 3D hardware accelerator vendors on the PC market (slightly beaten by 3Dfx I think???) with their S3D framework, the ViRGE was also known as the world's first '3D decelerator' because it was diabolical with anything other than basic 3D geometry display (one of the reasons I went Rage 128 back int day).

The S3 card accelerates S3D, which was only used by a handful of games (Terminal Velocity and Descent spring to mind)... EF2000 has been 3Dfx'ed, so a Voodoo would be the best actual physical hardware to use for it. When it came out the S3 ViRGE was considered good, but lack of support from developers meant that 3Dfx's Glide quickly became the goto 3D framework for developers... well, that and D3D a bit later. OpenGL didn't get popular for anything other than CAD until later... it was all about D3D and Glide/3Dfx...

Some SGI relevance here... as the name suggests (GL)ide WAS modelled on OpenGL, and 3Dfx Interactive (the makers of Glide) was started by ex-SGI engineers!

If you still have that 540... you *might* get better performance running EF2000 with a Glide wrapper, which translates Glide calls to OpenGL so you don't need 3Dfx hardware. The 320/540s came out only 3 years after EF2000!

uunix wrote: I found this interesting looking CD ROM

I always thought it was some homage to a Friesian cow... Is a Friesian cow classified as Piebald? or is it just horses?

P.S

uunix wrote: I love the sound of a floppy seeking

You should get an AMIGA!
Hello, out of interest would you ship to the UK? I'm interested in the 540. I know it ain't gonna be cheap but you gotta ask these things :)

Thanks
You have some really nice systems mopar5150, thanks for sharing. Thank god we have people like you (and others here) to save this stuff :)

Forgive my ignorance here, as I have no SGI MIPS cluster experience (only workstations), but how does this differ from an Onyx4 with VPro? I appreciate both systems are scalable, but are they all essentially the same architecture underneath? In which case, can an Onyx350 exist as a node in an Onyx4 system, and vice versa, as long as it uses VPro (if the node has graphics)?
uunix wrote: I'm downloading UbuntuBSD


BSD != Linux

While both are considered 'unix-like', different kernels, different licenses, different philosophies...

Arch Linux + Awesome (a dwm fork, keyboard-driven window manager) has been my primary OS for a number of years. Arch is often described as "Linux kernel + package manager" and gives you a basic (rolling-release) Linux system which you can customise entirely, applying your own choice of OS components. The AUR (Arch User Repository) is bristling with packages, and in conjunction with pacman (Arch's package manager) provides easy management, including dependencies etc.

Or, to understand it all, build your own from scratch: http://linuxfromscratch.org/ , http://linuxfromscratch.org/blfs/

For out-of-the-box (everything just works), I have experience with Linux Mint, and it does what it says. It even has most Firefox plugins you could want for modern-day browsing included already.

http://www.linuxliveusb.com/ for creating live USB distros. Never had a problem.
Intuition wrote: Hello,

Found your page by way of the fellows at Nekochan.
A few questions about the 600 MHZ mod.

1. Do you still do this modification?
nope, sorry. I sold all the stuff to Khaled Schofield UK.
2. I saw you need a specific 5200 ish chipset?
It is the Xilinx "logic" chip - few bytes - that is different. Ask Khaled.
3. Cost if you still do it?
as far as I remember it has been about 300 €, and a R5271 CPU was needed.

If you have a repair shop at hand ask if they can do BGA resoldering.
Not an easy task to do but no rocket science either.

Best if they have X-ray inspection at hand to see if all the balls are pretty.


Any other information you feel appropriate I would appreciate.

Thanks,
Casey B.

Cheers

So now I have to find Khaled Schofield


I, like many O2 owners around here, am no doubt interested in this and have been keeping an eye on events. However, it sounds like some work still needs to be done before they can become available? I wish I had the expertise to contribute, but sadly I do not for this campaign :(

http://forums.nekochan.net/viewtopic.php?f=3&t=16729196 was the last time I heard anything mentioned about it.
Agree, you don't post one of the rarest Indigo offspring there is without a picture? :evil:

Dark grey? I thought they were dark green khaki? Or was that another Siemens Nixdorf/SGI?
ewww.. that is quite ugly. Almost looks like it could be a prototype.

Structural engineer?
jan-jaap wrote:
spiroyster wrote: ewww.. that is quite ugly. Almost looks like it could be a prototype.

Not a prototype, they're really supposed to be like that. But yeah, I used to have one or two but I gave them away. Because I like them better in the purple version.

For some reason, it was your 'prototype' blue Fuel that popped into my head when I first saw this Siemens!

Was there a khaki green Siemens, maybe the Siemens o2 version? Or am I just making that up?
jan-jaap wrote:
spiroyster wrote: Was there a khaki green Siemens, maybe the Siemens o2 version? Or am I just making that up?

Not that I'm aware of, but the Siemens is not 100% grey, there's a vague hint of olive or dark green in it so maybe that's what you mean?
Here's another one: http://sgistuff.net/collection/systems/1002.html

I've never seen a rebadged Octane or O2 (or newer system)

hmmm you might be right there. Wouldn't be the first time a brain corruption has got the better of me.

Looking at other Siemens machines, they all seem to be the same shade of grey, so I doubt they would have changed their corporate identity for the RW510 (O2).

Image

Sorry jirka, I have completely derailed this for which I apologize.
jirka wrote:
spiroyster wrote: Sorry jirka, I have completely derailed this for which I apologize.


No problem. :) But what is that big Siemens? The Crimson?


Yes, from this thread http://forums.nekochan.net/viewtopic.php?f=4&t=14435&p=113354&hilit=Siemens+Nixdorf#p113354 .

I was convinced I had seen a Siemens O2 and its colour (and was semi-convinced it was here on nekochan somewhere), so I searched for any likely candidate, and this popped up. Never seen that before :D

I wonder if that paint job was done by a fan, or if the hardware was actually used by the Arrows F1 team?

Incidentally, didn't Arrows crash out of F1 (no pun intended) around 2002 :)
Yes I can confirm Matrox support for legacy hardware is erm... not great.

After sending 3/4 emails to a support guy (who has worked there for 12 years), the lines just went dead and they returned the favour by spamming me with information on their latest graphics cards (because I'm clearly interested in those after inquiring about a product from 1987 o.0).

A part of me is quite interested in Matrox and their early foray into '3D' hardware. I do have an SM-1024 and a PG-1281 and can confirm your findings. I wonder, if I remove the add-on board from my SM-1024, could I turn my PG-1281 into an SM-1281 :) One day, when I have enough guts and have had enough fun with it in its current 'shipped' configuration. The first SM-640 I've seen was manufactured mid-1987 ( viewtopic.php?t=16730932 ), and mine August 1992 (although an SM-1024, so a later series). Any more info on these would be great. :)

afaik, there was no GL driver (or GL layer/lib... IrisGL for that matter) that could be used with them (the PG/SM PC cards, that is), as I think IrisVision was the first to bring anything GL-related to x86, but they all support CGA instructions and PGC instructions (which I think was the first hardware standard on x86 to include 3D functionality, albeit executing the instructions in a non-SIMD way :) ).

Interestingly, my SM-1024 has an 'EGA in' DE-9 male connector and a 'VIDEO OUT' female DE-9. So it can possibly be used as a pass-thru for something else?

PHIGS and GKS drivers are available (so it would be interesting to see what was available on Sun with the Matrox boards), and there are apparently PGC drivers for AutoCAD and a few others. I would dearly love to see some PGC code (I have the IBM PGC manual) since, although there was a standard, I think even then manufacturers were pushing their own instruction sets, because some of the features these SM-XXX cards can apparently perform are beyond the scope of PGC instructions. Which gives rise to the question... what did use this extra hardware, and what APIs were employed for the job? Does anyone here have experience with PGC/PGA (Professional Graphics Controller/Adapter) cards?

It's nice to see some results on screen. I'm worried that in order to see anything remotely unique (outside of CGA/EGA display), it would involve me either sourcing some hard-to-find proprietary software, or documentation and examples so I can code some demos too. Latter preferable, but both equally unlikely :(

P.S. The SM/PG-640 are 8-bit ISA, later ones 16-bit ISA.
:shock:

Uber Great find!

I'm more than happy to go through any source code with you if you want to know anything about it. It would be interesting to see the likelihood of writing a modern graphics engine if all the game logic and ancillary stuff is there (just need to swap out the N64 code, if that's what's there).

Dare I say it, an Irix port?
What are the chances of this turning up on this side of the pond so soon after this discussion... o.0

http://www.ebay.co.uk/itm/DVS-IRIS-2-0- ... SwuxFYwTx9
20 quid opening bid!

I can vouch for that seller (recycler), have got many a strange and wonderful contraption from them in the past.
ahh sheet, sorry mate. :oops:

What are the chances... thing looks fooked anyway. Seller's a crook. Nothing to see here peeps. 8-)
Talk about stereotyping. As a proud 540/320 owner I take offence to being lumped in the same camp as Itanium!

just cuz it got 'intel inside' don't make us all the same ya know. :roll:
uunix wrote:
spiroyster wrote: Talk about stereotyping. As a proud 540/320 owner I take offence to being lumped in the same camp as Itanium!

just cuz it got 'intel inside' don't make us all the same ya know. :roll:

Well, I can't go into finite details on this.. but 'intel inside..' means intel inside.. :mrgreen:


:shock:

I'm astonished. Are the moderators of this forum really going to let him talk to me like that? Well, since it's such a wild west around here, you leave me no choice but to unload a full magazine (air gun) of my 21st-century opcodes on you...

OMG, STFU, GTFO

you godwin invocator ....

I'm done with this.
VenomousPinecone wrote: Cyber City Oedo 808
An unfortunately short lived series. Lots of foul language and violence. Perfect.

Couldn't agree more, one of my favs! I'm still waiting for a gyro-stabilised structure to be built IRL... :)

In the distant past I watched Jin-Roh (Mamoru Oshii of Patlabor/GITS fame): slow in the middle, good start and a good end... some of that nice atmospheric feel in the middle... but it can be a bit sleep-inducing o.0

The best surprise was a trailer at the end for a film called Avalon ( https://en.wikipedia.org/wiki/Avalon_(2001_film) ) directed by said director!

It's not anime, but live-action sci-fi (simulated reality), Polish language, and highly recommended. Excellent atmospheric scenes (the kind Oshii is good at), and really nicely implemented visual FX (it is 16 years old o.0).
Found some old Matrox manuals on bitsavers.. including QG-640.
http://bitsavers.trailing-edge.com/pdf/matrox/

Clicking on the 'Parent Directory' link is quite fun too. :)
jan-jaap wrote:
Raion-Fox wrote: So yes this conbines 2-4 VPro inputs and makes a 2-4kimage out of it.

Nope, the output is a single link DVI so still limited to 1920x1200

IIRC it interleaves the pipes so you can get 4x the frame rate. Or maintain framerate with a more complex scene.

Can it AA the final output, rather than splitting the workload, or increasing the complexity?
Raion-Fox wrote: There's an IBM monitor that can do it, forget the name but Hamek uses one.

I misunderstood the purpose of the compositor, my bad.

The bottleneck is the single-link DVI (max is 1920x1200)?


Cool! It can!
Pixel Average: This selection combines the inputs from the InfinitePerformance pipes connected to the compositor and displays an average of those inputs. (This selection works only if you have inputs from four InfinitePerformance pipes.) This selection can be used for anti-aliasing (smoothing out of jagged edges on items on a display).

AA can be achieved using the accumulation buffer, but it would have to be implemented on a per-application basis. This contraption would AA all inputs (including non-GL contexts).
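
A sketch of the per-application version using the classic GL 1.x accumulation buffer (the jitter table and helper functions are hypothetical, and you need a visual with accum planes):

Code: Select all

glClear(GL_ACCUM_BUFFER_BIT);
for (int i = 0; i < 4; ++i)
{
    // offset the projection by a sub-pixel amount (hypothetical helper)
    setJitteredProjection(jitter[i][0], jitter[i][1]);
    drawScene();                  // hypothetical: render the frame as normal
    glAccum(GL_ACCUM, 0.25f);     // accumulate a quarter of this pass
}
glAccum(GL_RETURN, 1.0f);         // write the averaged result to the colour buffer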

What an awesome bit of kit! Shame the cogs and pistons on the other end of the 4 DVI inputs are so difficult and expensive to obtain and run... unless you already have them of course o.0

Nice logo illumination too 8-)
Autodesk have done away with perpetual licensing for 2 years now :twisted: . I don't think you can even get anything from them on 'hard media' anymore. :cry:

Amnesty on 'hard media' license violations FTW! </perfectWorld>
< > <- -> <= >= >< <>

Code: Select all

< > <- -> <= >= >< <>

< > <- -> <= >= >< <>

LGTM

Win10 Firefox 53.0.2 32bit
Ebay account [Not Mine]
iaintpayinyou1.jpg
I've had 3 Vostok Amphibias (automatic 2416 movement) over the past 10 years; those things are built to withstand a nuclear winter! The straps were crap, and the QC on one of them left the watch failing to perform its intended function, but the other two have both lasted for 8 years so far... in fact I could tell you exactly when I last wore one of them. Like a fitbit, you have to remember to move :evil: , but unlike a fitbit it doesn't nag you when you don't 8-) ... rather it rewards your laziness with the incorrect time when requested.

It's the cheapest automatic movement I could find which could also be useful for diving (when I was doing that), and it was (still is) approx £45. Considering how much I flail my arms around just walking down the street, I could never justify wearing something unless it was either (a) cheap, and I have a whole drawer of those... or (b) built to withstand a nuclear winter.

It's rated at 200m, which is useful :roll: , so next time you get caught short in the North Sea and need to pop down to repair a well head, you can take solace in knowing that your watch will continue to function as intended... your internal organs at that pressure... ymmv.

tbh, the good ol' "elephant" metric has helped me when profiling code on more occasions than I care to admit. :mrgreen:
gijoe77 wrote: lets pretend someone bought this thing - what can one currently run/do with this thing? Any software that already exists or would someone have to code their own stuff to use this? I'm still confused...

Both. The manual says there is a utility that allows you to customise the output of all the graphics pipes (user-configurable tiling) as the user desires, or it could be done on an application basis, since there appear to be some specific extensions for this hardware, so the developer could choose how they wish to output (which gfx device): multiple contexts, all with complex geometry, in a system operating in a non-distributed way (in terms of CPU and memory; I'm unsure if the VRAM can be shared between the different graphics pipes, or must be replicated for each pipe?). Certainly, it is a novelty item, and you need something in the big-iron class (iirc the ATis in the Onyx4 support this thing as well) pushing the pixels, but in terms of 'graphics-related hardware' it's pretty cool imo, while specialised. I don't know of another breakout box which can perform this as a separate part of the pipeline, outside that provided by the gfx itself; other systems rely on multiple GPUs on the same board (or same discrete graphics subsystem) and then perform the mixing as part of their pipeline before outputting through the primary display, e.g. the Quantum3D AAlchemy.

The AA capability is neat, and the 'tiling' interesting. I don't think the AA would require a special code path in the GL program, since it can interleave all the inputs from multiple pipes (which would duplicate all the geometry/texture memory, and jitter each output) and mix at a hardware level. Normally this would have to be done by rendering 4 frames (using the same gfx pipe) and then compositing via the accumulation buffer, or (if the pipe's texture buffer can handle it) rendering an image 4x the size and relying on bilinear filtering when drawing to a downsized quad (not so easy on SGIs tbh).

@vishnu
This item is certainly applicable outside of broadcasting too, since it can be used for higher-bandwidth visualisations and perhaps simulations which would make use of multi-context rendering. I think (I have seen) there are dedicated boxes with stuff like BNC ports and whatnot for the dedicated broadcast equivalent... the DMedia Pro breakout box, which has a pretty similar livery. ??? :?
gijoe77 wrote: One needs a Onyx4 with ATI's graphics to use this? Not VPro or earlier Onyxs (I don't see any DVI output on the IR4 pics listed in the other thread)? I'm not aware of any Autodesk software working on the Onyx4/ATI platform... I was under the impression nothing really works on the Onyx4...

Apologies... I meant I think it works with the Onyx4 as well as the V-brick, but not the G-brick. Most stuff which doesn't use SGI-specific extensions should work just fine on the Onyx4 (including Autodesk... Inventor/AutoCAD, though I don't have one so can't confirm); at the end of the day it's nothing more than a GL implementation provided by ATi, so in theory it should work fine. We are still talking GL 1.5 though, no shaders; the GLSL spec had been released for GL 1.5 (GLSL was devised/promoted by ATi at Khronos), however it wasn't fully introduced until GL 2.0, by which time neither the FireGL nor the VPro could really handle anything non-standard outside the fixed-function pipeline (there were specific SGI extensions for dealing with non-standard stuff which eventually found their way into the GL standard, but a lot got superseded by fragment functionality). I can't think of many CAD programs these days which require shaders tbh. They certainly use compute (GPGPU functionality), but not fragment/vertex shaders. CAD doesn't tend to push the 'looks nice' category, rather the 'look how many vertices I can have on screen representing a badass mesh at as high an FPS as possible' category.

Yes, some features of this contraption the applications had to implement explicitly. However... as with most things conceived by SGI, it eventually made its way into the standard for those bespoke times when the application needs more control over this aspect of the display (hyperpipes). Very bespoke though.

https://www.khronos.org/registry/OpenGL ... erpipe.txt

I think this was the first though :) . Not many other systems allowed this level of configurability for their graphics.
Y888099 wrote:
jan-jaap wrote: The purpose of my inventory is not to know where things are. This should be obvious


Not so obvious if you travel a lot and you have parts hosted in different offices and warehouses!
For me, "where is it"? Is important, as well as the person I have to contact if I need parts shipped.

johnnym wrote: So why not use a directory structure like an inventory?


Some time ago (2011? ), I started a file system project, bTree-driven as it should be, but ... at some point I wanted to do a bit of research, so I started wondering why shouldn't I have to use multiple keys to point to an inode.

A tree is not a natural structure when you have do deal with some items which might have correlations.

Allowing correlation breaks the tree-shape of the information, so file systems don't allow them. But I wanted to see what could happen, so I applied some ideas to the engine, and tags were then allowed, so you can have relational items, and organize them into categories by the use of"tags".

A tag is a new actor: you have files which contain data and metadata, you have folders which contain files but only those with the same mother node (the classical concept of a classic folder), and you have tags which contain inodes pointing to files and folders (one, or many) from anywhere on the volume.

With a tree-based model you would have to keep multiple copies of the same information, or use soft/hard links (e.g. ln -s ...) to point to the item (file or folder) you want to relate to a category, whereas the tag-design paradigm is to use tags to group items (files, folders) that can then be pulled into a view (workspace, or category); this allows you to have multiple views of the same information at once without the need to replicate it, and it's also more efficient to assign or reassign those tags and their related views on the fly. More efficient than having to deal with hard/soft links.

But a tag makes the file system more fragile than a broken link; it's more hugger-mugger when things decide to break themselves, so... of course there are problems with the consistency of the whole volume, and hey? Don't expect performance. Nor fix-tools. And be sure that at the first crash you will completely lose everything.

Not so good.

Btw, at the end of my research, I understood it's more useful for tasks like handling a library, like a collection of music or ebooks on your PDA; I mean, when insert/delete events don't happen so frequently.

What you have just described here is pretty much a graph database. One which focuses on the relationships between entities, rather than just the data itself. Dynamic data model!
Here is hackerman's tutorial on how to 'hack time'. Easier than measuring it :roll:

Hope this helps.
Y888099 wrote:
spiroyster wrote: What you have just described here is pretty much a graph database.


sorry, I don't know graph databases, never used, never studied, I don't know their properties.

spiroyster wrote: One which focuses on the relationships between entities, rather than just the data itself.
Dynamic data model!


I don't get your point. "Data" is the file's content, as well as the file's metadata (access time, attributes {RWX RWX RWX}, owner, group, etc. etc.). Nothing different from a classic UNIX filesystem; I mean, it's just a fs with tags, which allow items to be grouped within or without the constraint of having the same mother node.

If you don't remove this constraint, you obtain a classic tree, with folders and files.

Relaxing one degree of freedom, you obtain tags; basically a tag is like a folder (and they have a lot of routines in common), but it can point to an item { file, folder } n degrees deeper in the tree hierarchy.

A tag can virtually group other tags, including itself, but this might potentially cause endless disasters, because in this case it adds itself twice in the entry block, like a closed loop (deadlock). So I removed this possibility: a tag can point to other tags, but it can't point to itself.

No point really. Just say'in o.0

The situation described could perhaps be more naturally resolved by representing it in a graph database ;)

As you say, using a filesystem as a grouping mechanism allows you to structure your data in a tree-like hierarchy. However, this means you need to know your data model in advance, and it has its own inherent problems, such as forcing a more relational, tree-like structure on everything. Fine, until you require another grouping mechanism which does not fit the tree representation. This, as you say, can be solved with tags (tags on the nodes/data), but the fundamental data is still ultimately structured/ordered tree-like, so the tags are limited in what they can do: they are basically metadata attached to a bit of data, with no hierarchy themselves (all at root level). This is like fudging two different data models together (one too structured, and the other too flexible). All these issues could be (but don't have to be) resolved by using a graph database, which doesn't impose a tree hierarchy (although it can emulate one) and allows you to populate any metric you want (even ones which you do not care about at the time the data is acquired). It's how deep neural networks can operate, since graph databases scale well and are extremely efficient with vast swarms of data, allowing relationships to be generated at a later date, i.e. after a correlation is found between two bits of data that were originally thought mutually exclusive. The data has not changed; rather there is a new association/relation/edge between them. Something was learned 8-)
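
To make that concrete, here is a toy sketch in Cypher (neo4j's query language... the names are entirely made up): two tags pointing at the same file node, no copies, no links, and a new relationship is just a new edge.

Code: Select all

// one file, two 'views' of it, zero duplication
CREATE (f:File {name: 'holiday.jpg'})
CREATE (t1:Tag {name: 'photos'}), (t2:Tag {name: '2011'})
CREATE (t1)-[:CONTAINS]->(f), (t2)-[:CONTAINS]->(f);

// either tag pulls the same node into its 'view'
MATCH (:Tag {name: '2011'})-[:CONTAINS]->(f:File)
RETURN f.name;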

Incidentally, a filesystem is an abstracted representation itself and probably has little to no relation to how the data is actually physically ordered/stored.

Anyhows, the rights and wrongs of relational vs graph are a bit outside the scope of this diatribe, so I shall... as they say, leave that for the reader.
Cool this sounds a lot more exotic than the usual society destroying compounds I pick up from the local projects o.0

robespierre wrote: I've seen that game before...

Image

God bless mindless violence.
Shiunbird wrote:
spiroyster wrote: As you say, using a filesystem as a grouping mechanism allows you to structure you data in a tree like hierarchy. However, this means you need to know your data model in advance, and has it's own inherent problems, such as forcing a more relational like tree hierarchy/structure. Fine, until you require another grouping mechanism which does not fit the tree hierarchy representation, this as you say, can be solved with tags (tags on the nodes/data), but the fundamental data is ultimately structured/ordered in tree-like, so is limited with what the tags can do apart from them basically being metadata attached to a bit of data and have no hierarchy themselves (all at root level).


Have you ever worked with a CMDB? I hope you never ever ever have to deal with it.
https://en.wikipedia.org/wiki/Configuration_management_database

At work, we've moved from a custom-made Oracle DB to a proprietary DB by a German company built on top of Oracle (and the most awful front-end), and now we are going to the CLOUD (the miraculous solution to all IT problems). The migration is going to take years, because the vendor won't customize much of it for us so we need to review millions of entries.

You mention in a very theoretical way practical problems we always face:
- you need to enter a set of data that is organised in a way that doesn't fit the initial model. What do you do? Build a custom application that writes and reads in a specific way to the database, and tries to format the data in a way that the other entities accessing the database can utilize and understand.
- you have a mix of automatically-populated data and human-populated data, and then you have applications that rely on both kinds of data as FACTS.
- the odd use case.

For example, we had a colleague in Asia who had NO SURNAME. But the system won't allow blank last names, so we had, in different systems:
UserId=1234, FirstName=Name, LastName=.
UserId=1234, FirstName=Name, LastName=[blank]
UserId=1234, FirstName=Name, LastName=[space]
UserId=1234, FirstName=Name, LastName=Name
Most applications would use UserId as unique identifier among different systems (Active Directory, Lotus Notes, SSO, etc..), but some operated on basis of user name.
Her phone would periodically stop working and I've heard once she didn't get paid.

Back to our initial discussion...
We can use the IRIX inventory command, or export lsdev, and write scripts to have the formats matching. But once we have a different use case that reviews a limitation in our way of structuring the data, we have two options:
- review all legacy entries
- watch the beginning of chaos unfold in front of our eyes

The hardest of all is to have the discipline to keep everything updated as we go. I'd love to hear from anyone here who feels being successful with that.


No, will try to avoid, thanks :P .

Luckily, I'm a software engineer and we have a dedicated IT dept. to deal with this 8-) . I'm a recent convert, and graph databases have solved design problems for us on at least two occasions, mainly due to the ability to have multiple data models all represented in a single database (single query interface), and their future-proofing. Certainly you need discipline, and at a higher level you require a vague idea of what you want from the data, but the details can come later ;) .

They are not the be-all and end-all, and yes, I can approach from a theoretical point of view for fresh data; migrating existing databases would be a right pita (and thank lawd I don't have to do that for a job). We do have lots of third-party 'manufacturers' data stored in SQL-esque form which we still need to update, deploy and subsequently query at runtime in the same context as our uber database. However, these can be hidden behind a compatible alternate query-language interface rather than migrated into a single database (although IRL, devs just know which database interface to use... atm... I WILL CHANGE THIS!), but I'm confident a graph database would be able to handle all aspects of this data (and more) should we decide to purify it all.

FWIW, I have implemented a graph-like bespoke database as a PoC for internal representation (which is in fact pretty straightforward, until you want to enforce stuff like ACID transactions o.0), however it looks like we will be using neo4j (I take offence at the JRE requirement, but it comes with loads of tools, including nice visual eye-candy ones). Ultimately it doesn't matter, as I'm pretty sure most graph-esque query languages can be translated with little effort into other graph-esque queries (explicitly by the user, if not implicitly by the interface). Or at least, I haven't come across one that can't yet o.0 (although I'll be the first to stick my hand up and admit it's a big wide world of databases out there o.0; I'm not hugely experienced with different graph database implementations, more the concepts required for data management).

The cloud is the future imo. I pretty much work via the cloud, since I'm part of a distributed development team (we have a dev in NZ, which is literally the opposite side of the planet to me :shock: ), so I am quite happy with its existence: I could not work the way I do without it. I like to think the cloud means we have come full circle; we don't have mainframes down the corridor fed by terminals, now we have some grid thingy in (potentially) another country that we interact with via dumbed-down ARM/Atom clients. In 5 years' time, the only requirement for workstations will be legacy software o.0. I speak from a CAD perspective, although it seems like everything is turning into SaaS (even gaming :twisted: ), which will end up being facilitated purely by 'the cloud'. My hypothesis.

<ontopic>
I don't have an inventory of all my kit, but perhaps I should create one, and will post the case study here when I do.
Shame, that keyboard has one of the nicest tactile feedbacks I have ever used.

The BBC Micro B keyboard too was a solid performer for heavy typing. Modding one of those would be a project :D I wonder if that is even possible without discarding everything else bar the casing and keys.