The collected works of squeen - Page 8

I put 999's in the example above to protect the innocent. I probably should have put xxx's instead.

Sorry for the confusion.
anotheradamdickson has suggested that I use IPFilter. Tech pubs describe how to use it:

http://techpubs.sgi.com/library/tpl/cgi ... index.html

and I remember when it was on the "Cool Software" site for download--but I thought it was put on the system discs. I just checked the 6.5.30 discs and can't find it. It has command-line tools like ipf and ipfstat . Does anyone know where it lives these days?

BTW I'm not talking about the older ipfilterd that lives in eoe.sw.ipgate.

Thanks.
Neko,
you...
this site...
the community...

words fail
:D
I ran gedit 2.8.1 remotely on an IRIX machine from a Linux machine running CentOS 4.5 just fine and then got the broken message for 2.16.0 on CentOS 5.

Gimp 2.2.13 also reports an error. Nedit and xterm run fine. (both CentOS 4 and 5).
Seems likely to me that it's the new GTK+ libs. I tried the very simple hello-world GTK+ example from here:
http://www.gtk.org/tutorial/c39.html#SEC-HELLOWORLD
and it crashed on CentOS 5 (but not 4).
libxrender is supposed to be the fallback for servers w/o the extension. That's how we've been getting AA fonts under Xsgi with the GTK+ apps in nekoware.
When time permits I'll run an IR4 vs Quadro FX 5600 comparison. Most of the numbers probably won't favor the IR4, but there still might be a few surprises! :)

SGI just finished installing a CAVE for us this past month.
I grabbed the latest version (Linux RPM) and it installed just fine. I fed it a numbered sequence of .RGB files and it read and played them back very nicely.

If you are looking for feedback on what would be nice to see feature-wise, I would look toward Apple's Motion software (part of FinalCut Pro), since that seems to be an analogous product. It would be nice to be able to manipulate the image sequence with some 2D effects such as motion blur, fade-in/out, etc. Perhaps they are already there and I just didn't see them.

Lastly, you could very easily (via ffmpeg) export to mp4--although for some reason its SGI .rgb converter seems broken in the latest snapshot of the source code.

Anyway, very nice!
dj wrote: Good to hear, thanks. Do you mostly use SGI image files?

Yes. As you said it's simple to code into your project.

I'm currently using libquicktime for movie support on IRIX and Linux, but for a number of reasons am thinking about changing that; I'll definitely take a closer look at ffmpeg.

I hadn't seen libquicktime before. Looks like a nice API. How do I get djv to export my sequence as a quicktime mov? Can I control the compression settings?
Nice QT export. (DJV_view works fine on IRIX as well here.) As for codec, H.264 encoding seems to be supported in libquicktime. My experience with it in FinalCut is that it kicks serious ass--but you'd want to add a separate field in the GUI for setting the bit rate (I usually go high for HD, ~18-21 Mb/s).

BTW compositing and simple motion blur are easy: just shove your frames into the OpenGL accumulation buffer.
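Something along these lines is all it takes (a rough sketch; draw_frame() stands in for whatever renders one frame, and N is how many frames you blend):

Code:
int i;

/* simple motion blur: average the last N frames in the accum buffer */
glClear( GL_ACCUM_BUFFER_BIT );
for ( i = 0; i < N; i++ )
{
    draw_frame( i );                 /* render one frame as usual */
    glAccum( GL_ACCUM, 1.0f / N );   /* add it in, scaled by 1/N */
}
glAccum( GL_RETURN, 1.0f );          /* dump the average to the color buffer */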

Even if you only added one other clip "timeline", it would be very nice for simple video editing--a simple 1-2 sec cross-fade transition between clips would do the trick.

You have in/out mark points, but is there a way to trim at marks?

All-in-all very neat. We have an in-house simulation visualization back-end that spits out numbered RGB images. I've been doing one of the following:

1) converting first to JPEG and making an MP4 using ffmpeg in a script
2) tar'ing up the images, sending them to a Mac, renaming the .rgb to .sgi, and bringing them into Apple's Motion and then into FinalCut.

Both are too limited or cumbersome. It'd be nice to have a native IRIX/Linux option. Rewriting FinalCut is impossible, but with a couple of small tweaks, you'd have everything we need! 8-)
dj wrote:
That's good news; I've only tried it so far on VPro and O2. Do you mind if I ask what type of GFX you're using?

I tried a Tzero with V12. I can also try an Onyx with IR4.
Unfortunately the open-source H264 code is GPL, and I'm not entirely sure what legal ramifications that has (DJV is BSD licensed). One option I guess would be for me to supply a package that linked with the user's version of libquicktime, which they could then install with the X264 code.

You could just try a dlopen() of the quicktime lib and map the functions it uses. If it fails, report to the user that it can't find libquicktime and then wash your hands of it.
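Something like this, roughly (the library name and the quicktime_open signature are from memory--double-check them against the lqt headers):

Code:
#include <dlfcn.h>
#include <stdio.h>

/* load libquicktime at run time instead of linking against it */
void *handle = dlopen( "libquicktime.so", RTLD_LAZY );
if ( handle == NULL )
{
    fprintf( stderr, "can't find libquicktime -- QT export disabled\n" );
}
else
{
    /* look up each entry point by name; signature assumed here */
    void *(*qt_open)( const char *, int, int );
    qt_open = ( void *(*)( const char *, int, int ) )
              dlsym( handle, "quicktime_open" );
}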

Do you use any titling for your videos? Or simple compositing, like adding a logo?

Yes, titles are very nice, but that means AA fonts and fades etc. Right now I plan to add it to our back-end and bake them into the video.
squeen wrote: You have in/out mark points, but is there a way to trim at marks?


It's been a while since I've used a video-editing app; do you mean adjust the in/out points after they are set?

I just meant: once I have the video loaded into djv_view, can I select a section for cut-and-paste (or just cut) somehow?
I was working on our in-house scene graph library last week. We weren't getting very good surface reflections up close with cube maps, so I just added a planar reflection render-to-texture (FBO) option. So far so good, but I'm still wrestling with reflecting all the light sources (and shadow maps!). Still kinda cool. Runs real-time (2 lights, 2 reflection surfaces updated each frame) at around 30 fps with fragment lighting and 5x5 PCF shadows in a pixel shader (Quadro FX 5600). Gonna take a bit of wrangling to get it working on the Onyx.
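The render-to-texture part is just the usual EXT_framebuffer_object setup, something like this sketch (the 512x512 size is arbitrary):

Code:
GLuint fbo, tex;

/* color texture that the reflection pass renders into */
glGenTextures( 1, &tex );
glBindTexture( GL_TEXTURE_2D, tex );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0,
              GL_RGBA, GL_UNSIGNED_BYTE, NULL );

/* attach it to an FBO and render the mirrored scene into that */
glGenFramebuffersEXT( 1, &fbo );
glBindFramebufferEXT( GL_FRAMEBUFFER_EXT, fbo );
glFramebufferTexture2DEXT( GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                           GL_TEXTURE_2D, tex, 0 );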
The Onyx will only have pbuffers and will suffer the hit of reading back (copying) to texture. Fortunately, IR is still very fast with pixel read. We'll see!
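On the Onyx side the fallback is to render the reflection into the pbuffer and copy it into the texture every frame, roughly like this (reflection_tex and the 512x512 region are illustrative):

Code:
/* after rendering the mirrored scene into the pbuffer */
glBindTexture( GL_TEXTURE_2D, reflection_tex );
glCopyTexSubImage2D( GL_TEXTURE_2D, 0,    /* target, mip level */
                     0, 0,                /* offset within the texture */
                     0, 0, 512, 512 );    /* framebuffer region to copy */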

I've looked at Radiance a bunch of times, but we are going for hardware-accelerated (real-time) rendering. Also, I have an obsession with writing my own applications as part of the learning process.

It's been a crappy month at work. Thanks for letting me share one of the lighter moments.
I thought about talking to Greg Ward regarding getting real physics based lighting modeling into our system.
Yes, my scene graph uses OpenGL so it's hardware rendering. Radiance (I think) is pure-CPU software, which would be many times slower. With the advent of shaders, GPU radiosity is becoming a reality. Brombear alludes to all the sticky areas in his post above (but radiosity != ray trace as I understand the terms).

When I started I wanted real-time interaction (which we have). With the pressure to produce more physically accurate lighting results (i.e. exactly what a digital video camera will see in a given environment), it's hard to keep the frame rate up. But that struggle is the essential fun and challenge of it all, right?

Part of my aversion to Radiance isn't its capabilities. I have a pathological need to write my own software as a means of learning (and controlling) the intricacies of a problem.

Sometimes, something new and exciting pops out. Other times I just reinvent the wheel. Que Sera Sera. :)
In GPU Gems 2 this is called "ambient occlusion" and though it's really a hack (i.e. ambient is a fake-out for real radiosity), it's next on the list. If I did as you suggest--use a true radiosity application for precomputed true diffuse--it would be better.

Here's another screenshot from the real-time app. This scene runs at only 6 fps. (It does not have the new planar reflection maps in it yet, just bump maps, dynamic cube maps, and shadow maps.)

Attachment:
hst.jpg
Like joerg, it took me a bit to get the hang of reading the captcha, but once I did it wasn't really a huge bother. I even got familiar with some of the phrases, and was happy when "an old friend" popped up that I knew I could decipher.

If, as Neko says above, it was a massive pain to install, I'd say he should save his time for better (and more fun!) things. After all, I believe that wiki page authoring is still an infrequent event for most of us.

My two-cents. (sorry joerg :) )

Here let me try my skills....

puxecoral
rangsales
milkyorbit

...what did I WIN?!?!?
Saw that one coming. :)
I never tackled a kernel driver, but we had success writing a user-space PCI driver for a National Instruments I/O board. We used the pciba interface to open and mmap the device. There were some big-endian issues to sort out, but after that it was all about just learning the register layout of the particular device.
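The guts of it were only a few lines--something like this from memory (the hwgraph path is illustrative; yours depends on the module/slot and which BAR you need):

Code:
#include <fcntl.h>
#include <sys/mman.h>

/* open the pciba node for the board's base address register */
int fd = open( "/hw/module/1/slot/.../pci/2/base/0", O_RDWR );

/* map the BAR so the device registers show up in our address space */
volatile unsigned int *regs = (volatile unsigned int *)
    mmap( 0, 0x1000, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0 );

/* after that it's plain loads/stores at the documented register
   offsets -- just mind the big-endian byte order on the MIPS side */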

When I get back to the office next week, I'll see if I can't post the driver somewhere.
The libsball link is nifty. I use the Spaceball 5000 and I'm hoping it's supported.

With the 5000 at least, the drivers provided by the manufacturer are X11 drivers. So, you start their app and then any other app can get spaceball input via X events sent to its application window. I added the code to our scene graph library in order to use the spaceball, and it works on Linux and/or IRIX 'cause it's just X events.

What is neat about libsball is that I might be able to bypass (manually) starting their X driver application and just have our scene graph app talk to the serial port directly.
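From the libsball header it looks about this simple (a sketch--treat the exact signatures as assumptions, and the serial port name is just an example):

Code:
#include "sball.h"

int tx, ty, tz, rx, ry, rz, buttons;

/* talk to the ball over the serial port directly, no X driver */
SBallHandle ball = sball_open( "/dev/ttyd2" );

for ( ;; )
{
    /* poll the latest translation/rotation deltas and button state */
    sball_getstatus( ball, &tx, &ty, &tz, &rx, &ry, &rz, &buttons );
    /* ...feed the deltas to the scene graph camera here... */
}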

Now the USB 5000's are probably another story...
AutoCAD 13 was the point I jumped ship, both from AutoCAD and from Windoze. Still, for old times' sake I'd love to see 13 (or 12) running under IRIX!
I wonder how I could read all my old 5.25" game disks into something runnable?
Wow. I'm gonna have to give this a try. What a blast it would be to retrieve and run some of my old AppleSoft programs. :)
Hi dex!
Care to share?
Cool. Let me know how it goes.
No, but thanks. I'm a guidance system designer and analyst, not an animator. The work I've done with graphics has been to support mission design, but the base models are typically shared across a number of people---and they usually require a lot of clean-up, since it seems most applications are concerned primarily with import and pay little heed to a clean export.

The ray tracer is a testing tool I built (synthetic video) for a navigation experiment. I just can't seem to stop playing with it. :)
A thought on colors. IRIX usually ran with a gamma correction of 1.6 IIRC. Perhaps the Buffy Colors are set for no gamma correction (1.0).
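If that's it, re-mapping a channel is a one-liner to experiment with (a sketch; it's easy to get the direction backwards, so try both exponents):

Code:
#include <math.h>

/* re-map a 0..1 color channel from a gamma-1.0 setup toward gamma 1.6 */
corrected = pow( value, 1.0 / 1.6 );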
You also want to set the SGI Motif flag using

Code:
static String fallback_resources[] = {
    "*sgiMode: TRUE",            /* SGI look-and-feel */
    "*SgNuseEnhancedFSB: TRUE",  /* SGI enhanced file selection box */
    "*useSchemes: all",          /* IndigoMagic color schemes */
    NULL
};


This will make everything look IndigoMagic-ish. :)
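For reference, the fallback resources get handed in at startup, something like this sketch ("MyApp" is whatever your application class is):

Code:
XtAppContext app;
Widget toplevel = XtAppInitialize( &app, "MyApp",
        NULL, 0,             /* no command-line options table */
        &argc, argv,
        fallback_resources,  /* used when no resource file is found */
        NULL, 0 );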
Martin Steen wrote:
jan-jaap wrote:
Close, but no cigar. It's not as hideous as the blue version, but it still lacks the SGI scrollbar decorations and the drag-n-drop target. Here's the Nedit File->Open dialog:


Ok, that box looks better than the standard Motif box :cry:
But how can I create the Nedit fileselectorbox?

Best regards,
Martin

"*SgNuseEnhancedFSB: TRUE" should have activated the SGI style file selector box and given you the improved box....hmmm.
Try adding this to the widget's resources:

Code:
Arg arglist[1];
int argcount = 0;

XtSetArg( arglist[argcount], SgNuseEnhancedFSB, TRUE ); argcount++;
Widget dialog = XmCreateFileSelectionDialog( w, "openFileDialog", arglist, argcount );
XtManageChild( dialog );   /* the dialog isn't shown until it's managed */
Gallery accounts are granted separately from the forum.
If you'd like one, I'd suggest sending a polite request to Nekonoko via PM.
Personally, "what gives" raises my hackles a bit.
Wow. Really nice work. Octane is my favorite.
I'd also vote for the classic SGI logo in future renders.
Cool. Looks like fun. Thanks!
Having asked myself this question for the 3rd time, I decided to update the Wiki page.
Everything runs smoothly from my point of view. Why change, especially if there is a cost?
I guess I'm "out-of-it" but what's quick reply?
I liked this write up (and no, that is not me in the picture). :)

http://www.hec.nasa.gov/news/features/2 ... 72809.html
Awesome image. I've not even touched on motion blur except as a post-processing step!
How in the world is everything so clean at only 256 samples per pixel? Do you use photon mapping or some other irradiance caching, or is it just magic beans?

I also don't have any sort of HDR environment map capability yet, so I don't know what to expect in terms of lighting difficulty level for a path tracer, but I seriously doubt I could produce as clean an image in 2 hours on 4 cores, let alone 20 minutes.
I'm starting to get the impression from ompf.org that VRED is a real leader in the RTRT community. (One of these days you need to PM me your handle over there).

Nice work!
For the navigation algorithm I developed, we are looking at hardware acceleration of the acquisition search, but for the path tracer the primary focus is physical accuracy, not real-time (the supercomputer cluster sure comes in handy!). What I'd like to improve there near-term are some of the following:

1) spectral rendering (as opposed to RGB) including Laser Radar (LADAR) frequencies
2) bidirectional path tracing and energy redistribution (Metropolis-like) methods
3) adaptive sampling in image space
4) BRDF tables

I am considering a much simpler real-time ray tracer engine to replace raster graphics in the CAVE. We just ordered a 24-core SMP machine--I might tinker with that a bit as well. That's your forte and a whole other can-of-worms for me, but I think the industry is heading in that direction and we need to stay on the cutting edge.

Unfortunately, my current assigned project does not have a strong visualization requirement---it's all about multi-body dynamics---and it launches in 2014, so I don't know how much time I will have for high-quality graphics. We'll see....

Graphics for situational (simulation results) visualization continues to be useful and I think we will probably be making significant improvements in our modeling program (scene graph & path tracer front-end) as well. The CAVE work will continue as well, but it may not be by me.

The only recent addition to the path tracer I've managed to squeeze in this spring was dielectrics (glass).
Attachment:
bunny-glass.jpg
Very nice to see the Onyx crank out some quality renderings.
Thank you!
I had a bit of luck today and got some time to sit down in front of the Tzero. :)
I just installed python 2.5.2, openssh 5.3p1, and the latest firefox & thunderbird.

Everything is working like a charm!
Many thanks to Dex and Nekonoko.