The collected works of NCommander

So since getting access to an IRIX box, I've been making slow but steady progress on bootstrapping Firefox on IRIX. After getting a sane and working toolchain, I went and built Firefox's dependencies cleanly with the new GCC as a stress test (and as some of nekoware's packages were out of date and I don't have root access to the box :-) ).

I figured some might be interested in my progress thus far. Currently, I've got a ton of patches that get SpiderMonkey (Firefox's JavaScript engine) running under IRIX, and it's pretty happy, if slow (no JIT on MIPS; see below for more information). I'm still having some serious issues getting working executables; the official documentation says that -lpthreads MUST be the last library on the linker line, and I ran into issues with SpiderMonkey because of this (gdb and dbx both reported SIGSEGV in pthread_key_create in pre-main code). My workaround was to use _RLDN32_LIBS to rewrite rld's load order, which worked. I found, however, quite by accident, that if I leave -lpthreads off the link line for the js shell entirely, I get working binaries! (This may be because the js shell itself doesn't spin up any threads directly as far as I can tell; all the voodoo is done in libmozjs.so.)
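
For anyone unfamiliar, "pre-main code" here means static initializers that run before main(). A minimal sketch of the kind of call that was faulting (not the actual Mozilla code, just an illustration):

Code: Select all

/* build with something like: gcc premain.c -lpthread
 * (the whole point above being that the link order matters) */
#include <pthread.h>
#include <stdio.h>

static pthread_key_t tls_key;

/* GCC runs constructor-attribute functions before main(), much like C++
 * static initializers; if libpthread isn't properly initialized by the
 * time this runs, pthread_key_create() can blow up. */
__attribute__((constructor))
static void init_tls_key(void)
{
    if (pthread_key_create(&tls_key, NULL) != 0)
        fprintf(stderr, "pthread_key_create failed pre-main\n");
}

int main(void)
{
    pthread_setspecific(tls_key, "hello");
    printf("%s\n", (const char *)pthread_getspecific(tls_key));
    return 0;
}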

Part of the issue seems to be the order in which MIPSpro vs. GCC feeds libraries to IRIX's ld, but I haven't quite figured out the voodoo to get a working binary every time. It seems rld only looks at a libX.so's own library dependency list if said libraries aren't already loaded by the initial executable; the practical upshot is that it works by magic, and is primarily influenced by the library load order of the calling executable :-/. I got a lead in #nekoware on someone who might be able to explain the deep voodoo going on, but as best I can tell, libpthread replaces many symbols in libc.so, which is why the linking order is so flighty.

Making life more difficult, dbx has no support for pthreads and can't properly put breakpoints in g++-compiled code. gdb, on the other hand, can find and place breakpoints, but crashes whenever one is hit (I guess it's more literal about 'break' point :-) ). I haven't built a more recent gdb to see if it helps; that's next on the TODO.

Firefox itself has also lost a lot of its cross-platform magic; e.g., there's a configure check looking for MAP_ANON(YMOUS), but the result is never used to wrap the mmap call with a replacement. In one place, a file did "#define MAP_ANONYMOUS 0" if it's undefined, which led to a silent failure and required considerable fprintf debugging to find and fix. Another headache is that the garbage collector wants access to a thread's stack base and stack size. On most platforms, this is handled by a non-POSIX pthread extension that can grab a running thread's attributes and report the base/stack size. I couldn't find an equivalent function on IRIX that worked*, so I modified NSPR to include that information in the PRThread structure, and then added a function so I could pull it on the fly; my implementation is not 100% correct, as it doesn't properly account for the NSPR thread wrapper function and the stack space it's using, but it's "good enough" for the time being; fixing it is pretty trivial, I just haven't done it yet.
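
For context, the usual portable fallback when MAP_ANON(YMOUS) is missing is to map /dev/zero instead; a rough sketch of that technique (the helper name is mine, not what Firefox actually uses):

Code: Select all

#include <fcntl.h>
#include <stddef.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

/* Hypothetical helper: anonymous memory with a /dev/zero fallback for
 * platforms (like older IRIX) that lack MAP_ANON/MAP_ANONYMOUS. */
static void *anon_mmap(size_t len)
{
#if defined(MAP_ANONYMOUS) || defined(MAP_ANON)
#ifndef MAP_ANONYMOUS
#define MAP_ANONYMOUS MAP_ANON
#endif
    return mmap(NULL, len, PROT_READ | PROT_WRITE,
                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
#else
    int fd = open("/dev/zero", O_RDWR);
    if (fd < 0)
        return MAP_FAILED;
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
    close(fd);   /* the mapping stays valid after the descriptor is closed */
    return p;
#endif
}

int main(void)
{
    void *p = anon_mmap(1 << 20);
    printf("got %p\n", p);
    return p == MAP_FAILED;
}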

JavaScript performance was unusable in a debugging build, but with gcc -O3 -march=mips4, I got pretty good performance out of it. It should be noted that the YARR interpreter (which replaced PCRE for non-JIT architectures) is known to be upwards of 50% slower. Also, don't take the total time at face value: python appears to have deadlocked on the final test and hung until I manually killed it. Running all tests up to that point took 2649 seconds (44 minutes); on a modern dual-core amd64 machine, I've been quoted runtimes between 8 and 12 minutes. Most tests ran extremely fast, but a few (mostly the stress tests) ran slowly and took a while. Considering we've got no JIT, and I'm not on the fastest box in the world, it's not that bad.

Here's what hinv has to say on my system:

Code: Select all

michael@IRIS:~/porting/mozilla-irix/js/src/tests$ hinv
2 360 MHZ IP27 Processors
CPU: MIPS R12000 Processor Chip Revision: 3.5
FPU: MIPS R12010 Floating Point Chip Revision: 3.5
Main memory size: 1536 Mbytes
Instruction cache size: 32 Kbytes
Data cache size: 32 Kbytes
Secondary unified instruction/data cache size: 4 Mbytes
Integral SCSI controller 0: Version QL1040B (rev. 2), single ended
Disk drive: unit 1 on SCSI controller 0
Disk drive: unit 2 on SCSI controller 0
Disk drive: unit 3 on SCSI controller 0
Disk drive: unit 4 on SCSI controller 0
Disk drive: unit 5 on SCSI controller 0
Integral SCSI controller 1: Version QL1040B (rev. 2), single ended
Tape drive: unit 4 on SCSI controller 1: DAT
CDROM: unit 6 on SCSI controller 1
Integral SCSI controller 2: Version Fibre Channel QL2200A
Integral SCSI controller 3: Version Fibre Channel QL2200A
IOC3/IOC4 serial port: tty1
IOC3/IOC4 serial port: tty2
IOC3 parallel port: plp1
Integral Fast Ethernet: ef0, version 1, module 1, slot MotherBoard, pci 2
Gigabit Ethernet: eg0, module 1, PCI slot 7, firmware version 0.0.0
Origin 200 base I/O, module 1 slot 1
IOC3/IOC4 external interrupts: 1


Here's what SpiderMonkey's test suite spat out after I killed the hung python process (--no-extensions means only ECMAScript functionality is tested, not XUL, which I haven't compiled yet):

Code: Select all

michael@IRIS:~/porting/release-mips-sgi-irix6.5/js/src/shell$ ~/porting/mozilla-irix/js/src/tests/jstests.py --no-extensions ./js
[2649|   2| 146] 100% ===============================================>| 5351.3s
REGRESSIONS
ecma_3/RegExp/perlstress-001.js
TIMEOUTS
ecma/Date/15.9.5.10-2.js
FAIL (partial run -- interrupted by user)


EDIT: Just to be clear, this means 2649 tests passed, two failed or timed out, 146 skipped.

A MIPS JIT was written and landed in Firefox 11, but it's against O32/Linux. On the plus side, it appears to have been tested and works on big-endian MIPS, so it might realistically be possible to backport it to Firefox 10; I will review this once I actually have the browser working!

If anyone could give some insight into the pthread linking/ordering issues on IRIX, I'd be most appreciative.

*- IRIX's getcontext() suggests it works in threads, but as far as I could tell it only ever returned the stack of the base thread. The same function is used on AIX in jsnativestack.cpp; I suspect it is broken on that platform as well.
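
Here's roughly what that getcontext() approach looks like, for reference (just a sketch; uc_stack is supposed to describe the calling thread's stack, but on IRIX both calls below appeared to report the initial thread's):

Code: Select all

#include <pthread.h>
#include <stdio.h>
#include <ucontext.h>

static void report_stack(const char *who)
{
    ucontext_t ctx;
    if (getcontext(&ctx) == 0)
        printf("%s: stack base %p, size %lu\n", who,
               ctx.uc_stack.ss_sp, (unsigned long)ctx.uc_stack.ss_size);
}

static void *worker(void *arg)
{
    (void)arg;
    report_stack("worker thread");   /* should differ from the main thread */
    return NULL;
}

int main(void)
{
    pthread_t tid;
    report_stack("main thread");
    pthread_create(&tid, NULL, worker, NULL);
    pthread_join(tid, NULL);
    return 0;
}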

SON OF EDIT: It should be noted that I was unaware of the Xrender issue with Xsgi when I started this, so the version of Gtk+ I'm building with will not work locally (nor do I have a way to test local output). Firefox should be buildable against an older Gtk+2 which didn't require Xrender, or that should be relatively easy to backport.
So I'm making more progress, fixing random build failures as they pop up. I think the build is roughly 75% of the way through (a full build of FF10 on ARM takes about 8 hours just to build the source, excluding the test suite), so processor speed is a bit of an issue here :-) .

I've found out that as part of the MeeGo work, a Qt backend was written and accepted (there are nightly builds of firefox-qt from the mozilla-central tree). From reading around, Qt doesn't have the performance issues Gtk+2 does, nor the Xrender issues. If that is the case, I'll look at retargeting Firefox 10 to build its Qt backend instead of Gtk+, which would solve the issue nicely.

Can anyone comment on the state of Qt apps on IRIX? :-)

EDIT: And **** ...

Code: Select all

../../dist/include/mozilla/layers/ShadowLayerUtilsX11.h:44:36: fatal error: X11/extensions/Xrender.h: No such file or directory


When did Firefox start depending on Xrender :-/. This might kill the port right then and there. I'm still a bit confused about the state of Xrender support on IRIX ...

SON OF EDIT: I got nekoware's Xrender library installed and the build continues, but it's not clear to me whether the resulting binary would be usable except over X11 forwarding to a box that does do Xrender ...
So the build ran for several more hours before failing with this:

Code: Select all

g++: error trying to exec '/usr/people/michael/local/gcc/libexec/gcc/mips-sgi-irix6.5/4.6.2/collect2': execv: Arg list too long
gmake[5]: *** [libxul.so] Error 1
gmake[5]: Leaving directory `/usr/people/michael/porting/obj1-mips-sgi-irix6.5/toolkit/library'
gmake[4]: *** [libs_tier_platform] Error 2
gmake[4]: Leaving directory `/usr/people/michael/porting/obj1-mips-sgi-irix6.5'
gmake[3]: *** [tier_platform] Error 2
gmake[3]: Leaving directory `/usr/people/michael/porting/obj1-mips-sgi-irix6.5'
gmake[2]: *** [default] Error 2
gmake[2]: Leaving directory `/usr/people/michael/porting/obj1-mips-sgi-irix6.5'
gmake[1]: *** [realbuild] Error 2
gmake[1]: Leaving directory `/usr/people/michael/porting/mozilla-irix'
gmake: *** [build] Error 2


Currently waiting for my sysadmin to bump the limit (AGAIN).
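
For the curious, "Arg list too long" is the exec family failing with E2BIG: gcc hands collect2 every object file on one command line, and the combined argument/environment size blows past the kernel's ceiling. A trivial sketch of the mechanism (not IRIX-specific):

Code: Select all

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* The kernel's limit on the total argv+envp size handed to exec(). */
    printf("ARG_MAX: %ld bytes\n", sysconf(_SC_ARG_MAX));

    /* A sufficiently huge argv would make this fail with errno == E2BIG,
     * which gets reported as "Arg list too long". */
    char *argv[] = { "/bin/true", NULL };
    if (execv(argv[0], argv) == -1)
        fprintf(stderr, "execv: %s\n", strerror(errno));
    return 1;
}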

Code: Select all

/usr/people/michael/porting/obj1-mips-sgi-irix6.5/config/nsinstall -D ../../dist/sdk/lib
ld32: WARNING 15 : Multiply defined:(base::GetCurrentProcId()) in ../../ipc/chromium/process_util_posix.o and ../../ipc/chromium/process_util_posix.o (2nd definition ignored).
ld32: WARNING 15 : Multiply defined:(base::GetCurrentProcessHandle()) in ../../ipc/chromium/process_util_posix.o and ../../ipc/chromium/process_util_posix.o (2nd definition ignored).
ld32: WARNING 15 : Multiply defined:(base::OpenProcessHandle(long, long*)) in ../../ipc/chromium/process_util_posix.o and ../../ipc/chromium/process_util_posix.o (2nd definition ignored).
ld32: WARNING 15 : Multiply defined:(base::OpenPrivilegedProcessHandle(long, long*)) in ../../ipc/chromium/process_util_posix.o and ../../ipc/chromium/process_util_posix.o (2nd definition ignored).
ld32: WARNING 15 : Multiply defined:(base::CloseProcessHandle(long)) in ../../ipc/chromium/process_util_posix.o and ../../ipc/chromium/process_util_posix.o (2nd definition ignored).
ld32: WARNING 15 : Multiply defined:(base::GetProcId(long)) in ../../ipc/chromium/process_util_posix.o and ../../ipc/chromium/process_util_posix.o (2nd definition ignored).
ld32: WARNING 15 : Multiply defined:(base::KillProcess(long, int, bool)) in ../../ipc/chromium/process_util_posix.o and ../../ipc/chromium/process_util_posix.o (2nd definition ignored).
ld32: WARNING 15 : Multiply defined:(base::CloseSuperfluousFds(std::vector<base::InjectionArc, std::allocator<base::InjectionArc> > const&)) in ../../ipc/chromium/process_util_posix.o and ../../ipc/chromium/process_util_posix.o (2nd definition ignored).
ld32: WARNING 15 : Multiply defined:(base::SetAllFDsToCloseOnExec()) in ../../ipc/chromium/process_util_posix.o and ../../ipc/chromium/process_util_posix.o (2nd definition ignored).
ld32: WARNING 15 : Multiply defined:(base::ProcessMetrics::ProcessMetrics(long)) in ../../ipc/chromium/process_util_posix.o and ../../ipc/chromium/process_util_posix.o (2nd definition ignored).
ld32: WARNING 15 : Multiply defined:(base::ProcessMetrics::CreateProcessMetrics(long)) in ../../ipc/chromium/process_util_posix.o and ../../ipc/chromium/process_util_posix.o (2nd definition ignored).
ld32: WARNING 15 : Multiply defined:(base::ProcessMetrics::ProcessMetrics(long)) in ../../ipc/chromium/process_util_posix.o and ../../ipc/chromium/process_util_posix.o (2nd definition ignored).
ld32: WARNING 15 : Multiply defined:(base::ProcessMetrics::~ProcessMetrics()) in ../../ipc/chromium/process_util_posix.o and ../../ipc/chromium/process_util_posix.o (2nd definition ignored).
ld32: WARNING 15 : Multiply defined:(base::EnableTerminationOnHeapCorruption()) in ../../ipc/chromium/process_util_posix.o and ../../ipc/chromium/process_util_posix.o (2nd definition ignored).
ld32: WARNING 15 : Multiply defined:(base::RaiseProcessToHighPriority()) in ../../ipc/chromium/process_util_posix.o and ../../ipc/chromium/process_util_posix.o (2nd definition ignored).
ld32: WARNING 15 : Multiply defined:(base::DidProcessCrash(bool*, long)) in ../../ipc/chromium/process_util_posix.o and ../../ipc/chromium/process_util_posix.o (2nd definition ignored).
ld32: WARNING 15 : Multiply defined:(base::WaitForExitCode(long, int*)) in ../../ipc/chromium/process_util_posix.o and ../../ipc/chromium/process_util_posix.o (2nd definition ignored).
ld32: WARNING 15 : Multiply defined:(base::WaitForSingleProcess(long, int)) in ../../ipc/chromium/process_util_posix.o and ../../ipc/chromium/process_util_posix.o (2nd definition ignored).
ld32: WARNING 15 : Multiply defined:(base::CrashAwareSleep(long, int)) in ../../ipc/chromium/process_util_posix.o and ../../ipc/chromium/process_util_posix.o (2nd definition ignored).
ld32: WARNING 15 : Multiply defined:(base::ProcessMetrics::GetCPUUsage()) in ../../ipc/chromium/process_util_posix.o and ../../ipc/chromium/process_util_posix.o (2nd definition ignored).
ld32: WARNING 15 : Multiply defined:(base::GetAppOutput(CommandLine const&, std::basic_string<char, std::char_traits<char>, std::allocator<char> >*)) in ../../ipc/chromium/process_util_posix.o and ../../ipc/chromium/process_util_posix.o (2nd definition ignored).
ld32: WARNING 15 : Multiply defined:(base::GetProcessCount(std::basic_string<wchar_t, std::char_traits<wchar_t>, std::allocator<wchar_t> > const&, base::ProcessFilter const*)) in ../../ipc/chromium/process_util_posix.o and ../../ipc/chromium/process_util_posix.o (2nd definition ignored).
ld32: WARNING 15 : Multiply defined:(base::KillProcesses(std::basic_string<wchar_t, std::char_traits<wchar_t>, std::allocator<wchar_t> > const&, int, base::ProcessFilter const*)) in ../../ipc/chromium/process_util_posix.o and ../../ipc/chromium/process_util_posix.o (2nd definition ignored).
ld32: WARNING 15 : Multiply defined:(base::WaitForProcessesToExit(std::basic_string<wchar_t, std::char_traits<wchar_t>, std::allocator<wchar_t> > const&, int, base::ProcessFilter const*)) in ../../ipc/chromium/process_util_posix.o and ../../ipc/chromium/process_util_posix.o (2nd definition ignored).
ld32: WARNING 15 : Multiply defined:(base::CleanupProcesses(std::basic_string<wchar_t, std::char_traits<wchar_t>, std::allocator<wchar_t> > const&, int, int, base::ProcessFilter const*)) in ../../ipc/chromium/process_util_posix.o and ../../ipc/chromium/process_util_posix.o (2nd definition ignored).
ld32: WARNING 15 : Multiply defined:(base::ProcessMetrics::~ProcessMetrics()) in ../../ipc/chromium/process_util_posix.o and ../../ipc/chromium/process_util_posix.o (2nd definition ignored).
ld32: WARNING 84 : ../../staticlib/libmozreg_s.a is not used for resolving any symbol.
ld32: WARNING 84 : /usr/lib/../lib32/libdl.so is not used for resolving any symbol.
ld32: WARNING 84 : /usr/people/michael/local/lib/libbz2.a is not used for resolving any symbol.
ld32: WARNING 84 : /usr/lib/../lib32/libsocket.so is not used for resolving any symbol.
ld32: ERROR   103: Direct reference to preemptible symbol "sharedstub".
ld32: WARNING 47 : This module (../../layout/base/nsCSSFrameConstructor.o .text) contains branch instruction(s)
that might degrade performance on an older version (rev. 2.2) R4000 processor.
ld32: INFO    171: Multigot invoked. Gp relative region broken up into 25 separate regions.
ld32: mmap'd output file could not grow.
ld32: Not a directory.  Removing output file...
ld32: INFO    152: Output file removed because of error.
collect2: ld returned 32 exit status
gmake[5]: *** [libxul.so] Error 1


So it tried to do one of the final links and crapped out. We *might* have hit an internal limit in the linker, which would mean we simply can't continue. I'll look into this more, but I'm not sure how to proceed. I might need to fix binutils' GNU ld and pray it can successfully manage it.

To put this in context, ld ran for over an hour, ate 700 MiB of RAM, and pegged out the processor it was running on.

Monster library is monster ...

EDIT: Cross-compilation might be possible (not to mention faster), but I don't have a local copy of the IRIX core system libs, and GNU ld 2.22 has issues on IRIX. 2.20 is known to be healthier though. The idea of trying to cross-compile the lizard scares me, though ...
jan-jaap wrote: From man(1) ld :

Code: Select all

I/O Options
The options affect I/O:

-mmap     Directs the linker to use mmap(2) as its preferred mode for
          reading object files.  This usually results in better I/O
          performances, except when using NFS mounted files with high
          network latencies.  This is enabled by default.

-read     Directs the linker to use the open(2), lseek(2), and read(2)
          utilities as its preferred mode for reading object files.
          Setting this option when many object files are remotely
          mounted with high network latency often improves
          performance.


Try to pass the linker option '-read' to work around mmap limitations?


Tried passing -Wl,-read to GCC; no such luck. I *suspect* it might be using mmap to write the output file; none of the input files are over a gigabyte. I suspect binutils will be required, though it might not be immune either.

IRIX mmap manpage:


ENOMEM    zero was passed as the value of addr, and insufficient space was
          available in the standard address ranges.  This is primarily an
          issue for 32 bit programs requesting 1GByte or more, because the
          range from 0x30000000 to 0x40000000 is reserved for MAP_FIXED as
          described above.

I'm currently debating options; cross-compilation isn't really feasible unless I acquire IRIX hardware and a legal license. I could go hunting through binutils and see if it uses MAP_FIXED + MAP_AUTOGROW, and change it, but I doubt that will help ...
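
If anyone wants to poke at the failure mode the manpage describes, here's a trivial sketch from the point of view of an n32 process (using /dev/zero, since as noted earlier this platform lacks MAP_ANONYMOUS); whether it actually fails depends entirely on how the address space is already carved up:

Code: Select all

#include <errno.h>
#include <fcntl.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    size_t len = 1UL << 30;                /* ask for 1 GiB in one go */
    int fd = open("/dev/zero", O_RDWR);
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED)
        fprintf(stderr, "mmap(1 GiB): %s\n", strerror(errno));  /* ENOMEM here */
    else
        munmap(p, len);
    close(fd);
    return 0;
}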

*grumble*
mattst88 wrote:
hamei wrote:
NCommander wrote: When did firefox start depending on Xrender :-/. ...

Pretty sure that was right about the same time that they stuck their heads firmly up their asses :P


Ugh! I know, right?!

What the heck were they thinking when they started depending on technologies that our beloved IRIX doesn't have?! Don't they know how many IRIX users they have? I can't believe they'd alienate such a huge and important user-base like that. And, doing all of that over the objections of the many IRIX developers who begged and pleaded with them to just not use such a new (from the year 2000!) technology like Xrender.


One of the nice things about Firefox/Mozilla was that it only used the newfangled technology if (and I stress if) it was available, and it could be turned off.

That thinking seemed to have died around FF4.
I didn't mean to drop off the face of the planet but RL issues threatened to consume me on several fronts to the point I just went "ARGHBERHEW".

The linker issue is the most problematic. GNU ld simply can't generate large-scale IRIX binaries without self-destructing, and without a 64-bit linker, we're kinda stuck. There were some references via Google to SGI having released a 64-bit linker (that generates 64-bit binaries), but beyond that I have no idea where to find it. The file was called ld64x.

The other issue in Firefox is that there's code that depends on Xrender, but beyond that I'm not sure of the scope or scale of the problem. That being said, I did get SpiderMonkey fully working (with a *lot* of patching) to the point that it passed its test suite, so having a JS engine is definitely possible. Given the work to get WebKit working, it might be more beneficial if I take a stab at getting JavaScriptCore (JSC) working, which is WebKit's native JS engine.
So, I'm still alive. I've had some real-life issues, but I wouldn't mind taking another attempt at building this monster if someone wants to try and get me on a box.

It appears someone tracked down the binutils issue we were having, making it possible for gld to work on IRIX. If that is indeed the case, then it's simply a matter of building it as a 64-bit binary that targets 32-bit output, sliding it in under GCC, and going from there.

http://sourceware.org/ml/binutils/2012-11/msg00409.html
ClassicHasClass wrote:
I'm laughing a little here, because this is a carbon copy of the problems we had between Fx4 and Fx5 with TenFourFox -- libxul was too large to link with a 32-bit ld. We now use a 64-bit ld, backported to 10.4+. If you can get 10 up, 17 should not be a problem.

Is there any way to get your changesets against -esr10?


I'm pretty sure those changesets are gone, though I do have my original notes I took during the process plus this thread.

Rewriting the NSPR extensions will be a $#!@ pain, but I can probably do it mostly from memory (I had to add two members to a struct, modify the PRThread init function to calculate the offsets on the fly, and create getter functions, then patch SpiderMonkey in the right places).

If the linker is actually fixed (and I've learned more about toolchain development since I originally tried this), it shouldn't be TOO hard to soldier on and make Firefox exist.

My biggest concerns were with Xrender, and with properly ripping out WebGL, but there was no technical reason the latter couldn't simply be rm-ed, and the former ...

Well my understanding is apps requiring Xrender *do* work with the nekoware stub library; they just do all their rendering client-side and are slow.

Classic: I'd love to compare notes with your porting. Feel free to poke me on the official IRC channel.
ClassicHasClass wrote:
Well, the NSPR part was the easy bit for us; it already worked with 10.4 in 3.6, so I just had to keep it that way. For WebGL, I just disabled it at the GfxInfo level since 10.4 doesn't support OpenGL 2 nor NPOT texture sizes.

Are you working off diegel's basis for 3.0.19, or did you do the NSPR bits yourself? The PRThread stuff sounds like it should be done to xpcom as well.


Well, SGI/Mozilla did the original NSPR port. That NSPR port is an absolute mess though, as it tries to use IRIX's own thread library rather than pthreads. I rewrote parts of NSPR to tie into the unix_pthreads layer, and then added additional functionality. Specifically, SpiderMonkey's garbage collector walks the stacks of running threads looking for references to GC things. Because of this, SpiderMonkey needs to know where each thread's stack lives and how big it is. I fixed this by having SpiderMonkey go through NSPR's thread library, which in my port wraps pthreads with a special struct attached to them. When a thread is created, the wrapper records the thread's stack address and stack size in that struct, so I can retrieve them later.

It's a pretty elegant hack IMHO. SpiderMonkey (mostly) passed its test suite with that additional patching.
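
To give a concrete idea of the shape of it, here's a standalone sketch of the same scheme (all names here are made up for illustration; the real change lives inside NSPR's pthread layer and PRThread, and manages the stack with NSPR's own machinery rather than pthread_attr_setstack):

Code: Select all

#include <pthread.h>
#include <stdlib.h>

/* Per-thread record of stack base/size, filled in at creation time so a
 * conservative GC can later ask "where does this thread's stack live?" */
typedef struct StackInfo {
    void  *base;
    size_t size;
} StackInfo;

struct start_args {
    void *(*fn)(void *);
    void *arg;
    StackInfo *info;
};

static pthread_key_t stack_key;
static pthread_once_t key_once = PTHREAD_ONCE_INIT;
static void make_key(void) { pthread_key_create(&stack_key, free); }

static void *trampoline(void *v)
{
    struct start_args a = *(struct start_args *)v;
    free(v);
    pthread_setspecific(stack_key, a.info);   /* expose it to the GC */
    return a.fn(a.arg);
}

/* Create a thread on a stack we allocate ourselves, so base and size are
 * known exactly instead of being guessed after the fact.  stack_size must
 * be at least PTHREAD_STACK_MIN. */
int create_tracked_thread(pthread_t *tid, size_t stack_size,
                          void *(*fn)(void *), void *arg)
{
    pthread_attr_t attr;
    StackInfo *info = malloc(sizeof *info);
    struct start_args *a = malloc(sizeof *a);

    pthread_once(&key_once, make_key);
    info->size = stack_size;
    info->base = malloc(stack_size);
    a->fn = fn; a->arg = arg; a->info = info;

    pthread_attr_init(&attr);
    pthread_attr_setstack(&attr, info->base, stack_size);
    return pthread_create(tid, &attr, trampoline, a);
}

/* The getter the GC-side code would call. */
StackInfo *get_current_stack_info(void)
{
    return pthread_getspecific(stack_key);
}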

XPCOM was a stub in my version, since I needed libxul.so to exist so I could run the test suite and figure out how far I could get. XPCOM needs to know the C and C++ ABI of a given architecture/operating system. There was a version of XPCOM for MIPSpro, but GCC has a different C++ ABI, so I knew that would fail. Fixing it would require reverse engineering the proper function prologues and epilogues. Generally, if you can get XPCOM+SpiderMonkey to pass their test suites, there's a pretty decent chance the damn thing might actually run.

Looking at my old messages, diegel got a dump of the hard drive from the box I did the porting on, though I don't know how much code he reused. From the earlier discussions, the issue with Firefox was that it linked but fell over mysteriously. I think this was the same pthreads issue I hit in SpiderMonkey; specifically, IRIX is extremely sensitive to the linking order, and rewriting the load order via the _RLD32 magic variables got it to work.
ClassicHasClass wrote:
diegel is using gcc, so he must have solved the ABI issue (you can't do xptcalls without it, and xptcalls haven't changed in aeons).

That's a clever solution to JS GC.

I could still reconstruct appliable changesets (I guess this calls for porting hg to nekoware 8) ), if you have the source tree you were working with in any form. This sounds highly doable. For TenFourFox, I just distribute changesets overlaid on top of -esr10/17/etc., and I think the same approach works here.


Hrm, it's possible the xpcom glue between MIPSpro and GCC might be compatible. I never looked into the C++ breakage issue with GCC, but if it was just a mangling issue rather than an actual change in calling convention, then it's possible it "just might work". That would make life a LOT easier than writing a new one from scratch.

kubatyszko wrote:
NCommander wrote:
ClassicHasClass wrote:
Is there any way to get your changesets against -esr10?


I'm pretty sure those changesets are gone, though I do have my original notes I took during the process plus this thread.


Well, I still do have all the stuff you did, let me know if you want it in any form...


A tarball might be nice to fish through to recover patches, and possibly some of the hand-built binaries I rolled.
I'm going to note that most of that code is extremely messy and somewhat of a trainwreck. I couldn't get hg -OR- quilt to work properly, so I planned to generate one diff at the end and split it into individual patches. I do hope you don't find it too ugly :-) .

I'd be willing to take another stab given access to an IRIX box capable of 64-bit code (and if I can get gld to work well enough to not be a piece of crap).
I didn't use the FF3 work since a lot of it wasn't directly relevant (a *lot* of it was about getting MIPSpro to work with FF3), and the issues I had were different, since in the FF3 era Mozilla hadn't yet started stripping out IRIX code all over the place.
I decided to take a look at this and compile on x86.

This codebase is dated to say the least. To properly fix the .d issue, patch the makefile to change the CWD:

Code:
in build/maxwell.component.mk:76
-        $(MAXUSERROOT)/build/movedep $(subst .C,.d, $<) $(LIBRARY)
+        cd $(TMPDIR); $(MAXUSERROOT)/build/movedep $(subst .C,.d, $<) $(LIBRARY)


I'm still cleaning up headers to make this look like something from this century though.

EDIT: Discovered running make clean seems to reset the code base to defaults. How it does this I'm not sure, but my changes vanish after doing so.
hamei wrote:
FF still does this to some extent, by the way -
Code:
urchin 1% firefox3
moz_run_program[36]: 1301 Bus error
urchin 2% firefox3
terminate called after throwing an instance of 'std::bad_alloc'
what():  std::bad_alloc
moz_run_program[36]: 2368 Abort
urchin 3% firefox3
moz_run_program[36]: 3841 Memory fault

Not nearly as badly as before but on occasion ... the strange thing is, it often dies while sitting there quietly minimized. I'll be doing something else entirely, look up, and it's gone again. The LAN isn't locked down as tight as a cleveland girl collection but the Fox shouldn't be wandering unsupervised through the Internet, I hope :(


I suspect Firefox itself is running out of address space. Older versions of Firefox were notorious RAM hogs and would sometimes leak memory, and with default settings, a 32-bit binary running under IRIX only has approximately 1 GiB of address space, part of which is taken up by the multitude of shared libraries and other fun stuff. I wouldn't be surprised if Firefox starts having malloc()s fail and takes a dive.
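
A quick way to see how much address space a process actually has to play with (a throwaway sketch: grab big chunks until malloc() gives up, and report the total; this measures address space rather than RAM, since nothing is touched):

Code: Select all

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const size_t chunk = 16u << 20;        /* 16 MiB per allocation */
    size_t total = 0;

    /* Cap at 3 GiB so this also terminates quickly on 64-bit systems. */
    while (total < (3UL << 30) && malloc(chunk) != NULL)
        total += chunk;                    /* deliberately never freed */

    printf("grabbed roughly %lu MiB of address space\n",
           (unsigned long)(total >> 20));
    return 0;
}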

If you can get a gdb trace, it should be pretty obvious why it's self-destructing.
As it stands, I don't currently have an IRIX box or access to a machine to do any porting work, so this is, at best, on ice :-/.
If it's not too much of an issue, I'd like to take a stab at it again. My SGI-fu is a bit rusty; the Octane can run 64-bit binaries, correct?

If binutils is truly fixed for IRIX (or at least if I can work out how to fix it) and I can get a 64-to-32-bit toolchain, porting Firefox enters the realm of the possible.
vishnu wrote:
Yes all Octanes are R10000 and above and are thus 64 bit machines.

I just dl'ed the most recent version of binutils but the configure script craps out with "compiler cannot create executables," which just simply ain't true it can. I've gotten this error before from autoconf on my Octane and it mysteriously goes away and then reappears again. Maybe my compiler installation is haunted! :shock:


Building with GCC or MIPSpro? Post relevant config.log output.

ClassicHasClass wrote:
I was hoping to have cycles to work on this but Mozilla decided to kick the legs out from under the PowerPC OS X JIT, so now I'm trying to get IonMonkey to work on my G5 instead of trying to get Firefox to work on my Fuel. :(


Debugging JITs is never fun, to say the least. I haven't looked at the IonMonkey mess (I used the PCRE patchset from Debian to work around having to write a JIT for IRIX).
Trying to reduce the clutter in my life, I took these back from NYS when I visited last time with the truck. Here's roughly what's available:

* Sunfire v120
* Netra T1, and T105
* Sun Ultra10
* PowerMac G4 (Quicksilver 733MHz, two HDDs, though one MIGHT be dead, will check before handing it off)
* HP zx6000 (currently in rackmount configuration sans rails, but I have the parts to make it stand as a desktop)

and a couple of old x86 systems in which I doubt there is much interest. I live in the Portland, OR area, and am willing to travel. I'm not asking much for the Suns or the PowerMac; essentially gas, lunch, and a tip :-) .

The zx6000's price is negotiable, but I'd like to get about $400 for it.

All systems will either include blank HDDs or come installed with Linux, buyer's choice. Anything still here by the end of the week is going on eBay or getting trashed.
recondas wrote:
A zx6000 would be a good way to finally try out the HP-UX 11i CD set HP sent me back in '06, but by the time coast-to-coast shipping adds the weight of its thumb to the desirability scale, I'll probably end up waiting (another seven years) for one to show up locally. :roll:


From personal experience, it's $200 to ship coast to coast, which is what I paid when I shipped it from NYS a few years ago. To be frank, I think I overpriced this stuff, so I'll take best offer on anything. I'm thinking I just want it gone and not overthink it :-/