It would also matter where your I/O cards are, right?
Each of the two nodeboards has a connection to each XBOW (and there are two), and one of them takes ownership of each I/O card. Normally one CPU/heart takes ownership, and all communication and ISRs for that card go through that CPU/heart.
If you look in /var/sysgen/system/irix.sm around line 573 (for IRIX 6.5.28), you'll find the directives that control nodeboard ownership and interrupt routing.
This way, if you have a lot of heavy I/O on XBOW #1 and a choice of CPUs with different cache sizes (which can make a largish difference for ISRs that fire constantly; having 2MB of cache can make a big difference in some cases) or a more powerful CPU, you can make sure the best CPU is servicing the hardware on that XBOW and handling the extra work and interrupts.
Line 573: contains the NOINTR directive, which excludes a CPU from servicing interrupts
Line 583: contains DEVICE_ADMIN, which assigns CPU ownership for devices on that XBOW (it makes sense: even if you assign a CPU on a nodeboard whose XBOW isn't connected to the I/O port, traffic still has to go through the owning nodeboard's heart on the way, I/O -> XBOW -> owning heart -> router -> nodeboard XBOW -> nodeboard heart -> CPU, compared to XBOW -> heart -> CPU)
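Just as a rough sketch of what those two directives look like - the /hw paths below are made-up placeholders, so pull the real device and CPU paths for your box from hinv / the commented samples already sitting in irix.sm, and double-check the exact syntax against your 6.5.x release:

    * keep CPU 0 out of the interrupt-servicing pool
    NOINTR: 0
    * steer interrupts for a controller hanging off the XBOW to CPU "a" on node 1
    * (illustrative /hw paths only - substitute your own)
    DEVICE_ADMIN: /hw/module/1/slot/io2/pci_xio/pci/0/scsi_ctlr/0 INTR_TARGET=/hw/module/1/slot/n1/node/cpu/a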
/var/sysgen/system/numa.sm also contains some NUMA directives, which are handy (especially on a busy system, or a system low on memory running an application that isn't NUMA aware). Migration is turned off by default; turning it on can make a large difference for processes that are memory hungry and have CPUs serially accessing large amounts of memory. In a case like that, though, oview would show a large amount of traffic through the XBOWs/routers. Migration is one of the cooler features of SGI's NUMA hardware - they widely state it's one of the things that makes NUMA perform well, yet it's disabled by default. I guess they have their reasons, but..... I suppose it doesn't matter much now that memory is so cheap - but you can also crank down the kernel replication so it isn't using as much memory on each nodeboard.
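For reference, flipping migration on in numa.sm looks roughly like this - I'm going from memory on the directive name and values, so treat it as a sketch and trust the commented defaults in the file itself (the kernel replication knobs are documented there as well):

    * enable dynamic page migration (shipped default is disabled)
    NUMA_MIGR_DEFAULT_MODE: ON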