The collected works of rosmaniac

Once again, thanks to Voralyan's howto, here's the second Altix box on Debian 6. This one is a tad larger: there are 13 compute nodes, two of which have PCI slots and IO10s (SATA drives instead of SCSI), and it's running with a single-plane router. The hwinfo and dmesg output follow.

Code: Select all

winterstar:~# hwinfo --short
cpu:
Madison, 1500 MHz
Madison, 1500 MHz
Madison, 1500 MHz
Madison, 1500 MHz
Madison, 1500 MHz
Madison, 1500 MHz
Madison, 1500 MHz
Madison, 1500 MHz
Madison up to 9M cache, 1500 MHz
Madison up to 9M cache, 1500 MHz
Madison, 1500 MHz
Madison, 1500 MHz
Madison, 1500 MHz
Madison, 1500 MHz
Madison, 1500 MHz
Madison, 1500 MHz
Madison, 1500 MHz
Madison, 1500 MHz
Madison, 1500 MHz
Madison, 1500 MHz
Madison, 1500 MHz
Madison, 1500 MHz
Madison, 1500 MHz
Madison, 1500 MHz
Madison up to 9M cache, 1500 MHz
Madison up to 9M cache, 1500 MHz
keyboard:
/dev/ttyS0           serial console
storage:
SGI IOC4 I/O controller
Vitesse VSC7174 PCI/PCI-X Serial ATA Host Bus Controller
LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI
LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI
SGI IOC4 I/O controller
Vitesse VSC7174 PCI/PCI-X Serial ATA Host Bus Controller
network:
eth1                 SGI Dual Port Gigabit Ethernet (PCI-X,Copper)
eth2                 SGI Dual Port Gigabit Ethernet (PCI-X,Copper)
eth0                 SGI IO9/IO10 Gigabit Ethernet (Copper)
eth4                 SGI Gigabit Ethernet (Copper)
eth3                 SGI IO9/IO10 Gigabit Ethernet (Copper)
SGI Cross Partition Network adapter
network interface:
lo                   Loopback network interface
eth2                 Ethernet network interface
eth4                 Ethernet network interface
eth3                 Ethernet network interface
eth0                 Ethernet network interface
eth1                 Ethernet network interface
pan0                 Ethernet network interface
disk:
/dev/sdb             HDS722580VLSA80
/dev/sda             HDS722580VLSA80
partition:
/dev/sdb1            Partition
/dev/sdb2            Partition
/dev/sdb3            Partition
/dev/sda1            Partition
/dev/sda2            Partition
/dev/sda3            Partition
cdrom:
/dev/hda             MATSHITADVD-ROM SR-8177
/dev/hdc             MATSHITADVD-ROM SR-8178
memory:
Main Memory
unknown:
Timer
PS/2 Controller
winterstar:~#

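For anyone replicating this, a quick sanity check that the kernel sees the same layout hwinfo reports (26 CPUs across 13 nodes, two per node) is to poke at the standard sysfs NUMA entries. I didn't capture that output on winterstar, so treat the following as a sketch; the paths are the stock kernel ones, nothing Altix-specific.

Code: Select all

# Sketch only -- cross-check the CPU/node layout the kernel reports
grep -c '^processor' /proc/cpuinfo        # expect 26
ls -d /sys/devices/system/node/node*      # expect node0 .. node12
for n in /sys/devices/system/node/node*; do
    echo "$(basename "$n"): cpus $(cat "$n/cpulist")"
    grep MemTotal "$n/meminfo"            # per-node memory
done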

Code: Select all

winterstar:~# dmesg
[    0.000000] Initializing cgroup subsys cpuset
[    0.000000] Initializing cgroup subsys cpu
[    0.000000] Linux version 2.6.32-5-mckinley (Debian 2.6.32-38) ([email protected]) (gcc version 4.3.5 (Debian 4.3.5-4) ) #1 SMP Mon Oct 3 06:04:14 UTC 2011
[    0.000000] EFI v1.10 by INTEL: SALsystab=0x3002a09c90 ACPI 2.0=0x3002a09d80
[    0.000000] booting generic kernel on platform sn2
[    0.000000] console [sn_sal0] enabled
[    0.000000] ACPI: RSDP 0000003002a09d80 00024 (v02    SGI)
[    0.000000] ACPI: XSDT 0000003002a0e3d0 00044 (v01    SGI  XSDTSN2 00010001    ? 0000008C)
[    0.000000] ACPI: APIC 0000003002a0a5f0 00164 (v01    SGI  APICSN2 00010001    ? 00000001)
[    0.000000] ACPI: SRAT 0000003002a0a770 003D8 (v01    SGI  SRATSN2 00010001    ? 00000001)
[    0.000000] ACPI: SLIT 0000003002a0ab60 000D5 (v01    SGI  SLITSN2 00010001    ? 00000001)
[    0.000000] ACPI: FACP 0000003002a0aca0 000F4 (v03    SGI  FACPSN2 00030001    ? 00000001)
[    0.000000] ACPI Warning: 32/64X length mismatch in Pm1aEventBlock: 32/0 (20090903/tbfadt-526)
[    0.000000] ACPI Warning: 32/64X length mismatch in Pm1aControlBlock: 16/0 (20090903/tbfadt-526)
[    0.000000] ACPI Warning: 32/64X length mismatch in PmTimerBlock: 32/0 (20090903/tbfadt-526)
[    0.000000] ACPI Warning: 32/64X length mismatch in Gpe0Block: 64/0 (20090903/tbfadt-526)
[    0.000000] ACPI Warning: Invalid length for Pm1aEventBlock: 0, using default 32 (20090903/tbfadt-607)
[    0.000000] ACPI Warning: Invalid length for Pm1aControlBlock: 0, using default 16 (20090903/tbfadt-607)
[    0.000000] ACPI Warning: Invalid length for PmTimerBlock: 0, using default 32 (20090903/tbfadt-607)
[    0.000000] ACPI: DSDT 0000003002a0d700 00024 (v02    SGI  DSDTSN2 00020001    ? 000009A0)
[    0.000000] ACPI: FACS 0000003002a0ac50 00040
[    0.000000] ACPI: Local APIC address c0000000fee00000
[    0.000000] Number of logical nodes in system = 13
[    0.000000] Number of memory chunks in system = 13
[    0.000000] SMP: Allowing 26 CPUs, 0 hotplug CPUs
[    0.000000] Initial ramdisk at: 0xe0000630f526f000 (18309670 bytes)
[    0.000000] SAL 3.2: SGI SN2 version 5.4
[    0.000000] SAL Platform features: ITC_Drift
[    0.000000] SAL: AP wakeup using external interrupt vector 0x12
[    0.000000] ACPI: Local APIC address c0000000fee00000
[    0.000000] register_intr: No IOSAPIC for GSI 52
[    0.000000] 26 CPUs available, 26 CPUs total
[    0.000000] Increasing MCA rendezvous timeout from 20000 to 49000 milliseconds
[    0.000000] MCA related initialization done
[    0.000000] ACPI: RSDP 0000003002a09d80 00024 (v02    SGI)
[    0.000000] ACPI: XSDT 0000003002a0e3d0 0008C (v01    SGI  XSDTSN2 00010001    ? 0000008C)
[    0.000000] ACPI: APIC 0000003002a0a5f0 00164 (v01    SGI  APICSN2 00010001    ? 00000001)
[    0.000000] ACPI: SRAT 0000003002a0a770 003D8 (v01    SGI  SRATSN2 00010001    ? 00000001)
[    0.000000] ACPI: SLIT 0000003002a0ab60 000D5 (v01    SGI  SLITSN2 00010001    ? 00000001)
[    0.000000] ACPI: FACP 0000003002a0aca0 000F4 (v03    SGI  FACPSN2 00030001    ? 00000001)
[    0.000000] ACPI Warning: 32/64X length mismatch in Pm1aEventBlock: 32/0 (20090903/tbfadt-526)
[    0.000000] ACPI Warning: 32/64X length mismatch in Pm1aControlBlock: 16/0 (20090903/tbfadt-526)
[    0.000000] ACPI Warning: 32/64X length mismatch in PmTimerBlock: 32/0 (20090903/tbfadt-526)
[    0.000000] ACPI Warning: 32/64X length mismatch in Gpe0Block: 64/0 (20090903/tbfadt-526)
[    0.000000] ACPI Warning: Invalid length for Pm1aEventBlock: 0, using default 32 (20090903/tbfadt-607)
[    0.000000] ACPI Warning: Invalid length for Pm1aControlBlock: 0, using default 16 (20090903/tbfadt-607)
[    0.000000] ACPI Warning: Invalid length for PmTimerBlock: 0, using default 32 (20090903/tbfadt-607)
[    0.000000] ACPI: DSDT 0000003002a0d700 009A0 (v02    SGI  DSDTSN2 00020101    ? 000009A0)
[    0.000000] ACPI: FACS 0000003002a0ac50 00040
[    0.000000] ACPI: SSDT 0000003002a0cd70 00095 (v02    SGI  SSDTSN2 00020101    ? 00000095)
[    0.000000] ACPI: SSDT 0000003002a0ce80 00095 (v02    SGI  SSDTSN2 00020101    ? 00000095)
[    0.000000] ACPI: SSDT 0000003002a0e0b0 00095 (v02    SGI  SSDTSN2 00020101    ? 00000095)
[    0.000000] ACPI: SSDT 0000003002a0cf30 000F5 (v02    SGI  SSDTSN2 00020101    ? 000000F5)
[    0.000000] ACPI: SSDT 0000003002a0e160 00096 (v02    SGI  SSDTSN2 00020101    ? 00000096)
[    0.000000] ACPI: SSDT 0000003002a0d040 00096 (v02    SGI  SSDTSN2 00020101    ? 00000096)
[    0.000000] ACPI: SSDT 0000003002a0e210 00096 (v02    SGI  SSDTSN2 00020101    ? 00000096)
[    0.000000] ACPI: SSDT 0000003002a0d0f0 00096 (v02    SGI  SSDTSN2 00020101    ? 00000096)
[    0.000000] ACPI: SSDT 0000003002a0e2c0 000F7 (v02    SGI  SSDTSN2 00020101    ? 000000F7)
[    0.000000] SGI: Disabling VGA console
[    0.000000] SGI SAL version 5.04
[    0.000000] Virtual mem_map starts at 0xa0007ffa95200000
[    0.000000] Zone PFN ranges:
[    0.000000]   DMA      0x00c00c00 -> 0x1000000000
[    0.000000]   Normal   0x1000000000 -> 0x1000000000
[    0.000000] Movable zone start PFN for each node
[    0.000000] early_node_map[16] active PFN ranges
[    0.000000]     0: 0x00c00c00 -> 0x00c3e000
[    0.000000]     1: 0x02c00c00 -> 0x02c1f000
[    0.000000]     2: 0x04c00c00 -> 0x04c1f000
[    0.000000]     3: 0x06c00c00 -> 0x06c1f000
[    0.000000]     4: 0x08c00c00 -> 0x08c3e000
[    0.000000]     4: 0x08d00000 -> 0x08d3dfff
[    0.000000]     5: 0x0ac00c00 -> 0x0ac3e000
[    0.000000]     6: 0x0cc00c00 -> 0x0cc3dfff
[    0.000000]     7: 0x0ec00c00 -> 0x0ec3e000
[    0.000000]     8: 0x10c00c00 -> 0x10c3e000
[    0.000000]     9: 0x12c00c00 -> 0x12c3e000
[    0.000000]    10: 0x14c00c00 -> 0x14c3e000
[    0.000000]    11: 0x16c00c00 -> 0x16c1f000
[    0.000000]    12: 0x18c00c00 -> 0x18c3d9ff
[    0.000000]    12: 0x18c3de00 -> 0x18c3df54
[    0.000000]    12: 0x18c3df65 -> 0x18c3df82
[    0.000000] On node 0 totalpages: 250880
[    0.000000] free_area_init_node: node 0, pgdat e000003003120000, node_mem_map a0007ffabf22a000
[    0.000000]   DMA zone: 858 pages used for memmap
[    0.000000]   DMA zone: 0 pages reserved
[    0.000000]   DMA zone: 250022 pages, LIFO batch:7
[    0.000000] On node 1 totalpages: 123904
[    0.000000] free_area_init_node: node 1, pgdat e00000b003030080, node_mem_map a0007ffb2f22a000
[    0.000000]   DMA zone: 424 pages used for memmap
[    0.000000]   DMA zone: 0 pages reserved
[    0.000000]   DMA zone: 123480 pages, LIFO batch:7
[    0.000000] On node 2 totalpages: 123904
[    0.000000] free_area_init_node: node 2, pgdat e000013003040100, node_mem_map a0007ffb9f22a000
[    0.000000]   DMA zone: 424 pages used for memmap
[    0.000000]   DMA zone: 0 pages reserved
[    0.000000]   DMA zone: 123480 pages, LIFO batch:7
[    0.000000] On node 3 totalpages: 123904
[    0.000000] free_area_init_node: node 3, pgdat e00001b003050180, node_mem_map a0007ffc0f22a000
[    0.000000]   DMA zone: 424 pages used for memmap
[    0.000000]   DMA zone: 0 pages reserved
[    0.000000]   DMA zone: 123480 pages, LIFO batch:7
[    0.000000] On node 4 totalpages: 504831
[    0.000000] free_area_init_node: node 4, pgdat e000023003060200, node_mem_map a0007ffc7f22a000
[    0.000000]   DMA zone: 4442 pages used for memmap
[    0.000000]   DMA zone: 0 pages reserved
[    0.000000]   DMA zone: 500389 pages, LIFO batch:7
[    0.000000] On node 5 totalpages: 250880
[    0.000000] free_area_init_node: node 5, pgdat e00002b003070280, node_mem_map a0007ffcef22a000
[    0.000000]   DMA zone: 858 pages used for memmap
[    0.000000]   DMA zone: 0 pages reserved
[    0.000000]   DMA zone: 250022 pages, LIFO batch:7
[    0.000000] On node 6 totalpages: 250879
[    0.000000] free_area_init_node: node 6, pgdat e000033003080300, node_mem_map a0007ffd5f22a000
[    0.000000]   DMA zone: 858 pages used for memmap
[    0.000000]   DMA zone: 0 pages reserved
[    0.000000]   DMA zone: 250021 pages, LIFO batch:7
[    0.000000] On node 7 totalpages: 250880
[    0.000000] free_area_init_node: node 7, pgdat e00003b003090380, node_mem_map a0007ffdcf22a000
[    0.000000]   DMA zone: 858 pages used for memmap
[    0.000000]   DMA zone: 0 pages reserved
[    0.000000]   DMA zone: 250022 pages, LIFO batch:7
[    0.000000] On node 8 totalpages: 250880
[    0.000000] free_area_init_node: node 8, pgdat e0000430030a0400, node_mem_map a0007ffe3f22a000
[    0.000000]   DMA zone: 858 pages used for memmap
[    0.000000]   DMA zone: 0 pages reserved
[    0.000000]   DMA zone: 250022 pages, LIFO batch:7
[    0.000000] On node 9 totalpages: 250880
[    0.000000] free_area_init_node: node 9, pgdat e00004b0030b0480, node_mem_map a0007ffeaf22a000
[    0.000000]   DMA zone: 858 pages used for memmap
[    0.000000]   DMA zone: 0 pages reserved
[    0.000000]   DMA zone: 250022 pages, LIFO batch:7
[    0.000000] On node 10 totalpages: 250880
[    0.000000] free_area_init_node: node 10, pgdat e0000530030c0500, node_mem_map a0007fff1f22a000
[    0.000000]   DMA zone: 858 pages used for memmap
[    0.000000]   DMA zone: 0 pages reserved
[    0.000000]   DMA zone: 250022 pages, LIFO batch:7
[    0.000000] On node 11 totalpages: 123904
[    0.000000] free_area_init_node: node 11, pgdat e00005b0030d0580, node_mem_map a0007fff8f22a000
[    0.000000]   DMA zone: 424 pages used for memmap
[    0.000000]   DMA zone: 0 pages reserved
[    0.000000]   DMA zone: 123480 pages, LIFO batch:7
[    0.000000] On node 12 totalpages: 249712
[    0.000000] free_area_init_node: node 12, pgdat e0000630030e0600, node_mem_map a0007fffff22a000
[    0.000000]   DMA zone: 858 pages used for memmap
[    0.000000]   DMA zone: 0 pages reserved
[    0.000000]   DMA zone: 248854 pages, LIFO batch:7
[    0.000000] Built 13 zonelists in Node order, mobility grouping on.  Total pages: 2993316
[    0.000000] Policy zone: DMA
[    0.000000] Kernel command line: BOOT_IMAGE=dev001:/EFI/debian/vmlinuz root=/dev/md1  ro
[    0.000000] PID hash table entries: 4096 (order: 1, 32768 bytes)
[    0.000000] Memory: 47902096k/48070944k available (7334k code, 198992k reserved, 3581k data, 736k init)
[    0.000000] SLUB: Genslabs=16, HWalign=128, Order=0-3, MinObjects=0, CPUs=26, Nodes=256
[    0.000000] Hierarchical RCU implementation.
[    0.000000] NR_IRQS:1024
[    0.000000] CPU 0: base freq=200.000MHz, ITC ratio=15/2, ITC freq=1500.000MHz
[    0.000000] Console: colour dummy device 80x25
[    0.000000] console [ttySG0] enabled
[    0.016000] Calibrating delay loop... 2244.60 BogoMIPS (lpj=4489216)
[    0.117070] Security Framework initialized
[    0.117747] SELinux:  Disabled at boot.
[    0.127077] Dentry cache hash table entries: 8388608 (order: 12, 67108864 bytes)
[    0.288337] Inode-cache hash table entries: 4194304 (order: 11, 33554432 bytes)
[    0.362343] Mount-cache hash table entries: 1024
[    0.365176] Initializing cgroup subsys ns
[    0.365912] Initializing cgroup subsys cpuacct
[    0.368068] Initializing cgroup subsys devices
[    0.368741] Initializing cgroup subsys freezer
[    0.388004] Initializing cgroup subsys net_cls
[    0.389010] ACPI: Core revision 20090903
[    0.392014] Boot processor id 0x0/0x0
[    0.012000] Fixed BSP b0 value from CPU 1
[    0.012000] CPU 1: base freq=200.000MHz, ITC ratio=15/2, ITC freq=1500.000MHz
[    0.012000] CPU 2: base freq=200.000MHz, ITC ratio=15/2, ITC freq=1500.000MHz
[    0.012000] CPU 3: base freq=200.000MHz, ITC ratio=15/2, ITC freq=1500.000MHz
[    0.012000] CPU 4: base freq=200.000MHz, ITC ratio=15/2, ITC freq=1500.000MHz
[    0.012000] CPU 5: base freq=200.000MHz, ITC ratio=15/2, ITC freq=1500.000MHz
[    0.012000] CPU 6: base freq=200.000MHz, ITC ratio=15/2, ITC freq=1500.000MHz
[    0.012000] CPU 7: base freq=200.000MHz, ITC ratio=15/2, ITC freq=1500.000MHz
[    0.012000] CPU 8: base freq=200.000MHz, ITC ratio=15/2, ITC freq=1500.000MHz
[    0.012000] CPU 9: base freq=200.000MHz, ITC ratio=15/2, ITC freq=1500.000MHz
[    0.012000] CPU 10: base freq=200.000MHz, ITC ratio=15/2, ITC freq=1500.000MHz
[    0.012000] CPU 11: base freq=200.000MHz, ITC ratio=15/2, ITC freq=1500.000MHz
[    0.012000] CPU 12: base freq=200.000MHz, ITC ratio=15/2, ITC freq=1500.000MHz
[    0.012000] CPU 13: base freq=200.000MHz, ITC ratio=15/2, ITC freq=1500.000MHz
[    0.012000] CPU 14: base freq=200.000MHz, ITC ratio=15/2, ITC freq=1500.000MHz
[    0.012000] CPU 15: base freq=200.000MHz, ITC ratio=15/2, ITC freq=1500.000MHz
[    0.012000] CPU 16: base freq=200.000MHz, ITC ratio=15/2, ITC freq=1500.000MHz
[    0.012000] CPU 17: base freq=200.000MHz, ITC ratio=15/2, ITC freq=1500.000MHz
[    0.012000] CPU 18: base freq=200.000MHz, ITC ratio=15/2, ITC freq=1500.000MHz
[    0.012000] CPU 19: base freq=200.000MHz, ITC ratio=15/2, ITC freq=1500.000MHz
[    0.012000] CPU 20: base freq=200.000MHz, ITC ratio=15/2, ITC freq=1500.000MHz
[    0.012000] CPU 21: base freq=200.000MHz, ITC ratio=15/2, ITC freq=1500.000MHz
[    0.012000] CPU 22: base freq=200.000MHz, ITC ratio=15/2, ITC freq=1500.000MHz
[    0.012000] CPU 23: base freq=200.000MHz, ITC ratio=15/2, ITC freq=1500.000MHz
[    0.012000] CPU 24: base freq=200.000MHz, ITC ratio=15/2, ITC freq=1500.000MHz
[    0.012000] CPU 25: base freq=200.000MHz, ITC ratio=15/2, ITC freq=1500.000MHz
[    0.756984] Brought up 26 CPUs
[    0.757435] Total of 26 processors activated (58359.80 BogoMIPS).
[    0.764285] CPU0 attaching sched-domain:
[    0.764290]  domain 0: span 0-1 level CPU
[    0.764294]   groups: group a0000001009a4f70 cpus 0 group e000003003104f70 cpus 1
[    0.764302]   domain 1: span 0-25 level NODE
[    0.764305]    groups: group e0000030f07ac000 cpus 0-1 (cpu_power = 2048) group e0000030f07ac020 cpus 2-3 (cpu_power = 2048) group e0000030f07ac040 cpus 4-5 (cpu_power = 2048) group e0000030f07ac060 cpus 6-7 (cpu_power = 2048) group e0000030f07ac080 cpus 8-9 (cpu_power = 2048) group e0000030f07ac0a0 cpus 10-11 (cpu_power = 2048) group e0000030f07ac0c0 cpus 12-13 (cpu_power = 2048) group e0000030f07ac0e0 cpus 14-15 (cpu_power = 2048) group e0000030f07ac100 cpus 16-17 (cpu_power = 2048) group e0000030f07ac120 cpus 18-19 (cpu_power = 2048) group e0000030f07ac140 cpus 20-21 (cpu_power = 2048) group e0000030f07ac160 cpus 22-23 (cpu_power = 2048) group e0000030f07ac180 cpus 24-25 (cpu_power = 2048)
[    0.764349] CPU1 attaching sched-domain:
[    0.764352]  domain 0: span 0-1 level CPU
[    0.764355]   groups: group e000003003104f70 cpus 1 group a0000001009a4f70 cpus 0
[    0.764362]   domain 1: span 0-25 level NODE
[    0.764365]    groups: group e0000030f07ac000 cpus 0-1 (cpu_power = 2048) group e0000030f07ac020 cpus 2-3 (cpu_power = 2048) group e0000030f07ac040 cpus 4-5 (cpu_power = 2048) group e0000030f07ac060 cpus 6-7 (cpu_power = 2048) group e0000030f07ac080 cpus 8-9 (cpu_power = 2048) group e0000030f07ac0a0 cpus 10-11 (cpu_power = 2048) group e0000030f07ac0c0 cpus 12-13 (cpu_power = 2048) group e0000030f07ac0e0 cpus 14-15 (cpu_power = 2048) group e0000030f07ac100 cpus 16-17 (cpu_power = 2048) group e0000030f07ac120 cpus 18-19 (cpu_power = 2048) group e0000030f07ac140 cpus 20-21 (cpu_power = 2048) group e0000030f07ac160 cpus 22-23 (cpu_power = 2048) group e0000030f07ac180 cpus 24-25 (cpu_power = 2048)
[    0.764406] CPU2 attaching sched-domain:
[    0.764409]  domain 0: span 2-3 level CPU
[    0.764412]   groups: group e00000b003014f70 cpus 2 group e00000b003024f70 cpus 3
[    0.764419]   domain 1: span 0-25 level NODE
[    0.764422]    groups: group e00000b078790000 cpus 2-3 (cpu_power = 2048) group e00000b078790020 cpus 4-5 (cpu_power = 2048) group e00000b078790040 cpus 6-7 (cpu_power = 2048) group e00000b078790060 cpus 8-9 (cpu_power = 2048) group e00000b078790080 cpus 10-11 (cpu_power = 2048) group e00000b0787900a0 cpus 12-13 (cpu_power = 2048) group e00000b0787900c0 cpus 14-15 (cpu_power = 2048) group e00000b0787900e0 cpus 16-17 (cpu_power = 2048) group e00000b078790100 cpus 18-19 (cpu_power = 2048) group e00000b078790120 cpus 20-21 (cpu_power = 2048) group e00000b078790140 cpus 22-23 (cpu_power = 2048) group e00000b078790160 cpus 24-25 (cpu_power = 2048) group e00000b078790180 cpus 0-1 (cpu_power = 2048)
[    0.764465] CPU3 attaching sched-domain:
[    0.764468]  domain 0: span 2-3 level CPU
[    0.764471]   groups: group e00000b003024f70 cpus 3 group e00000b003014f70 cpus 2
[    0.764478]   domain 1: span 0-25 level NODE
[    0.764481]    groups: group e00000b078790000 cpus 2-3 (cpu_power = 2048) group e00000b078790020 cpus 4-5 (cpu_power = 2048) group e00000b078790040 cpus 6-7 (cpu_power = 2048) group e00000b078790060 cpus 8-9 (cpu_power = 2048) group e00000b078790080 cpus 10-11 (cpu_power = 2048) group e00000b0787900a0 cpus 12-13 (cpu_power = 2048) group e00000b0787900c0 cpus 14-15 (cpu_power = 2048) group e00000b0787900e0 cpus 16-17 (cpu_power = 2048) group e00000b078790100 cpus 18-19 (cpu_power = 2048) group e00000b078790120 cpus 20-21 (cpu_power = 2048) group e00000b078790140 cpus 22-23 (cpu_power = 2048) group e00000b078790160 cpus 24-25 (cpu_power = 2048) group e00000b078790180 cpus 0-1 (cpu_power = 2048)
[    0.764524] CPU4 attaching sched-domain:
[    0.764527]  domain 0: span 4-5 level CPU
[    0.764530]   groups: group e000013003024f70 cpus 4 group e000013003034f70 cpus 5
[    0.764537]   domain 1: span 0-25 level NODE
[    0.764540]    groups: group e000013078770000 cpus 4-5 (cpu_power = 2048) group e000013078770020 cpus 6-7 (cpu_power = 2048) group e000013078770040 cpus 8-9 (cpu_power = 2048) group e000013078770060 cpus 10-11 (cpu_power = 2048) group e000013078770080 cpus 12-13 (cpu_power = 2048) group e0000130787700a0 cpus 14-15 (cpu_power = 2048) group e0000130787700c0 cpus 16-17 (cpu_power = 2048) group e0000130787700e0 cpus 18-19 (cpu_power = 2048) group e000013078770100 cpus 20-21 (cpu_power = 2048) group e000013078770120 cpus 22-23 (cpu_power = 2048) group e000013078770140 cpus 24-25 (cpu_power = 2048) group e000013078770160 cpus 0-1 (cpu_power = 2048) group e000013078770180 cpus 2-3 (cpu_power = 2048)
[    0.764583] CPU5 attaching sched-domain:
[    0.764586]  domain 0: span 4-5 level CPU
[    0.764589]   groups: group e000013003034f70 cpus 5 group e000013003024f70 cpus 4
[    0.764596]   domain 1: span 0-25 level NODE
[    0.764599]    groups: group e000013078770000 cpus 4-5 (cpu_power = 2048) group e000013078770020 cpus 6-7 (cpu_power = 2048) group e000013078770040 cpus 8-9 (cpu_power = 2048) group e000013078770060 cpus 10-11 (cpu_power = 2048) group e000013078770080 cpus 12-13 (cpu_power = 2048) group e0000130787700a0 cpus 14-15 (cpu_power = 2048) group e0000130787700c0 cpus 16-17 (cpu_power = 2048) group e0000130787700e0 cpus 18-19 (cpu_power = 2048) group e000013078770100 cpus 20-21 (cpu_power = 2048) group e000013078770120 cpus 22-23 (cpu_power = 2048) group e000013078770140 cpus 24-25 (cpu_power = 2048) group e000013078770160 cpus 0-1 (cpu_power = 2048) group e000013078770180 cpus 2-3 (cpu_power = 2048)
[    0.764642] CPU6 attaching sched-domain:
[    0.764645]  domain 0: span 6-7 level CPU
[    0.764648]   groups: group e00001b003034f70 cpus 6 group e00001b003044f70 cpus 7
[    0.764655]   domain 1: span 0-25 level NODE
[    0.764658]    groups: group e00001b07800bfe0 cpus 6-7 (cpu_power = 2048) group e00001b07800bfc0 cpus 8-9 (cpu_power = 2048) group e00001b07800bfa0 cpus 10-11 (cpu_power = 2048) group e00001b07800bf80 cpus 12-13 (cpu_power = 2048) group e00001b07800bf60 cpus 14-15 (cpu_power = 2048) group e00001b07800bf40 cpus 16-17 (cpu_power = 2048) group e00001b07800bf20 cpus 18-19 (cpu_power = 2048) group e00001b07800bf00 cpus 20-21 (cpu_power = 2048) group e00001b07800bee0 cpus 22-23 (cpu_power = 2048) group e00001b07800bec0 cpus 24-25 (cpu_power = 2048) group e00001b07800bea0 cpus 0-1 (cpu_power = 2048) group e00001b07800be80 cpus 2-3 (cpu_power = 2048) group e00001b07800be60 cpus 4-5 (cpu_power = 2048)
[    0.764700] CPU7 attaching sched-domain:
[    0.764703]  domain 0: span 6-7 level CPU
[    0.764706]   groups: group e00001b003044f70 cpus 7 group e00001b003034f70 cpus 6
[    0.764713]   domain 1: span 0-25 level NODE
[    0.764716]    groups: group e00001b07800bfe0 cpus 6-7 (cpu_power = 2048) group e00001b07800bfc0 cpus 8-9 (cpu_power = 2048) group e00001b07800bfa0 cpus 10-11 (cpu_power = 2048) group e00001b07800bf80 cpus 12-13 (cpu_power = 2048) group e00001b07800bf60 cpus 14-15 (cpu_power = 2048) group e00001b07800bf40 cpus 16-17 (cpu_power = 2048) group e00001b07800bf20 cpus 18-19 (cpu_power = 2048) group e00001b07800bf00 cpus 20-21 (cpu_power = 2048) group e00001b07800bee0 cpus 22-23 (cpu_power = 2048) group e00001b07800bec0 cpus 24-25 (cpu_power = 2048) group e00001b07800bea0 cpus 0-1 (cpu_power = 2048) group e00001b07800be80 cpus 2-3 (cpu_power = 2048) group e00001b07800be60 cpus 4-5 (cpu_power = 2048)
[    0.764758] CPU8 attaching sched-domain:
[    0.764761]  domain 0: span 8-9 level CPU
[    0.764764]   groups: group e000023003044f70 cpus 8 group e000023003054f70 cpus 9
[    0.764771]   domain 1: span 0-25 level NODE
[    0.764774]    groups: group e0000234f7b78000 cpus 8-9 (cpu_power = 2048) group e0000234f7b78020 cpus 10-11 (cpu_power = 2048) group e0000234f7b78040 cpus 12-13 (cpu_power = 2048) group e0000234f7b78060 cpus 14-15 (cpu_power = 2048) group e0000234f7b78080 cpus 16-17 (cpu_power = 2048) group e0000234f7b780a0 cpus 18-19 (cpu_power = 2048) group e0000234f7b780c0 cpus 20-21 (cpu_power = 2048) group e0000234f7b780e0 cpus 22-23 (cpu_power = 2048) group e0000234f7b78100 cpus 24-25 (cpu_power = 2048) group e0000234f7b78120 cpus 0-1 (cpu_power = 2048) group e0000234f7b78140 cpus 2-3 (cpu_power = 2048) group e0000234f7b78160 cpus 4-5 (cpu_power = 2048) group e0000234f7b78180 cpus 6-7 (cpu_power = 2048)
[    0.764817] CPU9 attaching sched-domain:
[    0.764820]  domain 0: span 8-9 level CPU
[    0.764822]   groups: group e000023003054f70 cpus 9 group e000023003044f70 cpus 8
[    0.764829]   domain 1: span 0-25 level NODE
[    0.764832]    groups: group e0000234f7b78000 cpus 8-9 (cpu_power = 2048) group e0000234f7b78020 cpus 10-11 (cpu_power = 2048) group e0000234f7b78040 cpus 12-13 (cpu_power = 2048) group e0000234f7b78060 cpus 14-15 (cpu_power = 2048) group e0000234f7b78080 cpus 16-17 (cpu_power = 2048) group e0000234f7b780a0 cpus 18-19 (cpu_power = 2048) group e0000234f7b780c0 cpus 20-21 (cpu_power = 2048) group e0000234f7b780e0 cpus 22-23 (cpu_power = 2048) group e0000234f7b78100 cpus 24-25 (cpu_power = 2048) group e0000234f7b78120 cpus 0-1 (cpu_power = 2048) group e0000234f7b78140 cpus 2-3 (cpu_power = 2048) group e0000234f7b78160 cpus 4-5 (cpu_power = 2048) group e0000234f7b78180 cpus 6-7 (cpu_power = 2048)
[    0.764875] CPU10 attaching sched-domain:
[    0.764878]  domain 0: span 10-11 level CPU
[    0.764881]   groups: group e00002b003054f70 cpus 10 group e00002b003064f70 cpus 11
[    0.764888]   domain 1: span 0-25 level NODE
[    0.764891]    groups: group e00002b0f077c000 cpus 10-11 (cpu_power = 2048) group e00002b0f077c020 cpus 12-13 (cpu_power = 2048) group e00002b0f077c040 cpus 14-15 (cpu_power = 2048) group e00002b0f077c060 cpus 16-17 (cpu_power = 2048) group e00002b0f077c080 cpus 18-19 (cpu_power = 2048) group e00002b0f077c0a0 cpus 20-21 (cpu_power = 2048) group e00002b0f077c0c0 cpus 22-23 (cpu_power = 2048) group e00002b0f077c0e0 cpus 24-25 (cpu_power = 2048) group e00002b0f077c100 cpus 0-1 (cpu_power = 2048) group e00002b0f077c120 cpus 2-3 (cpu_power = 2048) group e00002b0f077c140 cpus 4-5 (cpu_power = 2048) group e00002b0f077c160 cpus 6-7 (cpu_power = 2048) group e00002b0f077c180 cpus 8-9 (cpu_power = 2048)
[    0.764934] CPU11 attaching sched-domain:
[    0.764937]  domain 0: span 10-11 level CPU
[    0.764940]   groups: group e00002b003064f70 cpus 11 group e00002b003054f70 cpus 10
[    0.764946]   domain 1: span 0-25 level NODE
[    0.764949]    groups: group e00002b0f077c000 cpus 10-11 (cpu_power = 2048) group e00002b0f077c020 cpus 12-13 (cpu_power = 2048) group e00002b0f077c040 cpus 14-15 (cpu_power = 2048) group e00002b0f077c060 cpus 16-17 (cpu_power = 2048) group e00002b0f077c080 cpus 18-19 (cpu_power = 2048) group e00002b0f077c0a0 cpus 20-21 (cpu_power = 2048) group e00002b0f077c0c0 cpus 22-23 (cpu_power = 2048) group e00002b0f077c0e0 cpus 24-25 (cpu_power = 2048) group e00002b0f077c100 cpus 0-1 (cpu_power = 2048) group e00002b0f077c120 cpus 2-3 (cpu_power = 2048) group e00002b0f077c140 cpus 4-5 (cpu_power = 2048) group e00002b0f077c160 cpus 6-7 (cpu_power = 2048) group e00002b0f077c180 cpus 8-9 (cpu_power = 2048)
[    0.764992] CPU12 attaching sched-domain:
[    0.764995]  domain 0: span 12-13 level CPU
[    0.764998]   groups: group e000033003064f70 cpus 12 group e000033003074f70 cpus 13
[    0.765005]   domain 1: span 0-25 level NODE
[    0.765008]    groups: group e0000330f7b78000 cpus 12-13 (cpu_power = 2048) group e0000330f7b78020 cpus 14-15 (cpu_power = 2048) group e0000330f7b78040 cpus 16-17 (cpu_power = 2048) group e0000330f7b78060 cpus 18-19 (cpu_power = 2048) group e0000330f7b78080 cpus 20-21 (cpu_power = 2048) group e0000330f7b780a0 cpus 22-23 (cpu_power = 2048) group e0000330f7b780c0 cpus 24-25 (cpu_power = 2048) group e0000330f7b780e0 cpus 0-1 (cpu_power = 2048) group e0000330f7b78100 cpus 2-3 (cpu_power = 2048) group e0000330f7b78120 cpus 4-5 (cpu_power = 2048) group e0000330f7b78140 cpus 6-7 (cpu_power = 2048) group e0000330f7b78160 cpus 8-9 (cpu_power = 2048) group e0000330f7b78180 cpus 10-11 (cpu_power = 2048)
[    0.765051] CPU13 attaching sched-domain:
[    0.765054]  domain 0: span 12-13 level CPU
[    0.765057]   groups: group e000033003074f70 cpus 13 group e000033003064f70 cpus 12
[    0.765064]   domain 1: span 0-25 level NODE
[    0.765067]    groups: group e0000330f7b78000 cpus 12-13 (cpu_power = 2048) group e0000330f7b78020 cpus 14-15 (cpu_power = 2048) group e0000330f7b78040 cpus 16-17 (cpu_power = 2048) group e0000330f7b78060 cpus 18-19 (cpu_power = 2048) group e0000330f7b78080 cpus 20-21 (cpu_power = 2048) group e0000330f7b780a0 cpus 22-23 (cpu_power = 2048) group e0000330f7b780c0 cpus 24-25 (cpu_power = 2048) group e0000330f7b780e0 cpus 0-1 (cpu_power = 2048) group e0000330f7b78100 cpus 2-3 (cpu_power = 2048) group e0000330f7b78120 cpus 4-5 (cpu_power = 2048) group e0000330f7b78140 cpus 6-7 (cpu_power = 2048) group e0000330f7b78160 cpus 8-9 (cpu_power = 2048) group e0000330f7b78180 cpus 10-11 (cpu_power = 2048)
[    0.765110] CPU14 attaching sched-domain:
[    0.765113]  domain 0: span 14-15 level CPU
[    0.765116]   groups: group e00003b003074f70 cpus 14 group e00003b003084f70 cpus 15
[    0.765122]   domain 1: span 0-25 level NODE
[    0.765125]    groups: group e00003b0f0800000 cpus 14-15 (cpu_power = 2048) group e00003b0f0800020 cpus 16-17 (cpu_power = 2048) group e00003b0f0800040 cpus 18-19 (cpu_power = 2048) group e00003b0f0800060 cpus 20-21 (cpu_power = 2048) group e00003b0f0800080 cpus 22-23 (cpu_power = 2048) group e00003b0f08000a0 cpus 24-25 (cpu_power = 2048) group e00003b0f08000c0 cpus 0-1 (cpu_power = 2048) group e00003b0f08000e0 cpus 2-3 (cpu_power = 2048) group e00003b0f0800100 cpus 4-5 (cpu_power = 2048) group e00003b0f0800120 cpus 6-7 (cpu_power = 2048) group e00003b0f0800140 cpus 8-9 (cpu_power = 2048) group e00003b0f0800160 cpus 10-11 (cpu_power = 2048) group e00003b0f0800180 cpus 12-13 (cpu_power = 2048)
[    0.765168] CPU15 attaching sched-domain:
[    0.765171]  domain 0: span 14-15 level CPU
[    0.765174]   groups: group e00003b003084f70 cpus 15 group e00003b003074f70 cpus 14
[    0.765181]   domain 1: span 0-25 level NODE
[    0.765184]    groups: group e00003b0f0800000 cpus 14-15 (cpu_power = 2048) group e00003b0f0800020 cpus 16-17 (cpu_power = 2048) group e00003b0f0800040 cpus 18-19 (cpu_power = 2048) group e00003b0f0800060 cpus 20-21 (cpu_power = 2048) group e00003b0f0800080 cpus 22-23 (cpu_power = 2048) group e00003b0f08000a0 cpus 24-25 (cpu_power = 2048) group e00003b0f08000c0 cpus 0-1 (cpu_power = 2048) group e00003b0f08000e0 cpus 2-3 (cpu_power = 2048) group e00003b0f0800100 cpus 4-5 (cpu_power = 2048) group e00003b0f0800120 cpus 6-7 (cpu_power = 2048) group e00003b0f0800140 cpus 8-9 (cpu_power = 2048) group e00003b0f0800160 cpus 10-11 (cpu_power = 2048) group e00003b0f0800180 cpus 12-13 (cpu_power = 2048)
[    0.765227] CPU16 attaching sched-domain:
[    0.765230]  domain 0: span 16-17 level CPU
[    0.765233]   groups: group e000043003084f70 cpus 16 group e000043003094f70 cpus 17
[    0.765240]   domain 1: span 0-25 level NODE
[    0.765243]    groups: group e0000430f0774000 cpus 16-17 (cpu_power = 2048) group e0000430f0774020 cpus 18-19 (cpu_power = 2048) group e0000430f0774040 cpus 20-21 (cpu_power = 2048) group e0000430f0774060 cpus 22-23 (cpu_power = 2048) group e0000430f0774080 cpus 24-25 (cpu_power = 2048) group e0000430f07740a0 cpus 0-1 (cpu_power = 2048) group e0000430f07740c0 cpus 2-3 (cpu_power = 2048) group e0000430f07740e0 cpus 4-5 (cpu_power = 2048) group e0000430f0774100 cpus 6-7 (cpu_power = 2048) group e0000430f0774120 cpus 8-9 (cpu_power = 2048) group e0000430f0774140 cpus 10-11 (cpu_power = 2048) group e0000430f0774160 cpus 12-13 (cpu_power = 2048) group e0000430f0774180 cpus 14-15 (cpu_power = 2048)
[    0.765286] CPU17 attaching sched-domain:
[    0.765289]  domain 0: span 16-17 level CPU
[    0.765292]   groups: group e000043003094f70 cpus 17 group e000043003084f70 cpus 16
[    0.765298]   domain 1: span 0-25 level NODE
[    0.765301]    groups: group e0000430f0774000 cpus 16-17 (cpu_power = 2048) group e0000430f0774020 cpus 18-19 (cpu_power = 2048) group e0000430f0774040 cpus 20-21 (cpu_power = 2048) group e0000430f0774060 cpus 22-23 (cpu_power = 2048) group e0000430f0774080 cpus 24-25 (cpu_power = 2048) group e0000430f07740a0 cpus 0-1 (cpu_power = 2048) group e0000430f07740c0 cpus 2-3 (cpu_power = 2048) group e0000430f07740e0 cpus 4-5 (cpu_power = 2048) group e0000430f0774100 cpus 6-7 (cpu_power = 2048) group e0000430f0774120 cpus 8-9 (cpu_power = 2048) group e0000430f0774140 cpus 10-11 (cpu_power = 2048) group e0000430f0774160 cpus 12-13 (cpu_power = 2048) group e0000430f0774180 cpus 14-15 (cpu_power = 2048)
[    0.765344] CPU18 attaching sched-domain:
[    0.765347]  domain 0: span 18-19 level CPU
[    0.765350]   groups: group e00004b003094f70 cpus 18 group e00004b0030a4f70 cpus 19
[    0.765357]   domain 1: span 0-25 level NODE
[    0.765360]    groups: group e00004b0f0770000 cpus 18-19 (cpu_power = 2048) group e00004b0f0770020 cpus 20-21 (cpu_power = 2048) group e00004b0f0770040 cpus 22-23 (cpu_power = 2048) group e00004b0f0770060 cpus 24-25 (cpu_power = 2048) group e00004b0f0770080 cpus 0-1 (cpu_power = 2048) group e00004b0f07700a0 cpus 2-3 (cpu_power = 2048) group e00004b0f07700c0 cpus 4-5 (cpu_power = 2048) group e00004b0f07700e0 cpus 6-7 (cpu_power = 2048) group e00004b0f0770100 cpus 8-9 (cpu_power = 2048) group e00004b0f0770120 cpus 10-11 (cpu_power = 2048) group e00004b0f0770140 cpus 12-13 (cpu_power = 2048) group e00004b0f0770160 cpus 14-15 (cpu_power = 2048) group e00004b0f0770180 cpus 16-17 (cpu_power = 2048)
[    0.765403] CPU19 attaching sched-domain:
[    0.765406]  domain 0: span 18-19 level CPU
[    0.765409]   groups: group e00004b0030a4f70 cpus 19 group e00004b003094f70 cpus 18
[    0.765416]   domain 1: span 0-25 level NODE
[    0.765419]    groups: group e00004b0f0770000 cpus 18-19 (cpu_power = 2048) group e00004b0f0770020 cpus 20-21 (cpu_power = 2048) group e00004b0f0770040 cpus 22-23 (cpu_power = 2048) group e00004b0f0770060 cpus 24-25 (cpu_power = 2048) group e00004b0f0770080 cpus 0-1 (cpu_power = 2048) group e00004b0f07700a0 cpus 2-3 (cpu_power = 2048) group e00004b0f07700c0 cpus 4-5 (cpu_power = 2048) group e00004b0f07700e0 cpus 6-7 (cpu_power = 2048) group e00004b0f0770100 cpus 8-9 (cpu_power = 2048) group e00004b0f0770120 cpus 10-11 (cpu_power = 2048) group e00004b0f0770140 cpus 12-13 (cpu_power = 2048) group e00004b0f0770160 cpus 14-15 (cpu_power = 2048) group e00004b0f0770180 cpus 16-17 (cpu_power = 2048)
[    0.765462] CPU20 attaching sched-domain:
[    0.765465]  domain 0: span 20-21 level CPU
[    0.765468]   groups: group e0000530030a4f70 cpus 20 group e0000530030b4f70 cpus 21
[    0.765474]   domain 1: span 0-25 level NODE
[    0.765477]    groups: group e0000530f0774000 cpus 20-21 (cpu_power = 2048) group e0000530f0774020 cpus 22-23 (cpu_power = 2048) group e0000530f0774040 cpus 24-25 (cpu_power = 2048) group e0000530f0774060 cpus 0-1 (cpu_power = 2048) group e0000530f0774080 cpus 2-3 (cpu_power = 2048) group e0000530f07740a0 cpus 4-5 (cpu_power = 2048) group e0000530f07740c0 cpus 6-7 (cpu_power = 2048) group e0000530f07740e0 cpus 8-9 (cpu_power = 2048) group e0000530f0774100 cpus 10-11 (cpu_power = 2048) group e0000530f0774120 cpus 12-13 (cpu_power = 2048) group e0000530f0774140 cpus 14-15 (cpu_power = 2048) group e0000530f0774160 cpus 16-17 (cpu_power = 2048) group e0000530f0774180 cpus 18-19 (cpu_power = 2048)
[    0.765520] CPU21 attaching sched-domain:
[    0.765523]  domain 0: span 20-21 level CPU
[    0.765526]   groups: group e0000530030b4f70 cpus 21 group e0000530030a4f70 cpus 20
[    0.765533]   domain 1: span 0-25 level NODE
[    0.765536]    groups: group e0000530f0774000 cpus 20-21 (cpu_power = 2048) group e0000530f0774020 cpus 22-23 (cpu_power = 2048) group e0000530f0774040 cpus 24-25 (cpu_power = 2048) group e0000530f0774060 cpus 0-1 (cpu_power = 2048) group e0000530f0774080 cpus 2-3 (cpu_power = 2048) group e0000530f07740a0 cpus 4-5 (cpu_power = 2048) group e0000530f07740c0 cpus 6-7 (cpu_power = 2048) group e0000530f07740e0 cpus 8-9 (cpu_power = 2048) group e0000530f0774100 cpus 10-11 (cpu_power = 2048) group e0000530f0774120 cpus 12-13 (cpu_power = 2048) group e0000530f0774140 cpus 14-15 (cpu_power = 2048) group e0000530f0774160 cpus 16-17 (cpu_power = 2048) group e0000530f0774180 cpus 18-19 (cpu_power = 2048)
[    0.765579] CPU22 attaching sched-domain:
[    0.765582]  domain 0: span 22-23 level CPU
[    0.765585]   groups: group e00005b0030b4f70 cpus 22 group e00005b0030c4f70 cpus 23
[    0.765592]   domain 1: span 0-25 level NODE
[    0.765595]    groups: group e00005b078770000 cpus 22-23 (cpu_power = 2048) group e00005b078770020 cpus 24-25 (cpu_power = 2048) group e00005b078770040 cpus 0-1 (cpu_power = 2048) group e00005b078770060 cpus 2-3 (cpu_power = 2048) group e00005b078770080 cpus 4-5 (cpu_power = 2048) group e00005b0787700a0 cpus 6-7 (cpu_power = 2048) group e00005b0787700c0 cpus 8-9 (cpu_power = 2048) group e00005b0787700e0 cpus 10-11 (cpu_power = 2048) group e00005b078770100 cpus 12-13 (cpu_power = 2048) group e00005b078770120 cpus 14-15 (cpu_power = 2048) group e00005b078770140 cpus 16-17 (cpu_power = 2048) group e00005b078770160 cpus 18-19 (cpu_power = 2048) group e00005b078770180 cpus 20-21 (cpu_power = 2048)
[    0.765638] CPU23 attaching sched-domain:
[    0.765641]  domain 0: span 22-23 level CPU
[    0.765644]   groups: group e00005b0030c4f70 cpus 23 group e00005b0030b4f70 cpus 22
[    0.765651]   domain 1: span 0-25 level NODE
[    0.765654]    groups: group e00005b078770000 cpus 22-23 (cpu_power = 2048) group e00005b078770020 cpus 24-25 (cpu_power = 2048) group e00005b078770040 cpus 0-1 (cpu_power = 2048) group e00005b078770060 cpus 2-3 (cpu_power = 2048) group e00005b078770080 cpus 4-5 (cpu_power = 2048) group e00005b0787700a0 cpus 6-7 (cpu_power = 2048) group e00005b0787700c0 cpus 8-9 (cpu_power = 2048) group e00005b0787700e0 cpus 10-11 (cpu_power = 2048) group e00005b078770100 cpus 12-13 (cpu_power = 2048) group e00005b078770120 cpus 14-15 (cpu_power = 2048) group e00005b078770140 cpus 16-17 (cpu_power = 2048) group e00005b078770160 cpus 18-19 (cpu_power = 2048) group e00005b078770180 cpus 20-21 (cpu_power = 2048)
[    0.765697] CPU24 attaching sched-domain:
[    0.765700]  domain 0: span 24-25 level CPU
[    0.765703]   groups: group e0000630030c4f70 cpus 24 group e0000630030d4f70 cpus 25
[    0.765710]   domain 1: span 0-25 level NODE
[    0.765713]    groups: group e0000630f5134000 cpus 24-25 (cpu_power = 2048) group e0000630f5134020 cpus 0-1 (cpu_power = 2048) group e0000630f5134040 cpus 2-3 (cpu_power = 2048) group e0000630f5134060 cpus 4-5 (cpu_power = 2048) group e0000630f5134080 cpus 6-7 (cpu_power = 2048) group e0000630f51340a0 cpus 8-9 (cpu_power = 2048) group e0000630f51340c0 cpus 10-11 (cpu_power = 2048) group e0000630f51340e0 cpus 12-13 (cpu_power = 2048) group e0000630f5134100 cpus 14-15 (cpu_power = 2048) group e0000630f5134120 cpus 16-17 (cpu_power = 2048) group e0000630f5134140 cpus 18-19 (cpu_power = 2048) group e0000630f5134160 cpus 20-21 (cpu_power = 2048) group e0000630f5134180 cpus 22-23 (cpu_power = 2048)
[    0.765757] CPU25 attaching sched-domain:
[    0.765760]  domain 0: span 24-25 level CPU
[    0.765763]   groups: group e0000630030d4f70 cpus 25 group e0000630030c4f70 cpus 24
[    0.765770]   domain 1: span 0-25 level NODE
[    0.765773]    groups: group e0000630f5134000 cpus 24-25 (cpu_power = 2048) group e0000630f5134020 cpus 0-1 (cpu_power = 2048) group e0000630f5134040 cpus 2-3 (cpu_power = 2048) group e0000630f5134060 cpus 4-5 (cpu_power = 2048) group e0000630f5134080 cpus 6-7 (cpu_power = 2048) group e0000630f51340a0 cpus 8-9 (cpu_power = 2048) group e0000630f51340c0 cpus 10-11 (cpu_power = 2048) group e0000630f51340e0 cpus 12-13 (cpu_power = 2048) group e0000630f5134100 cpus 14-15 (cpu_power = 2048) group e0000630f5134120 cpus 16-17 (cpu_power = 2048) group e0000630f5134140 cpus 18-19 (cpu_power = 2048) group e0000630f5134160 cpus 20-21 (cpu_power = 2048) group e0000630f5134180 cpus 22-23 (cpu_power = 2048)
[    0.772462] devtmpfs: initialized
[    0.787888] DMI not present or invalid.
[    0.788896] regulator: core version 0.5
[    0.788896] NET: Registered protocol family 16
[    0.792155] ACPI: bus type pci registered
[    0.793001] ACPI  DSDT OEM Rev 0x20101
[    0.822113] bio: create slab <bio-0> at 0
[    0.825785] ACPI: SCI (ACPI GSI 52) not registered
[    0.828016] ACPI: EC: Look up EC in DSDT
[    0.828785] ACPI: Interpreter enabled
[    0.832003] ACPI: (supports S0)
[    0.844092] ACPI: Using platform specific model for interrupt routing
[    0.846647] ACPI: No dock devices found.
[    0.847371] ACPI: PCI Root Bridge [P000] (0002:00)
[    0.880610] pci 0002:00:01.0: reg 10 64bit mmio: [0x700000-0x70ffff]
[    0.880683] pci 0002:00:01.0: reg 18 64bit mmio: [0x710000-0x71ffff]
[    0.881118] pci 0002:00:01.0: PME# supported from D3hot D3cold
[    0.896021] pci 0002:00:01.0: PME# disabled
[    0.897476] pci 0002:00:01.1: reg 10 64bit mmio: [0x720000-0x72ffff]
[    0.897549] pci 0002:00:01.1: reg 18 64bit mmio: [0x730000-0x73ffff]
[    0.897978] pci 0002:00:01.1: PME# supported from D3hot D3cold
[    0.900022] pci 0002:00:01.1: PME# disabled
[    0.916267] ACPI: PCI Interrupt Routing Table [\_SB_.H000.P000.D010._PRT]
[    0.916286] ACPI: PCI Interrupt Routing Table [\_SB_.H000.P000.D011._PRT]
[    0.916427] ACPI: PCI Root Bridge [P001] (0001:00)
[    0.917480] pci 0001:00:01.0: reg 10 32bit mmio: [0x200000-0x2fffff]
[    0.918262] pci 0001:00:03.0: reg 10 64bit mmio: [0x700000-0x700fff]
[    0.920204] pci 0001:00:04.0: reg 10 64bit mmio: [0x710000-0x71ffff]
[    0.920717] pci 0001:00:04.0: PME# supported from D3hot
[    0.936021] pci 0001:00:04.0: PME# disabled
[    0.937010] ACPI: PCI Interrupt Routing Table [\_SB_.H000.P001.D010._PRT]
[    0.937026] ACPI: PCI Interrupt Routing Table [\_SB_.H000.P001.D030._PRT]
[    0.937040] ACPI: PCI Interrupt Routing Table [\_SB_.H000.P001.D040._PRT]
[    0.937249] ACPI: PCI Root Bridge [P000] (0012:00)
[    0.952716] pci 0012:00:01.0: reg 10 64bit mmio: [0x700000-0x70ffff]
[    0.953321] pci 0012:00:01.0: PME# supported from D3hot D3cold
[    0.968026] pci 0012:00:01.0: PME# disabled
[    0.969573] pci 0012:00:02.0: reg 10 io port: [0x1000-0x10ff]
[    0.969660] pci 0012:00:02.0: reg 14 64bit mmio: [0x720000-0x73ffff]
[    0.969747] pci 0012:00:02.0: reg 1c 64bit mmio: [0x740000-0x75ffff]
[    0.969835] pci 0012:00:02.0: reg 30 32bit mmio pref: [0x800000-0x8fffff]
[    0.970231] pci 0012:00:02.0: supports D1 D2
[    0.970971] pci 0012:00:02.1: reg 10 io port: [0x1100-0x11ff]
[    0.971059] pci 0012:00:02.1: reg 14 64bit mmio: [0x760000-0x77ffff]
[    0.971146] pci 0012:00:02.1: reg 1c 64bit mmio: [0x780000-0x79ffff]
[    0.971234] pci 0012:00:02.1: reg 30 32bit mmio pref: [0x900000-0x9fffff]
[    0.971630] pci 0012:00:02.1: supports D1 D2
[    0.971989] ACPI: PCI Interrupt Routing Table [\_SB_.H004.P000.D010._PRT]
[    0.972000] ACPI: PCI Interrupt Routing Table [\_SB_.H004.P000.D020._PRT]
[    0.972000] ACPI: PCI Interrupt Routing Table [\_SB_.H004.P000.D021._PRT]
[    0.972000] ACPI: PCI Root Bridge [P001] (0011:00)
[    0.976276] pci 0011:00:01.0: reg 10 32bit mmio: [0x200000-0x2fffff]
[    0.977238] pci 0011:00:03.0: reg 10 64bit mmio: [0x700000-0x700fff]
[    0.978663] pci 0011:00:04.0: reg 10 64bit mmio: [0x710000-0x71ffff]
[    0.979293] pci 0011:00:04.0: PME# supported from D3hot
[    0.988025] pci 0011:00:04.0: PME# disabled
[    0.989159] ACPI: PCI Interrupt Routing Table [\_SB_.H004.P001.D010._PRT]
[    0.989174] ACPI: PCI Interrupt Routing Table [\_SB_.H004.P001.D030._PRT]
[    0.989189] ACPI: PCI Interrupt Routing Table [\_SB_.H004.P001.D040._PRT]
[    0.989481] vgaarb: loaded
[    0.992106] Switching to clocksource sn2_rtc
[    1.023997] pnp: PnP ACPI init
[    1.023997] ACPI: bus type pnp registered
[    1.032502] pnp: PnP ACPI: found 4 devices
[    1.033245] ACPI: ACPI bus type pnp unregistered
[    1.049474] NET: Registered protocol family 2
[    1.050834] IP route cache hash table entries: 524288 (order: 8, 4194304 bytes)
[    1.070330] TCP established hash table entries: 524288 (order: 9, 8388608 bytes)
[    1.105084] TCP bind hash table entries: 65536 (order: 6, 1048576 bytes)
[    1.108525] TCP: Hash tables configured (established 524288 bind 65536)
[    1.116043] TCP reno registered
[    1.119180] NET: Registered protocol family 1
[    1.120038] Unpacking initramfs...
[    1.838449] Freeing initrd memory: 17872kB freed
[    1.839883] perfmon: version 2.0 IRQ 238
[    1.840521] perfmon: Itanium 2 PMU detected, 16 PMCs, 18 PMDs, 4 counters (47 bits)
[    1.861831] perfmon: added sampling format default_format
[    1.868702] perfmon_default_smpl: default_format v2.0 registered
[    3.522304] audit: initializing netlink socket (disabled)
[    3.523286] type=2000 audit(1321744276.522:1): initialized
[    3.530708] HugeTLB registered 256 MB page size, pre-allocated 0 pages
[    3.547092] VFS: Disk quotas dquot_6.5.2
[    3.548169] Dquot-cache hash table entries: 2048 (order 0, 16384 bytes)
[    3.552912] msgmni has been set to 32768
[    3.557249] alg: No test for stdrng (krng)
[    3.571219] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 253)
[    3.587906] io scheduler noop registered
[    3.588548] io scheduler anticipatory registered
[    3.589432] io scheduler deadline registered
[    3.590426] io scheduler cfq registered (default)
[    3.626776] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
[    3.640896] ACPI: Power Button [PWRF]
[    3.641669] input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input1
[    3.642884] ACPI: Sleep Button [SLPF]
[    3.692040] IRQ 233/system controller events: IRQF_DISABLED is not guaranteed on shared IRQs
[    3.694160] Linux agpgart interface v0.103
[    3.695153] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
[    3.706345] sn_console: Console driver init
[    3.707182] ttySG0 at I/O 0x0 (irq = 0) is a SGI SN L1
[    3.723441] IRQ 233/SAL console driver: IRQF_DISABLED is not guaranteed on shared IRQs
[    3.725198] mice: PS/2 mouse device common for all mice
[    3.756312] rtc-efi rtc-efi: rtc core: registered rtc-efi as rtc0
[    3.767083] TCP cubic registered
[    3.767600] NET: Registered protocol family 17
[    3.774698] registered taskstats version 1
[    3.779644] rtc-efi rtc-efi: setting system clock to 2011-11-19 23:11:16 UTC (1321744276)
[    3.792220] Freeing unused kernel memory: 736kB freed
[    3.830140] udev[282]: starting version 164
[    3.868602] SCSI subsystem initialized
[    3.869707] IOC4 0001:00:01.0: PCI INT A -> GSI 60 (level, low) -> IRQ 60
[    3.877495] IOC4 0001:00:01.0: IO10 card detected.
[    3.904747] libata version 3.00 loaded.
[    3.915873] tg3.c:v3.116 (December 3, 2010)
[    3.916777] tg3 0002:00:01.0: PCI INT A -> GSI 63 (level, low) -> IRQ 63
[    3.926555] Fusion MPT base driver 3.04.12
[    3.927311] Copyright (c) 1999-2008 LSI Corporation
[    3.948586] Fusion MPT SPI Host driver 3.04.12
[    3.949657] mptspi 0012:00:02.0: PCI INT A -> GSI 69 (level, low) -> IRQ 69
[    3.953834] tg3 0002:00:01.0: eth0: Tigon3 [partno(9210292) rev 2003] (PCIX:100MHz:64-bit) MAC address 00:e0:ed:08:48:dc
[    3.953840] tg3 0002:00:01.0: eth0: attached PHY is 5704 (10/100/1000Base-T Ethernet) (WireSpeed[1])
[    3.953845] tg3 0002:00:01.0: eth0: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[0] TSOcap[1]
[    3.953849] tg3 0002:00:01.0: eth0: dma_rwctrl[769f4000] dma_mask[64-bit]
[    3.953945] tg3 0002:00:01.1: PCI INT B -> GSI 64 (level, low) -> IRQ 64
[    3.989503] tg3 0002:00:01.1: eth1: Tigon3 [partno(9210292) rev 2003] (PCIX:100MHz:64-bit) MAC address 00:e0:ed:08:48:dd
[    3.989509] tg3 0002:00:01.1: eth1: attached PHY is 5704 (10/100/1000Base-T Ethernet) (WireSpeed[1])
[    3.989514] tg3 0002:00:01.1: eth1: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[0] TSOcap[1]
[    3.989517] tg3 0002:00:01.1: eth1: dma_rwctrl[769f4000] dma_mask[64-bit]
[    4.094365] mptbase: ioc0: Initiating bringup
[    4.135905] IOC4 0001:00:01.0: PCI clock is 15 ns.
[    4.135934] IOC4 loading sgiioc4 submodule
[    4.136762] sata_vsc 0001:00:03.0: version 2.3
[    4.136849] IOC4 0011:00:01.0: PCI INT A -> GSI 65 (level, low) -> IRQ 65
[    4.136946] sata_vsc 0001:00:03.0: PCI INT A -> GSI 61 (level, low) -> IRQ 61
[    4.137924] scsi0 : sata_vsc
[    4.138379] scsi1 : sata_vsc
[    4.138550] scsi2 : sata_vsc
[    4.138744] scsi3 : sata_vsc
[    4.138834] ata1: SATA max UDMA/133 mmio m4096@0x8c0700000 port 0x8c0700200 irq 71
[    4.138842] ata2: SATA max UDMA/133 mmio m4096@0x8c0700000 port 0x8c0700400 irq 71
[    4.138850] ata3: SATA max UDMA/133 mmio m4096@0x8c0700000 port 0x8c0700600 irq 71
[    4.138858] ata4: SATA max UDMA/133 mmio m4096@0x8c0700000 port 0x8c0700800 irq 71
[    4.139014] tg3 0001:00:04.0: PCI INT A -> GSI 62 (level, low) -> IRQ 62
[    4.336694] IOC4 0011:00:01.0: IO10 card detected.
[    4.459440] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
[    4.476946] ata1.00: ATA-6: HDS722580VLSA80, V32OA60A, max UDMA/100
[    4.478049] ata1.00: 160836480 sectors, multi 0: LBA48
[    4.504916] ata1.00: configured for UDMA/100
[    4.507246] scsi 0:0:0:0: Direct-Access     ATA      HDS722580VLSA80  V32O PQ: 0 ANSI: 5
[    4.577738] ioc0: LSI53C1030 B2: Capabilities={Initiator,Target}
[    4.595902] IOC4 0011:00:01.0: PCI clock is 15 ns.
[    4.595907] IOC4 loading sgiioc4 submodule
[    4.596863] sata_vsc 0011:00:03.0: PCI INT A -> GSI 66 (level, low) -> IRQ 66
[    4.603732] scsi4 : sata_vsc
[    4.604482] scsi5 : sata_vsc
[    4.620917] scsi6 : sata_vsc
[    4.621703] scsi7 : sata_vsc
[    4.634768] ata5: SATA max UDMA/133 mmio m4096@0x208c0700000 port 0x208c0700200 irq 72
[    4.636223] ata6: SATA max UDMA/133 mmio m4096@0x208c0700000 port 0x208c0700400 irq 72
[    4.637599] ata7: SATA max UDMA/133 mmio m4096@0x208c0700000 port 0x208c0700600 irq 72
[    4.673993] Uniform Multi-Platform E-IDE driver
[    4.675032] ata8: SATA max UDMA/133 mmio m4096@0x208c0700000 port 0x208c0700800 irq 72
[    4.831396] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
[    4.848864] ata2.00: ATA-6: HDS722580VLSA80, V32OA6MA, max UDMA/100
[    4.850018] ata2.00: 160836480 sectors, multi 0: LBA48
[    4.876873] ata2.00: configured for UDMA/100
[    4.879225] scsi 1:0:0:0: Direct-Access     ATA      HDS722580VLSA80  V32O PQ: 0 ANSI: 5
[    4.894941] sd 0:0:0:0: [sda] 160836480 512-byte logical blocks: (82.3 GB/76.6 GiB)
[    4.895360] sd 1:0:0:0: [sdb] 160836480 512-byte logical blocks: (82.3 GB/76.6 GiB)
[    4.895466] sd 1:0:0:0: [sdb] Write Protect is off
[    4.895470] sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00
[    4.895504] sd 1:0:0:0: [sdb] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
[    4.895830]  sdb:
[    4.933225] sd 0:0:0:0: [sda] Write Protect is off
[    4.934684]  sdb1 sdb2 sdb3
[    4.954363] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
[    4.954405] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[    4.954680] sd 1:0:0:0: [sdb] Attached SCSI disk
[    4.957114]  sda:
[    4.996361] ata5: SATA link down (SStatus 0 SControl 300)
[    5.003643]  sda1 sda2 sda3
[    5.005698] sd 0:0:0:0: [sda] Attached SCSI disk
[    5.072646] scsi8 : ioc0: LSI53C1030 B2, FwRev=01032710h, Ports=1, MaxQ=255, IRQ=69
[    5.203316] ata3: SATA link down (SStatus 0 SControl 300)
[    5.519484] tg3 0001:00:04.0: eth2: Tigon3 [partno(030-1771-000) rev 0105] (PCI:66MHz:64-bit) MAC address 08:00:69:13:f9:4d
[    5.521341] tg3 0001:00:04.0: eth2: attached PHY is 5701 (10/100/1000Base-T Ethernet) (WireSpeed[1])
[    5.522912] tg3 0001:00:04.0: eth2: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[0] TSOcap[0]
[    5.523287] ata4: SATA link down (SStatus 0 SControl 300)
[    5.552907] tg3 0001:00:04.0: eth2: dma_rwctrl[76ff3f0f] dma_mask[64-bit]
[    5.711143] tg3 0012:00:01.0: PCI INT A -> GSI 68 (level, low) -> IRQ 68
[    5.843250] ata6: SATA link down (SStatus 0 SControl 300)
[    6.163221] ata7: SATA link down (SStatus 0 SControl 300)
[    6.483194] ata8: SATA link down (SStatus 0 SControl 300)
[    6.486959] SGIIOC4: IDE controller at PCI slot 0001:00:01.0, revision 83
[    6.492540]     ide0: MMIO-DMA
[    6.493206] Probing IDE interface ide0...
[    7.091169] tg3 0012:00:01.0: eth3: Tigon3 [partno(9210289) rev 0105] (PCIX:133MHz:64-bit) MAC address 08:00:69:14:76:83
[    7.092947] tg3 0012:00:01.0: eth3: attached PHY is 5701 (10/100/1000Base-T Ethernet) (WireSpeed[1])
[    7.094709] tg3 0012:00:01.0: eth3: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[0] TSOcap[0]
[    7.123991] tg3 0012:00:01.0: eth3: dma_rwctrl[76db1b0f] dma_mask[64-bit]
[    7.125287] tg3 0011:00:04.0: PCI INT A -> GSI 67 (level, low) -> IRQ 67
[    7.125310] mptspi 0012:00:02.1: PCI INT B -> GSI 70 (level, low) -> IRQ 70
[    7.126423] mptbase: ioc1: Initiating bringup
[    7.229500] hda: MATSHITADVD-ROM SR-8177, ATAPI CD/DVD-ROM drive
[    7.565107] hda: MWDMA2 mode selected
[    7.566071] ide0 at 0xc00000080f200100-0xc00000080f20011c,0xc00000080f200120 on irq 60
[    7.574816] SGIIOC4: IDE controller at PCI slot 0011:00:01.0, revision 83
[    7.583684]     ide1: MMIO-DMA
[    7.583821] ide-cd driver 5.00
[    7.584992] Probing IDE interface ide1...
[    7.585448] ide-cd: hda: ATAPI 61X DVD-ROM drive
[    7.605524] ioc1: LSI53C1030 B2: Capabilities={Initiator,Target}
[    7.613330] , 256kB Cache
[    7.613943] Uniform CD-ROM driver Revision: 3.20
[    8.096401] scsi9 : ioc1: LSI53C1030 B2, FwRev=01032710h, Ports=1, MaxQ=255, IRQ=70
[    8.321582] hdc: MATSHITADVD-ROM SR-8178, ATAPI CD/DVD-ROM drive
[    8.531058] tg3 0011:00:04.0: eth4: Tigon3 [partno(030-1771-000) rev 0105] (PCI:66MHz:64-bit) MAC address 08:00:69:14:01:e7
[    8.532927] tg3 0011:00:04.0: eth4: attached PHY is 5701 (10/100/1000Base-T Ethernet) (WireSpeed[1])
[    8.534524] tg3 0011:00:04.0: eth4: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[0] TSOcap[0]
[    8.563903] tg3 0011:00:04.0: eth4: dma_rwctrl[76ff3f0f] dma_mask[64-bit]
[    8.661176] hdc: MWDMA2 mode selected
[    8.662065] ide1 at 0xc00002080f200100-0xc00002080f20011c,0xc00002080f200120 on irq 65
[    8.672094] ide-cd: hdc: ATAPI 24X DVD-ROM drive, 256kB Cache
[    8.901483] md: raid1 personality registered for level 1
[    8.967207] md: md0 stopped.
[    8.969127] md: bind<sdb2>
[    8.969832] md: bind<sda2>
[    8.979161] raid1: raid set md0 active with 2 out of 2 mirrors
[    8.991436] md0: detected capacity change from 0 to 8911781888
[    8.993859]  md0: unknown partition table
[    9.034225] md: md1 stopped.
[    9.064466] md: bind<sdb3>
[    9.065208] md: bind<sda3>
[    9.068740] raid1: raid set md1 active with 2 out of 2 mirrors
[    9.073656] md1: detected capacity change from 0 to 72909914112
[    9.091070]  md1: unknown partition table
[    9.309054] SGI XFS with ACLs, security attributes, realtime, large block/inode numbers, no debug enabled
[    9.322269] SGI XFS Quota Management subsystem
[    9.391121] XFS mounting filesystem md1
[    9.625952] Starting XFS recovery on filesystem: md1 (logdev: internal)
[    9.870938] Ending XFS recovery on filesystem: md1 (logdev: internal)
[   12.053927] udev[651]: starting version 164
[   12.626989] udev[678]: renamed network interface eth1 to eth1-eth2
[   12.627704] udev[682]: renamed network interface eth4 to eth4-eth3
[   12.628448] udev[673]: renamed network interface eth0 to eth1
[   12.629295] udev[718]: renamed network interface eth3 to eth3-eth4
[   12.630309] udev[669]: renamed network interface eth2 to eth0
[   12.682027] udev[682]: renamed network interface eth4-eth3 to eth3
[   12.684131] udev[718]: renamed network interface eth3-eth4 to eth4
[   12.717560] udev[678]: renamed network interface eth1-eth2 to eth2
[   13.940954] Adding 8702880k swap on /dev/md0.  Priority:-1 extents:1 across:8702880k
[   14.332224] loop: module loaded
[   16.234308] fuse init (API version 7.13)
[   17.645404] RPC: Registered udp transport module.
[   17.646271] RPC: Registered tcp transport module.
[   17.647161] RPC: Registered tcp NFSv4.1 backchannel transport module.
[   17.879273] Installing knfsd (copyright (C) 1996 [email protected]).
[   18.041619] NET: Registered protocol family 10
[   18.047666] ADDRCONF(NETDEV_UP): eth0: link is not ready
[   18.056842] tg3 0001:00:04.0: eth0: Link is up at 100 Mbps, full duplex
[   18.059333] tg3 0001:00:04.0: eth0: Flow control is off for TX and off for RX
[   18.081046] ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[   18.107988] svc: failed to register lockdv1 RPC service (errno 97).
[   18.112074] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[   18.118593] NFSD: starting 90-second grace period
[   19.157583] NetworkManager(1549): unaligned access to 0x600000000002621c, ip=0x20000000007d9a70
[   19.159086] NetworkManager(1549): unaligned access to 0x6000000000026224, ip=0x20000000007d9aa0
[   19.160628] NetworkManager(1549): unaligned access to 0x600000000002622c, ip=0x20000000007d9ac0
[   19.190258] NetworkManager(1549): unaligned access to 0x600000000002621c, ip=0x20000000007d9a70
[   19.199728] NetworkManager(1549): unaligned access to 0x6000000000026224, ip=0x20000000007d9aa0
[   19.810913] tg3 0002:00:01.0: firmware: requesting tigon/tg3_tso.bin
[   19.969488] ADDRCONF(NETDEV_UP): eth1: link is not ready
[   19.990359] tg3 0002:00:01.1: firmware: requesting tigon/tg3_tso.bin
[   20.089943] ADDRCONF(NETDEV_UP): eth2: link is not ready
[   20.372900] ADDRCONF(NETDEV_UP): eth3: link is not ready
[   20.655776] ADDRCONF(NETDEV_UP): eth4: link is not ready
[   23.938160] Bluetooth: Core ver 2.15
[   23.939650] NET: Registered protocol family 31
[   23.940484] Bluetooth: HCI device and connection manager initialized
[   23.946998] Bluetooth: HCI socket layer initialized
[   24.167962] lp: driver loaded but no devices found
[   24.207608] ppdev: user-space parallel port driver
[   24.249033] Bluetooth: L2CAP ver 2.14
[   24.249819] Bluetooth: L2CAP socket layer initialized
[   24.279871] Bluetooth: RFCOMM TTY layer initialized
[   24.280785] Bluetooth: RFCOMM socket layer initialized
[   24.281718] Bluetooth: RFCOMM ver 1.11
[   24.915432] Bluetooth: BNEP (Ethernet Emulation) ver 1.3
[   24.916337] Bluetooth: BNEP filters: protocol multicast
[   24.989511] Bridge firewalling registered
[   24.993664] NetworkManager(1549): unaligned access to 0x6000000000074b1c, ip=0x20000000007d9a70
[   24.998595] NetworkManager(1549): unaligned access to 0x6000000000074b24, ip=0x20000000007d9aa0
[   25.000104] NetworkManager(1549): unaligned access to 0x6000000000074b2c, ip=0x20000000007d9ac0
[   25.119557] Bluetooth: SCO (Voice Link) ver 0.6
[   25.120431] Bluetooth: SCO socket layer initialized
[   28.292710] eth0: no IPv6 routers present
[ 1638.475707] lp: driver loaded but no devices found
[ 1638.523242] ide-gd driver 1.18
[ 1638.583026] st: Version 20081215, fixed bufsize 32768, s/g segs 256
[ 1640.354648] sd 0:0:0:0: Attached scsi generic sg0 type 0
[ 1640.356246] sd 1:0:0:0: Attached scsi generic sg1 type 0
[ 1651.433832] lp: driver loaded but no devices found
winterstar:~#
And pics.....
100_6130.JPG
Back view, top: the Altix 350 system in the Altix 3000 rack, with a small Altix 3700 at the bottom.

100_6131.JPG
Altix 350 system, bottom section.

100_6132.JPG
Altix 350 and 3700 systems 'winterstar' and 'roan' in Altix 3000 rack, front view.

100_6134.JPG
Closer view of the Altix 350, with the router in the middle.
Update:

After working for a while on the two additional Altix 350 compute nodes from this system that had issues, I've now populated the system to 15 compute nodes, for a total of 30 CPUs and 54GB of RAM.

I had updated the PROMs on the 13 working nodes, but not yet on the two non-working ones. A PROM version mix, as has been noted elsewhere, will cause the system to enter POD before ever loading EFI. And on compute nodes without disks or an IO9/IO10 (there are three nodes in this system with IO10's, but neither of the repaired nodes has any IO or disks), booting a node outside the system and updating it individually is not an option.

However, POD has the tools to fix this, so if the system drops you to POD due to a PROM version mismatch, there is hope. In POD you can flash the downrev nodes by nasid, and you can get each node's nasid from the pcfg output (note that I entered POD from the EFI shell for this example):

Code: Select all

Shell> pod
Entering POD mode
POD entered via EFI command, using Cac mode
0 000: POD EfiCB Cac> pcfg
NUMAlink Topology: (node 0):
Entry 0: SHub 001c02#0 Chiprev=3 Route=0x0
Module=001c02 Slab=0 Partition=0 Space=RESET
Nasid=0 Flags=0x100000 ShrMode=11 Syssize=0 Prom=5.04
Port 1 connection: Entry 1 SHub 001c10#0 port 1
Port 1 status: UP NF
Port 2 connection: Not connected
Port 2 status: DN FE
Entry 1: SHub 001c10#0 Chiprev=3 Route=0x1
Module=001c10 Slab=0 Partition=0 Space=RESET
Nasid=8 Flags=0x100000 ShrMode=11 Syssize=0 Prom=5.04
Port 1 connection: Entry 0 SHub 001c02#0 port 1
Port 1 status: UP NF
Port 2 connection: Entry 2 NL4router 001r18#0 port 1
Port 2 status: UP NF
Entry 2: NL4router 001r18#0 Chiprev=1 Route=0x21
Module=001r18 Slab=0 Partition=0 Space=RESET
Metaid=0 (1) Flags=0x88
Port 1 connection: Entry 1 SHub 001c10#0 port 2
Port 1 status: UP NF
Port 2 connection: Entry 3 SHub 001c12#0 port 2
Port 2 status: UP NF
Port 3 connection: Entry 4 SHub 001c14#0 port 2
Port 3 status: UP NF
Port 4 connection: Entry 5 SHub 001c16#0 port 2
Port 4 status: UP NF
Port 5 connection: Entry 6 SHub 001c20#0 port 2
Port 5 status: UP NF
Port 6 connection: Entry 7 SHub 001c22#0 port 2
Port 6 status: UP NF
Port 7 connection: Entry 8 SHub 001c24#0 port 2
Port 7 status: UP NF
Port 8 connection: Entry 9 SHub 001c26#0 port 2
Port 8 status: UP NF
Entry 3: SHub 001c12#0 Chiprev=3 Route=0x221
Module=001c12 Slab=0 Partition=0 Space=RESET
Nasid=10 Flags=0x100000 ShrMode=11 Syssize=0 Prom=5.04
Port 1 connection: Entry 10 SHub 001c04#0 port 1
Port 1 status: UP NF
Port 2 connection: Entry 2 NL4router 001r18#0 port 2
Port 2 status: UP NF
Entry 4: SHub 001c14#0 Chiprev=3 Route=0x321
Module=001c14 Slab=0 Partition=0 Space=RESET
Nasid=12 Flags=0x100000 ShrMode=11 Syssize=0 Prom=5.04
Port 1 connection: Entry 11 SHub 001c06#0 port 1
Port 1 status: UP NF
Port 2 connection: Entry 2 NL4router 001r18#0 port 3
Port 2 status: UP NF
Entry 5: SHub 001c16#0 Chiprev=3 Route=0x421
Module=001c16 Slab=0 Partition=0 Space=RESET
Nasid=14 Flags=0x100000 ShrMode=11 Syssize=0 Prom=5.04
Port 1 connection: Entry 12 SHub 001c08#0 port 1
Port 1 status: UP NF
Port 2 connection: Entry 2 NL4router 001r18#0 port 4
Port 2 status: UP NF
Entry 6: SHub 001c20#0 Chiprev=3 Route=0x521
Module=001c20 Slab=0 Partition=0 Space=RESET
Nasid=16 Flags=0x100000 ShrMode=11 Syssize=0 Prom=5.04
Port 1 connection: Entry 13 SHub 001c28#0 port 1
Port 1 status: UP NF
Port 2 connection: Entry 2 NL4router 001r18#0 port 5
Port 2 status: UP NF
Entry 7: SHub 001c22#0 Chiprev=3 Route=0x621
Module=001c22 Slab=0 Partition=0 Space=RESET
Nasid=18 Flags=0x100000 ShrMode=11 Syssize=0 Prom=5.04
Port 1 connection: Entry 14 SHub 001c32#0 port 1
Port 1 status: UP NF
Port 2 connection: Entry 2 NL4router 001r18#0 port 6
Port 2 status: UP NF
Entry 8: SHub 001c24#0 Chiprev=3 Route=0x721
Module=001c24 Slab=0 Partition=0 Space=RESET
Nasid=20 Flags=0x100000 ShrMode=11 Syssize=0 Prom=5.04
Port 1 connection: Entry 15 SHub 001c30#0 port 1
Port 1 status: UP NF
Port 2 connection: Entry 2 NL4router 001r18#0 port 7
Port 2 status: UP NF
Entry 9: SHub 001c26#0 Chiprev=3 Route=0x821
Module=001c26 Slab=0 Partition=0 Space=RESET
Nasid=22 Flags=0x100000 ShrMode=11 Syssize=0 Prom=5.04
Port 1 connection: Not connected
Port 1 status: DN FE
Port 2 connection: Entry 2 NL4router 001r18#0 port 8
Port 2 status: UP NF
Entry 10: SHub 001c04#0 Chiprev=3 Route=0x1221
Module=001c04 Slab=0 Partition=0 Space=RESET
Nasid=2 Flags=0x100000 ShrMode=11 Syssize=0 Prom=5.04
Port 1 connection: Entry 3 SHub 001c12#0 port 1
Port 1 status: UP NF
Port 2 connection: Not connected
Port 2 status: DN FE
Entry 11: SHub 001c06#0 Chiprev=3 Route=0x1321
Module=001c06 Slab=0 Partition=0 Space=RESET
Nasid=4 Flags=0x100000 ShrMode=11 Syssize=0 Prom=5.04
Port 1 connection: Entry 4 SHub 001c14#0 port 1
Port 1 status: UP NF
Port 2 connection: Not connected
Port 2 status: DN FE
Entry 12: SHub 001c08#0 Chiprev=3 Route=0x1421
Module=001c08 Slab=0 Partition=0 Space=RESET
Nasid=6 Flags=0x100000 ShrMode=11 Syssize=0 Prom=5.04
Port 1 connection: Entry 5 SHub 001c16#0 port 1
Port 1 status: UP NF
Port 2 connection: Not connected
Port 2 status: DN FE
Entry 13: SHub 001c28#0 Chiprev=3 Route=0x1521
Module=001c28 Slab=0 Partition=0 Space=RESET
Nasid=24 Flags=0x100000 ShrMode=11 Syssize=0 Prom=5.04
Port 1 connection: Entry 6 SHub 001c20#0 port 1
Port 1 status: UP NF
Port 2 connection: Not connected
Port 2 status: DN FE
Entry 14: SHub 001c32#0 Chiprev=3 Route=0x1621
Module=001c32 Slab=0 Partition=0 Space=RESET
Nasid=28 Flags=0x100000 ShrMode=11 Syssize=0 Prom=5.04
Port 1 connection: Entry 7 SHub 001c22#0 port 1
Port 1 status: UP NF
Port 2 connection: Not connected
Port 2 status: DN FE
Entry 15: SHub 001c30#0 Chiprev=3 Route=0x1721
Module=001c30 Slab=0 Partition=0 Space=RESET
Nasid=26 Flags=0x100000 ShrMode=11 Syssize=0 Prom=5.04
Port 1 connection: Entry 8 SHub 001c24#0 port 1
Port 1 status: UP NF
Port 2 connection: Not connected
Port 2 status: DN FE
0 000: POD EfiCB Cac>

Now suppose that brick 001c26 had been at PROM 4.43. From the output above, the nasid for 001c26 is 22, so I can program its PROM from POD with:

Code: Select all

0 000: POD EfiCB Cac> flash 22
......lots of output showing erase and programming....
0 000: POD EfiCB Cac> initlog n:22
.....more output and confirmation.....
0 000: POD EfiCB Cac>


Then I can exit POD and reboot. As a note, if anyone knows where a POD reference document can be found, that would be cool..... :D

Also worth noting: the Altix 350 router brick has the same embedded L2 capability as the 3700BX2; all that's required is to plug a 'USB to Ethernet' adapter into the L1 USB port on the router. Finding out which 'USB to Ethernet' adapter would work was the hard part, but I finally tried a 3Com 3C460B and it worked right away. Simply plugging the proper USB to Ethernet adapter into the L1 USB port causes the L1 on the r-brick to load the L2 emulator, access to which is Ethernet only. You can set the IP address from the L1; ironically, I found the instructions for doing this in the 3700BX2 documentation rather than the Altix 350 documentation. It appears the same NL4 router is used for Altix 330 systems, and maybe even for Origin systems of the same vintage; if your NL4 router has a type A host USB port labeled 'L1 USB', it may have this capability as well. Altix 330 systems appear to have this capability on the compute node too, so the separate L2 controller isn't required for those systems to get L2 functionality.

A reboot_l1 seems to be a good idea after plugging the adapter in. The router's L1 serial port stays an L1 serial port and can't access either the L2 prompt or the system console, so you have to use one of the compute nodes' serial ports for the console; but you still have to use the router's L1 to power up the whole system, since the router is the only brick with full L1 connectivity in a routed (non-ring) configuration. In a ring config, the L1 at any node can see all nodes, but not in routed mode, where node visibility from the compute bricks' L1's is, to put it bluntly, weird.

But the emulated L2 works great once a supported USB to Ethernet adapter is found. Again, I found that a 3Com 3C460B works fine; I haven't tried any other USB to Ethernet adapters. It would be interesting to see which others work.

The box is still running Debian 6, but as soon as I can get CentOS 5 to rebuild it's going CentOS, since the ProPack and Foundation tools will work there, and because I can rebuild system updates from source (can't do that with SLES) I can keep the OS updated (with ia32el and i386 binary capability) at least until 2017, the EOS year for CentOS 5 (and all RHEL5 rebuilds).
Hmm, the BYTE UnixBench is still around and updated? Cool; I remember running a much older version on my Tandy 6000 running Xenix System III back in the '80s (that box was a 68000 CPU at 8MHz and 1MB of RAM with two 70MB MFM hard drives and a pair of 8 inch floppies; but it was real Unix!). I most recently ran the 4.10 version a while back; I didn't know it was a Google Code project now.

Right now I am waiting for a second 64-process run of the HPC Challenge (HPCC) benchmark to finish; the hpcc package is distributed in the Debian repos, so it's an easy install and a pretty simple configure. It is fun to watch all 30 CPUs hit and stay at 100% for half an hour of wall time. (It is really strange to see something like:

Code: Select all

lowen@winterstar:~$ time mpirun.openmpi -np 64 hpcc

real   30m34.852s
user   635m48.624s
sys   276m15.000s
lowen@winterstar:~$

as output of 'time' though!)
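
For anyone wanting to repeat the run, the setup boils down to something like the sketch below; the sample-input path is an assumption on my part (use dpkg -L to find the real one), and the P x Q process grid in hpccinf.txt should match the process count handed to mpirun:

Code: Select all

# hpcc reads its parameters from hpccinf.txt in the current directory
# and writes its results to hpccoutf.txt
apt-get install hpcc openmpi-bin
dpkg -L hpcc | grep -i hpccinf        # locate the sample input shipped with the package
cp /usr/share/doc/hpcc/examples/hpccinf.txt .    # guessed path; adjust to what dpkg -L shows
# edit Ns (problem size) and the Ps x Qs process grid so P*Q matches the -np value
mpirun.openmpi -np 64 hpcc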

The HPCC benchmarks make a good stress test; I know this box isn't going to break any records, but since I'm going to be using it for planetarium graphics rendering I need to stress it pretty hard before putting it in production. It is also somewhat disquieting to see the load average parked at 64 with the system still in a responsive and usable state.... A really large Altix (say a 512-processor 3700) would be really interesting to see, but I wouldn't want the power bill. This Altix 350 is one compute node (and one router) shy of a full dual-plane max config, so it is a good indication of how powerful such a maxed-out box would be.

In any case, UnixBench doesn't seem to like more than 16 CPUs..... I'll see what I can do to fix that, perhaps. I'm also running a comparison on an 8-CPU Xeon system: two quad-core 2.0GHz 64-bit Xeons running CentOS 6.3 with 16GB of RAM (a Dell Precision 690 workstation).

I will report back.
Ok, some results. First, the 30 CPU Altix 350. Since UnixBench is a system benchmark, the slow SATA drives do somewhat lower the total score (and I've snipped some repetitive sections out):

Code: Select all

lowen@winterstar:~/UnixBench$  ./Run
make all
make[1]: Entering directory `/home/lowen/UnixBench'
Checking distribution of files
./pgms  exists
./src  exists
./testdir  exists
./tmp  exists
./results  exists
make[1]: Leaving directory `/home/lowen/UnixBench'
sh: 3dinfo: not found
sh: runlevel: not found

#    #  #    #  #  #    #          #####   ######  #    #   ####   #    #
#    #  ##   #  #   #  #           #    #  #       ##   #  #    #  #    #
#    #  # #  #  #    ##            #####   #####   # #  #  #       ######
#    #  #  # #  #    ##            #    #  #       #  # #  #       #    #
#    #  #   ##  #   #  #           #    #  #       #   ##  #    #  #    #
####   #    #  #  #    #          #####   ######  #    #   ####   #    #

Version 5.1.3                      Based on the Byte Magazine Unix Benchmark

Multi-CPU version                  Version 5 revisions by Ian Smith,
Sunnyvale, CA, USA
January 13, 2011                   johantheghost at yahoo period com

Use of uninitialized value in printf at ./Run line 1378.
...snip...
Use of uninitialized value in printf at ./Run line 1590.

1 x Dhrystone 2 using register variables  1 2 3 4 5 6 7 8 9 10

...snip...

30 x Shell Scripts (8 concurrent)  1 2 3

========================================================================
BYTE UNIX Benchmarks (Version 5.1.3)

System: winterstar: GNU/Linux
OS: GNU/Linux -- 2.6.32-5-mckinley -- #1 SMP Sun May 6 08:36:37 UTC 2012
Machine: ia64 (unknown)
Language: en_US.utf8 (charmap="UTF-8", collate="UTF-8")
CPU 0: Madison (0.0 bogomips)

.....snip....
------------------------------------------------------------------------
Benchmark Run: Thu Sep 13 2012 11:52:35 - 12:20:50
30 CPUs in system; running 1 parallel copy of tests

Dhrystone 2 using register variables        4889545.1 lps   (10.0 s, 7 samples)
Double-Precision Whetstone                     1366.9 MWIPS (10.0 s, 7 samples)
Execl Throughput                               1709.3 lps   (29.8 s, 2 samples)
File Copy 1024 bufsize 2000 maxblocks        274218.5 KBps  (30.0 s, 2 samples)
File Copy 256 bufsize 500 maxblocks           72658.4 KBps  (30.0 s, 2 samples)
File Copy 4096 bufsize 8000 maxblocks        577292.9 KBps  (30.0 s, 2 samples)
Pipe Throughput                              687789.4 lps   (10.0 s, 7 samples)
Pipe-based Context Switching                  17779.7 lps   (10.0 s, 7 samples)
Process Creation                               2492.5 lps   (30.0 s, 2 samples)
Shell Scripts (1 concurrent)                   2773.4 lpm   (60.0 s, 2 samples)
Shell Scripts (8 concurrent)                   1066.6 lpm   (60.0 s, 2 samples)
System Call Overhead                        1243254.6 lps   (10.0 s, 7 samples)

System Benchmarks Index Values               BASELINE       RESULT    INDEX
Dhrystone 2 using register variables         116700.0    4889545.1    419.0
Double-Precision Whetstone                       55.0       1366.9    248.5
Execl Throughput                                 43.0       1709.3    397.5
File Copy 1024 bufsize 2000 maxblocks          3960.0     274218.5    692.5
File Copy 256 bufsize 500 maxblocks            1655.0      72658.4    439.0
File Copy 4096 bufsize 8000 maxblocks          5800.0     577292.9    995.3
Pipe Throughput                               12440.0     687789.4    552.9
Pipe-based Context Switching                   4000.0      17779.7     44.4
Process Creation                                126.0       2492.5    197.8
Shell Scripts (1 concurrent)                     42.4       2773.4    654.1
Shell Scripts (8 concurrent)                      6.0       1066.6   1777.7
System Call Overhead                          15000.0    1243254.6    828.8
========
System Benchmarks Index Score                                         444.0

------------------------------------------------------------------------
Benchmark Run: Thu Sep 13 2012 12:20:50 - 12:50:39
30 CPUs in system; running 30 parallel copies of tests

Dhrystone 2 using register variables      146496555.2 lps   (10.0 s, 7 samples)
Double-Precision Whetstone                    41014.4 MWIPS (10.0 s, 7 samples)
Execl Throughput                               8363.8 lps   (29.6 s, 2 samples)
File Copy 1024 bufsize 2000 maxblocks         57773.7 KBps  (30.0 s, 2 samples)
File Copy 256 bufsize 500 maxblocks            8519.5 KBps  (30.0 s, 2 samples)
File Copy 4096 bufsize 8000 maxblocks        240297.3 KBps  (30.0 s, 2 samples)
Pipe Throughput                            20854990.5 lps   (10.0 s, 7 samples)
Pipe-based Context Switching                 658358.4 lps   (10.0 s, 7 samples)
Process Creation                              14766.7 lps   (30.0 s, 2 samples)
Shell Scripts (1 concurrent)                   3745.7 lpm   (60.2 s, 2 samples)
Shell Scripts (8 concurrent)                    271.4 lpm   (62.3 s, 2 samples)
System Call Overhead                        1357071.7 lps   (10.0 s, 7 samples)

System Benchmarks Index Values               BASELINE       RESULT    INDEX
Dhrystone 2 using register variables         116700.0  146496555.2  12553.3
Double-Precision Whetstone                       55.0      41014.4   7457.2
Execl Throughput                                 43.0       8363.8   1945.1
File Copy 1024 bufsize 2000 maxblocks          3960.0      57773.7    145.9
File Copy 256 bufsize 500 maxblocks            1655.0       8519.5     51.5
File Copy 4096 bufsize 8000 maxblocks          5800.0     240297.3    414.3
Pipe Throughput                               12440.0   20854990.5  16764.5
Pipe-based Context Switching                   4000.0     658358.4   1645.9
Process Creation                                126.0      14766.7   1172.0
Shell Scripts (1 concurrent)                     42.4       3745.7    883.4
Shell Scripts (8 concurrent)                      6.0        271.4    452.4
System Call Overhead                          15000.0    1357071.7    904.7
========
System Benchmarks Index Score                                        1170.6

lowen@winterstar:~/UnixBench$


Now, the Dell Precision 690, two quad core Xeons (E5335's) at 2.0GHz:

Code: Select all

[lowen@grymonia UnixBench]$ ./Run
make all
make[1]: Entering directory `/home/lowen/UnixBench'
Checking distribution of files
./pgms  exists
./src  exists
./testdir  exists
./tmp  exists
./results  exists
make[1]: Leaving directory `/home/lowen/UnixBench'
sh: 3dinfo: command not found

#    #  #    #  #  #    #          #####   ######  #    #   ####   #    #
#    #  ##   #  #   #  #           #    #  #       ##   #  #    #  #    #
#    #  # #  #  #    ##            #####   #####   # #  #  #       ######
#    #  #  # #  #    ##            #    #  #       #  # #  #       #    #
#    #  #   ##  #   #  #           #    #  #       #   ##  #    #  #    #
####   #    #  #  #    #          #####   ######  #    #   ####   #    #

Version 5.1.3                      Based on the Byte Magazine Unix Benchmark

Multi-CPU version                  Version 5 revisions by Ian Smith,
Sunnyvale, CA, USA
January 13, 2011                   johantheghost at yahoo period com


1 x Dhrystone 2 using register variables  1 2 3 4 5 6 7 8 9 10
...snip...

8 x Shell Scripts (8 concurrent)  1 2 3

========================================================================
BYTE UNIX Benchmarks (Version 5.1.3)

System: grymonia.pari.edu: GNU/Linux
OS: GNU/Linux -- 2.6.32-279.5.2.el6.x86_64 -- #1 SMP Fri Aug 24 01:07:11 UTC 2012
Machine: x86_64 (x86_64)
Language: en_US.utf8 (charmap="UTF-8", collate="UTF-8")
CPU 0: Intel(R) Xeon(R) CPU E5335 @ 2.00GHz (3989.8 bogomips)
Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSCALL/SYSRET, Intel virtualization
...snip...
CPU 7: Intel(R) Xeon(R) CPU E5335 @ 2.00GHz (3990.0 bogomips)
Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSCALL/SYSRET, Intel virtualization
11:13:44 up 4 days, 19:45,  1 user,  load average: 0.10, 0.04, 0.00; runlevel 3

------------------------------------------------------------------------
Benchmark Run: Thu Sep 13 2012 11:13:44 - 11:41:59
8 CPUs in system; running 1 parallel copy of tests

Dhrystone 2 using register variables       18557709.3 lps   (10.0 s, 7 samples)
Double-Precision Whetstone                     2338.4 MWIPS (10.0 s, 7 samples)
Execl Throughput                               2014.7 lps   (29.8 s, 2 samples)
File Copy 1024 bufsize 2000 maxblocks        424752.8 KBps  (30.0 s, 2 samples)
File Copy 256 bufsize 500 maxblocks          124802.4 KBps  (30.0 s, 2 samples)
File Copy 4096 bufsize 8000 maxblocks        793482.0 KBps  (30.0 s, 2 samples)
Pipe Throughput                              803463.3 lps   (10.0 s, 7 samples)
Pipe-based Context Switching                 116910.1 lps   (10.0 s, 7 samples)
Process Creation                               5087.3 lps   (30.0 s, 2 samples)
Shell Scripts (1 concurrent)                   3972.0 lpm   (60.0 s, 2 samples)
Shell Scripts (8 concurrent)                   1991.8 lpm   (60.0 s, 2 samples)
System Call Overhead                         940024.4 lps   (10.0 s, 7 samples)

System Benchmarks Index Values               BASELINE       RESULT    INDEX
Dhrystone 2 using register variables         116700.0   18557709.3   1590.2
Double-Precision Whetstone                       55.0       2338.4    425.2
Execl Throughput                                 43.0       2014.7    468.5
File Copy 1024 bufsize 2000 maxblocks          3960.0     424752.8   1072.6
File Copy 256 bufsize 500 maxblocks            1655.0     124802.4    754.1
File Copy 4096 bufsize 8000 maxblocks          5800.0     793482.0   1368.1
Pipe Throughput                               12440.0     803463.3    645.9
Pipe-based Context Switching                   4000.0     116910.1    292.3
Process Creation                                126.0       5087.3    403.8
Shell Scripts (1 concurrent)                     42.4       3972.0    936.8
Shell Scripts (8 concurrent)                      6.0       1991.8   3319.7
System Call Overhead                          15000.0     940024.4    626.7
========
System Benchmarks Index Score                                         781.7

------------------------------------------------------------------------
Benchmark Run: Thu Sep 13 2012 11:41:59 - 12:10:12
8 CPUs in system; running 8 parallel copies of tests

Dhrystone 2 using register variables      148109499.9 lps   (10.0 s, 7 samples)
Double-Precision Whetstone                    18693.1 MWIPS (10.0 s, 7 samples)
Execl Throughput                              15104.5 lps   (29.8 s, 2 samples)
File Copy 1024 bufsize 2000 maxblocks        314890.5 KBps  (30.0 s, 2 samples)
File Copy 256 bufsize 500 maxblocks           87628.0 KBps  (30.0 s, 2 samples)
File Copy 4096 bufsize 8000 maxblocks        866015.7 KBps  (30.0 s, 2 samples)
Pipe Throughput                             6261602.6 lps   (10.0 s, 7 samples)
Pipe-based Context Switching                1363352.4 lps   (10.0 s, 7 samples)
Process Creation                              43974.3 lps   (30.0 s, 2 samples)
Shell Scripts (1 concurrent)                  20045.0 lpm   (60.0 s, 2 samples)
Shell Scripts (8 concurrent)                   2691.5 lpm   (60.1 s, 2 samples)
System Call Overhead                        2652631.1 lps   (10.0 s, 7 samples)

System Benchmarks Index Values               BASELINE       RESULT    INDEX
Dhrystone 2 using register variables         116700.0  148109499.9  12691.5
Double-Precision Whetstone                       55.0      18693.1   3398.8
Execl Throughput                                 43.0      15104.5   3512.7
File Copy 1024 bufsize 2000 maxblocks          3960.0     314890.5    795.2
File Copy 256 bufsize 500 maxblocks            1655.0      87628.0    529.5
File Copy 4096 bufsize 8000 maxblocks          5800.0     866015.7   1493.1
Pipe Throughput                               12440.0    6261602.6   5033.4
Pipe-based Context Switching                   4000.0    1363352.4   3408.4
Process Creation                                126.0      43974.3   3490.0
Shell Scripts (1 concurrent)                     42.4      20045.0   4727.6
Shell Scripts (8 concurrent)                      6.0       2691.5   4485.8
System Call Overhead                          15000.0    2652631.1   1768.4
========
System Benchmarks Index Score                                        2780.9

[lowen@grymonia UnixBench]$


There are some areas where the Altix shines, especially considering the age difference and the much faster drives loaded in the 690. But that gives an idea of system speed differences.

I'm working on getting the 20 CPU Altix 3700 running the benchmark, but a script is tossing an error, so it hasn't happened yet.

And, for grins and giggles, a dual-processor 2.4GHz 32-bit Xeon (Dell PowerEdge 1600SC):

Code: Select all

Checking distribution of files
./pgms  exists
./src  exists
./testdir  exists
./results  exists
make[1]: Leaving directory `/home/lowen/UnixBench'
sh: 3dinfo: command not found
sh: runlevel: command not found

#    #  #    #  #  #    #          #####   ######  #    #   ####   #    #
#    #  ##   #  #   #  #           #    #  #       ##   #  #    #  #    #
#    #  # #  #  #    ##            #####   #####   # #  #  #       ######
#    #  #  # #  #    ##            #    #  #       #  # #  #       #    #
#    #  #   ##  #   #  #           #    #  #       #   ##  #    #  #    #
####   #    #  #  #    #          #####   ######  #    #   ####   #    #

Version 5.1.3                      Based on the Byte Magazine Unix Benchmark

Multi-CPU version                  Version 5 revisions by Ian Smith,
Sunnyvale, CA, USA
January 13, 2011                   johantheghost at yahoo period com


1 x Dhrystone 2 using register variables  1 2 3 4 5 6 7 8 9 10

1 x Double-Precision Whetstone  1 2 3 4 5 6 7 8 9 10

1 x Execl Throughput  1 2 3

1 x File Copy 1024 bufsize 2000 maxblocks  1 2 3

1 x File Copy 256 bufsize 500 maxblocks  1 2 3

1 x File Copy 4096 bufsize 8000 maxblocks  1 2 3

1 x Pipe Throughput  1 2 3 4 5 6 7 8 9 10

1 x Pipe-based Context Switching  1 2 3 4 5 6 7 8 9 10

1 x Process Creation  1 2 3

1 x System Call Overhead  1 2 3 4 5 6 7 8 9 10

1 x Shell Scripts (1 concurrent)  1 2 3

1 x Shell Scripts (8 concurrent)  1 2 3

2 x Dhrystone 2 using register variables  1 2 3 4 5 6 7 8 9 10

2 x Double-Precision Whetstone  1 2 3 4 5 6 7 8 9 10

2 x Execl Throughput  1 2 3

2 x File Copy 1024 bufsize 2000 maxblocks  1 2 3

2 x File Copy 256 bufsize 500 maxblocks  1 2 3

2 x File Copy 4096 bufsize 8000 maxblocks  1 2 3

2 x Pipe Throughput  1 2 3 4 5 6 7 8 9 10

2 x Pipe-based Context Switching  1 2 3 4 5 6 7 8 9 10

2 x Process Creation  1 2 3

2 x System Call Overhead  1 2 3 4 5 6 7 8 9 10

2 x Shell Scripts (1 concurrent)  1 2 3

2 x Shell Scripts (8 concurrent)  1 2 3

========================================================================
BYTE UNIX Benchmarks (Version 5.1.3)

System: itadmin.pari.edu: GNU/Linux
OS: GNU/Linux -- 2.6.18-308.13.1.el5 -- #1 SMP Tue Aug 21 17:10:06 EDT 2012
Machine: i686 (i386)
Language: en_US.utf8 (charmap="UTF-8", collate="UTF-8")
CPU 0: Intel(R) Xeon(TM) CPU 2.40GHz (4784.4 bogomips)
Hyper-Threading, MMX, Physical Address Ext
CPU 1: Intel(R) Xeon(TM) CPU 2.40GHz (4783.5 bogomips)
Hyper-Threading, MMX, Physical Address Ext
13:18:03 up 16 days, 21:34,  1 user,  load average: 0.17, 0.08, 0.01; runlevel

------------------------------------------------------------------------
Benchmark Run: Thu Sep 13 2012 13:18:03 - 13:46:16
2 CPUs in system; running 1 parallel copy of tests

Dhrystone 2 using register variables        3875027.8 lps   (10.0 s, 7 samples)
Double-Precision Whetstone                      934.5 MWIPS (10.5 s, 7 samples)
Execl Throughput                               1031.1 lps   (29.9 s, 2 samples)
File Copy 1024 bufsize 2000 maxblocks        138817.0 KBps  (30.0 s, 2 samples)
File Copy 256 bufsize 500 maxblocks           39827.0 KBps  (30.0 s, 2 samples)
File Copy 4096 bufsize 8000 maxblocks        367564.5 KBps  (30.1 s, 2 samples)
Pipe Throughput                              243402.9 lps   (10.0 s, 7 samples)
Pipe-based Context Switching                  57275.1 lps   (10.0 s, 7 samples)
Process Creation                               4895.3 lps   (30.0 s, 2 samples)
Shell Scripts (1 concurrent)                   2227.5 lpm   (60.0 s, 2 samples)
Shell Scripts (8 concurrent)                    436.7 lpm   (60.1 s, 2 samples)
System Call Overhead                         323050.9 lps   (10.0 s, 7 samples)

System Benchmarks Index Values               BASELINE       RESULT    INDEX
Dhrystone 2 using register variables         116700.0    3875027.8    332.1
Double-Precision Whetstone                       55.0        934.5    169.9
Execl Throughput                                 43.0       1031.1    239.8
File Copy 1024 bufsize 2000 maxblocks          3960.0     138817.0    350.5
File Copy 256 bufsize 500 maxblocks            1655.0      39827.0    240.6
File Copy 4096 bufsize 8000 maxblocks          5800.0     367564.5    633.7
Pipe Throughput                               12440.0     243402.9    195.7
Pipe-based Context Switching                   4000.0      57275.1    143.2
Process Creation                                126.0       4895.3    388.5
Shell Scripts (1 concurrent)                     42.4       2227.5    525.4
Shell Scripts (8 concurrent)                      6.0        436.7    727.9
System Call Overhead                          15000.0     323050.9    215.4
========
System Benchmarks Index Score                                         305.0

------------------------------------------------------------------------
Benchmark Run: Thu Sep 13 2012 13:46:16 - 14:14:30
2 CPUs in system; running 2 parallel copies of tests

Dhrystone 2 using register variables        7709694.9 lps   (10.0 s, 7 samples)
Double-Precision Whetstone                     1853.5 MWIPS (10.5 s, 7 samples)
Execl Throughput                               2010.5 lps   (29.9 s, 2 samples)
File Copy 1024 bufsize 2000 maxblocks        102445.1 KBps  (30.0 s, 2 samples)
File Copy 256 bufsize 500 maxblocks           26696.5 KBps  (30.0 s, 2 samples)
File Copy 4096 bufsize 8000 maxblocks        306546.8 KBps  (30.1 s, 2 samples)
Pipe Throughput                              484787.1 lps   (10.0 s, 7 samples)
Pipe-based Context Switching                 149694.7 lps   (10.0 s, 7 samples)
Process Creation                               9121.6 lps   (30.0 s, 2 samples)
Shell Scripts (1 concurrent)                   3166.0 lpm   (60.0 s, 2 samples)
Shell Scripts (8 concurrent)                    453.2 lpm   (60.2 s, 2 samples)
System Call Overhead                         618248.9 lps   (10.0 s, 7 samples)

System Benchmarks Index Values               BASELINE       RESULT    INDEX
Dhrystone 2 using register variables         116700.0    7709694.9    660.6
Double-Precision Whetstone                       55.0       1853.5    337.0
Execl Throughput                                 43.0       2010.5    467.6
File Copy 1024 bufsize 2000 maxblocks          3960.0     102445.1    258.7
File Copy 256 bufsize 500 maxblocks            1655.0      26696.5    161.3
File Copy 4096 bufsize 8000 maxblocks          5800.0     306546.8    528.5
Pipe Throughput                               12440.0     484787.1    389.7
Pipe-based Context Switching                   4000.0     149694.7    374.2
Process Creation                                126.0       9121.6    723.9
Shell Scripts (1 concurrent)                     42.4       3166.0    746.7
Shell Scripts (8 concurrent)                      6.0        453.2    755.3
System Call Overhead                          15000.0     618248.9    412.2
========
System Benchmarks Index Score                                         442.5




The Altix fares quite well against the NetBurst core Xeon.
The Altix systems are both at least as responsive as the 8-core Dell 690, even for remote ssh-tunneled X programs. The 690 is going to be used for virtualization, but it was available for quick benchmarking before getting KVM guests going....

I'm overall quite pleased at the responsiveness, even under heavy load, of both Altix boxes. The old NetBurst box is definitely less responsive under load.

And the Altix boxen are destined to do MPI-friendly rendering tasks, so they will do quite well indeed.

[edit] Also, I have a Sun Enterprise 6500 here that I might be able to run the benchmarks on at some point, but it will be a while. The E6500 has 18 CPUs and 22GB of RAM. It will be an interesting test.
Glad it helped you.... Sorry it was buried...... :-(

EDIT:

That reminded me that I hadn't powered up the beast in a while.... it still works, and boots Scientific Linux CERN 5.4 just fine. This is my build box for rebuilding CentOS 5 from source; I'm up to the CentOS 5.9 GA release at the moment, and am running the smaller 4-CPU Altix 3700 on my internal CentOS 5.9 rebuild:
Code:
[root@roan ~]# uname -srvm
Linux 2.6.18-348.el5 #1 SMP Sun Jan 20 00:26:37 EST 2013 ia64
[root@roan ~]# uptime
12:33:35 up 165 days, 23:58,  1 user,  load average: 0.01, 0.01, 0.00
[root@roan ~]# cat /etc/redhat-release
CentOS release 5.9 (Final)
[root@roan ~]#


Need to build the updates..... :-)

I started with SLC 5.4 IA64, the last one put out by CERN, and built stepwise through CentOS 5.5, 5.6, 5.7, 5.8, and 5.9. If there's enough demand I can probably make the unsigned package trees available read-only; I will need to check with the CentOS devs first to make sure I don't need to rebrand before distributing. You can read what I did in the archives of the CentOS-devel mailing list; just Google for 'CentOS IA64' and you'll find it..... It takes a long time, even on a big box, to do the full rebuild.

It does need PROM 5.04.
smj wrote:
It seems like we're building up a decent amount of Altix information, but there are still some frequent questions that might benefit from being centralized. One of those is the question of operating system support for Altix systems. I've incorporated what I could find into the Altix 350 wiki page , but it seemed like it would be good to create a thread for reporting and Q&A.


Excellent idea, and I like the page on the wiki.

Quote:
...
RedHat Enterprise Linux (4.x - 6.4)
...


RHEL on IA64 is supported only through RHEL 5, which will be supported by Red Hat through 2017. RHEL 6 does not support IA64, although some rebuild efforts are underway to get CentOS 6 onto IA64. RHEL 5 is currently at 5.9 and is supported by Red Hat on IA64 in parallel with i686 and x86_64.

Fedora 9 is the last Fedora on IA64, but I can't vouch for how well it works; a mirror of the install ISO's is at http://mirrors.rit.edu/fedora-secondary/archive/releases/9/Fedora/ia64/iso/ .

Quote:
...
Forum members have reported successful installs of the following additional distros:
...
Scientific Linux CERN 5.x
...


SL CERN stopped IA64 support at 5.4, and getting it to install is a bit of a chore, since the 5.4 install DVD won't boot directly. I'll have to refresh my memory on how I did that; it's been a while.

I'll try to dig that back up, and I'll see if I can't roll install media for CentOS 5 or my own rebranded rebuild of CentOS 5. I do have an RHEL entitlement, currently in use on a different box, but it allowed me to evaluate RHEL 5 on Altix, and suffice it to say it just works, as of RHEL 5.8.

What I need to do next is rebuild, on CentOS 5.9, the redistributable ProPack source packages that last worked for RHEL5 (Foundation 1 SP6 and ProPack 6 SP6; Foundation 2 and ProPack 7 are SuSE-only). This includes the environment modules package. If you have the ProPack and Foundation ISO's, the source packages are found in the SRPMS directory on each ISO.
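
As a rough sketch of what that rebuild step looks like (the ISO file name and package name below are just placeholders, not the actual SGI names):

Code: Select all

# loop-mount a Foundation or ProPack ISO and rebuild one of the GPL source packages from it
mount -o ro,loop propack-6sp6.iso /mnt
rpmbuild --rebuild /mnt/SRPMS/some-gpl-package.src.rpm
umount /mnt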
UPDATE:

According to the PACKAGE_LICENSES.txt files on both the Foundation and ProPack discs, some of the more useful packages are indeed GPL, including the environment modules one, so the source for those pieces should be releasable. Several useful packages aren't GPL, including numatools and pcp-sgi, which carry SGI proprietary licenses.

Do note that what I've called 'environment modules' is not what it sounds like; the actual package you want for performance monitoring would be 'pcp-open', which is GPL/LGPL licensed.

I'll attempt to build this one in a few days, and see what info it gives.
I just went through a release update on the maintenance stream from 6.5.19 to 6.5.22 on my been-a-long-time-since-last-boot-because-I-was-busy-playing-with-the-Altix-350-beast Indigo2 R10K. Inst completed successfully, and I rebooted appropriately. However, uname -R returns:

Code: Select all

r10k 13# uname -R
6.5 6.5.19m
r10k 14#


Going back through inst, it seems like the overlays installed properly (they show as S instead of U in the list inside inst, and they showed as U the first run through inst). The install was from 6.5.22 tarballs downloaded from supportfolio while 6.5.22 was still available.......

My install media is 6.5.19; should uname -R show me 6.5.22, or am I missing something?
5086? Yes.
Ok, thanks to user hamei I got pointed in the right direction.

For whatever reason, /unix was left in place and was the 6.5.19 version, with the 6.5.22 version as /unix.install. Doing a cp /unix /unix.old and then a cp /unix.install /unix seems to have done the trick, and the system now reports 6.5.22m to a uname -R.
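
For anyone hitting the same thing, the fix boils down to the two copies described above, followed by a reboot so the running kernel matches what's now in /unix:

Code: Select all

cp /unix /unix.old
cp /unix.install /unix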

It might have had something to do with the Phobos E100 drivers that I had started loading years ago (yes, I do have the E100 in the machine, but it wasn't being recognized as fe0 until I pulled /unix.install over correctly).
Ok, thanks. I am a bit speechless as to how long ago I had originally started the install...... I bought the 6.5.19 in February of 2005, and the I2 a week or so later ($111.48 plus shipping from eBay seller sunnking....), and then installed IRIX in March 2005. Last boot was, yep, you guessed it, 2005. Yow. So I picked up the install a bit less than eleven years after I started it and pulled the IRIX 6.5.22 overlays down from supportfolio.....

And the beast just started up like it was supposed to, at least with a serial console (my 13W3 to VGA adapter and my KVM/monitor don't like each other, but I have an SGI monitor I can hook up.....). Hinv to the appropriate place.....

Thanks all!
Whee. Eleven years after starting to work on this machine..... finally:

Code: Select all

# hinv -mv
FPU: MIPS R10010 Floating Point Chip Revision: 0.0
CPU: MIPS R10000 Processor Chip Revision: 2.5
1 195 MHZ IP28 Processor
Main memory size: 512 Mbytes
Secondary unified instruction/data cache size: 1 Mbyte
Instruction cache size: 32 Kbytes
Data cache size: 32 Kbytes
Integral SCSI controller 0: Version WD33C93B, revision D
Tape drive: unit 2 on SCSI controller 0: DAT
Disk drive: unit 3 on SCSI controller 0 (unit 3)
CDROM: unit 4 on SCSI controller 0
Integral SCSI controller 1: Version WD33C93B, revision D
On-board serial ports: 2
On-board bi-directional parallel port
Graphics board: Solid Impact
Integral Ethernet: ec0, version 1
Iris Audio Processor: version A2 revision 1.1.0
EISA bus: adapter 0
#

Code: Select all

# /usr/gfx/gfxinfo -vv
Graphics board 0 is "IMPACTPC" graphics.
Managed (":0.0") 1280x1024
Product ID 0x1, 1 GE, 1 RE, 0 TRAMs
MGRAS revision 1, RA revision 0
HQ rev A, GE11 rev B, RE4 rev A, PP1 rev A,
VC3 rev A, CMAP rev EMC rev D
unknown, assuming 19" monitor (id 0xf)

(Could not contact X server; thus, no XSGIvc information available)
#

Code: Select all

# scsicontrol -i /dev/scsi/*
/dev/scsi/sc0d2l0:  Tape          ARCHIVE Python 01931-XXX5.63
ANSI vers 2, ISO ver: 0, ECMA ver: 0; supports:  synch linkedcmds
Device is  not ready

/dev/scsi/sc0d3l0:  Disk          IBM     DNES-309170     SAH0
ANSI vers 3, ISO ver: 0, ECMA ver: 0; supports:  synch linkedcmds cmdqueing
Device is  ready
/dev/scsi/sc0d4l0:  CD-ROM        PLEXTOR CD-ROM PX-12TS  1.02
ANSI vers 2, ISO ver: 0, ECMA ver: 0; supports:  synch linkedcmds
Device is  not ready

lorc-r10k 9#

Code: Select all

# uname -R
6.5 6.5.22m
#


Machine was purchased to do Audio DAT downconversion....... just never got around to it until now. Thus, the Archive Python DAT drive.....
:-)

Can you say 'PhantoM?'
Thanks to everyone who has mirrored things.
robespierre wrote: I think it's an aftermarket CDROM drive, they shipped with Toshiba XM-3501B units.

It is; the Plextor 12X drives are settable to 512 byte sectors, and I happened to have fifty or so at the time.... I'm not sure how many I have right now.

It did come from the seller with a Toshiba drive, but it was broken, and I had the Plextor drives......

The Archive CTD-8000 DDS2 were stock.
Quote:
Let the forum know if you have success with audio DAT transfers. I found that the DATplayer software really liked to hang.


Thanks for the notice. I have the DATgoodies tarball, but haven't built them yet (I did grab the tardist of DATgoodies from somewhere else, but I've always been one to build things myself if possible). And while there is software for Linux and other u*ix variants out there, it is rumored to be less stable than the SGI stuff was, so for now it will be DATman. I also have an O2 that I'm getting up to 6.5.22 as well, but it will have to be an external.

I will definitely follow up on the DAT transfers once I get some more round toits to get things set up properly.....
Ok, I thought I'd share this with everyone, since it made things much faster for me.

First, note that many modern Linux distributions no longer build or bundle the efs filesystem module. I am running CentOS 7 on my main box, and I followed the instructions at https://wiki.centos.org/HowTos/I_need_the_Kernel_Source and then https://wiki.centos.org/HowTos/BuildingKernelModules to get a working efs.ko, which I put into place the normal way. Setting EXTRAVERSION in the kernel Makefile was required for my build; YMMV. I am an old hack at building RPM's, but I haven't built a kmod-efs as yet.
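
For reference, 'the normal way' here just means copying the module into the installed kernel's module tree; a minimal sketch, assuming the module was built in the kernel source tree under fs/efs/:

Code: Select all

# copy the freshly built module to where the installed kernel looks for out-of-tree modules
install -D -m 644 fs/efs/efs.ko /lib/modules/$(uname -r)/extra/efs.ko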

Ok, so once a properly versioned efs.ko is in the correct place for the installed kernel to find it, the following commands, run as root, will load it:

Code: Select all

depmod -a
modprobe -a efs


Now, I had previously imaged my IRIX distribution disks, and had all of them in one directory, named with a '.image' extension. I then created a mount point, mnt:

Code: Select all

# mkdir mnt

Here's the one-liner that, in a little over two minutes, exploded all the files out of the 22 disk images I have:

Code: Select all

# for disc in *.image; do mount -o ro,loop -t efs $disc mnt; mkdir ${disc}-contents; rsync -avH --progress mnt/ ${disc}-contents; umount mnt; done

(yeah, I know --progress is just eye candy..... but this is an IRIX forum, no? :lol: ) and here are the resulting directories with their sizes:

Code: Select all

[root@dhcp-pool151 IRIX]# du -sh *.image-contents
333M   irix-6512-general-demos-1of2.image-contents
568M   irix-6512-general-demos-2of2.image-contents
547M   irix-6519-1.image-contents
559M   irix-6519-2.image-contents
378M   irix-6519-3.image-contents
291M   irix-6519-4.image-contents
621M   irix-65-applications.image-contents
285M   irix-65-dev-foundation.image-contents
254M   irix-65-dev-libraries.image-contents
101M   irix-65-devtools-maint-rel-7213m.image-contents
1.5M   irix-65-display-ps-dev-opt.image-contents
528M   irix-65-foundation-1.image-contents
237M   irix-65-foundation-2.image-contents
622M   irix-65-freeware-1of4.image-contents
621M   irix-65-freeware-2of4.image-contents
595M   irix-65-freeware-3of4.image-contents
381M   irix-65-freeware-4of4.image-contents
175M   irix-65-softwindows95-50.image-contents
17M   irix-6-mipspro-c-721.image-contents
14M   irix-6-mipspro-f77-721.image-contents
58M   irix-6-onc3-nfs.image-contents
328K   irix-6-snmp-access-to-hp-mib.image-contents
[root@dhcp-pool151 IRIX]#


I hope that helps someone.

Now, to set up a DINA VM on my CentOS 7 box using KVM......
Additional information:

I did a fresh install of 6.5.19 on one of my O2's, and upgraded to 6.5.22 using the same procedure. The same thing happened as with the Indigo2. But this time I was expecting it, and knew what to look for.

On the O2, I used the GUI Software Manager, and let it do an automatic install of the 6.5.22 overlays.
robespierre wrote: Archive.Org seems to skate under the Fair Use defense for everything that they do. Mirroring web pages without prior permission is not sanctioned by the copyright laws either.
They have a project to upload and share every CD-ROM ever published.


It would be nice if they would get 6.5.30 up.... I'm missing a couple, and one is scratched (my youngest son used to think CD's were playthings to throw, and while I did let him sail some 8 inch floppies that had nothing useful on them, he decided that Daddy's box with those 8 inch floppies was just full of toys..... ). I have a 6.5.24 overlay set on the way from an eBay seller, but was hoping to get up to 6.5.30; and the eBay sellers who want $425 or $700 for a set..... well, I can't swing that. Oh well, it is what it is.
While over the years there have been several threads here about the best version of IRIX for different platforms, in the reality of the new post-IRIX supportfolio and the complete non-availability of patches and even the 6.5.22 maintenance release from SGI, what would the brain trust here consider the 'best' version of IRIX for these machines to be?

If I were a new SGI hobbyist and I had just scored an O2 on eBay or at the flea market or on craigslist or wherever, and there was either no hard disk or a blank hard disk, what would my best IRIX option be, assuming several were available? Is it 6.5.22, since that is easily found, and thanks to the Internet Archive the otherwise difficult-to-find foundation CD's are available for new installs? Or should someone try to find something later, perhaps even 6.5.30 (even though the CD sets are expensive through eBay and probably not even available through SGI any more)? Are there advantages to 6.5.22 over 6.5.30? (I've heard a few already, but I'd like to see the knowledge pooled given the current IRIX situation, and links to SGI's site are likely to not be valid much longer.......the http://www.sgi.com/products/software/irix/releases/ link, for instance, is no longer useful.)

Having said that, if one wanted to get into development, has anyone set up a development environment using any version of GCC, without the IRIX Dev Foundation and the MIPSPro compilers? Specifically, in my case, for writing automated tools that pull audio from audio DATs and stuff it into properly chunked and named WAV files? I have the compilers that came with 6.5.19 AWE, but something more modern would be nice, and the GCC suite is well-tested, if not as optimized.

And I'm well aware that the 'best' OS option for the hardware today, if I want to do general things and not 'IRIX-y' things like pulling audio from DAT, is likely the latest OpenBSD, but that's not IRIX..... :-)
foetz wrote: [ setting up a dev suite without the IDF ] won't work. even for gcc or whatever other compiler you need the basic dev stuff for the os which would be the dev cds except for the actual mipspro.

First, I appreciate the response, and all that you have done for the community here over the years.

My starting point is 6.5.19 AWE, so I do have the IDF and associated items (MIPSPro C and F77 7.2 along with the 7.2.1.3m maintenance release). But I see here that building Nekoware packages needs at least 7.4? (Not that I'm planning on anything like that yet; but I did build PostgreSQL RPMs for Red Hat-ish distributions for five years, long ago.)
Quote:
the GCC suite is well-tested, if not as optimized.

foetz wrote: it's neither as recent tests from some guys here including myself have shown. if you have a mipspro stick to that.

Is that just GCC for IRIX, or GCC in general? I'll look for the threads....
Quote:
if I want to do general things and not 'IRIX-y' things ....

foetz wrote: no point in running something on an sgi that runs way better on x86. that aside with anything but irix you can run nothing that was made for irix which is the very point of an sgi ... you get the point, without irix an sgi is just an old box.


Exactly.
foetz wrote: as for your actual question, with an o2 you can only choose between 6.3 and 6.5.x so unless there's something special you have that requires 6.3 the answer is quite obvious


I was meaning something in 6.5.x, and I should have mentioned that in my post..... my bad. It seems the consensus of previous threads was that 6.5.21 was the one to have, with .22 a close second.

Thanks for the information!
Nice. I've used MESS/MAME for a while to do TRS-80 Model II development.... yeah, really old. However, it is very processor-intensive to emulate all the hardware in a TRS-80 Model II; it would be interesting to see if my O2 would emulate a 4MHz Z80 system with all the support chips like a Model II.....

I see a 0.149 MAME on IRIX contributed by axatax, but I fear getting the latest 0.171 going might be a real feat. I build it on CentOS 7 with the help of the Software Collections devtoolset-3 compiler chain. It would be a real treat to do the same on IRIX, but that's getting pretty complicated, especially given the internal architecture of MAME.
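
A minimal sketch of that CentOS 7 build setup, assuming the devtoolset-3 software collection (the -toolchain metapackage name is an assumption; check what the SCL repo actually provides):

Code: Select all

yum install centos-release-scl
yum install devtoolset-3-toolchain
scl enable devtoolset-3 'make -j8'    # run from inside the MAME source tree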
Linux GUI support is pretty broken on SGI hardware, from what I remember of my research a while back. Linux is basically completely broken on the R10K O2 and not especially well supported at all, at least the last time I looked. OpenBSD is a better non-IRIX choice on that hardware (and I am primarily a CentOS Linux user, with a grand total of one BSD machine around.....), although the O2 is better supported on the GUI side.

But it has been quite a while since I looked; YMMV and all that, plus people do change things and ports come and go. The core of the Linux MIPS effort has been aimed at non-SGI MIPS hardware, especially routers. But there is some good info, if you're into Linux on MIPS or want to be into Linux on an I2, at https://www.linux-mips.org/wiki/IP28

Part of the fun of retrocomputing (and SGI computing is at the point of retro now) is getting the hardware to do things it was never originally designed to do, or, in my case, getting to play with hardware on the cheap that used to cost kilobucks (kind of like getting to play with a VAXstation 4000/96; at one time that box was the bomb). So if you're able to get a usable GUI (or CLI, if that's your thing) Linux or BSD working like you want, kudos.

dexter1 wrote: ...
If you get an Indigo2 like Foetz suggested, get a fairly decent system with at least an R10K and Solid impact GFX. That doesn't have texture support, but should be good for programming and modelling. But if you want to run Debian and Gentoo i am not sure if there is graphic driver support for Impact systems, ...


The Octane port supports Impact; the I2 port may or may not at this point (the online docs are pretty old in this area). This is for the console framebuffer; X support is (or at least was) either experimental or nonexistent.

I had looked into running Linux on some Teal Indigo2 Extremes here, but then I found the stash of IRIX 5.3 CD's that came with the machines, and that might be a better choice for them; there are better Linux machines out there, and IRIX is a far better choice on R4K hardware, IMO. My R10K Indigo2 with SolidIMPACT is running IRIX 6.5.22m, and performance is pretty good (I have 512MB of RAM in it, and a relatively fast hard drive).
electrithm wrote: Wait so you're saying I wouldn't be able to run Gentoo with the MATE gui on the Indigo?


To the best of my knowledge, and I always reserve the right to be wrong, no, you would not be able to do this. Linux is not NetBSD, which really does run on everything (command line, at least). Gentoo's MIPS hardware requirements page is at https://wiki.gentoo.org/wiki/MIPS/Hardware_Requirements ; Indigo2 R10K IMPACT is listed as very experimental (the italics are theirs.....). O2 R10K is entirely unsupported. Graphics support is minimal at best, inoperative at worst. A serial console is recommended for anything other than Newport (XL) framebuffers.

electrithm wrote: That sucks because I was gonna use it for playing around with SDL. I'm not gonna do very heavy modelling on IRIX, just some small little animations and making 3d models so I probably wouldn't need a very heavy duty SGI.


Use IRIX if you're doing SGI. It was built for these systems, and it runs well on them. SGI didn't open up any of the documentation on how to program the various 3D graphics pipes (there is documentation for the Indy, and that's pretty much it), so without hardware acceleration, yes, things aren't going to be fast. With IRIX, things are usable.
FWIW, I decided to stick with 6.5.22, the overlays for which I have from my Supportfolio download of long ago, to go with the 6.5.19 CD set (the AWE set) I also bought long ago.

It would be interesting to see if someone was successful in doing a scratch install with only what is up on archive.org. I may try that myself on my I2/R10K, once I find another drive that will work in it.