IRIX and Jumbo Frames

Enabling jumbo frames on IRIX

Here are instructions for enabling jumbo frames on IRIX. Before you do this, make sure that the rest of the networking equipment you're using (e.g. switches and other computers) actually supports jumbo frames. Every machine connected to the same subnet should use the same MTU. Enabling jumbo frames lowers CPU usage (due to fewer interrupts) and increases bandwidth; see the performance measurements below for more info.

How the MTU is set depends on the driver. You can determine which driver is in use from the output of hinv: gigabit interfaces named eg0, eg1, ... use the eg driver, while tg0, tg1, ... use the tg driver.
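
A quick way to check is to filter the hinv output for the Ethernet entries (the exact wording of hinv's output varies from system to system, so treat this only as an illustration):

# hinv | grep -i ethernet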

If using the eg driver:

  1. Open /var/sysgen/master.d/if_eg in an editor (e.g. nedit).
  2. Change #define EG_STDFRAME (on line 42) to #define EG_JUMBOFRAME.
  3. Save.
  4. Run autoconfig as root.
  5. Reboot.

NB: this applies to 6.5.30; older kernels may not have that #define, in which case you need to manually change the values in the eg_mtu array from 1500 to 9000, as well as eg_recv_coal_bd from 130 to 60 and eg_recv_coal_ticks from 900 to 350 (a sketch of the relevant lines is shown below).
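
As a rough sketch of what the edit amounts to — the surrounding contents of /var/sysgen/master.d/if_eg differ between IRIX releases, and the array length and declarations shown here are assumptions, not the actual file contents:

/* 6.5.30: switch the frame-size macro */
#define EG_JUMBOFRAME                /* was: #define EG_STDFRAME */

/* Older kernels without that #define: adjust the tunables directly
   (names from the note above; declarations illustrative only) */
int eg_mtu[] = { 9000, 9000, 9000, 9000 };   /* was 1500 for each entry */
int eg_recv_coal_bd    = 60;                 /* was 130 */
int eg_recv_coal_ticks = 350;                /* was 900 */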

If using the tg driver:

  1. Open /etc/config/ifconfig-1.options in an editor (the digit in the file name corresponds to the interface number, so it may differ depending on which of your interfaces is the gigabit one).
  2. Enter mtu 9000 (or add it to the end of the line if the file isn't empty); see the example after this list.
  3. Save.
  4. Repeat for any other gigabit interfaces you have.
  5. Reboot.
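
For reference, after step 2 the options file for the gigabit interface contains nothing more than the MTU setting (the "1" in the file name is only an example, as noted above):

# cat /etc/config/ifconfig-1.options
mtu 9000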

After the reboot, netstat -i should report 9000 under the MTU column for the gigabit interface(s).
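
The output looks roughly like the following; the interface name, network, and address are placeholders, and the packet counters are elided since only the Mtu column matters here:

# netstat -i
Name  Mtu   Network      Address     Ipkts  Ierrs  Opkts  Oerrs  Coll
tg0   9000  192.168.1    fuel        ...    ...    ...    ...    ...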

It would also be a good idea to increase the TCP send and receive buffers to 256 KB:

# systune tcp_sendspace 262144
# systune tcp_recvspace 262144
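
If I remember the systune behaviour correctly, running it with just a parameter name prints the current value, which is a convenient way to verify the change:

# systune tcp_sendspace
# systune tcp_recvspace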

Performance comparisons with and without jumbo frames

Equipment used: a MacBook, an SGI Fuel, and an SGI Origin (the three machines that appear in the tables below).

The TCP buffer sizes on IRIX (tcp_sendspace, tcp_recvspace) were set to 256 KB.

Performance tests were done using iperf 2.0.4. I made a script which runs 10 consecutive tests for each pair of machines and reports the results. The mean and median are rounded to the nearest integer, the standard deviation to the nearest tenth.
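
The original script isn't reproduced here, but a minimal sketch of the idea looks like this (server name, run count, and test duration are placeholders; the mean, median, and standard deviation were computed from the collected numbers afterwards):

#!/bin/sh
# Run 10 consecutive iperf tests against a server and print each result in Mbit/s.
# Usage: ./bench.sh <server>  -- the server side runs "iperf -s".
SERVER=$1
i=1
while [ $i -le 10 ]; do
    iperf -c "$SERVER" -f m -t 10 | awk '/Mbits\/sec/ { print $(NF-1) }'
    i=`expr $i + 1`
done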

Without jumbo frames (MTU=1500)

Unidirectional bandwidth

Server    Client    Bmean [Mbit/s]  Bmedian [Mbit/s]  Bstddev [Mbit/s]
MacBook   Fuel      934             935               2.6
Fuel      MacBook   820             822               20.4
Fuel      Origin    567             576               27.9
Origin    Fuel      626             653               62.2
Origin    MacBook   602             604               4.5
MacBook   Origin    497             498               5.6

Bidirectional bandwidth

All values in Mbit/s.

1st       2nd       1st->2nd Bmean  Bmedian  Bstddev   2nd->1st Bmean  Bmedian  Bstddev
MacBook   Fuel      783             783      11.5      477             486      18.5
Fuel      MacBook   487             489      8.2       758             756      5.8
Fuel      Origin    328             328      0.5       368             368      0.8
Origin    Fuel      369             369      0.5       327             327      0.5
Origin    MacBook   243             237      13.3      340             345      12.8
MacBook   Origin    347             347      0.5       235             235      0.3

Performance without jumbo frames is largely disappointing (although the Fuel did quite well). CPU utilization on the Fuel was about 50% in unidirectional tests and nearly 100% in bidirectional tests; jumbo frames will lower this significantly, so greater throughput can be expected.

With jumbo frames (MTU=9000)

Unidirectional bandwidth

Server    Client    Bmean [Mbit/s]  Bmedian [Mbit/s]  Bstddev [Mbit/s]
Origin    Fuel      963             963               2.3
Fuel      Origin    952             951               12.6

Bidirectional bandwidth

All values in Mbit/s.

1st       2nd       1st->2nd Bmean  Bmedian  Bstddev   2nd->1st Bmean  Bmedian  Bstddev
Origin    Fuel      682             681      3.7       618             619      3.6
Fuel      Origin    619             620      3.9       680             679      5.8

With jumbo frames enabled, the unidirectional bandwidth reaches at least 95% of the theoretical maximum, and the bidirectional bandwidth nearly doubles (to at least 60% of the theoretical maximum) compared to the previous results with standard frames.

The testing conditions were the same as before (i.e. same hardware, no other network traffic, no other CPU-intensive applications running at the time of testing); only the MTU changed.

From these results it is apparent that having a separate gigabit network with the MTU set to 9000 is well worth it :)