Jumbo Frames for storage <->

Started by Dieselboy, April 22, 2016, 07:31:19 AM

Dieselboy

Quote from: mlan on April 22, 2016, 06:38:34 PM
I recently deployed a 10GbE iSCSI fabric on Nexus 3500's and enabled jumbo frames to make all the compute/storage teams and vendors happy.  Here is some reading material, along with some benchmarks in the two older articles:

https://www.reddit.com/r/networking/comments/3nvvrw/what_advantage_does_enabling_jumbo_frames_provide/
https://vstorage.wordpress.com/2013/12/09/jumbo-frames-performance-with-iscsi/
http://longwhiteclouds.com/2013/09/10/the-great-jumbo-frames-debate/



Thanks!

wintermute000

#16
my TLDR version


  • DC yes but do not expect magic bullet, maybe ~10% improvement
  • Campus LAN no: do you really want to mess with Windows MTU settings, argue with MS admins, and get dragged into SOE team processes?
  • make sure none of your firewalls are blocking PMTUD, if so, call deanwebb
  • make sure none of your carriers are blocking PMTUD, if so, also call deanwebb and blame the firewall again
  • get good at wireshark to prove PMTUD issues
  • get really good at interpreting what different vendors mean by MTU at L2 vs L3, with/without encapsulation/checksum, which exact show command reports what, and exactly what each vendor's ping size means (e.g. Cisco ping size = total IP packet size, Junos ping size = PAYLOAD size). Kill me now.
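That last bullet bites everyone eventually. A minimal sketch of the arithmetic, assuming plain IPv4 with no options (20-byte IP header) and a standard 8-byte ICMP echo header; the function names are just illustrative:

```python
# What "size" to pass to ping when verifying a 9000-byte MTU path,
# given that vendors count the size differently.

IP_HDR = 20    # IPv4 header, no options
ICMP_HDR = 8   # ICMP echo header

def linux_junos_ping_size(mtu: int) -> int:
    # Linux and Junos ping -s take the ICMP *payload* size
    return mtu - IP_HDR - ICMP_HDR

def cisco_ios_ping_size(mtu: int) -> int:
    # Cisco IOS ping "size" is the *total IP packet* size
    return mtu

print(linux_junos_ping_size(9000))  # 8972
print(cisco_ios_ping_size(9000))    # 9000
print(linux_junos_ping_size(1500))  # 1472, the classic non-jumbo number
```

So to prove a 9000-byte path from a Linux box you ping with `-s 8972` and the don't-fragment flag set; pass 9000 there and it will "fail" even on a perfectly good jumbo path.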

Dieselboy

Cheers mate.

For what it's worth, apparently jumbo frames are strictly a layer 2 term; when they get to layer 3 they're called something else: a jumbogram.
https://supportforums.cisco.com/discussion/12293501/what-jumbogram

Do carriers really block PMTUD? Why would a carrier block anything at all between a source internet host and a destination internet host? If the packet were destined for the carrier's own equipment, then I could understand it.

mlan

@wintermute - Good summary.  I only use jumbo frames in an un-routed isolated L2 environment, where the only approved application is iSCSI.  Anything else would be asking for trouble, and I have already had many headaches just supporting iSCSI.  I much prefer supporting FCoE for storage.

I wouldn't be surprised to hear of issues with PMTUD.  To make that work properly, you need an IP header flag set correctly, and also need to receive all the ICMP responses correctly.  Plenty of room for error in that equation in a complex environment.
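That loop is easy to picture with a toy simulation (hypothetical hop MTUs, not a real network stack): the sender marks packets DF, and any hop whose MTU is smaller than the packet drops it and reports its own MTU back in an ICMP "fragmentation needed" (type 3, code 4) message. Filter those ICMP messages anywhere along the path and the loop below never converges; the traffic just black-holes.

```python
def discover_path_mtu(hop_mtus, initial=9000):
    """Simulate a sender doing PMTUD with the DF bit set."""
    size = initial
    while True:
        # First hop that can't forward a DF packet of this size drops it
        bottleneck = next((mtu for mtu in hop_mtus if mtu < size), None)
        if bottleneck is None:
            return size      # packet fit end-to-end
        size = bottleneck    # ICMP frag-needed carried the next-hop MTU

path = [9000, 9000, 1500, 9000]   # one legacy 1500-byte segment mid-path
print(discover_path_mtu(path))    # 1500
```

The real protocol is messier (the ICMP message may carry no MTU value on old gear, forcing the sender to probe downward), but the dependency is the same: no ICMP back, no discovery.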

Dieselboy

Implemented jumbo frames today. I think it's my first time ever implementing jumbos; not sure, but I can't remember ever doing it.

On our Red Hat virtualisation environment we were finding that some VMs were taking ages to live-migrate to another host; in fact they were exceeding the default timeout of 6 minutes and so were failing. So we created a new VLAN, enabled jumbos on it and set this as the live migration VLAN. VMs take seconds to migrate now. Although this was not the only change: we had found that the migration was not fully using the 10Gb network bandwidth, because Red Hat throttles live migration to 32MB/s by default! So we turned all that rubbish off at the same time.
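For anyone repeating this, the host side is just a few lines. A sketch assuming a Linux/RHEL hypervisor; the interface name (em1), VLAN ID (100) and addresses are made up for illustration, and the switchports carrying the VLAN must of course allow jumbos too:

```shell
ip link set em1 mtu 9000                        # parent NIC must carry jumbos
ip link add link em1 name em1.100 type vlan id 100
ip link set em1.100 mtu 9000                    # VLAN MTU can't exceed the parent's
ip link set em1.100 up
ip addr add 10.0.100.11/24 dev em1.100          # isolated, un-routed migration subnet

# Verify end-to-end with DF set: 8972 = 9000 - 20 (IP hdr) - 8 (ICMP hdr)
ping -M do -s 8972 10.0.100.12
```

If that ping fails while `-s 1472` works, some device in the path is still at 1500 and the jumbo config isn't actually end-to-end.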