QoS

Started by icecream-guy, June 06, 2016, 09:14:19 AM


icecream-guy

How would one configure jumbo MTU support for individual server ports connected to a Nexus 5k?

The server admins want me to set the MTU to 9000 on a few of their servers.
It looks like a global config, either on or off, not supported on a per-interface basis.
Having said that, there are some things that can be set in a service policy,
but setting the MTU is a network-qos function, which is not supported in an interface configuration.
:professorcat:

My Moral Fibers have been cut.

srg

I thought you could do it per port with a policy-map, but it looks like it's in the system qos context and box-wide.
as if the mind had darkened forever.

NetworkGroover

You're referring to setting IP MTU specifically?  That's a QoS function?? What a PITA.
Engineer by day, DJ by night, family first always

EOS


icecream-guy

Yeah, that's what I was figuring... don't know how it will affect everything else flowing through the switch.
:professorcat:

My Moral Fibers have been cut.

NetworkGroover

Quote from: ristau5741 on June 06, 2016, 02:37:46 PM
Yeah, that's what I was figuring... don't know how it will affect everything else flowing through the switch.

In what regard?
Engineer by day, DJ by night, family first always

srg

Quote from: ristau5741 on June 06, 2016, 02:37:46 PM
Yeah, that's what I was figuring... don't know how it will affect everything else flowing through the switch.
Probably won't affect it at all. It will only allow larger frames than what you're passing today; I can't see what that would break. It's not the L3 MTU.
as if the mind had darkened forever.

Dieselboy

#7
Ristau, I have had this configured on our N3Ks since their inception back in 2013:


policy-map type network-qos JUMBO-FRAMES
  class type network-qos class-default
    mtu 9000
system qos
  service-policy type network-qos JUMBO-FRAMES


That was taken from a Cisco doc somewhere.
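
If you want to confirm it actually took effect, the per-class MTU should show up in the queuing output. Something like the following (command names from memory, so treat it as a sketch and double-check the syntax on your platform):

show policy-map system type network-qos
show queuing interface ethernet 1/1

The 9000 should appear against class-default / qos-group 0 in the queuing output, if I remember right.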

All this does is allow the switch to switch jumbo frames at 9000 bytes. It has not changed the routing MTU:

Quote from: switch
3048-1# show int vl 7
Vlan7 is up, line protocol is up, autostate enabled
  Hardware is EtherSVI, address is  hoho.haha.cb3c
  Description: VM-MGMT SVI
  Internet Address is 192.168.7.2/24
  MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec
  Last clearing of "show interface" counters never

Unfortunately for me, the storage guy didn't bother enabling jumbos on the SAN, so although the underlying network supports them, they aren't used.

Enabling jumbo frames doesn't really do anything on the switch in terms of traffic flowing through it. I assume that it allocates more memory to certain processes or the ASIC so that it can store and forward 9000-byte frames.

The question I would be asking is: what are your server guys trying to do? In my experience, the server guys have some idea of the end goal but don't usually understand the full scope of what that involves.

Are the server guys enabling jumbos on dedicated server storage NICs, so that jumbos aren't used for non-storage data packets?

EOS

Quote from: ristau5741 on June 06, 2016, 02:37:46 PM
yeah, that's what I was figuring.... don't now how it will affect everything else flowing through the switch.

We recently configured this on a pair of Nexus 5672UPs for our virtualization team. They needed it to support vMotion.

It did not affect anything on the switch... If a jumbo frame comes through the Nexus, it can now handle it without fragmenting.

Dieselboy

Quote from: EOS on June 07, 2016, 05:47:31 AM


It did not affect anything on the switch... If a jumbo frame comes through the Nexus, it can now handle it without fragmenting.

Actually, this config means the switch can handle it, full stop. Without the config, the switch would drop the frame. A router would fragment the packet so it can be routed across a link with a different MTU.
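
If you want to prove it end to end rather than trusting the config, a don't-fragment ping at jumbo size is the quick test. Roughly like this from NX-OS (syntax from memory, the host address is just an example, and 8972 bytes = 9000 minus 20 bytes of IP header and 8 bytes of ICMP header):

ping 192.168.7.10 packet-size 8972 df-bit

If the switch and both NICs really are at 9000 it will succeed; if anything in the path is still at 1500, it won't.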

icecream-guy

#10
Quote from: Dieselboy on June 07, 2016, 02:18:11 AM

Are the server guys enabling jumbos on dedicated server storage NICs, so that jumbos aren't used for non-storage data packets?

Not dedicated storage NICs. Storage is connected to the same switch on different ports in a different VLAN, so server packets and storage packets are mixed.

and this....
Quote from: EOS on June 07, 2016, 05:47:31 AM

...They needed it to support vMotion.



Not that the packets would be affected, but the ESX servers are on trunk ports with a whole lot of VLANs trunked, including the storage VLANs.

I don't get a warm fuzzy feeling about the interface mixing MTU sizes, with 1500-byte frames for the other servers and 9000-byte frames for storage.
I feel like there is going to be lag there: queuing or slow server response issues, with the switch interface processing 9000-byte frames sitting in front of 1500-byte frames in the queue heading to the server.
:professorcat:

My Moral Fibers have been cut.

icecream-guy

Quote from: ristau5741 on June 07, 2016, 08:10:08 AM


I don't get a warm fuzzy feeling about the interface mixing MTU sizes, with 1500-byte frames for the other servers and 9000-byte frames for storage.
I feel like there is going to be lag there: queuing or slow server response issues, with the switch interface processing 9000-byte frames sitting in front of 1500-byte frames in the queue heading to the server.

I suppose 10G interfaces make this a moot point.
:professorcat:

My Moral Fibers have been cut.

Reggle

You can actually calculate this.
8 * 9000 / 10,000,000,000 = 0.0000072 seconds, or 7.2 microseconds to serialize a jumbo frame
8 * 1500 / 10,000,000,000 = 0.0000012 seconds, or 1.2 microseconds to serialize a standard 1500-byte frame

You lose 6 microseconds for each jumbo frame in front of you. I don't think it will make a difference for a typical application, really.

wintermute000

Yes, L2 and L3 MTUs are separate; I've seen plenty of deployments with 9k on L2 but standard 1500 on the SVIs.
The 1500 routing MTU is blissfully unaffected by the 9k underlying L2 MTU.
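
If you ever did need the routed MTU to follow suit, that's a separate knob on the SVI. A minimal sketch, assuming your platform lets you set it there, and borrowing Dieselboy's Vlan7 from earlier in the thread:

interface Vlan7
  mtu 9000

The network-qos policy has to allow 9000-byte frames first, otherwise the SVI setting won't buy you anything.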

Dieselboy

Quote from: wintermute000 on June 07, 2016, 10:02:01 PM
Yes, L2 and L3 MTUs are separate; I've seen plenty of deployments with 9k on L2 but standard 1500 on the SVIs.
The 1500 routing MTU is blissfully unaffected by the 9k underlying L2 MTU.

Unless it needs to be routed :) Hence the "Storage VLAN".

Ristau, your comment yesterday means I'm going to provision a "vmotion" VLAN for our Red Hat system (there it's called live migration). If I can make VMs migrate quicker, then I'll do it. I have 90+ VMs running on 4 BEEFY servers with multiple 10Gb connections between them. However, even Red Hat say only to migrate a few at a time to avoid issues. I don't think Red Hat know about 10Gb yet either; we've just upgraded our RHEV to the latest, so we'll see if they've fixed the bug where 10Gb is seen as 1Gb on the reporting console.

Reggle, thanks for the formulae!

Ristau, regarding mixing MTU sizes, I see what you're saying. But if you do a packet capture, you'll probably find that you have lots of small frames anyway, ranging from under 200 bytes up to 1500 and now 9000. The switch just switches :) It's all done in hardware anyway, and it's microseconds of difference.
Even a 9000-byte jumbo frame at 1Gb interface speed only takes 72 µs (microseconds) using the above calculation.