QoS

Started by icecream-guy, June 06, 2016, 09:14:19 AM


wintermute000

Quote from: Dieselboy on June 08, 2016, 04:52:24 AM

Unless it needs to be routed :) Hence the "Storage VLAN".



Agree, good clarification :)

icecream-guy

I'm not much of one on storage, but it looks like most, if not all, of the VLANs trunked to the ESX servers are also trunked to the NetApp storage device.
I don't know if that makes a difference; there doesn't seem to be a need for routing with a configuration like that.
:professorcat:

My Moral Fibers have been cut.

NetworkGroover

Quote from: ristau5741 on June 07, 2016, 08:10:08 AM

I don't get a warm fuzzy feeling when the interface is mixing MTU sizes between frames: 1500 MTU for the other servers and 9000 MTU for storage.
I feel like there is going to be lag there: queuing or slow server response issues, with the switch interface processing 9000 MTU frames sitting in front of 1500 MTU frames in the queue heading to the server.

Wouldn't Path MTU discovery address this?
Engineer by day, DJ by night, family first always

icecream-guy

Quote from: AspiringNetworker on June 08, 2016, 10:48:44 AM
Quote from: ristau5741 on June 07, 2016, 08:10:08 AM

I don't get a warm fuzzy feeling when the interface is mixing MTU sizes between frames: 1500 MTU for the other servers and 9000 MTU for storage.
I feel like there is going to be lag there: queuing or slow server response issues, with the switch interface processing 9000 MTU frames sitting in front of 1500 MTU frames in the queue heading to the server.

Wouldn't Path MTU discovery address this?

It's really a moot point, going by Reggle's calculations.
:professorcat:

My Moral Fibers have been cut.

NetworkGroover

Erm, moot in what regard? I'm probably just having an ADD moment and not paying enough attention, but I'd say serialization delay isn't the only concern with mismatched MTU between hosts?

EDIT - http://networkengineering.stackexchange.com/questions/3524/mtu-and-fragmentation

Am I just going down an unnecessary rabbit hole and completely missing the point?
Engineer by day, DJ by night, family first always


Dieselboy

I wrote a reply yesterday but it's not here so I probably was going off on a tangent and didn't post it :)

I calculated that a 9000-byte packet on a 1 Gbps link would take 72 µs to serialise, if I blindly reuse that calculation from earlier. That's 0.072 ms. I guess it could add up over a whole day, depending on the number of packets.

PMTUD would not come into play here. I had a quick google, and I don't think it would come into play at all if two servers were communicating with different MTU sizes on their NICs across correctly configured network switches. I think their communication could break in one direction.
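
For anyone who wants to sanity-check that arithmetic, here's a quick Python sketch of the serialisation-delay maths (the frame sizes and the 1 Gbps rate are just the figures from this thread):

```python
# Serialization delay: the time to clock a frame's bits onto the wire.
# delay_us = frame_bytes * 8 / link_bps * 1e6
def serialisation_delay_us(frame_bytes: int, link_bps: float) -> float:
    return frame_bytes * 8 / link_bps * 1e6

for size in (1500, 9000):
    print(f"{size}-byte frame at 1 Gbps: "
          f"{serialisation_delay_us(size, 1e9):.0f} us")
# 1500-byte frame at 1 Gbps: 12 us
# 9000-byte frame at 1 Gbps: 72 us
```

So a 1500-byte packet queued behind a single jumbo frame waits at most 60 µs longer than it would behind another 1500-byte frame.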

icecream-guy

Quote from: Dieselboy on June 08, 2016, 10:52:46 PM
I wrote a reply yesterday but it's not here so I probably was going off on a tangent and didn't post it :)

I calculated that a 9000-byte packet on a 1 Gbps link would take 72 µs to serialise, if I blindly reuse that calculation from earlier. That's 0.072 ms. I guess it could add up over a whole day, depending on the number of packets.

PMTUD would not come into play here. I had a quick google, and I don't think it would come into play at all if two servers were communicating with different MTU sizes on their NICs across correctly configured network switches. I think their communication could break in one direction.

According to that article:

"The (N5K) switch supports jumbo frames by default."
:professorcat:

My Moral Fibers have been cut.

NetworkGroover

Quote from: ristau5741 on June 09, 2016, 09:30:53 AM
Quote from: Dieselboy on June 08, 2016, 10:52:46 PM
I wrote a reply yesterday but it's not here so I probably was going off on a tangent and didn't post it :)

I calculated that a 9000-byte packet on a 1 Gbps link would take 72 µs to serialise, if I blindly reuse that calculation from earlier. That's 0.072 ms. I guess it could add up over a whole day, depending on the number of packets.

PMTUD would not come into play here. I had a quick google, and I don't think it would come into play at all if two servers were communicating with different MTU sizes on their NICs across correctly configured network switches. I think their communication could break in one direction.

According to that article:

"The (N5K) switch supports jumbo frames by default."

At L2 maybe, same as all Arista switches - but we're talking IP MTU here, or no?
Engineer by day, DJ by night, family first always

NetworkGroover

Quote from: Dieselboy on June 08, 2016, 10:52:46 PM
I wrote a reply yesterday but it's not here so I probably was going off on a tangent and didn't post it :)

I calculated that a 9000-byte packet on a 1 Gbps link would take 72 µs to serialise, if I blindly reuse that calculation from earlier. That's 0.072 ms. I guess it could add up over a whole day, depending on the number of packets.

PMTUD would not come into play here. I had a quick google, and I don't think it would come into play at all if two servers were communicating with different MTU sizes on their NICs across correctly configured network switches. I think their communication could break in one direction.

Don't hosts do PMTUD? If a host sends a jumbo packet with the DF bit set, the receiving end needs to send back an ICMP "fragmentation needed and DF bit set" response (or something of that nature). In a network that is properly configured with jumbo from end to end, of course, it won't matter within the network, but the hosts still do it. We just ran into this with a customer who had issues with their DNS because of a mismatched MTU, 9000 on one side and 1500 on the other, while the network was jumbo all the way through. Of course the network got blamed and we had to prove otherwise.
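
If anyone wants to poke at the host-side behaviour, here's a rough Linux-only Python sketch that sets the DF bit on a UDP socket so the kernel performs PMTUD. The IP_* values are hardcoded from <linux/in.h> because not every platform exposes them through the socket module, and the target address is just a placeholder:

```python
import socket

# Linux socket-option values from <linux/in.h>
IP_MTU_DISCOVER = 10   # enable/disable kernel PMTUD on this socket
IP_PMTUDISC_DO = 2     # always set DF; never fragment locally
IP_MTU = 14            # getsockopt: the path MTU the kernel has learned

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
s.connect(("192.0.2.10", 9))  # placeholder target (TEST-NET address)

try:
    # 8972 bytes of payload + 20 IP + 8 UDP = a 9000-byte packet
    s.send(b"x" * 8972)
except OSError as err:
    # EMSGSIZE means the packet won't fit the path (or local link)
    # with DF set -- i.e. PMTUD kicked in rather than fragmenting.
    print("send failed:", err)
    print("kernel's learned path MTU:",
          s.getsockopt(socket.IPPROTO_IP, IP_MTU))
```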
Engineer by day, DJ by night, family first always

Dieselboy

I'm honestly not sure, but I didn't see anything saying that hosts themselves would send an ICMP response to a packet that's too big? My understanding is that if a host received a packet larger than its MTU on that interface, it would just drop it. I could be wrong, though, because when you install a VPN client, doesn't that drop the MTU to 1380 or something anyway? I would need to test it, but I'm in a hotel right now so can't.

NetworkGroover

Quote from: Dieselboy on June 09, 2016, 12:59:24 PM
I'm honestly not sure, but I didn't see anything saying that hosts themselves would send an ICMP response to a packet that's too big? My understanding is that if a host received a packet larger than its MTU on that interface, it would just drop it. I could be wrong, though, because when you install a VPN client, doesn't that drop the MTU to 1380 or something anyway? I would need to test it, but I'm in a hotel right now so can't.

It's a confusing topic, and I've heard/seen mixed messages. It gets even more complex with DNS, because apparently there's a separate setting for DNS servers regarding how large a message they'll receive...
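
For what it's worth, that DNS setting is almost certainly the EDNS0 UDP payload size a resolver advertises. A hedged sketch using the third-party dnspython library (the buffer size and resolver address are illustrative, not from this thread):

```python
import dns.message
import dns.query

# Advertise a 1232-byte EDNS0 UDP buffer instead of the classic 512.
query = dns.message.make_query("example.com", "A", use_edns=0, payload=1232)
response = dns.query.udp(query, "192.0.2.53", timeout=2)  # placeholder resolver
print(len(response.to_wire()), "bytes in response")
```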

So I'm right there with you on not being sure, and I guess the answer may be, like 90% of all things in IT, "it depends"? It's one of those Networking 101 things you remember studying years ago but don't touch anymore unless you need to.
Engineer by day, DJ by night, family first always

Otanx

A host should never have to send "packet too big" ICMP messages. As part of the TCP handshake, the systems report their MSS, and the lowest wins. So unless some network stack just ignores the MSS, the end host should never get anything larger. What if it does? I don't know; I've never had it happen.

Wait, you say, what about UDP? UDP and PMTUD together are just broken. How can a host resend a UDP packet if it didn't keep the information? How long should a host use the smaller size when there is no session? It depends on your network stack. Many just ignore PMTUD and say UDP is unreliable, good luck. Others respect the new MTU for X minutes. This is why DNS and many other UDP applications limit message size to 512 bytes, which fits within the 576-byte datagram every IPv4 host must accept. That way they are not going to be fragmented.
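
As a back-of-the-envelope illustration of that "lowest wins" exchange (assuming IPv4 headers and no TCP options), here's a quick Python sketch:

```python
# Each side advertises MSS = interface MTU - 20 (IPv4) - 20 (TCP)
# in its SYN; the effective segment size is the lower of the two.
def mss_for_mtu(mtu: int) -> int:
    return mtu - 20 - 20

jumbo_host = mss_for_mtu(9000)     # advertises 8960
standard_host = mss_for_mtu(1500)  # advertises 1460

print("effective MSS:", min(jumbo_host, standard_host))  # 1460 -- lowest wins
```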

*Note to self: Find out what happens if I use IPSec on a physical interface with an MTU of 576.

What really gets ugly is when firewalls or ACLs block ICMP in one direction only. Then the ICMP "packet too big" that PMTUD relies on works only sometimes, depending on who sent the first large packet.

-Otanx

NetworkGroover

Quote from: Otanx on June 09, 2016, 07:15:39 PM
A host should never have to send "packet too big" ICMP messages. As part of the TCP handshake, the systems report their MSS, and the lowest wins. So unless some network stack just ignores the MSS, the end host should never get anything larger.

That's a good point in the case of TCP. 

EDIT - AND a good point about UDP.  I think that was directly related to the DNS issue we saw.

EDIT #2 - AND a good point about blocking ICMP. That's a message I've seen from multiple sources: intelligently evaluate how and where to block ICMP, instead of just blindly blocking it altogether and preventing networks from doing their job.
Engineer by day, DJ by night, family first always

wintermute000

I believe the host doesn't send ICMP "packet too big" responses; only intermediate L3 devices do, if we're assuming RFC compliance.

So yeah, interesting point: with intra-VLAN/subnet traffic there's potential for an MTU mismatch with no L3 hop in the path to signal it, but as you say, TCP should negotiate the MSS correctly.