Transfer speed on N5K

Started by fsck, January 14, 2015, 07:34:23 PM


fsck

Quote from: javentre on January 27, 2015, 08:18:38 PM
Hard coding hasn't been a recommended practice for about 15 years.   Always use auto/auto where possible, especially at speeds >= 1gbps.
That I didn't know. I guess I had it reversed; this is very good to know. Thanks.

icecream-guy

It's been in the IEEE standard for some time; look up Clause 28 of the 802.3u Fast Ethernet supplement to the IEEE 802.3 standard. You should find it there.
:professorcat:

My Moral Fibers have been cut.

fsck

I configured the interface with 'duplex auto', but it still shows full-duplex, 10 Gb/s, media type is 10G, which I believe I had already done because killabee had mentioned it.  So how does one tell it's incorrectly set to full-duplex if the output doesn't distinguish between full-duplex and auto?  I take it that it's showing full-duplex because of what was negotiated with the other side.  Is that a correct assumption? Do you think this could be the problem with the speeds?
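For reference, checking whether speed/duplex is hard-coded usually means looking at the running config rather than the status line; a rough sketch of the NX-OS checks, with Ethernet1/1 as just a placeholder for the port in question:

show running-config interface ethernet 1/1
show interface ethernet 1/1

If no 'speed' or 'duplex' line shows up under the interface in the running config, the port should be negotiating, and the status output only reports the negotiated result.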

sgtcasey

Using UDP might give you a better idea of the total throughput without anything getting "in the way".  But increase the amount of UDP traffic you're trying to send.

iperf.exe -c <iperf server IP> -u -P 2 -i 1 -p 5001 -f m -b 200.0M -n 1000000000 -T 1

The above is only going to try to send ~200Mbps.  Bump that up to something higher than 10Gbps since the link you have is running at 10Gbps.  I use jperf to set up the testing and iperf as the server on the remote machine, so I'm not 100% sure of the command line, but see if this might work:

iperf.exe -c <iperf server IP> -u -P 2 -i 1 -p 5001 -f m -b 15.0G -n 1000000000 -T 1
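For reference, the server side on the remote machine is just iperf listening in UDP mode; a minimal sketch, assuming the port matches the client's -p 5001:

iperf.exe -s -u -p 5001 -i 1 -f m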

An example of where I use this: today I wanted to test the total amount of traffic I could get through a new 150Mbps link that was just turned over to my team.  I did the usual TCP testing to see throughput with QoS and crypto enabled and such.  Then I disabled QoS and crypto and just spammed UDP traffic across it at 300Mbps (I usually double the bandwidth the link is set at by the provider) to see how much made it through.  I was getting right around 189Mbps across it.
Taking the sh out of IT since 2005!

fsck

#34
Very good information to know here, sgtcasey, thanks for explaining.  I changed it to 15.0G and got the following:

Transfer               Bandwidth
31.3 MBytes       263 Mbits/sec
31.3 MBytes       263 Mbits/sec
62.6 MBytes       525 Mbits/sec
31.3 MBytes       263 Mbits/sec
31.3 MBytes       263 Mbits/sec
62.6 MBytes       525 Mbits/sec
31.3 MBytes       263 Mbits/sec
31.3 MBytes       263 Mbits/sec
62.6 MBytes       525 Mbits/sec
Sent 680723 datagrams
WARNING: did not receive ack of last datagram after 10 tries.

sgtcasey

#35
Okay, so I'm at my work laptop and decided to fire up jperf to see if you are even able to set the UDP bandwidth value to xx.xG, and the answer is no.  However, you can set it in MBps to get where you need.  Someone correct my math if it's wrong.

10Gbps is 1250MBps

iperf.exe -c <iperf server IP> -u -P 2 -i 1 -p 5001 -f m -b 1250.0M -n 1000000000 -T 1

However, keep in mind that if you're using a laptop or PC with a 1Gbps NIC you wouldn't be able to push 10Gbps anyway.  In fact, if you're running a Windows OS, jperf/iperf can rarely even get up to the full speed of your NIC anyway.  I'm not sure if a Linux-based machine can.  The links I use jperf/iperf on are our WAN links, which are much smaller in size than the 10Gbps+ you'll find in a data center.

I see your test seems to be completing very quickly.  If you want to run it longer just increase the number of 0's in the -n value.  That is the amount of data to transfer.  I usually set that number to 1,000,000,000 which is 1GB of data.  That way it takes 30-120 seconds for my test to run depending on the site and the pipe size.  That's enough time to get some good data.
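If you'd rather run for a fixed length of time instead of a fixed amount of data, iperf also has a -t option (seconds); a sketch, swapping -n for -t and assuming a 60-second run:

iperf.exe -c <iperf server IP> -u -P 2 -i 1 -p 5001 -f m -b 1250.0M -t 60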

I suppose if you set up several jperf/iperf servers/clients you could get them all to run at the same time to try and max out that 10Gbps link.  :)
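A sketch of pushing toward that from a single client box, assuming iperf 2.x where -b is a per-stream rate in bits per second (so eight UDP streams at -b 1250.0M would target roughly 10Gbps in aggregate):

iperf.exe -c <iperf server IP> -u -P 8 -i 1 -p 5001 -f m -b 1250.0M -n 1000000000 -T 1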
Taking the sh out of IT since 2005!

fsck

We are using Intel 10Gbps NICs on a couple of HP servers. I believe your math is correct, 10,000 Mbps / 8 = 1,250 MBps, so I'm definitely in the problematic zone and I have no idea what's causing it. Here's what I've tried so far (a rough sketch of the Nexus-side checks for the last couple of items follows the list):

Swapped NICs
Swapped cables
Tested on different server
Tried enabling Jumbo frames
Tried disabling Jumbo frames
Changed to auto/auto on both ends
Verified policy on Nexus for jumbo(when used)
Checked for errors on the interface
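A rough sketch of those checks, assuming NX-OS on the N5K and Ethernet1/1 as a placeholder for the server-facing port:

show policy-map system type network-qos
show queuing interface ethernet 1/1
show interface ethernet 1/1

The first shows whether jumbo MTU is actually applied in the system network-qos policy, the second shows the MTU and per-queue drops on the port, and the last one has the error and discard counters.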

javentre

Quote from: sgtcasey on January 29, 2015, 04:54:18 PM
However, keep in mind that if you're using a laptop or PC with a 1Gbps NIC you wouldn't be able to push 10Gbps anyway.  In fact, if you're running a Windows OS, jperf/iperf can rarely even get up to the full speed of your NIC anyway.

I routinely get 95%+ utilization from iperf on Windows with 1GE NICs.
http://networking.ventrefamily.com

fsck

Quote from: javentre on January 29, 2015, 05:46:17 PM
Quote from: sgtcasey on January 29, 2015, 04:54:18 PM
However, keep in mind that if you're using a laptop or PC with a 1Gbps NIC you wouldn't be able to push 10Gbps anyway.  In fact, if you're running a Windows OS, jperf/iperf can rarely even get up to the full speed of your NIC anyway.

I routinely get 95%+ utilization from iperf on Windows with 1GE NICs.
I'm just curious what others get using that test line, or any other test via iperf, on an N5K.

The bouncing also makes me wonder what exactly is going on.  But at the same time I wonder whether it's just not being pushed hard enough to max out the speed, even though I set it to 1250M.

fsck

I take that back; the bouncing isn't happening when I run iperf.exe -c <iperf server IP> -u -P 2 -i 1 -p 5001 -f m -b 1250.0M -n 1000000000 -T 1

I get an average of 51.6 MBytes transfer and 437 Mbits/sec bandwidth.  That's just not what I expected to see on that N5K.

burnyd

Try one more thing:

iperf.exe -c <iperf server IP> -u -P 2 -i 1 -p 5001 -f m -b 15.0G -n 1000000000 -T 1 -w 128kb

This will open up the TCP window size from the default on Windows (8kb, I believe) to 128kb. You might have to keep bumping that number up, but 128kb is a good way to start.
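If you do raise -w, it probably needs to go up on the listening side as well; a minimal sketch, assuming the same value on the iperf server:

iperf.exe -s -u -p 5001 -i 1 -f m -w 128k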



burnyd

To add to that: if you are not using FEXs like you said, and you do not see pause frames, then the bandwidth is there for you to use.  I do not think you have an issue at the 5K level.
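A rough sketch of where pause frames would show up, assuming NX-OS and Ethernet1/1 as a placeholder:

show interface ethernet 1/1 flowcontrol
show interface ethernet 1/1 | include pause

Non-zero RxPause/TxPause counters would suggest flow control is kicking in and throttling the sender.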

javentre

#42
Quote from: burnyd on February 01, 2015, 07:30:04 AM
but 128kb is a good way to start.

That's far too low at 10GE link speeds.  A 10GE link can transmit 128kb in about 13 microseconds.

That means for every 13 microseconds of transmit, you have a guaranteed idle time of 2 microseconds (assuming a 2 microsecond RTT for 2 devices connected to a 5K).  That's a pretty bad ratio of transmit to idle.
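A quick check of that arithmetic, reading 128 kb as kilobits:

128,000 bits / 10,000,000,000 bits per second ≈ 12.8 microseconds ≈ 13 microseconds of transmit time per window.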
http://networking.ventrefamily.com

fsck

So I went ahead and tried the new command that burnyd recommended and got the following new results:

Transfer             Bandwidth
52.1 MBytes       437 Mbits/sec
52.1 MBytes       437 Mbits/sec
52.1 MBytes       437 Mbits/sec
53.0 MBytes       445 Mbits/sec
51.6 MBytes       433 Mbits/sec
52.1 MBytes       437 Mbits/sec

@javentre
What do you think it should be set to?  Should it be doubled or tripled?  I went ahead and increased it to see if that would change anything, but I still see the same results as above.  I have no idea about these stats and the transmit options.  This is good stuff, and I'm glad to be learning; this is a good place to learn from others who are very experienced.  Thank you.

burnyd

You can always up that window sizing.  128kb is a good place to start; keep going up from there.

Judging by your last transfer in the 250Mbps range, this one looks like it's close to doubled, into the 450Mbps range?  Is that correct?

What servers and NICs are you using?  Are you going to be happy once you eventually see 800ish?