Networking-Forums.com

Professional Discussions => Routing and Switching => Topic started by: fsck on January 14, 2015, 07:34:23 PM

Title: Transfer speed on N5K
Post by: fsck on January 14, 2015, 07:34:23 PM
I'm noticing that on our gigabit network we can hold about an 84 MB/s transfer rate, which is close to the max of a 1 Gb connection given that 125 MB/s is the theoretical maximum.  My question is about the N5Ks: I see it hit about 921 MB/s (theoretical max 1250 MB/s), then it drops rapidly and holds around 140 MB/s.  The file being transferred is about 125 GB in size.  We are using jumbo frames and I've verified the NIC on the server is set for 1500 MTU.

Something odd I noticed: when I set the Intel NIC to full-duplex, it no longer peaks at 921 MB/s; it reaches maybe 260 MB/s and then settles around 133 MB/s.  I'd like to understand what exactly is going on and why it behaves this way.  And of course I'd like to take full advantage of the 10 Gb network; 140 MB/s isn't that exciting, and I was expecting it to take off.
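For reference, the line-rate math I'm working from (ideal numbers, before any protocol overhead):

 1 Gbit/s  =  1,000 Mbit/s / 8 bits per byte = 125 MB/s
10 Gbit/s  = 10,000 Mbit/s / 8 bits per byte = 1,250 MB/s

So ~84 MB/s is reasonable for a single GbE copy, while ~140 MB/s is barely above GbE line rate on a 10 Gb link.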
Title: Re: Transfer speed on N5K
Post by: Seittit on January 15, 2015, 03:33:54 AM
We've noticed ridiculously poor performance on our FEX modules, and TAC found it was related to their internal hardware queues.  In our situation, the Nexus automatically allocates a large number of hardware queues for FCoE (even on non-FCoE-capable devices such as our 2148 FEX modules).  This led to a large number of drops in the transmit direction.

We found that a home Netgear switch consistently outperformed our Nexus switches; try delivering that nugget of news to management.

I would recommend opening a TAC case to make sure a similar issue isn't affecting your 5Ks.
Title: Re: Transfer speed on N5K
Post by: icecream-guy on January 15, 2015, 06:11:47 AM
Quote from: fsck on January 14, 2015, 07:34:23 PM
We are using jumbo frames and I've verified the NIC on the server is set for 1500 MTU.

shouldn't the server NIC MTU be the same size as the jumbo frame MTU?

:professorcat:
Title: Re: Transfer speed on N5K
Post by: javentre on January 15, 2015, 06:17:33 AM
It depends on what you want to accomplish.  As long as the switch's MTU is set higher than the end hosts', you shouldn't drop frames.
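For example, a quick way to sanity-check both ends (interface names here are placeholders):

C:\> netsh interface ipv4 show subinterfaces     <- shows the current MTU per interface on a Windows host
N5K# show queuing interface ethernet 1/1         <- on the 5K the MTU is set by the network-qos policy, and I believe it shows up here rather than under 'show interface'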
Title: Re: Transfer speed on N5K
Post by: javentre on January 15, 2015, 06:18:04 AM
Quote from: fsck on January 14, 2015, 07:34:23 PM
I'm noticing that on our gigabit network we can hold about an 84 MB/s transfer rate, which is close to the max of a 1 Gb connection given that 125 MB/s is the theoretical maximum.  My question is about the N5Ks: I see it hit about 921 MB/s (theoretical max 1250 MB/s), then it drops rapidly and holds around 140 MB/s.  The file being transferred is about 125 GB in size.  We are using jumbo frames and I've verified the NIC on the server is set for 1500 MTU.

What tools are you using to test throughput?
Title: Re: Transfer speed on N5K
Post by: fsck on January 15, 2015, 11:44:04 AM
Quote from: Seittit on January 15, 2015, 03:33:54 AM
We've noticed ridiculously poor performance on our FEX modules, and TAC found it was related to their internal hardware queues.  In our situation, the Nexus automatically allocates a large number of hardware queues for FCoE (even on non-FCoE-capable devices such as our 2148 FEX modules).  This led to a large number of drops in the transmit direction.

We found that a home Netgear switch consistently outperformed our Nexus switches; try delivering that nugget of news to management.

I would recommend opening a TAC case to make sure a similar issue isn't affecting your 5Ks.
Wouldn't this issue be a little different?  Your FEXes are essentially remote line cards, so that's an added piece of connectivity, one that I'm not dealing with, if that makes sense.  We are simply treating our N5K as a core switch right now and trying to maximize throughput.  I only wish we had TAC.
Title: Re: Transfer speed on N5K
Post by: fsck on January 15, 2015, 11:45:09 AM
Quote from: ristau5741 on January 15, 2015, 06:11:47 AM
Quote from: fsck on January 14, 2015, 07:34:23 PM
We are using jumbo frames and I've verified the NIC on the server is set for 1500 MTU.

shouldn't the server NIC MTU be the same size as the jumbo frame MTU?

:professorcat:
My mistake.  I'm so used to writing 1500 MTU for my labs.  I changed the NIC to 9014 bytes.
Title: Re: Transfer speed on N5K
Post by: fsck on January 15, 2015, 11:52:03 AM
Quote from: javentre on January 15, 2015, 06:18:04 AM
Quote from: fsck on January 14, 2015, 07:34:23 PM
I'm noticing that on our gigabit network we can hold about an 84 MB/s transfer rate, which is close to the max of a 1 Gb connection given that 125 MB/s is the theoretical maximum.  My question is about the N5Ks: I see it hit about 921 MB/s (theoretical max 1250 MB/s), then it drops rapidly and holds around 140 MB/s.  The file being transferred is about 125 GB in size.  We are using jumbo frames and I've verified the NIC on the server is set for 1500 MTU.

What tools are you using to test throughput?
To be honest, javentre, this was a simple Windows file copy, which I know isn't a true test.  I'm looking at setting up IOMeter to see what it says.  Any tools you think are best?
Title: Re: Transfer speed on N5K
Post by: ZiPPy on January 15, 2015, 11:56:49 AM
Quote from: fsck on January 15, 2015, 11:52:03 AM
Quote from: javentre on January 15, 2015, 06:18:04 AM
Quote from: fsck on January 14, 2015, 07:34:23 PM
I'm noticing that on our gigabit network we can hold about an 84 MB/s transfer rate, which is close to the max of a 1 Gb connection given that 125 MB/s is the theoretical maximum.  My question is about the N5Ks: I see it hit about 921 MB/s (theoretical max 1250 MB/s), then it drops rapidly and holds around 140 MB/s.  The file being transferred is about 125 GB in size.  We are using jumbo frames and I've verified the NIC on the server is set for 1500 MTU.

What tools are you using to test throughput?
To be honest, javentre, this was a simple Windows file copy, which I know isn't a true test.  I'm looking at setting up IOMeter to see what it says.  Any tools you think are best?
Use iPerf to test throughput, and pair it with something like Steelhead Packet Analyzer.  Do a Wireshark capture, then run the results through Steelhead Packet Analyzer, which can show you problems with TCP and latency.  iPerf won't show you TCP or latency problems, just throughput, so the combination can be beneficial.  Definitely start with iPerf first.
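A minimal iPerf baseline to start from (the server IP is a placeholder; one box as receiver, one as sender):

receiver:  iperf -s -p 5001
sender:    iperf -c <server IP> -p 5001 -t 30 -i 1 -f m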
Title: Re: Transfer speed on N5K
Post by: that1guy15 on January 15, 2015, 11:59:01 AM
There are so many different things this could be.  I am assuming this test is between two ports on the N5K, right?  What model?

Why jumbo frames?  Why?

First, make sure the switch is actually getting overloaded.  Do you see buffer misses?  Do you see queuing and drops on the interfaces?

Next, double-check that your source and destination hardware can handle this kind of network and processing load.  I have never been able to take a standard desktop or laptop and truly saturate 1 Gbps.

A standard Windows-to-Windows file transfer is going to add too many variables on the Windows side.  Try iperf.
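If it helps, the usual places to look for that on the 5K (interface numbers are placeholders):

show interface ethernet 1/1              <- error counters, discards, and Rx/Tx pause frames
show queuing interface ethernet 1/1      <- per-class queuing and drop counters
show interface counters errors           <- quick error summary across all ports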

Title: Re: Transfer speed on N5K
Post by: javentre on January 15, 2015, 12:03:08 PM
Quote from: ZiPPy on January 15, 2015, 11:56:49 AM
Use iPerf to test throughput
Exactly.  Repeat all of your tests with iPerf, and make sure the results are repeatable.
Title: Re: Transfer speed on N5K
Post by: hizzo3 on January 15, 2015, 12:25:53 PM


Quote from: Seittit on January 15, 2015, 03:33:54 AM
We found that a home Netgear switch consistently outperformed our Nexus switches; try delivering that nugget of news to management.
Hey those GS-108T (v2) are no joke lol.
Title: Re: Transfer speed on N5K
Post by: fsck on January 15, 2015, 04:29:05 PM
Quote from: that1guy15 on January 15, 2015, 11:59:01 AM
There are so many different things this could be.  I am assuming this test is between two ports on the N5K, right?  What model?

Why jumbo frames?  Why?

First, make sure the switch is actually getting overloaded.  Do you see buffer misses?  Do you see queuing and drops on the interfaces?

Next, double-check that your source and destination hardware can handle this kind of network and processing load.  I have never been able to take a standard desktop or laptop and truly saturate 1 Gbps.

A standard Windows-to-Windows file transfer is going to add too many variables on the Windows side.  Try iperf.
This will be used for an iSCSI SAN network, so that's why we are going with jumbo frames.

iPerf is showing the following:
7632 Mbit/sec
2990 Mbit/sec
2104 Mbit/sec
2205 Mbit/sec
2102 Mbit/sec

So it spikes up to where it should be; it just won't hold.  The iperf command I'm running is iperf -c 192.168.10.10 -p 5001 -t 15 -i 1 -f m -w 50MB, and I tried it with -w 500MB too.
Title: Re: Transfer speed on N5K
Post by: fsck on January 15, 2015, 04:49:35 PM
Just so I'm not mistaken, when you say buffer misses do you mean the 'no buffer' counter?  I see 0 across the board for runts, CRC, no buffer, ignored, etc., so it looks to be clean.  I doubt the N5K is overloaded, as these are the only two test servers running the iPerf test.
Title: Re: Transfer speed on N5K
Post by: hizzo3 on January 15, 2015, 05:38:03 PM
I was having an issue with my iSCSI setup connected to a Win2k12 VM at home that was doing something similar with jumbo frames... I was seeing a bunch of error packets in Wireshark (can't remember the details).  The switch wasn't seeing errors; it was something to do with the iSCSI portion.  Keep in mind iSCSI isn't a layer 2/3 protocol, so I don't know enough about the N5K to tell you whether it is processing any higher-level protocol errors.

I rolled it back to 1500 MTU, the errors stopped, and the speed stabilized.  I haven't had time to troubleshoot it further since the lab I'm working on is fine with non-jumbo frames.

Did you try a packet capture with Wireshark while running iperf?  I'm willing to bet you have a bunch of unneeded traffic somewhere that is bogging things down as the transfers get going.
Title: Re: Transfer speed on N5K
Post by: wintermute000 on January 15, 2015, 07:28:01 PM
When even SAN guys say stuff jumbo... https://forums.freenas.org/index.php?threads/jumbo-frames-notes.26064/
Title: Re: Transfer speed on N5K
Post by: killabee on January 15, 2015, 09:05:33 PM
I haven't found a clearer explanation of this, but gig links and above NEED auto-negotiation in order to achieve that level of throughput (or close to it).  Auto-neg in those cases does more than just negotiate the speed/duplex.  It determines the master/slave relationship between the connected endpoints, determines flow control, etc., all of which play a part in the throughput.  That's probably why you're seeing poorer performance with hard-coded full-duplex than with auto.

http://h10025.www1.hp.com/ewfrf/wc/document?docname=c01148835&cc=lb&dlc=en&lc=fr

Are those throughput numbers with full-duplex, or auto?

As for the jumbo vs no jumbo.....we network engineers know what's up.  It's the server team, storage team, and their vendors that pressure us to enable it. 
Title: Re: Transfer speed on N5K
Post by: ZiPPy on January 16, 2015, 12:30:11 AM
I've always gone by best practice for SAN infrastructures, and that was to enable jumbo frames.  This is with the notion that you keep the traffic contained, with no default gateways.  The benefits certainly show in file transfers.

I'm not saying it's a must, or that one way is better than the other.  It's more about configuring and tuning the network to meet a specific need in your network, in your data center.  Enabling jumbo frames in one network might not reap the same benefits in another.

I stumbled across this paper a while ago that really gets into the nitty-gritty of jumbo frames:  http://docs.lib.purdue.edu/cgi/viewcontent.cgi?article=2770&context=cstech
Title: Re: Transfer speed on N5K
Post by: sgtcasey on January 17, 2015, 07:16:52 AM
During WAN bandwidth testing on new links I usually use jperf/iperf with UDP and set the amount of traffic well past the link speed we bought.  Then I just check the interface at the other end to see how much is actually getting through.  This might be a better way to test the true throughput of the links on the Nexus gear.  Just an idea.

I use N5Ks with FEX switches in my work environment and I've never seen the issues the OP describes.  It does make me wonder, though...
Title: Re: Transfer speed on N5K
Post by: burnyd on January 19, 2015, 04:44:41 PM
If iperf is working fine and you are getting the speed you expect, and this is a TCP-based application, then before I even troubleshoot CIFS/SMB I always double-check that TCP offload is turned off at the OS level.

Otherwise, it could be a large number of things.  If you are handing off to FEXes and you still have everything at defaults, the default queue buffer is one large buffer shared across every port.  I would disable that to be safe; depending on the model of FEX, you will only see, for example, a max of 2.25 Gbps out of a port, which would explain the ~220 MB/s I believe you mentioned earlier.  Could you also take a look at the pause frames under the interface and tell me if it is a large number?
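On the Windows side, a rough sketch of where to look for the offload settings (the adapter name "Ethernet" is a placeholder; change one thing at a time and re-test):

netsh int tcp show global                   <- shows chimney offload and autotuning state
netsh int tcp set global chimney=disabled   <- disables TCP chimney offload
Get-NetAdapterLso -Name "Ethernet"          <- PowerShell: check large send offload (LSO)
Disable-NetAdapterLso -Name "Ethernet"      <- disable LSO for testing only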
Title: Re: Transfer speed on N5K
Post by: fsck on January 23, 2015, 01:45:13 PM
Quote from: sgtcasey on January 17, 2015, 07:16:52 AM
During WAN bandwidth testing on new links I usually use jperf/iperf with UDP and set the amount of traffic well past the link speed we bought.  Then I just check the interface at the other end to see how much is actually getting through.  This might be a better way to test the true throughput of the links on the Nexus gear.  Just an idea.

I use N5Ks with FEX switches in my work environment and I've never seen the issues the OP describes.  It does make me wonder, though...
May I ask what command you ran for iPerf?  I know the command can give false results if it's written wrong, so I want to be sure mine is correct.  This is what I'm running:  iperf -c 172.1.1.10 -p 5001 -t 15 -i 1 -f m -w 50MB
Title: Re: Transfer speed on N5K
Post by: fsck on January 23, 2015, 03:12:33 PM
Quote from: burnyd on January 19, 2015, 04:44:41 PM
If iperf is working fine and you are getting the speed you expect, and this is a TCP-based application, then before I even troubleshoot CIFS/SMB I always double-check that TCP offload is turned off at the OS level.

Otherwise, it could be a large number of things.  If you are handing off to FEXes and you still have everything at defaults, the default queue buffer is one large buffer shared across every port.  I would disable that to be safe; depending on the model of FEX, you will only see, for example, a max of 2.25 Gbps out of a port, which would explain the ~220 MB/s I believe you mentioned earlier.  Could you also take a look at the pause frames under the interface and tell me if it is a large number?
We aren't using any FEXes.  I'm seeing 0s for the pause frames.  I made sure I enabled flow control.  Not seeing anything for the pause frames is a good thing, right?
Title: Re: Transfer speed on N5K
Post by: sgtcasey on January 24, 2015, 12:08:48 AM
Quote from: fsck on January 23, 2015, 01:45:13 PM
May I ask what command you ran for iPerf?  I know the command can give false results if it's written wrong, so I want to be sure mine is correct.  This is what I'm running:  iperf -c 172.1.1.10 -p 5001 -t 15 -i 1 -f m -w 50MB

Sure.  I'll set up a laptop connected to the remote router I want to test WAN bandwidth over.  On that laptop I start iperf -s (for TCP) or iperf -s -u (for UDP).  Then on my work-issued machine I'll go to the other end of that WAN link and, depending on what I want to test, run one of the following.

TCP - iperf.exe -c <iperf server IP> -P 1 -i 1 -p 5001 -f m -n 1000000000
UDP - iperf.exe -c <iperf server IP> -u -P 2 -i 1 -p 5001 -f m -b 200.0M -n 1000000000 -T 1

I use TCP to see what kind of performance a normal TCP session will get over a link.  For UDP I tell it to just spam tons of traffic through the WAN link, and I then check the interface stats on the other side to see what is actually making it through.  You do need to change how your iperf server is set up depending on whether you want to use UDP or TCP.
Title: Re: Transfer speed on N5K
Post by: fsck on January 26, 2015, 01:57:23 PM
So I ran the commands as you showed, sgtcasey, and I'm getting the following results:

TCP - iperf.exe -c <iperf server IP> -P 1 -i 1 -p 5001 -f m -n 1000000000
Transfer               Bandwidth
144 MBytes       1208 Mbits/sec
180 MBytes       1514 Mbits/sec
148 MBytes       1237 Mbits/sec
176 MBytes       1481 Mbits/sec
177 MBytes       1486 Mbits/sec
954 MBytes       1394 Mbits/sec


UDP - iperf.exe -c <iperf server IP> -u -P 2 -i 1 -p 5001 -f m -b 200.0M -n 1000000000 -T 1
Transfer               Bandwidth
24.1 MBytes       202 Mbits/sec
24.1 MBytes       202 Mbits/sec
24.5 MBytes       206 Mbits/sec
24.1 MBytes       202 Mbits/sec
24.1 MBytes       202 Mbits/sec
24.1 MBytes       202 Mbits/sec
24.1 MBytes       202 Mbits/sec


Could somebody else post their results?  I'd be curious to compare.  But based on what burnyd said, there could be all kinds of reasons why the performance is lacking.

If you take a gigabit Cisco Catalyst switch, you can pretty much power it up and it's ready to go: no port configuration necessary, and you'll get gigabit speeds.  Is that not the case for the Nexus 5Ks?  Do I need to specify certain parameters on them?  So far all I've configured is the policy for jumbo frames.  I'm going to look around and see if I missed a configuration somewhere that might be limiting the speeds.
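For reference, the jumbo policy is along the lines of the standard network-qos approach, something like this (9216 shown as an example value):

policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216
system qos
  service-policy type network-qos jumbo

(As far as I know, 'show interface' on the 5K keeps reporting MTU 1500 even with this applied; the active MTU shows up under 'show queuing interface' instead.)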
Title: Re: Transfer speed on N5K
Post by: fsck on January 27, 2015, 07:37:31 PM
I still wonder what I'm doing wrong.  Was anybody else able to run the test?  I know it will vary because we don't have the same NICs and possibly different cabling, but it should be in the same ballpark, and I'm pretty sure higher than what I have now.  How does one troubleshoot an issue like this?  I used iPerf to prove we have a problem, so what do I turn to now?  I know in class they say to use the OSI model; I'm not sure how to go about that at this point.
Title: Re: Transfer speed on N5K
Post by: javentre on January 27, 2015, 08:04:24 PM
post a 'show interface' of the involved ports
Title: Re: Transfer speed on N5K
Post by: killabee on January 27, 2015, 08:05:35 PM
Quote from: killabee on January 15, 2015, 09:05:33 PM
Are those throughput numbers with full-duplex, or auto?
Title: Re: Transfer speed on N5K
Post by: fsck on January 27, 2015, 08:14:48 PM
Quote from: javentre on January 27, 2015, 08:04:24 PM
post a 'show interface' of the involved ports
Ethernet1/1 is up
Dedicated Interface
  Hardware: 1000/10000 Ethernet, address: 0005.73f0.b828 (bia 0005.73f0.b828)
  MTU 1500 bytes, BW 10000000 Kbit, DLY 10 usec
  reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation ARPA
  Port mode is access
  full-duplex, 10 Gb/s, media type is 10G
  Beacon is turned off
  Input flow-control is on, output flow-control is on
  Rate mode is dedicated
  Switchport monitor is off
  EtherType is 0x8100
  Last link flapped 4d05h
  Last clearing of "show interface" counters never
  30 seconds input rate 0 bits/sec, 0 packets/sec
  30 seconds output rate 752 bits/sec, 0 packets/sec
  Load-Interval #2: 5 minute (300 seconds)
    input rate 0 bps, 0 pps; output rate 352 bps, 0 pps
  RX
    115894665 unicast packets  10167 multicast packets  16803 broadcast packets
    115921635 input packets  233682617296 bytes
    8635860 jumbo packets  0 storm suppression bytes
    0 runts  0 giants  0 CRC  0 no buffer
    0 input error  0 short frame  0 overrun   0 underrun  0 ignored
    0 watchdog  0 bad etype drop  0 bad proto drop  0 if down drop
    0 input with dribble  0 input discard
    0 Rx pause
  TX
    79116716 unicast packets  1108990 multicast packets  465984 broadcast packets
    80691690 output packets  102570428844 bytes
    4899 jumbo packets
    0 output errors  0 collision  0 deferred  0 late collision
    0 lost carrier  0 no carrier  0 babble 0 output discard
    0 Tx pause
  7 interface resets


Ethernet1/13 is up
Dedicated Interface
  Hardware: 1000/10000 Ethernet, address: 0005.73f0.b834 (bia 0005.73f0.b834)
  MTU 1500 bytes, BW 10000000 Kbit, DLY 10 usec
  reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation ARPA
  Port mode is access
  full-duplex, 10 Gb/s, media type is 10G
  Beacon is turned off
  Input flow-control is on, output flow-control is on
  Rate mode is dedicated
  Switchport monitor is off
  EtherType is 0x8100
  Last link flapped 1d06h
  Last clearing of "show interface" counters never
  30 seconds input rate 464 bits/sec, 0 packets/sec
  30 seconds output rate 328 bits/sec, 0 packets/sec
  Load-Interval #2: 5 minute (300 seconds)
    input rate 32 bps, 0 pps; output rate 200 bps, 0 pps
  RX
    8324552 unicast packets  14437 multicast packets  445021 broadcast packets
    8784010 input packets  9361511697 bytes
    0 jumbo packets  0 storm suppression bytes
    0 runts  0 giants  0 CRC  0 no buffer
    0 input error  0 short frame  0 overrun   0 underrun  0 ignored
    0 watchdog  0 bad etype drop  0 bad proto drop  0 if down drop
    0 input with dribble  0 input discard
    0 Rx pause
  TX
    25630187 unicast packets  529285 multicast packets  18281 broadcast packets
    26177753 output packets  100467975215 bytes
    8633316 jumbo packets
    0 output errors  0 collision  0 deferred  0 late collision
    0 lost carrier  0 no carrier  0 babble 0 output discard
    0 Tx pause
  21 interface resets
Title: Re: Transfer speed on N5K
Post by: fsck on January 27, 2015, 08:16:02 PM
Quote from: killabee on January 27, 2015, 08:05:35 PM
Quote from: killabee on January 15, 2015, 09:05:33 PM
Are those throughput numbers with full-duplex, or auto?
Currently with full-duplex.  I tested with auto but I was getting the same results.  Wouldn't you want it set to full-duplex, though?  Why and when do you use auto?  I always thought you set it to full-duplex to be safe.
Title: Re: Transfer speed on N5K
Post by: javentre on January 27, 2015, 08:18:38 PM
Hard-coding speed/duplex hasn't been a recommended practice for about 15 years.  Always use auto/auto where possible, especially at speeds >= 1 Gbps.
Title: Re: Transfer speed on N5K
Post by: fsck on January 27, 2015, 08:19:50 PM
Quote from: javentre on January 27, 2015, 08:18:38 PM
Hard-coding speed/duplex hasn't been a recommended practice for about 15 years.  Always use auto/auto where possible, especially at speeds >= 1 Gbps.
That I didn't know.  I guess I had it reversed; this is very good to know.  Thanks.
Title: Re: Transfer speed on N5K
Post by: icecream-guy on January 28, 2015, 07:27:40 AM
It's been in the standard for some time; look up Clause 28 of the 802.3u Fast Ethernet supplement to the IEEE 802.3 standard.  You should find it there.
Title: Re: Transfer speed on N5K
Post by: fsck on January 28, 2015, 12:09:03 PM
I configured the interface with 'duplex auto', but it still shows 'full-duplex, 10 Gb/s, media type is 10G' (I believe I had already done this after killabee mentioned it).  So how does one tell it's set incorrectly if the output doesn't distinguish between hard-coded full-duplex and auto-negotiated full-duplex?  I take it that it's showing full-duplex because that's what was negotiated with the other side; is that a correct assumption?  Do you think this could be the problem with the speeds?
Title: Re: Transfer speed on N5K
Post by: sgtcasey on January 28, 2015, 06:49:52 PM
Using UDP might give you a better idea of the total throughput without anything getting "in the way".  But increase the amount of UDP traffic you're trying to send.

iperf.exe -c <iperf server IP> -u -P 2 -i 1 -p 5001 -f m -b 200.0M -n 1000000000 -T 1

The above is only going to try to send ~200 Mbps.  Bump that up to something higher than 10 Gbps, since the link you have is running at 10 Gbps.  I use jperf to set up the testing and iperf as the server on the remote machine, so I'm not 100% sure of the command line, but see if this might work:

iperf.exe -c <iperf server IP> -u -P 2 -i 1 -p 5001 -f m -b 15.0G -n 1000000000 -T 1

An example of where I use this: today I wanted to test the total amount of traffic I could get through a new 150 Mbps link that was just turned over to my team.  I did the usual TCP testing to see throughput with QoS and crypto enabled and such.  Then I disabled QoS and crypto and just spammed UDP traffic across it at 300 Mbps (I usually double the bandwidth the provider set the link at) to see how much made it through.  I was getting right around 189 Mbps across it.
Title: Re: Transfer speed on N5K
Post by: fsck on January 28, 2015, 07:41:52 PM
Very good information here, sgtcasey, thanks for explaining.  I changed it to 15.0G and got the following:

Transfer               Bandwidth
31.3 MBytes       263 Mbits/sec
31.3 MBytes       263 Mbits/sec
62.6 MBytes       525 Mbits/sec
31.3 MBytes       263 Mbits/sec
31.3 MBytes       263 Mbits/sec
62.6 MBytes       525 Mbits/sec
31.3 MBytes       263 Mbits/sec
31.3 MBytes       263 Mbits/sec
62.6 MBytes       525 Mbits/sec
Sent 680723 datagrams
WARNING: did not receive ack of last datagram after 10 tries.
Title: Re: Transfer speed on N5K
Post by: sgtcasey on January 29, 2015, 04:54:18 PM
Okay, so I'm at my work laptop and decided to fire up jperf to see if you can even set the UDP bandwidth value to xx.xG, and the answer is no.  However, you can set the MBps value to get where you need.  Someone correct my math if it's wrong.

10 Gbps is 1250 MBps

iperf.exe -c <iperf server IP> -u -P 2 -i 1 -p 5001 -f m -b 1250.0M -n 1000000000 -T 1

However, keep in mind that if you're using a laptop or PC with a 1 Gbps NIC you wouldn't be able to push 10 Gbps anyway.  In fact, if you're running a Windows OS, jperf/iperf can rarely even get up to the full speed of your NIC.  I'm not sure if a Linux-based machine can.  The links I use jperf/iperf on are our WAN links, which are much smaller than the 10 Gbps+ you'll find in a data center.

I see your test seems to be completing very quickly.  If you want to run it longer, just increase the number of 0's in the -n value; that is the amount of data to transfer.  I usually set that number to 1,000,000,000, which is 1 GB of data.  That way it takes 30-120 seconds for my test to run, depending on the site and the pipe size.  That's enough time to get some good data.

I suppose if you set up several jperf/iperf servers/clients you could get them all to run at the same time to try and max out that 10 Gbps link.  :)
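(A single client can also push multiple parallel streams with iperf's -P flag, e.g. -P 8 instead of -P 2, which may be easier than coordinating several machines.)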
Title: Re: Transfer speed on N5K
Post by: fsck on January 29, 2015, 04:59:26 PM
We are using Intel 10 Gbps NICs on a couple of HP servers.  I believe your math is correct, 10,000 Mbps / 8 = 1250 MBps, so I'm definitely well below that and I have no idea what's causing it.  Here's what I've tried so far:

Swapped NICs
Swapped cables
Tested on different server
Tried enabling Jumbo frames
Tried disabling Jumbo frames
Changed to auto/auto on both ends
Verified policy on Nexus for jumbo(when used)
Checked for errors on the interface
Title: Re: Transfer speed on N5K
Post by: javentre on January 29, 2015, 05:46:17 PM
Quote from: sgtcasey on January 29, 2015, 04:54:18 PM
However, keep in mind that if you're using a laptop or PC with a 1 Gbps NIC you wouldn't be able to push 10 Gbps anyway.  In fact, if you're running a Windows OS, jperf/iperf can rarely even get up to the full speed of your NIC.

I routinely get 95%+ utilization from iperf on Windows with 1GE NICs.
Title: Re: Transfer speed on N5K
Post by: fsck on January 29, 2015, 05:50:04 PM
Quote from: javentre on January 29, 2015, 05:46:17 PM
Quote from: sgtcasey on January 29, 2015, 04:54:18 PM
However, keep in mind that if you're using a laptop or PC with a 1 Gbps NIC you wouldn't be able to push 10 Gbps anyway.  In fact, if you're running a Windows OS, jperf/iperf can rarely even get up to the full speed of your NIC.

I routinely get 95%+ utilization from iperf on Windows with 1GE NICs.
I'm just curious what others get using that test line, or any other iPerf test on an N5K.

The bouncing also makes me wonder what exactly is going on.  But at the same time I wonder if it's just not being pushed hard enough to max out the speeds, even though I set it to 1250M.
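(One thing I should double-check on my end: I believe iperf's -b value is a bit rate, so -b 1250.0M only asks for about 1.25 Gbit/s, not the full 10 Gbit/s; something like -b 10000M, or more parallel streams with -P, would push it harder.)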
Title: Re: Transfer speed on N5K
Post by: fsck on January 29, 2015, 05:56:34 PM
I take that back; the bouncing isn't happening when I run iperf.exe -c <iperf server IP> -u -P 2 -i 1 -p 5001 -f m -b 1250.0M -n 1000000000 -T 1

I get an average of 51.6 MBytes transferred per interval and 437 Mbit/sec of bandwidth.  That's just not what I expected to see on an N5K.
Title: Re: Transfer speed on N5K
Post by: burnyd on February 01, 2015, 07:30:04 AM
Try one more thing:

iperf.exe -c <iperf server IP> -u -P 2 -i 1 -p 5001 -f m -b 15.0G -n 1000000000 -T 1 -w 128kb

This will open up the TCP window size from the Windows default (8 KB, I believe) to 128 KB.  You might have to keep bumping that number up, but 128 KB is a good place to start.
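(You may also want to start the server side with a matching buffer, e.g. iperf -s -u -w 128k, so the receive side isn't stuck at its default socket buffer size.)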


Title: Re: Transfer speed on N5K
Post by: burnyd on February 01, 2015, 07:31:02 AM
To add to that: if you are not using FEXes, like you said, and you do not see pause frames, then the bandwidth is there for you to use.  I do not think you have an issue at the 5K level.
Title: Re: Transfer speed on N5K
Post by: javentre on February 01, 2015, 10:10:07 AM
Quote from: burnyd on February 01, 2015, 07:30:04 AM
but 128 KB is a good place to start.

That's far too low at 10GE link speeds.  A 10GE link can transmit 128 kb in about 13 microseconds.

That means for every 13 microseconds of transmit, you have a guaranteed idle time of 2 microseconds (assuming a 2 microsecond RTT for two devices connected to a 5K).  That's a pretty bad ratio of transmit to idle.
Title: Re: Transfer speed on N5K
Post by: fsck on February 02, 2015, 02:10:46 PM
So I went ahead and tried the new command burnyd recommended, and I got the following new results:

Transfer             Bandwidth
52.1 MBytes       437 Mbits/sec
52.1 MBytes       437 Mbits/sec
52.1 MBytes       437 Mbits/sec
53.0 MBytes       445 Mbits/sec
51.6 MBytes       433 Mbits/sec
52.1 MBytes       437 Mbits/sec

@javentre
What do you think it should be set to?  Should it be doubled or tripled?  I went ahead and increased it to see if that would change anything, but I still see the results above.  I don't have much background on these stats and the transmit options.  This is good stuff and I'm glad to be learning; this is a good place to learn from people who are very experienced.  Thank you.
Title: Re: Transfer speed on N5K
Post by: burnyd on February 02, 2015, 08:49:16 PM
You can always increase that window size; 128 KB is a good place to start, then keep going up.

Judging by your last transfer being in the 250 Mbit/s range, this one looks like it's close to doubled, in the 450 Mbit/s range?  Is that correct?

What servers and NICs are you using?  Are you going to be happy once you eventually see 800-ish?
Title: Re: Transfer speed on N5K
Post by: fsck on February 03, 2015, 07:07:22 PM
Quote from: burnyd on February 02, 2015, 08:49:16 PM
You can always increase that window size; 128 KB is a good place to start, then keep going up.

Judging by your last transfer being in the 250 Mbit/s range, this one looks like it's close to doubled, in the 450 Mbit/s range?  Is that correct?

What servers and NICs are you using?  Are you going to be happy once you eventually see 800-ish?
I changed it even higher and noticed that past 512 KB it showed the same result of 525 Mbits/sec.

We are using Intel Ethernet Converged Network Adapters X520-DA2 with Twinax cabling.

Yes, I would be very happy with 800-ish, as that would be about the max, give or take, correct?  The theoretical maximum is 1,250 MB/s, but of course we have to factor in TCP overhead.
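Working out the overhead part roughly (assuming standard 20-byte IP and 20-byte TCP headers with no options, plus 38 bytes of Ethernet framing, preamble, and inter-frame gap per packet):

1500-byte MTU:  1460 / 1538 ≈ 95% of line rate ≈ 1,185 MB/s
9000-byte MTU:  8960 / 9038 ≈ 99% of line rate ≈ 1,240 MB/s

so header and framing overhead alone only accounts for a few percent of the 1,250 MB/s.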