Nexus Outside the Datacenter

Started by Fred, May 14, 2015, 09:58:03 PM


NetworkGroover

Quote from: that1guy15 on May 26, 2015, 12:23:18 PM
Quote from: ScottF on May 26, 2015, 10:13:29 AM
Quote from: that1guy15 on May 15, 2015, 12:14:19 PM
VSS is running the campus core but the more I run it the less I like it.

Out of interest, why don't you like VSS?
Shared control-plane. Split-brain and failure scenarios with VSS can get pretty hairy.

Yep... the same reasons some folks don't like this or stacking tech.
Engineer by day, DJ by night, family first always

killabee

Quote from: ScottF on May 26, 2015, 10:13:29 AM
Quote from: that1guy15 on May 15, 2015, 12:14:19 PM
VSS is running the campus core but the more I run it the less I like it.

Out of interest, why don't you like VSS?

I used to be pro-VSS because of the single-management-plane selling point. I've since avoided it for the reasons that1guy15 mentions. To elaborate a bit more...

We have two campus core switches (6880s) running VSS. The campus distribution and DC core uplink to the campus core. Until recently we were hitting a bug where the VSL keepalives timed out. That caused the VSL links to fail, dual-active detection to kick in, and one of the VSS members to automatically shut down all of its interfaces. It didn't cause an outage because the other VSS member stayed up, but it opened my eyes to the fact that bugs can hit even systems designed to be highly available. And with such strange bugs... anything can happen!
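
For context, the dual-active detection side of this is only a couple of lines on a VSS pair. A minimal fast-hello sketch (the domain number and interface name here are hypothetical; it has to be a dedicated point-to-point link between the two chassis):

! Hypothetical domain number and interface -- fast-hello dual-active
! detection runs over a dedicated link between the two VSS chassis
switch virtual domain 100
 dual-active detection fast-hello
!
interface GigabitEthernet1/2/48
 dual-active fast-hello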

Some other reasons:
- Since VSS makes two devices act and behave as one, upgrading is more involved: you play a shuffling game of moving links/IPs over to the new switches. It isn't as straightforward as "replace switch A, now replace switch B."
- Weigh the pros and cons -- you get a single management and control plane, more backplane bandwidth, and you eliminate FHRP and STP considerations, but you now have to worry about VSS-specific caveats and failure/recovery scenarios.

killabee

Quote from: Fred on May 14, 2015, 09:58:03 PM
I really like VPC's.  Separate switches with separate control planes, but all the redundancy benefits and fast failover of VSS or a stack.

So why are Nexus switches relegated to the datacenter? Seems to me the 9Ks make excellent collapsed cores for smaller remote sites. Why wouldn't you do this?

Further, what are the reasons you wouldn't want to run FEXes at the campus (rather than datacenter) access layer? I know some of them (no PoE, no 100Mbps support), but what else?

The way I see it, Cisco positions their products for different places in the network (PINs), and those PINs dictate the product's lifecycle, roadmap, feature set, customer base, architecture, etc. With the Nexus currently positioned for the datacenter, you won't find the common stuff you typically find in the campus (e.g. dot1x).
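
To Fred's point, vPC keeps two fully independent control planes while still giving you multichassis port-channels. A minimal sketch on one member of a hypothetical Nexus pair (addresses and port numbers are made up; the peer mirrors it):

feature vpc
vpc domain 10
  peer-keepalive destination 192.168.1.2 source 192.168.1.1
!
interface port-channel1
  switchport mode trunk
  vpc peer-link
!
interface port-channel20
  switchport mode trunk
  vpc 20

Each box runs its own supervisor and its own config; only the vPC state is synchronized over the peer link.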

@burnyd/@that1guy15
So why are you guys against chassis-based platforms now?

NetworkGroover

Quote from: killabee on May 26, 2015, 06:04:53 PM
Quote from: Fred on May 14, 2015, 09:58:03 PM
I really like VPC's.  Separate switches with separate control planes, but all the redundancy benefits and fast failover of VSS or a stack.

So why are Nexus switches relegated to the datacenter? Seems to me the 9Ks make excellent collapsed cores for smaller remote sites. Why wouldn't you do this?

Further, what are the reasons you wouldn't want to run FEXes at the campus (rather than datacenter) access layer? I know some of them (no PoE, no 100Mbps support), but what else?

The way I see it, Cisco positions their products for different places in the network (PINs), and those PINs dictate the product's lifecycle, roadmap, feature set, customer base, architecture, etc. With the Nexus currently positioned for the datacenter, you won't find the common stuff you typically find in the campus (e.g. dot1x).

@burnyd/@that1guy15
So why are you guys against chassis-based platforms now?

Exactly.
Engineer by day, DJ by night, family first always

that1guy15

#34
I'm not totally against chassis-based deployments; they have their place. The big one for me: if you need additional licensing, it's more cost-effective to license the sup instead of a rack of switches. You also have features like ISSU and better fault tolerance.

Now for the N7K. I don't think I'll consider deploying the 7K again. All the features that made the 7K stand out have either been dropped into the 5/6K or the ASR. Want dense 10/40/100G? The N6K is very attractive at 4RU. Need more? Go with the N9K. Need Layer 3? The 5600 and 6K have it built in. With all those options, why go with the N7K unless you're supporting a massive DC -- but that's exactly where the N9K is positioning itself.

The only selling point I still have for the N7K is VDCs. 9Ks support them, but that's it for now (still true?). So if you already have 7Ks deployed and can't forklift to 9Ks, you might as well upgrade the sup and line cards.
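
For anyone who hasn't played with VDCs: they carve one chassis into logically separate switches, each managed like its own box. A minimal sketch, with a made-up VDC name and port range:

! From the default/admin VDC -- create a VDC and hand it some
! physical ports (name and interface range are made up)
vdc CORE
  allocate interface Ethernet1/1-8
! then jump into the new context and configure it like a separate switch
switchto vdc CORE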

So all you get with the N7K is a shit-ton of complexity, 1,001 line cards, and the endless puzzle of which sups they support. Code upgrades are a nightmare and take a fair amount of planning. Not fun. But, but what about the service cards that will come out in....

What am I missing or not thinking about, guys?
That1guy15
@that1guy_15
blog.movingonesandzeros.net

wintermute000

#35
Summed it up, pretty much.

And yeah, everyone round here hates VSS. Plenty of horror stories. If I get to call the shots, I'd avoid VSS if I could -- the creation of a single point of failure, and reliance on the Cisco software writers to do their jobs properly, versus good old battle-tested protocols like OSPF/EIGRP (on the WAN) and HSRP/VRRP (on the LAN)? I know which one I'd back, every single time. The same logic goes for a stack in smaller deployments, except stacks are usually less buggy.
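
Case in point: the battle-tested option is a handful of lines per box. A minimal HSRP sketch for the active distribution switch (VLAN and addressing are made up; the standby peer is identical minus the priority and preempt):

! Hypothetical addressing -- each switch keeps its own control plane,
! HSRP just floats the gateway IP between them
interface Vlan10
 ip address 10.10.10.2 255.255.255.0
 standby 10 ip 10.10.10.1
 standby 10 priority 110
 standby 10 preempt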

Another thing is that in campus deployments, LAN capacity is rarely an issue, i.e. you don't really need to get rid of STP, unlike in the DC.

icecream-guy

Quote from: that1guy15 on May 26, 2015, 08:55:31 PM

The only selling point I still have for the N7K is VDCs. 9Ks support them, but that's it for now (still true?).


No, the 9K still doesn't support dual-homed FEXes. I think that feature slipped to spring of 2016.
:professorcat:

My Moral Fibers have been cut.

that1guy15

Quote from: ristau5741 on May 27, 2015, 07:42:19 AM
Quote from: that1guy15 on May 26, 2015, 08:55:31 PM

The only selling point I still have for the N7K is VDCs. 9Ks support them, but that's it for now (still true?).


No, the 9K still doesn't support dual-homed FEXes. I think that feature slipped to spring of 2016.

I have not dug into the 9K extension modules and their design; I would assume they are a lot like the 5Ks and FEXes. I never liked the idea of collapsing the FEX up to the 7Ks -- the 10G ports on a 7K are hella expensive compared to a 5K/2K model.
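
For reference, dual-homing a FEX active-active off a 5K pair is basically just putting a vpc on the fabric port-channel. A rough sketch with made-up port/FEX numbers (the same config goes on both parent switches):

feature fex
fex 101
  pinning max-links 1
!
interface Ethernet1/7
  switchport mode fex-fabric
  fex associate 101
  channel-group 101
!
interface port-channel101
  switchport mode fex-fabric
  fex associate 101
  vpc 101

That last vpc line on the fabric port-channel is the piece in question on the 9K.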
That1guy15
@that1guy_15
blog.movingonesandzeros.net

Reggle

#38
Concerning VSS, I don't like that shared control plane either. I've seen an issue on one supervisor take down the entire cluster -- not the redundancy I was looking for.
Switch stacks have the same problem. The only positive is that this seems to have improved with more recent IOS releases.

@Ristau: are you sure about the N9K not supporting dual-homed FEX? I thought the data sheets say it's supported. I'm about to buy a pair here based on that functionality...

icecream-guy

Quote from: Reggle on May 28, 2015, 03:19:16 AM

@Ristau: are you sure about the N9K not supporting dual-homed FEX? I thought the data sheets say it's supported. I'm about to buy a pair here based on that functionality...

We did that. Yep -- specifically a pair of C9396PXs connecting an N2K-C2248TP-E-1GE, both switches running code 6.1(2)I3(2). On one of them the FEX status is simply Offline:

9396-01# show fex
  FEX         FEX           FEX                       FEX               
Number    Description      State            Model            Serial     
------------------------------------------------------------------------
101        fex-101               Offline   N2K-C2248TP-E-1GE   FOT12345678

9396-02# show fex
  FEX         FEX           FEX                       FEX               
Number    Description      State            Model            Serial     
------------------------------------------------------------------------
101        fex-101                Online   N2K-C2248TP-E-1GE   FOT12345678


9396-01# show fex 101 det
FEX: 101 Description: ent-fex-101   state: Offline
  FEX version: 6.1(2)I3(2) [Switch version: 6.1(2)I3(2)]
  FEX Interim version: 6.1(2)I3(2)
  Switch Interim version: 6.1(2)I3(2)
  Extender Model: N2K-C2248TP-E-1GE,  Extender Serial: FOT12345678
  Part No: 73-13671-01
  Card Id: 149, Mac Addr: xx:xx:xx:xx:xx:xx, Num Macs: 64
  Module Sw Gen: 21  [Switch Sw Gen: 21]
pinning-mode: static    Max-links: 1
  Fabric port for control traffic:
  Fabric interface state:
    Po101 - Interface Up. State: Active
    Eth1/7 - Interface Up. State: Active
:professorcat:

My Moral Fibers have been cut.

NetworkGroover

Quote from: that1guy15 on May 26, 2015, 08:55:31 PM
I'm not totally against chassis-based deployments; they have their place. The big one for me: if you need additional licensing, it's more cost-effective to license the sup instead of a rack of switches. You also have features like ISSU and better fault tolerance.

Now for the N7K. I don't think I'll consider deploying the 7K again. All the features that made the 7K stand out have either been dropped into the 5/6K or the ASR. Want dense 10/40/100G? The N6K is very attractive at 4RU. Need more? Go with the N9K. Need Layer 3? The 5600 and 6K have it built in. With all those options, why go with the N7K unless you're supporting a massive DC -- but that's exactly where the N9K is positioning itself.

The only selling point I still have for the N7K is VDCs. 9Ks support them, but that's it for now (still true?). So if you already have 7Ks deployed and can't forklift to 9Ks, you might as well upgrade the sup and line cards.

So all you get with the N7K is a shit-ton of complexity, 1,001 line cards, and the endless puzzle of which sups they support. Code upgrades are a nightmare and take a fair amount of planning. Not fun. But, but what about the service cards that will come out in....

What am I missing or not thinking about, guys?

What are you missing?  Possibly simplifying your life with another vendor?  :problem?:

It's terrible to see people making chassis vs. fixed-config switch decisions based on licensing and sup complexity. That's garbage.
Engineer by day, DJ by night, family first always

NetworkGroover

Quote from: wintermute000 on May 27, 2015, 05:26:12 AM
Summed it up, pretty much.

And yeah, everyone round here hates VSS. Plenty of horror stories. If I get to call the shots, I'd avoid VSS if I could -- the creation of a single point of failure, and reliance on the Cisco software writers to do their jobs properly, versus good old battle-tested protocols like OSPF/EIGRP (on the WAN) and HSRP/VRRP (on the LAN)? I know which one I'd back, every single time. The same logic goes for a stack in smaller deployments, except stacks are usually less buggy.

Another thing is that in campus deployments, LAN capacity is rarely an issue, i.e. you don't really need to get rid of STP, unlike in the DC.

+1
Engineer by day, DJ by night, family first always

that1guy15

Quote from: AspiringNetworker on May 28, 2015, 10:42:32 AM
Quote from: that1guy15 on May 26, 2015, 08:55:31 PM
I'm not totally against chassis-based deployments; they have their place. The big one for me: if you need additional licensing, it's more cost-effective to license the sup instead of a rack of switches. You also have features like ISSU and better fault tolerance.

Now for the N7K. I don't think I'll consider deploying the 7K again. All the features that made the 7K stand out have either been dropped into the 5/6K or the ASR. Want dense 10/40/100G? The N6K is very attractive at 4RU. Need more? Go with the N9K. Need Layer 3? The 5600 and 6K have it built in. With all those options, why go with the N7K unless you're supporting a massive DC -- but that's exactly where the N9K is positioning itself.

The only selling point I still have for the N7K is VDCs. 9Ks support them, but that's it for now (still true?). So if you already have 7Ks deployed and can't forklift to 9Ks, you might as well upgrade the sup and line cards.

So all you get with the N7K is a shit-ton of complexity, 1,001 line cards, and the endless puzzle of which sups they support. Code upgrades are a nightmare and take a fair amount of planning. Not fun. But, but what about the service cards that will come out in....

What am I missing or not thinking about, guys?

What are you missing?  Possibly simplifying your life with another vendor?  :problem?:

It's terrible to see people making chassis vs. fixed-config switch decisions based on licensing and sup complexity. That's garbage.

Won't SDN solve all this!?!
That1guy15
@that1guy_15
blog.movingonesandzeros.net

NetworkGroover

#43
Quote
Won't SDN solve all this!?!

Heh, the pain point you're feeling here is at a lower level. You shouldn't be asking how SDN can fix that - you should be asking your vendor to manage their products better.
Engineer by day, DJ by night, family first always

that1guy15

Quote from: AspiringNetworker on May 28, 2015, 02:42:02 PM
Quote
Won't SDN solve all this!?!

Heh, the pain point you're feeling here is at a lower level. You shouldn't be asking how SDN can fix that - you should be asking your vendor to manage their products better.
Hehe. No, I'm pretty sure I just need to buy some SDN. Three, maybe four SDNs will do. Not sure how or who is selling SDN, but everyone tells me it's gonna change everything :)
That1guy15
@that1guy_15
blog.movingonesandzeros.net