Networking-Forums.com

Professional Discussions => Routing and Switching => Topic started by: Fred on May 14, 2015, 09:58:03 PM

Title: Nexus Outside the Datacenter
Post by: Fred on May 14, 2015, 09:58:03 PM
I really like VPC's.  Separate switches with separate control planes, but all the redundancy benefits and fast failover of VSS or a stack.

So why are nexus switches relegated to the datacenter? Seems to me the 9k's make excellent collapsed cores for smaller remote sites. Why wouldn't you do this?
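
For reference, this is roughly what I mean on a pair of 9Ks - a minimal vPC sketch (domain ID, keepalive addresses, and interface numbers are all made up, and it's mirrored on the peer with the keepalive source/destination swapped):

feature vpc
feature lacp
!
vpc domain 10
  peer-keepalive destination 192.168.1.2 source 192.168.1.1
!
interface port-channel1
  switchport mode trunk
  vpc peer-link
!
interface port-channel20
  switchport mode trunk
  vpc 20
!
interface Ethernet1/20
  channel-group 20 mode active

The downstream switch just sees one port-channel split across both boxes, but each Nexus keeps its own control plane.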

Further, what are the reasons you wouldn't want to run FEX's at the campus (rather than datacenter) access layer?  I know some of them (no POE, no 100Mbps support), but what else?
Title: Re: Nexus Outside the Datacenter
Post by: Reggle on May 15, 2015, 03:58:06 AM
You just named them: PoE and limited 100 Mbps support.
Usually price is a factor as well, although I haven't checked that lately.
And yes, with the 9K's in the game now price is probably less of an issue.

One other thing I can think of is the lack of some security features, 802.1x being the biggest concern here.

I'm interested in what others have to say as well.
Title: Re: Nexus Outside the Datacenter
Post by: deanwebb on May 15, 2015, 08:37:45 AM
No dot1x on the 9K? I wonder why, as dot1x is central to Cisco's security strategy.
Title: Re: Nexus Outside the Datacenter
Post by: Otanx on May 15, 2015, 09:29:13 AM
Look at the 6800ia switch. This is basically a FEX for the 6800 and 6500 Sup2T chassis. My understanding is these are geared to access and campus deployments.

-Otanx
Title: Re: Nexus Outside the Datacenter
Post by: NetworkGroover on May 15, 2015, 09:58:05 AM
Quote from: deanwebb on May 15, 2015, 08:37:45 AM
No dot1x on the 9K? I wonder why, as dot1x is central to Cisco's security strategy.

Likely because the 9k was designed for the data center - and there isn't a huge need for 802.1x in the data center that I'm aware of.  It's more of a campus network requirement.
Title: Re: Nexus Outside the Datacenter
Post by: deanwebb on May 15, 2015, 10:05:05 AM
Quote from: AspiringNetworker on May 15, 2015, 09:58:05 AM
Quote from: deanwebb on May 15, 2015, 08:37:45 AM
No dot1x on the 9K? I wonder why, as dot1x is central to Cisco's security strategy.

Likely because the 9k was designed for the data center - and there isn't a huge need for 802.1x in the data center that I'm aware of.  It's more of a campus network requirement.
Ah, so there's no push to have these as collapsed cores for remote sites, in spite of the enticements of the price point... and, yes, you don't want dot1x in the datacenter. You want one MAC address assigned to each port and alarms to sound and troops to deploy as soon as there's an unscheduled media disconnect event.
Title: Re: Nexus Outside the Datacenter
Post by: NetworkGroover on May 15, 2015, 10:49:40 AM
Quote from: deanwebb on May 15, 2015, 10:05:05 AM
Quote from: AspiringNetworker on May 15, 2015, 09:58:05 AM
Quote from: deanwebb on May 15, 2015, 08:37:45 AM
No dot1x on the 9K? I wonder why, as dot1x is central to Cisco's security strategy.

Likely because the 9k was designed for the data center - and there isn't a huge need for 802.1x in the data center that I'm aware of.  It's more of a campus network requirement.
Ah, so there's no push to have these as collapsed cores for remote sites, in spite of the enticements of the price point... and, yes, you don't want dot1x in the datacenter. You want one MAC address assigned to each port and alarms to sound and troops to deploy as soon as there's an unscheduled media disconnect event.

Sorry, define "collapsed core" as you mean it here. If you wanted to collapse in the data center or campus, usually you'd collapse core to distro/spine, but not access - and access is where you'd usually have dot1x if at all, right?  As for assigning MACs to ports - no? That's still basic dot1x right? Again, as far as I know, not too many folks (one that I know of has asked about it) are doing that (could easily be wrong here and feel free to correct me if I'm wrong) in the data center.  In the campus it's not uncommon, but in a controlled data center, if you have people plugging in unauthorized devices - you've got problems.

EDIT - Plus, think about it. I'm very rusty on my dot1x, but if I have an ESXi host with a ton of VMs, how do I handle that?  Then take into account you may have 10s or 100s of leaf switches... not very scalable I'd imagine.
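
For context, campus-style dot1x is a per-access-port thing - something like this on an IOS access switch (a rough sketch from memory; the RADIUS server name, address, key, and interface are placeholders):

aaa new-model
aaa authentication dot1x default group radius
radius server ISE-1
 address ipv4 10.1.1.10 auth-port 1812 acct-port 1813
 key SomeSharedSecret
dot1x system-auth-control
!
interface GigabitEthernet1/0/10
 switchport mode access
 authentication host-mode multi-auth
 authentication port-control auto
 dot1x pae authenticator

That model assumes one supplicant (or a small handful) per port - a trunk to an ESXi host with hundreds of VM MACs behind it doesn't map onto it well.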
Title: Re: Nexus Outside the Datacenter
Post by: packetherder on May 15, 2015, 10:55:03 AM
10GbE (hell, the 40GbE they throw in for free too) density is hard to beat on the 9Ks. But bpdu-guard can't be disabled on FEX ports, which is great in theory, but I've always had to deal with exceptions to it.

Like Otanx said, the 6800ia's are Cisco's answer to campus FEXs. They can handle 20 of them, which, I think, is more than the 9Ks. I've heard that doubling that number is on the roadmap, but last I checked it was still 20. But yeah, you're stuck with VSS on that platform.
Title: Re: Nexus Outside the Datacenter
Post by: that1guy15 on May 15, 2015, 12:14:19 PM
I laughed when I read this title. Not because it's a stupid question but because my predecessors purchased two Nexus 7010s a while back to replace the campus core with this cool new vPC stuff.

Needless to say they dropped the network for 3 days! 3 F'n days due to loops and incorrect configuration on the 7Ks. They also proved too incompetent to trace loops and ended up ripping every cable out of every switch in the DC around day 2. Then realized they had no clue how to plug it all back in...

But the plus side is they got shit-canned and I got a new job and almost $1M to rearchitect the Campus and DC. The 7Ks are now nice and happy running the DC Core! VSS is running the campus core but the more I run it the less I like it.
Title: Re: Nexus Outside the Datacenter
Post by: NetworkGroover on May 15, 2015, 03:55:57 PM
Quote from: that1guy15 on May 15, 2015, 12:14:19 PM
I laughed when I read this title. Not because it's a stupid question but because my predecessors purchased two Nexus 7010s a while back to replace the campus core with this cool new vPC stuff.

Needless to say they dropped the network for 3 days! 3 F'n days due to loops and incorrect configuration on the 7Ks. They also proved too incompetent to trace loops and ended up ripping every cable out of every switch in the DC around day 2. Then realized they had no clue how to plug it all back in...

But the plus side is they got shit-canned and I got a new job and almost $1M to rearchitect the Campus and DC. The 7Ks are now nice and happy running the DC Core! VSS is running the campus core but the more I run it the less I like it.

:wtf: Good lord that's terrible! One for putting in gear they didn't fully know how to operate, and two for ripping out cable that A) apparently wasn't previously labeled and B) wasn't labeled as they ripped it out.

There's like... no words for that epic fail.  Besides.. epic fail.
Title: Re: Nexus Outside the Datacenter
Post by: that1guy15 on May 15, 2015, 08:20:14 PM
Yeah it's pretty crazy. They were given a large chunk of money to spend on a refresh, so my best guess is they went with the biggest, baddest box at the time. They also made sure to spend every dime of it. We have a number of 6509s fully populated with 8x10G cards which are fully populated with X2s! Not just SX either, a fair amount of LH SM! I also have about 150-200 10G optics (SX and LH) in spares from what they built out. It's nuts!

And the fiber. I'm not bitching about this one either. I have three main buildings about 200-300 yards apart, all in a triangle of 48-count single mode buried fiber! It's all mine, all mine I tell yah!
Title: Re: Nexus Outside the Datacenter
Post by: burnyd on May 15, 2015, 10:43:48 PM
The first time you try to configure QoS on these boxes for a remote site, you'll figure out why right then.  They're made for large-buffer data center traffic and don't have the feature-rich QoS you'd find in a remote office box.
Title: Re: Nexus Outside the Datacenter
Post by: NetworkGroover on May 16, 2015, 04:58:14 PM
Quote from: burnyd on May 15, 2015, 10:43:48 PM
The first time you try to configure QoS on these boxes for a remote site, you'll figure out why right then.  They're made for large-buffer data center traffic and don't have the feature-rich QoS you'd find in a remote office box.

Good point - didn't even think about that.  That's also something rarely asked for in my experience.  Usually in the data centers I work in, bandwidth isn't an issue so there's no need for QoS really... hard as that may be for some people to believe.  3:1 to 5:1 oversubscription between spine and leaf layers is usually OK for most general DC application stuff, with better ratios needed for applications like Big Data, HPC, and IP Storage.  QoS never really enters those conversations (to my knowledge, anyway).
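
To put rough numbers on that (a hypothetical leaf, just for illustration): 48 x 10G host ports is 480G facing the servers, so 4 x 40G uplinks (160G) works out to 3:1, and 6 x 40G uplinks (240G) gets you down to 2:1.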
Title: Re: Nexus Outside the Datacenter
Post by: NetworkGroover on May 16, 2015, 04:59:38 PM
Quote from: that1guy15 on May 15, 2015, 08:20:14 PM
Yeah it's pretty crazy. They were given a large chunk of money to spend on a refresh, so my best guess is they went with the biggest, baddest box at the time. They also made sure to spend every dime of it. We have a number of 6509s fully populated with 8x10G cards which are fully populated with X2s! Not just SX either, a fair amount of LH SM! I also have about 150-200 10G optics (SX and LH) in spares from what they built out. It's nuts!

And the fiber. I'm not bitching about this one either. I have three main buildings about 200-300 yards apart, all in a triangle of 48-count single mode buried fiber! It's all mine, all mine I tell yah!

Haha - nice.  At least there's some silver lining to that mess. ;)
Title: Re: Nexus Outside the Datacenter
Post by: that1guy15 on May 16, 2015, 05:06:27 PM
Quote from: AspiringNetworker on May 16, 2015, 04:58:14 PM
Quote from: burnyd on May 15, 2015, 10:43:48 PM
The first time you try to configure QoS on these boxes for a remote site, you'll figure out why right then.  They're made for large-buffer data center traffic and don't have the feature-rich QoS you'd find in a remote office box.

Good point - didn't even think about that.  That's also something rarely asked for in my experience.  Usually in the data centers I work in, bandwidth isn't an issue so there's no need for QoS really... hard as that may be for some people to believe.  3:1 to 5:1 oversubscription between spine and leaf layers is usually OK for most general DC application stuff, with better ratios needed for applications like Big Data, HPC, and IP Storage.  QoS never really enters those conversations (to my knowledge, anyway).
I will propose that any chassis-based device will have a nightmare of a QoS setup. Hardware plays a huge role with QoS, then add on the mix and match of a chassis.
Title: Re: Nexus Outside the Datacenter
Post by: burnyd on May 16, 2015, 10:14:09 PM
I'm phasing out any chassis-based device in the DC anyways.  I just don't have a need for one anymore; everything is highly populated 40Gb/10Gb these days.
Title: Re: Nexus Outside the Datacenter
Post by: that1guy15 on May 16, 2015, 10:52:58 PM
Quote from: burnyd on May 16, 2015, 10:14:09 PM
I'm phasing out any chassis-based device in the DC anyways.  I just don't have a need for one anymore; everything is highly populated 40Gb/10Gb these days.
Yep, just had this conversation with the boss the other day.  I'd rather build out on 5600 or 6K if we stayed Nexus. But I am interested in dropping the 2Ks and going full-on 9K and ACI.

But being a smaller shop we could easily go with a Big Switch or Nuage networks deployment and make it work.

Not sure I'll be around to see the 7Ks retire...
Title: Re: Nexus Outside the Datacenter
Post by: NetworkGroover on May 17, 2015, 02:16:57 AM
Quote from: that1guy15 on May 16, 2015, 10:52:58 PM
Quote from: burnyd on May 16, 2015, 10:14:09 PM
I'm phasing out any chassis-based device in the DC anyways.  I just don't have a need for one anymore; everything is highly populated 40Gb/10Gb these days.
Yep, just had this conversation with the boss the other day.  I'd rather build out on 5600 or 6K if we stayed Nexus. But I am interested in dropping the 2Ks and going full-on 9K and ACI.

But being a smaller shop we could easily go with a Big Switch or Nuage networks deployment and make it work.

Not sure I'll be around to see the 7Ks retire...

ACI - ha, yeah I'd like to see that.
Title: Re: Nexus Outside the Datacenter
Post by: burnyd on May 17, 2015, 06:50:08 AM
lol aci
Title: Re: Nexus Outside the Datacenter
Post by: deanwebb on May 17, 2015, 08:15:49 AM
Cisco is really pushing for us to get ACI where I'm at, since we're basically all Nexus, Vblock, and stuff like that in the DC. I'm supposed to have an ACI meeting next week, but I think that I would rather spend the two hours working on NAC issues...
Title: Re: Nexus Outside the Datacenter
Post by: burnyd on May 17, 2015, 08:51:20 PM
Dean, as a security person there are some benefits.  Everything has to have a policy in order for anything to talk.  However, if microsegmentation is your goal, I'm going to tell you NSX is the better way to go.
Title: Re: Nexus Outside the Datacenter
Post by: NetworkGroover on May 17, 2015, 10:47:21 PM
Quote from: deanwebb on May 17, 2015, 08:15:49 AM
Cisco is really pushing for us to get ACI where I'm at, since we're basically all Nexus, Vblock, and stuff like that in the DC. I'm supposed to have an ACI meeting next week, but I think that I would rather spend the two hours working on NAC issues...

Want a fun conversation?  Ask for customer references.  Ask what to do with your current Nexus gear, because if it doesn't have the custom ACI ASICs, it can't play in the ACI fabric.  You HAVE to have the ACI ASICs (the ACI Spine Engine, or "ASE," and the ACI Leaf Engine, or "ALE") in order for the ACI policy model to function - and at BOTH the leaf and spine layers (the spine also has to be able to read the eVXLAN metadata).  Ask your application guys if their apps fall into the pretty 3-tier model that ACI forces you to abide by.

I researched this pretty heavily when it was first coming out... but now when asked about it I just kind of laugh.. not trying to be a troll about it.. it's just how it is. Though - I don't really get asked about it anymore...


EDIT - Most importantly, I would LOVE to hear some real feedback - I've gotten little to none. 

Basically, make sure your entire networking/server/db/etc. team is there when you have that conversation.

EDIT 2 - Wait a minute - are we still talking about outside the data center?  I seriously hope when talking about ACI you're talking about inside the data center... though that would be quite entertaining to know if Cisco's trying to sell ACI to the campus/enterprise...
Title: Re: Nexus Outside the Datacenter
Post by: icecream-guy on May 18, 2015, 07:59:54 AM
Cisco has a nice small ACI starter kit; I think it starts at about 250K.  We've got one in procurement now, hopefully I get the project. The important thing is to know all your application flows, as you will need to create groups of things and rules for those things to communicate with other things.  So if your web tier has a back-end DB, you'll need a rule to allow users to the web site, a rule to allow the web server to communicate with the DB server, a rule for the DB server to communicate with the web server, and a rule for the web server to communicate with the customers.  You'll also need to take into account DNS, NTP, etc., because nothing talks unless it's part of a group.  Luckily similar things can be bundled and put into the same group.
ACI is very similar to a non-stateful firewall.
Title: Re: Nexus Outside the Datacenter
Post by: that1guy15 on May 18, 2015, 08:49:21 AM
I'm still talking DC. How about we split this side discussion off into its own thread so as not to dilute the original question?
Title: Re: Nexus Outside the Datacenter
Post by: NetworkGroover on May 18, 2015, 11:11:13 AM
That was towards Dean, but yeah sorry - I'm going down a rabbit hole here.
Title: Re: Nexus Outside the Datacenter
Post by: burnyd on May 18, 2015, 02:58:57 PM
Quote from: ristau5741 on May 18, 2015, 07:59:54 AM
Cisco has a nice small ACI starter kit; I think it starts at about 250K.  We've got one in procurement now, hopefully I get the project. The important thing is to know all your application flows, as you will need to create groups of things and rules for those things to communicate with other things.  So if your web tier has a back-end DB, you'll need a rule to allow users to the web site, a rule to allow the web server to communicate with the DB server, a rule for the DB server to communicate with the web server, and a rule for the web server to communicate with the customers.  You'll also need to take into account DNS, NTP, etc., because nothing talks unless it's part of a group.  Luckily similar things can be bundled and put into the same group.
ACI is very similar to a non-stateful firewall.

:zomgwtfbbq:

roflcopter!
Title: Re: Nexus Outside the Datacenter
Post by: that1guy15 on May 18, 2015, 03:28:52 PM
Quote from: burnyd on May 18, 2015, 02:58:57 PM
Quote from: ristau5741 on May 18, 2015, 07:59:54 AM
Cisco has a nice small ACI starter kit; I think it starts at about 250K.  We've got one in procurement now, hopefully I get the project. The important thing is to know all your application flows, as you will need to create groups of things and rules for those things to communicate with other things.  So if your web tier has a back-end DB, you'll need a rule to allow users to the web site, a rule to allow the web server to communicate with the DB server, a rule for the DB server to communicate with the web server, and a rule for the web server to communicate with the customers.  You'll also need to take into account DNS, NTP, etc., because nothing talks unless it's part of a group.  Luckily similar things can be bundled and put into the same group.
ACI is very similar to a non-stateful firewall.

:zomgwtfbbq:

roflcopter!

Are you still doing your ACI sessions at CLUS? What have you seen lately that is pushing you away from ACI? I don't think I can keep up with you and your DC trends anymore :) Hell, I still want to play around with FabricPath and OTV... Maybe I'll learn this new-fangled VXLAN stuff one of these days.
Title: Re: Nexus Outside the Datacenter
Post by: burnyd on May 18, 2015, 06:20:09 PM
Well, like in your post... "Hey, these 7Ks/5Ks and all the other stuff you purchased - go ahead and use them as cores or something.  They do not have our ACI fabric in them, so they are not compatible."  I'm just not a fan of a product that needs specific hardware to function properly.  The application endpoint groups (and the other one, I can't remember what it's called) are the right idea, but it just strikes me wrong.  So as you scale out your DC fabric you absolutely need 9Ks.

The hypervisor method makes so much more sense to me because you can run it on anything and use the underlay simply as a fabric - any network devices you want.  Plus the price is aaalllootttt cheaper and multi-tenancy is easier.

But yes, I will be at the ACI class just to check it out.  I have zero intentions of running it I just want the deep dive for learning purposes.
Title: Re: Nexus Outside the Datacenter
Post by: ScottF on May 26, 2015, 10:13:29 AM
Quote from: that1guy15 on May 15, 2015, 12:14:19 PM
VSS is running the campus core but the more I run it the less I like it.

Out of interest, why don't you like VSS?
Title: Re: Nexus Outside the Datacenter
Post by: that1guy15 on May 26, 2015, 12:23:18 PM
Quote from: ScottF on May 26, 2015, 10:13:29 AM
Quote from: that1guy15 on May 15, 2015, 12:14:19 PM
VSS is running the campus core but the more I run it the less I like it.

Out of interest, why don't you like VSS?
Shared control plane. Split-brain and failure scenarios with VSS can get pretty hairy.
Title: Re: Nexus Outside the Datacenter
Post by: NetworkGroover on May 26, 2015, 01:09:03 PM
Quote from: that1guy15 on May 26, 2015, 12:23:18 PM
Quote from: ScottF on May 26, 2015, 10:13:29 AM
Quote from: that1guy15 on May 15, 2015, 12:14:19 PM
VSS is running the campus core but the more I run it the less I like it.

Out of interest, why don't you like VSS?
Shared control plane. Split-brain and failure scenarios with VSS can get pretty hairy.

Yep... the same reasons some folks don't like this or stacking tech.
Title: Re: Nexus Outside the Datacenter
Post by: killabee on May 26, 2015, 05:47:19 PM
Quote from: ScottF on May 26, 2015, 10:13:29 AM
Quote from: that1guy15 on May 15, 2015, 12:14:19 PM
VSS is running the campus core but the more I run it the less I like it.

Out of interest, why don't you like VSS?

I used to be pro-VSS because of the single management plane selling point.  I've since avoided it for the reasons that1guy15 mentions.  To elaborate a bit more...

We have two campus core switches (6880s) running VSS.  The campus distro and DC core uplink to the campus core.  Up until recently we were hitting a bug where the VSL keepalives were timing out.  This caused the VSL links to fail, dual-active detection to kick in, and forced one of the VSS members to automatically shut down all of its interfaces.  This didn't cause an outage because the other VSS member stayed up, but it opened my eyes to seeing that bugs can hit even systems that are designed to be highly available.  And with such strange bugs (http://www.networking-forums.com/index.php?topic=326.msg3388)...anything can happen!
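
For reference, the dual-active detection piece that kicked in here is configured roughly like this (a minimal sketch of fast-hello over a dedicated link between the two chassis - the domain ID and interface number are made up):

switch virtual domain 100
 dual-active detection fast-hello
!
interface GigabitEthernet1/5/48
 dual-active fast-hello

Enhanced PAgP through a downstream switch is the other common detection option.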

Some other reasons:
-Since VSS makes devices act and behave as one, upgrading would be more involved as you played a shuffling game of moving links/IPs around to the new switches.  It wouldn't be as straightforward as "replace switch A, now replace switch B"
-Weigh the pros and cons -- You get a single management and control plane, more backplane bandwidth, and you eliminate FHRP and STP considerations, but now you have to worry about this (http://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst6500/ios/12-2SX/configuration/guide/book/vss.html#wp1063718) and the appropriate caveats and failure/recovery scenarios
Title: Re: Nexus Outside the Datacenter
Post by: killabee on May 26, 2015, 06:04:53 PM
Quote from: Fred on May 14, 2015, 09:58:03 PM
I really like VPC's.  Separate switches with separate control planes, but all the redundancy benefits and fast failover of VSS or a stack.

So why are nexus switches relegated to the datacenter? Seems to me the 9k's make excellent collapsed cores for smaller remote sites. Why wouldn't you do this?

Further, what are the reasons you wouldn't want to run FEX's at the campus (rather than datacenter) access layer?  I know some of them (no POE, no 100Mbps support), but what else?

The way I see it is Cisco positions their products for different places in the network (PIN), and those PINs dictate the product's lifecycle, roadmap, feature set, customer base, architecture, etc.  With the Nexus currently positioned for the datacenter, you won't find the common stuff you typically find in the campus (e.g. dot1x, etc). 

@burnyd/@that1guy15
So why are you guys against chassis-based platforms now?
Title: Re: Nexus Outside the Datacenter
Post by: NetworkGroover on May 26, 2015, 06:06:12 PM
Quote from: killabee on May 26, 2015, 06:04:53 PM
Quote from: Fred on May 14, 2015, 09:58:03 PM
I really like VPC's.  Separate switches with separate control planes, but all the redundancy benefits and fast failover of VSS or a stack.

So why are nexus switches relegated to the datacenter? Seems to me the 9k's make excellent collapsed cores for smaller remote sites. Why wouldn't you do this?

Further, what are the reasons you wouldn't want to run FEX's at the campus (rather than datacenter) access layer?  I know some of them (no POE, no 100Mbps support), but what else?

The way I see it is Cisco positions their products for different places in the network (PIN), and those PINs dictate the product's lifecycle, roadmap, feature set, customer base, architecture, etc.  With the Nexus currently positioned for the datacenter, you won't find the common stuff you typically find in the campus (e.g. dot1x, etc). 

@burnyd/@that1guy15
So why are you guys against chassis-based platforms now?

Exactly.
Title: Re: Nexus Outside the Datacenter
Post by: that1guy15 on May 26, 2015, 08:55:31 PM
I'm not totally against chassis-based deployments; they have their place. A big one for me is if you need additional licensing, then it's more effective to license the sup instead of a rack of switches. You also have features like ISSU and better fault tolerance.

Now for the N7K. I don't think I'll consider deploying the 7K again. All the features that made the 7K stand out have either been dropped into the 5/6K or the ASR. You want dense 10/40/100G? Then the N6K is very attractive for 4U. Need more? Then go with the N9K. Need Layer 3? The 5600 and 6K have it built in. With all those options why go with the N7K, unless you are running a massive DC - but that is where the N9K is positioning itself.

The only selling point I still have for the N7K are VDCs. 9Ks support them but that's it for now (still true?). So if you already have 7Ks deployed and can't fork-lift for 9Ks then you might as well upgrade the SUP and line cards.
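
For anyone who hasn't touched them, carving a 7K into VDCs looks roughly like this (a minimal sketch - the VDC name, interface range, and resource limits are made up):

vdc CAMPUS-CORE id 2
  allocate interface Ethernet3/1-8
  limit-resource vlan minimum 16 maximum 4094
! then from the default VDC:
switchto vdc CAMPUS-CORE

Each VDC gets its own config, processes, and admin separation on the same sheet metal, which is the main draw.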

So all you get with the N7K is a shit-ton of complexity, 1,001 line cards and the endless puzzle of which sups they support. Code upgrades are a nightmare and take a fair amount of planning. Not fun. But, but what about the service cards that will come out in....

What am I missing or not thinking about guys?
Title: Re: Nexus Outside the Datacenter
Post by: wintermute000 on May 27, 2015, 05:26:12 AM
summed it up pretty much

and yeah, everyone round here hates VSS. Plenty of horror stories. If I get to call the shots, I'd avoid VSS if I could - the creation of a single point of failure and reliance on the Cisco software writers to do their jobs properly vs good old, battle-tested protocols like OSPF/EIGRP (on the WAN) and HSRP/VRRP (on the LAN) - I know which one I'd back, every single time. Same logic goes for a stack for smaller deployments, except stacks are usually less buggy

Another thing is that in campus deployments, LAN capacity is rarely an issue, i.e. you don't really need to get rid of STP, unlike in the DC.



Title: Re: Nexus Outside the Datacenter
Post by: icecream-guy on May 27, 2015, 07:42:19 AM
Quote from: that1guy15 on May 26, 2015, 08:55:31 PM

The only selling point I still have for the N7K are VDCs. 9Ks support them but that's it for now (still true?).


No, the 9K still doesn't support dual-homed FEXs.  I think that feature slipped to spring of 2016.
Title: Re: Nexus Outside the Datacenter
Post by: that1guy15 on May 27, 2015, 08:05:54 AM
Quote from: ristau5741 on May 27, 2015, 07:42:19 AM
Quote from: that1guy15 on May 26, 2015, 08:55:31 PM

The only selling point I still have for the N7K are VDCs. 9Ks support them but that's it for now (still true?).


No, the 9K still doesn't support dual-homed FEXs.  I think that feature slipped to spring of 2016.

I have not dug into the 9K extension modules and their design. I would assume they are a lot like 5Ks and FEXs. I never liked the idea of collapsing the FEX up to the 7Ks. The 10G ports on a 7K are hella expensive compared to a 5K/2K model.
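
For comparison, hanging a 2K off a 5K parent is just fabric ports bundled into a port-channel, roughly like this (a minimal sketch - FEX and interface numbers are made up; the vpc line only applies if the FEX is dual-homed to two parents):

feature fex
!
interface port-channel101
  switchport mode fex-fabric
  fex associate 101
  vpc 101
!
interface Ethernet1/7-8
  switchport mode fex-fabric
  fex associate 101
  channel-group 101

The FEX host ports then show up on the parent as Ethernet101/1/x, which is part of why the per-port cost math favors 5K/2K over 7K line cards.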
Title: Re: Nexus Outside the Datacenter
Post by: Reggle on May 28, 2015, 03:19:16 AM
Concerning VSS, I don't like that shared control plane either. I've seen an issue on one supervisor take down the entire cluster. Not the redundancy I was looking for.
Switch stacks have that too. The only positive point is that this seems to have improved with more recent IOS releases.

@Ristau: you're sure about the N9K not supporting dual-homed FEX? I thought the data sheets say it's supported. I'm about to buy a pair here based on that functionality...
Title: Re: Nexus Outside the Datacenter
Post by: icecream-guy on May 28, 2015, 08:16:03 AM
Quote from: Reggle on May 28, 2015, 03:19:16 AM

@Ristau: you're sure about the N9K not supporting dual-homed FEX? I thought the data sheets say it's supported. I'm about to buy a pair here based on that functionality...

We dood that.  Yep - specifically a pair of C9396PX connecting an N2K-C2248TP-E-1GE; the switches are running code 6.1(2)I3(2), and on one of them the FEX status is simply Offline:

9396-01# show fex
  FEX         FEX           FEX                       FEX               
Number    Description      State            Model            Serial     
------------------------------------------------------------------------
101        fex-101               Offline   N2K-C2248TP-E-1GE   FOT12345678

9396-02# show fex
  FEX         FEX           FEX                       FEX               
Number    Description      State            Model            Serial     
------------------------------------------------------------------------
101        fex-101                Online   N2K-C2248TP-E-1GE   FOT12345678


9396-01# show fex 101 det
FEX: 101 Description: ent-fex-101   state: Offline
  FEX version: 6.1(2)I3(2) [Switch version: 6.1(2)I3(2)]
  FEX Interim version: 6.1(2)I3(2)
  Switch Interim version: 6.1(2)I3(2)
  Extender Model: N2K-C2248TP-E-1GE,  Extender Serial: FOT12345678
  Part No: 73-13671-01
  Card Id: 149, Mac Addr: xx:xx:xx:xx:xx:xx, Num Macs: 64
  Module Sw Gen: 21  [Switch Sw Gen: 21]
pinning-mode: static    Max-links: 1
  Fabric port for control traffic:
  Fabric interface state:
    Po101 - Interface Up. State: Active
    Eth1/7 - Interface Up. State: Active
Title: Re: Nexus Outside the Datacenter
Post by: NetworkGroover on May 28, 2015, 10:42:32 AM
Quote from: that1guy15 on May 26, 2015, 08:55:31 PM
I'm not totally against chassis-based deployments; they have their place. A big one for me is if you need additional licensing, then it's more effective to license the sup instead of a rack of switches. You also have features like ISSU and better fault tolerance.

Now for the N7K. I don't think I'll consider deploying the 7K again. All the features that made the 7K stand out have either been dropped into the 5/6K or the ASR. You want dense 10/40/100G? Then the N6K is very attractive for 4U. Need more? Then go with the N9K. Need Layer 3? The 5600 and 6K have it built in. With all those options why go with the N7K, unless you are running a massive DC - but that is where the N9K is positioning itself.

The only selling point I still have for the N7K are VDCs. 9Ks support them but that's it for now (still true?). So if you already have 7Ks deployed and can't fork-lift for 9Ks then you might as well upgrade the SUP and line cards.

So all you get with the N7K is a shit-ton of complexity, 1,001 line cards and the endless puzzle of which sups they support. Code upgrades are a nightmare and take a fair amount of planning. Not fun. But, but what about the service cards that will come out in....

What am I missing or not thinking about guys?

What are you missing?  Possibly simplifying your life with another vendor?  :problem?:

It's terrible to see people making chassis vs. fixed config switch decisions based on licensing and sup complexity.  That's garbage.
Title: Re: Nexus Outside the Datacenter
Post by: NetworkGroover on May 28, 2015, 10:44:09 AM
Quote from: wintermute000 on May 27, 2015, 05:26:12 AM
summed it up pretty much

and yeah, everyone round here hates VSS. Plenty of horror stories. If I get to call the shots, I'd avoid VSS if I could - the creation of a single point of failure and reliance on the Cisco software writers to do their jobs properly vs good old, battle-tested protocols like OSPF/EIGRP (on the WAN) and HSRP/VRRP (on the LAN) - I know which one I'd back, every single time. Same logic goes for a stack for smaller deployments, except stacks are usually less buggy

Another thing is that in campus deployments, LAN capacity is rarely an issue, i.e. you don't really need to get rid of STP, unlike in the DC.

+1
Title: Re: Nexus Outside the Datacenter
Post by: that1guy15 on May 28, 2015, 11:59:18 AM
Quote from: AspiringNetworker on May 28, 2015, 10:42:32 AM
Quote from: that1guy15 on May 26, 2015, 08:55:31 PM
I'm not totally against chassis-based deployments; they have their place. A big one for me is if you need additional licensing, then it's more effective to license the sup instead of a rack of switches. You also have features like ISSU and better fault tolerance.

Now for the N7K. I don't think I'll consider deploying the 7K again. All the features that made the 7K stand out have either been dropped into the 5/6K or the ASR. You want dense 10/40/100G? Then the N6K is very attractive for 4U. Need more? Then go with the N9K. Need Layer 3? The 5600 and 6K have it built in. With all those options why go with the N7K, unless you are running a massive DC - but that is where the N9K is positioning itself.

The only selling point I still have for the N7K are VDCs. 9Ks support them but that's it for now (still true?). So if you already have 7Ks deployed and can't fork-lift for 9Ks then you might as well upgrade the SUP and line cards.

So all you get with the N7K is a shit-ton of complexity, 1,001 line cards and the endless puzzle of which sups they support. Code upgrades are a nightmare and take a fair amount of planning. Not fun. But, but what about the service cards that will come out in....

What am I missing or not thinking about guys?

What are you missing?  Possibly simplifying your life with another vendor?  :problem?:

It's terrible to see people making chassis vs. fixed config switch decisions based on licensing and sup complexity.  That's garbage.

Won't SDN solve all this!?!
Title: Re: Nexus Outside the Datacenter
Post by: NetworkGroover on May 28, 2015, 02:42:02 PM
Quote
Won't SDN solve all this!?!

Heh, the pain point you're feeling here is at a lower level.  You shouldn't be asking how SDN can fix that - you should be asking your vendor to manage their products better.
Title: Re: Nexus Outside the Datacenter
Post by: that1guy15 on May 28, 2015, 09:51:11 PM
Quote from: AspiringNetworker on May 28, 2015, 02:42:02 PM
Quote
Won't SDN solve all this!?!

Heh, the pain point you're feeling here is at a lower level.  You shouldn't be asking how SDN can fix that - you should be asking your vendor to manage their products better.
hehe. No, I'm pretty sure I just need to buy some SDN. 3, maybe 4 SDNs will do. Not sure how or who is selling SDN, but everyone tells me it's gonna change everything :)
Title: Re: Nexus Outside the Datacenter
Post by: wintermute000 on May 29, 2015, 04:09:54 AM
Don't forget to buy a cloud. Dat shiz rains dineros 
Title: Re: Nexus Outside the Datacenter
Post by: NetworkGroover on May 29, 2015, 10:59:31 AM
Quote from: that1guy15 on May 28, 2015, 09:51:11 PM
Quote from: AspiringNetworker on May 28, 2015, 02:42:02 PM
Quote
Won't SDN solve all this!?!

Heh, the pain point you're feeling here is at a lower level.  You shouldn't be asking how SDN can fix that - you should be asking your vendor to manage their products better.
hehe. No, I'm pretty sure I just need to buy some SDN. 3, maybe 4 SDNs will do. Not sure how or who is selling SDN, but everyone tells me it's gonna change everything :)

You still gotta run those 3 or 4 SDNs on quality hardware! You're at the mercy of your lowest common denominator:

1. Crappy SDN = Crappy SDN
2. Great SDN + Crappy physical network = Crappy SDN ;)

It's kind of like, who's more important - the network guy or the facilities guy... because with crappy power and cooling... you get a crappy network.. yadda yadda.