Nexus Outside the Datacenter

Started by Fred, May 14, 2015, 09:58:03 PM


Fred

I really like vPCs.  Separate switches with separate control planes, but all the redundancy benefits and fast failover of VSS or a stack.

So why are Nexus switches relegated to the datacenter? Seems to me the 9Ks would make excellent collapsed cores for smaller remote sites. Why wouldn't you do this?

Further, what are the reasons you wouldn't want to run FEXes at the campus (rather than datacenter) access layer?  I know some of them (no PoE, no 100 Mbps support), but what else?
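For reference, this is the sort of minimal vPC pair setup I'm picturing on a pair of 9Ks - the domain number, addresses, and port-channel numbers are just placeholders, not from a real deployment:

feature vpc
feature lacp

vpc domain 10
  peer-keepalive destination 192.0.2.2 source 192.0.2.1 vrf management

! peer-link between the two switches
interface port-channel1
  switchport mode trunk
  vpc peer-link

! dual-homed downstream switch or host
interface port-channel20
  switchport mode trunk
  vpc 20

interface Ethernet1/20
  switchport mode trunk
  channel-group 20 mode active

Two independent control planes, but the downstream device just sees one port-channel.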

Reggle

You just named them: PoE and limited 100 Mbps support.
Usually price is a factor as well, although I haven't checked that lately.
And yes, with the 9Ks in the game now, price is probably less of an issue.

One other thing I can think of is the lack of some security features, 802.1x being the biggest concern here.
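For context, this is roughly the campus access-port config you'd be giving up (classic IOS syntax; the RADIUS server and interface are just examples):

aaa new-model
aaa authentication dot1x default group radius
dot1x system-auth-control
radius-server host 192.0.2.10 key SomeSharedSecret

interface GigabitEthernet1/0/1
 switchport mode access
 authentication port-control auto
 dot1x pae authenticator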

I'm interested in what others have to say as well.

deanwebb

No dot1x on the 9K? I wonder why, as dot1x is central to Cisco's security strategy.
Take a baseball bat and trash all the routers, shout out "IT'S A NETWORK PROBLEM NOW, SUCKERS!" and then peel out of the parking lot in your Ferrari.
"The world could perish if people only worked on things that were easy to handle." -- Vladimir Savchenko
Вопросы есть? Вопросов нет! | BCEB: Belkin Certified Expert Baffler | "Plan B is Plan A with an element of panic." -- John Clarke
Accounting is architecture, remember that!
Air gaps are high-latency Internet connections.

Otanx

Look at the 6800ia switch. This is basically a FEX for the 6800 and 6500 Sup2T chassis. My understanding is these are geared to access and campus deployments.
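If I remember the Instant Access setup right, the parent side looks something like this - port-channel and FEX numbers are just examples, so double-check against the Sup2T docs:

interface Port-channel101
 switchport
 switchport mode fex-fabric
 fex associate 101

interface TenGigabitEthernet1/5
 channel-group 101 mode on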

-Otanx

NetworkGroover

Quote from: deanwebb on May 15, 2015, 08:37:45 AM
No dot1x on the 9K? I wonder why, as dot1x is central to Cisco's security strategy.

Likely because the 9k was designed for the data center - and there isn't a huge need for 802.1x in the data center that I'm aware of.  It's more of a campus network requirement.
Engineer by day, DJ by night, family first always

deanwebb

Quote from: AspiringNetworker on May 15, 2015, 09:58:05 AM
Quote from: deanwebb on May 15, 2015, 08:37:45 AM
No dot1x on the 9K? I wonder why, as dot1x is central to Cisco's security strategy.

Likely because the 9k was designed for the data center - and there isn't a huge need for 802.1x in the data center that I'm aware of.  It's more of a campus network requirement.
Ah, so there's no push to have these as collapsed cores for remote sites, in spite of the enticements of the price point... and, yes, you don't want dot1x in the datacenter. You want one MAC address assigned to each port and alarms to sound and troops to deploy as soon as there's an unscheduled media disconnect event.
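Something like plain old port security on every host port (sketching NX-OS-style syntax here; command and feature availability vary by platform):

feature port-security

! one MAC allowed; a second MAC or a swap errdisables the port
interface Ethernet1/10
  switchport port-security
  switchport port-security maximum 1
  switchport port-security violation shutdown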
Take a baseball bat and trash all the routers, shout out "IT'S A NETWORK PROBLEM NOW, SUCKERS!" and then peel out of the parking lot in your Ferrari.
"The world could perish if people only worked on things that were easy to handle." -- Vladimir Savchenko
Вопросы есть? Вопросов нет! | BCEB: Belkin Certified Expert Baffler | "Plan B is Plan A with an element of panic." -- John Clarke
Accounting is architecture, remember that!
Air gaps are high-latency Internet connections.

NetworkGroover

Quote from: deanwebb on May 15, 2015, 10:05:05 AM
Quote from: AspiringNetworker on May 15, 2015, 09:58:05 AM
Quote from: deanwebb on May 15, 2015, 08:37:45 AM
No dot1x on the 9K? I wonder why, as dot1x is central to Cisco's security strategy.

Likely because the 9k was designed for the data center - and there isn't a huge need for 802.1x in the data center that I'm aware of.  It's more of a campus network requirement.
Ah, so there's no push to have these as collapsed cores for remote sites, in spite of the enticements of the price point... and, yes, you don't want dot1x in the datacenter. You want one MAC address assigned to each port and alarms to sound and troops to deploy as soon as there's an unscheduled media disconnect event.

Sorry, define "collapsed core" as you mean it here. If you wanted to collapse in the data center or campus, usually you'd collapse core into distribution/spine, but not access - and access is where you'd usually run dot1x, if at all, right?  As for assigning MACs to ports - no? That's still basic dot1x, right? Again, as far as I know, not many folks are doing that in the data center (only one that I know of has even asked about it - feel free to correct me if I'm wrong).  In the campus it's not uncommon, but in a controlled data center, if you have people plugging in unauthorized devices, you've got bigger problems.

EDIT - Plus, think about it. I'm very rusty on my dot1x, but if I have an ESXi host with a ton of VMs, how do I handle that?  Then take into account you may have tens or hundreds of leaf switches... not very scalable, I'd imagine.
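If I remember my rusty dot1x right, the campus answer for multiple MACs behind one port is something like this (IOS syntax, purely illustrative) - and I can't see anyone wanting to manage that across tens or hundreds of leaves:

interface GigabitEthernet1/0/10
 switchport mode access
 authentication host-mode multi-auth
 authentication port-control auto
 dot1x pae authenticator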
Engineer by day, DJ by night, family first always

packetherder

10GbE (hell, the 40GbE they throw in for free too) density is hard to beat on the 9Ks. But BPDU guard can't be disabled on FEX ports, which is great in theory, but in practice I've always had to deal with exceptions to it.

Like Otanx said, the 6800ia is Cisco's answer to a campus FEX. They can handle 20 of them, which I think is more than the 9Ks. I've heard that doubling that number is on the roadmap, but last I checked it was still 20. But yeah, you're stuck with VSS on that platform.
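On a regular campus access switch you can at least carve out the exception per port (IOS syntax, interface just an example); on a FEX host interface you don't get that option:

spanning-tree portfast bpduguard default

interface GigabitEthernet1/0/48
 description downstream lab switch - bpduguard exception
 spanning-tree bpduguard disable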

that1guy15

I laughed when I read this title. Not because it's a stupid question, but because my predecessors purchased two Nexus 7010s a while back to replace the campus core with this cool new vPC stuff.

Needless to say, they dropped the network for 3 days! 3 F'n days due to loops and incorrect configuration on the 7Ks. They also proved too incompetent to trace the loops and ended up ripping every cable out of every switch in the DC around day 2. Then they realized they had no clue how to plug it all back in...

But on the plus side, they got shit-canned and I got a new job and almost $1M to rearchitect the campus and DC. The 7Ks are now nice and happy running the DC core! VSS is running the campus core, but the more I run it the less I like it.
That1guy15
@that1guy_15
blog.movingonesandzeros.net

NetworkGroover

Quote from: that1guy15 on May 15, 2015, 12:14:19 PM
I laughed when I read this title. Not because it's a stupid question, but because my predecessors purchased two Nexus 7010s a while back to replace the campus core with this cool new vPC stuff.

Needless to say, they dropped the network for 3 days! 3 F'n days due to loops and incorrect configuration on the 7Ks. They also proved too incompetent to trace the loops and ended up ripping every cable out of every switch in the DC around day 2. Then they realized they had no clue how to plug it all back in...

But on the plus side, they got shit-canned and I got a new job and almost $1M to rearchitect the campus and DC. The 7Ks are now nice and happy running the DC core! VSS is running the campus core, but the more I run it the less I like it.

:wtf: Good lord, that's terrible! One for putting in gear they didn't fully know how to operate, and two for ripping out cable that A) apparently wasn't labeled to begin with and B) wasn't labeled as they ripped it out.

There's like... no words for that epic fail.  Besides.. epic fail.
Engineer by day, DJ by night, family first always

that1guy15

Yeah, it's pretty crazy. They were given a large chunk of money to spend on a refresh, so my best guess is they went with the biggest, baddest box at the time. They also made sure to spend every dime of it. We have a number of 6509s fully populated with 8x10G cards, which are themselves fully populated with X2s! Not just SX either - a fair amount of LH single-mode! I also have about 150-200 10G optics (SX and LH) in spares from what they built out. It's nuts!

And the fiber. I'm not bitching about this one either. I have three main buildings about 200-300 yards apart, all in a triangle of 48-count single-mode buried fiber! It's all mine, all mine I tell yah!
That1guy15
@that1guy_15
blog.movingonesandzeros.net

burnyd

The first time you try to configure QoS on these boxes for a remote site, you'll figure out why right then.  They're made for large, buffered data center traffic and lack the feature-rich options you'd find in a remote-office box.
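Even basic classification means juggling separate policy types (type qos for marking, type queuing for scheduling, and type network-qos on some platforms) instead of the single-policy MQC you'd write on a campus box. Rough sketch of just the classification piece - names and values are made up:

class-map type qos match-any VOICE
  match dscp 46

policy-map type qos CLASSIFY-IN
  class VOICE
    set qos-group 5

interface Ethernet1/1
  service-policy type qos input CLASSIFY-IN

The actual queuing behavior then has to go in a separate policy-map type queuing.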

NetworkGroover

Quote from: burnyd on May 15, 2015, 10:43:48 PM
The first time you try to configure QoS on these boxes for a remote site, you'll figure out why right then.  They're made for large, buffered data center traffic and lack the feature-rich options you'd find in a remote-office box.

Good point - didn't even think about that.  That's also something rarely asked for in my experience.  Usually in the data centers I work in, bandwidth isn't an issue, so there's no real need for QoS... hard as that may be for some people to believe.  3:1 to 5:1 oversubscription between the spine and leaf layers is usually OK for most general DC application stuff, with better ratios needed for applications like Big Data, HPC, and IP storage.  QoS never really enters those conversations (to my knowledge, anyway).
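Just to put made-up numbers on that: a leaf with 48 x 10G host ports has 480G facing the servers, and 4 x 40G uplinks give it 160G toward the spine, so 480/160 = 3:1. Go to 6 x 40G uplinks (240G) and you're at 2:1.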
Engineer by day, DJ by night, family first always

NetworkGroover

Quote from: that1guy15 on May 15, 2015, 08:20:14 PM
Yeah, it's pretty crazy. They were given a large chunk of money to spend on a refresh, so my best guess is they went with the biggest, baddest box at the time. They also made sure to spend every dime of it. We have a number of 6509s fully populated with 8x10G cards, which are themselves fully populated with X2s! Not just SX either - a fair amount of LH single-mode! I also have about 150-200 10G optics (SX and LH) in spares from what they built out. It's nuts!

And the fiber. I'm not bitching about this one either. I have three main buildings about 200-300 yards apart, all in a triangle of 48-count single-mode buried fiber! It's all mine, all mine I tell yah!

Haha - nice.  At least there's some silver lining to that mess. ;)
Engineer by day, DJ by night, family first always

that1guy15

Quote from: AspiringNetworker on May 16, 2015, 04:58:14 PM
Quote from: burnyd on May 15, 2015, 10:43:48 PM
The first time you try to configure QoS on these boxes for a remote site, you'll figure out why right then.  They're made for large, buffered data center traffic and lack the feature-rich options you'd find in a remote-office box.

Good point - didn't even think about that.  That's also something rarely asked for in my experience.  Usually in the data centers I work in, bandwidth isn't an issue, so there's no real need for QoS... hard as that may be for some people to believe.  3:1 to 5:1 oversubscription between the spine and leaf layers is usually OK for most general DC application stuff, with better ratios needed for applications like Big Data, HPC, and IP storage.  QoS never really enters those conversations (to my knowledge, anyway).
I will propose that any chassis-based device is going to have a nightmare of a QoS setup. Hardware plays a huge role in QoS, and then you add on the mix and match of cards in a chassis.
That1guy15
@that1guy_15
blog.movingonesandzeros.net