leaf and spine - no spine xconnects

Started by wintermute000, June 17, 2015, 08:39:57 PM


wintermute000

Reading up on VMware's leaf/spine recommendations, and they do not want you to cross-connect spine switches.


"Links between spine switches are not required. If there is a link failure between a spine switch and a leaf switch, the routing protocol will ensure that no traffic for the affected rack is attracted to the spine switch that has lost connectivity to that rack."


What's everyone's take on this, especially those of you with production leaf/spine DCs? If you're running L3 ECMP, then what's the harm in having spine-to-spine L3 cross-connects? Your routing protocol will sort it all out (especially if you give them an inferior metric, so they're only preferred when a leaf loses its uplinks to one spine).
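Quick way to sanity-check that failover logic: model the fabric as a weighted graph. Here's a rough Python sketch (the topology, names, and link costs are all made up for illustration, and the "leaves never transit" rule stands in for the routing policy a real fabric would enforce):

```python
import heapq

# Hypothetical 2-spine / 2-leaf fabric. The spine1-spine2 cross-connect
# carries a deliberately inferior metric (50 vs 10).
LINKS = {
    ("leaf1", "spine1"): 10, ("leaf1", "spine2"): 10,
    ("leaf2", "spine1"): 10, ("leaf2", "spine2"): 10,
    ("spine1", "spine2"): 50,  # the debated cross-connect
}

def adjacency(links):
    g = {}
    for (a, b), cost in links.items():
        g.setdefault(a, {})[b] = cost
        g.setdefault(b, {})[a] = cost
    return g

def shortest_path(links, src, dst):
    """Dijkstra, with one fabric rule: leaves never provide transit."""
    g = adjacency(links)
    heap, seen = [(0, src, [src])], set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        if node != src and node.startswith("leaf"):
            continue  # a leaf may terminate a path, never extend one
        for nbr, c in g[node].items():
            if nbr not in seen:
                heapq.heappush(heap, (cost + c, nbr, path + [nbr]))
    return None

# Healthy fabric: leaf-to-leaf traffic never touches the cross-connect.
print(shortest_path(LINKS, "leaf1", "leaf2"))
# (20, ['leaf1', 'spine1', 'leaf2'])  -- the spine2 path is equal-cost

# Fail spine1's link to leaf2: traffic already on spine1 can still reach
# leaf2, but only because the inferior cross-connect exists.
failed = {k: v for k, v in LINKS.items() if k != ("leaf2", "spine1")}
print(shortest_path(failed, "spine1", "leaf2"))
# (60, ['spine1', 'spine2', 'leaf2'])
```

Without the cross-connect, the second lookup returns None, i.e. spine1 just stops attracting traffic for leaf2's rack, which is exactly the behaviour the VMware quote describes. So the question is whether that backup path buys you anything.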


I guess the flip-side arguments, as far as I can think of, are:
- they're not required anyway if leaves are uplinked to all spines
- cabling is easier without them (assuming the spines are located in different racks, possibly different rows)

that1guy15

http://blog.ipspace.net/2013/02/intra-spine-links-in-leaf-and-spine.html

Leaf/spine is designed so that everything is two hops away from everything else (leaf -> spine -> leaf). So if you pass traffic over a spine interconnect, you are adding an additional hop (eh...). But like you said, your routing protocol should sort this out. I really don't see a big deal either way, but I'll wait to hear what others have to say!
That1guy15
@that1guy_15
blog.movingonesandzeros.net

NetworkGroover

The idea is simplicity.  You talked about modifying metrics - you shouldn't have a need to do that.  I know we all come from a Cisco world of "nerd knobs" that's been ingrained in us through our studies, but the world is getting tired of that and moving on to easily repeatable designs - at least in the DC.

I'm still writing a white paper on BGP in the DC (lol... I swear I'll get it done one day). I can shoot you what I have as a draft if you're interested.

Like you said, it's not required, so why add complexity?  Also, a nice thing about using BGP in the DC is that if you put your spines in the same AS with no cross-connect, you get a very easy loop-prevention mechanism that requires no extra config: as a route gets propagated from spine1 to leaf1, and then back up to spine2, it's dropped, because spine2 sees its own AS in the path.  Adding a cross-connect means you need to figure out how you want to handle that neighborship, etc. - and for what gain?
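To make that concrete, here's a toy Python sketch of the AS_PATH check (the ASNs are hypothetical, and this models only standard eBGP loop prevention per RFC 4271, nothing vendor-specific):

```python
# Hypothetical ASNs: both spines share AS 65000; each leaf has its own.
def ebgp_receive(local_as, as_path):
    """Standard eBGP loop prevention: reject any UPDATE whose AS_PATH
    already contains the receiver's own AS."""
    return None if local_as in as_path else as_path

def ebgp_advertise(local_as, as_path):
    """On advertisement, an eBGP speaker prepends its own AS."""
    return [local_as] + as_path

path = ebgp_advertise(65002, [])    # leaf2 originates: [65002]
path = ebgp_receive(65000, path)    # spine1 accepts
path = ebgp_advertise(65000, path)  # spine1 -> leaf1: [65000, 65002]
path = ebgp_receive(65001, path)    # leaf1 accepts
path = ebgp_advertise(65001, path)  # leaf1 -> spine2: [65001, 65000, 65002]
print(ebgp_receive(65000, path))    # spine2 sees AS 65000 -> None (dropped)
```

With a spine1-spine2 cross-connect you'd have to break that symmetry somehow (separate ASNs per spine, allowas-in-type knobs, etc.), which is exactly the extra configuration I'm arguing against.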

K.I.S.S.

EDIT - When I think about it, I guess it's kinda like you're compartmentalizing your spines... "This is Spine 1, and these are the leaves it talks to.  This is Spine 2, and THESE are the leaves it talks to."
Engineer by day, DJ by night, family first always

burnyd

There is no need to connect spine switches.  Generally, when you look at the routing table on a leaf switch, you want to see as many paths to a given destination as there are spines.  Traffic should never cross a link between spine switches.

In general you will run more than 2 spine switches anyway.
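To picture it, here's a toy Python sketch of a leaf's route in a hypothetical 4-spine fabric (the prefix and addresses are made up), including the flow hash that spreads traffic across the spines:

```python
import hashlib

# Hypothetical uplink next-hops: one per spine.
SPINES = {"spine1": "10.0.1.1", "spine2": "10.0.2.1",
          "spine3": "10.0.3.1", "spine4": "10.0.4.1"}

# A leaf's route to a remote rack: one equal-cost next-hop per spine,
# and never a spine-to-spine path.
rib = {"192.168.20.0/24": sorted(SPINES.values())}
assert len(rib["192.168.20.0/24"]) == len(SPINES)  # as many paths as spines

def pick_next_hop(prefix, flow):
    """ECMP in miniature: hash the flow 5-tuple onto the next-hop set,
    so one flow always takes the same spine while flows spread out."""
    nhops = rib[prefix]
    digest = hashlib.md5(repr(flow).encode()).digest()
    return nhops[digest[0] % len(nhops)]

flow = ("10.1.1.5", "192.168.20.9", 6, 49152, 443)  # src, dst, proto, ports
print(pick_next_hop("192.168.20.0/24", flow))       # e.g. '10.0.3.1'
```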


LynK

The easiest way to answer this is: why do YOU see the need for additional links between the spines?

The only thing I can think of is if you have a merged DC/user-access environment. Each of the spines is supposed to have the full topology of all devices in the DC (WAN L3 links, Internet L3 links). So the only reason you would need the interconnects is if Spine A does not have access to everything Spine B does, requiring a hop from A --> B.

To be honest, you are only going to see these topologies in new data centers, because most existing ones have been piecemealed together.
Sys Admin: "You have a stuck route"
            Me: "You have an incorrect Default Gateway"

NetworkGroover

Quote from: LynK on June 19, 2015, 09:22:00 AM
The easiest way to answer this is: why do YOU see the need for additional links between the spines?

The only thing I can think of is if you have a merged DC/user-access environment. Each of the spines is supposed to have the full topology of all devices in the DC (WAN L3 links, Internet L3 links). So the only reason you would need the interconnects is if Spine A does not have access to everything Spine B does, requiring a hop from A --> B.

To be honest, you are only going to see these topologies in new data centers, because most existing ones have been piecemealed together.

:zomgwtfbbq:
Engineer by day, DJ by night, family first always

wintermute000

And yeah, shoot me the paper when it's done, would be interesting. Cheers.

LynK

Quote from: AspiringNetworker on June 19, 2015, 09:25:27 AM
:zomgwtfbbq:

explain your gifs good sir. :problem?:

Sys Admin: "You have a stuck route"
            Me: "You have an incorrect Default Gateway"

NetworkGroover

Quote from: LynK on June 22, 2015, 10:14:06 AM
Quote from: AspiringNetworker on June 19, 2015, 09:25:27 AM

:zomgwtfbbq:

explain your gifs good sir. :problem?:

Hehe... that's the best way I can put it... it just seems odd to me, but I don't generally work in those kinds of environments.  In my world, the DC is generally very separate from the user environment.
Engineer by day, DJ by night, family first always

NetworkGroover

Quote from: wintermute000 on June 21, 2015, 07:47:17 AM
And yeah, shoot me the paper when it's done, would be interesting. Cheers.

Sure... let me find a place I can upload it so everyone can review it and weigh in.  I'd value feedback from the guys who administer these types of environments on a daily basis.

EDIT - Actually, that may not be a good idea... I don't want to risk getting myself in trouble somehow.  I'll definitely point you guys to it once I finish it, get it peer-reviewed, etc.
Engineer by day, DJ by night, family first always