In need of some pre-sales design help for internal network

Started by Dieselboy, February 02, 2021, 03:22:58 AM


Dieselboy

Could any of you who are more knowledgeable than I am about the specifics please help me with some parts? Wintermute, if you see this? :)

The driver for this is that my UCS 5108 chassis is going away. The blades are EOL and we're moving to 1RU standalone servers with more CPU and RAM grunt. I'm thinking of getting C220 M5s because I already have two in another datacentre.

Currently I have:
2 x Nexus 3K (old ones)
2 x FI switches
1 x UCS chassis
1 x SAN (4 x 10Gb)

The SAN (SFP -> fibre) and UCS (twinax) connect into the FI switches, which only have SFP ports. The FIs then uplink to the 3K core over 10Gb twinax.

For the new stuff:

I'm considering running the new servers (six of them) and the SAN into one pair of core switches, but I'm concerned about the storage buffer issue that was present some years ago - I don't know if it's still an issue today.

The other option in my mind is a separate pair of switches for the storage VLAN traffic only; it's iSCSI.


The SAN has 4 x 10Gb NICs, and for the new servers I'm looking at 4 x 10Gb copper NICs each. That equates to 28 x 10Gb ports in total (6 servers x 4 NICs = 24, plus the SAN's 4). Or it comes to 16 x 10Gb storage ports if only the storage traffic gets its own switches (2 storage NICs per server = 12, plus the SAN's 4).


If I use separate SAN switches then I could use SFP ports there, with twinax or fibre. But if I only have the two core switches then I'm thinking of making everything 10Gb copper, so that I don't need to worry about port types on the switch side.

What is the consensus these days? Has anyone solved a similar requirement in terms of a simple HA design with those few main components for a location (server, storage, network)?

wintermute000

#1
Just get a pair of mid-range N9Ks and don't worry about it. With 48 x 10/25Gb and 6 x 40/100Gb ports, throughput is going to be the least of your concerns.

The 93180YC-EX has 40MB of shared buffer, which is roughly 3x that of your 3Ks. N3Ks were an oddity: they were originally built on merchant silicon for a hyperscaler. That's why they're so strange and have so many weird caveats compared to 'mainline' Nexus switches.

N9Ks would be totally fine as L3 switches as well, but they won't have any of the campus/security features, so it might be worth splitting your campus environment out to Cat9300s and routing between them.

If you're security conscious you'd also seriously want to consider a decently specced (10Gb+) FW as the real layer-3 core. Hang the WAN/internet off the FW, and then both the Catalyst and Nexus environments are stub zones off it. Keep the N9K and Cat9Ks layer-3 so you keep maximum internal throughput (storage etc.), but since they each have only one way out (through the FW) you can stick to basic L3 features for licensing, e.g. OSPF stub areas or even statics.
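To make the stub idea concrete, here's a minimal NX-OS-style sketch of what the N9K side could look like; the process tag, area number, interface and addressing are all hypothetical, so treat it as an illustration rather than a config to copy:

feature ospf

router ospf 1
  router-id 10.0.0.1
  ! the whole DC side lives in one stub area; the FW (as ABR)
  ! injects a default route, so no external LSAs are needed
  area 0.0.0.10 stub

! routed transit link towards the firewall
interface Ethernet1/1
  no switchport
  ip address 10.0.10.2/30
  ip router ospf 1 area 0.0.0.10
  no shutdown

! or skip OSPF entirely and just point a static default at the FW:
! ip route 0.0.0.0/0 10.0.10.1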


Either way I'd definitely separate the DC and campus environments. Say you use the N9K as your universal L3 core; later you want to do some ISE stuff or whatever, and presto, you need it to talk to a Catalyst anyway.

I'd stick with fibre personally. Servers are going to 25Gb as standard.

deanwebb

I'm seeing fibre as the default in more and more customer DCs. I'm with Wintermute on that call.
Take a baseball bat and trash all the routers, shout out "IT'S A NETWORK PROBLEM NOW, SUCKERS!" and then peel out of the parking lot in your Ferrari.
"The world could perish if people only worked on things that were easy to handle." -- Vladimir Savchenko
Вопросы есть? Вопросов нет! | BCEB: Belkin Certified Expert Baffler | "Plan B is Plan A with an element of panic." -- John Clarke
Accounting is architecture, remember that!
Air gaps are high-latency Internet connections.

Dieselboy

Thank you Wintermute and Dean!

OK, you've convinced me :) - so for the 1Gb copper stuff I have, I guess I'd need to get a pair of distribution-type switches to hang the firewall, VoIP router and other 1Gb stuff off. No real issue.

Security is my P1 concern, so I like the idea of having the firewall do the layer-3 routing, but I've steered away from it in the past for throughput reasons. Do you mean the firewall would "own" the default gateways for the internal subnets (with the core and Catalyst access switching running L2 only)?
Do you have a model of Palo Alto :mrgreen: that is your go-to in this?

In terms of campus access switching, I have an old 2960S which I'm looking to replace at some point in the near future.

wintermute000

#4
Nah, I'd keep server SVIs on N9K and campus SVIs on Cat9K, but stick the FW in between them. That way it only needs to handle inter-segment traffic, not inter-VLAN.
       <WAN>
         |
<N9K>--<FW>--<Cat9K>
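For illustration, a minimal NX-OS-style sketch of the N9K half of that picture (VLAN IDs, interfaces and addresses are hypothetical): server VLANs route locally on the N9K, and only off-segment traffic heads for the FW.

feature interface-vlan

vlan 100
  name servers-prod

! server default gateway stays on the N9K
interface Vlan100
  ip address 10.100.0.1/24
  no shutdown

! routed point-to-point transit towards the FW
interface Ethernet1/48
  no switchport
  ip address 10.0.10.2/30
  no shutdown

! anything outside the server segment goes via the FW
ip route 0.0.0.0/0 10.0.10.1

The Cat9K side would mirror this with the campus SVIs and its own transit link into a different FW zone.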

I'd roll with a PA 3260 or a Fortinet 200F. I'd also use the FW natively for all BGP/WAN routing; there's no reason to stick an ISR in front of it.

OFC you can do ALL the L3 on the FW, but I'd imagine that means saying goodbye to 10Gb+ inter-VLAN routing, as well as any future whiz-bang that requires ecosystem integration into the L3. That's the attraction of 'modularising' things like I suggested: you keep any and all options open, and reduce the FW load, whilst still securing the key flows between different security classifications (users, servers and WAN).

Dieselboy

Nice, thank you for that! I'd like to buy you a few coffees - could you please send me the details? I use CommBank, so if you PM me your mobile number I'll do it tonight (or PayPal, or whatever you prefer).

I get your idea now and it makes sense. In fact, I can begin implementing this today with what I have. My concern was user-to-server throughput, but users have a 1Gb connection at most anyway.

Thanks again, this chat has been really helpful to me. Even just voicing this out and getting some feedback from you really helps. I'm the sole network guy at my place, so I don't often get to discuss specifics like this.

wintermute000

Hahahaha nah, it's all good mate. I'm changing roles shortly anyway (I'll PM you).

wintermute000

#7
Final tip: use VRFs to separate DMZ servers and internal servers into different FW zones (a sketch follows below).

Ditto for internal vs contractor vs guest.

RJ45 SFPs aren't that expensive; I wouldn't bother with a dedicated RJ45 switch unless you had LOADS.

Just one final weird thing: you'll have to pick one of the switch pairs (the Catalyst, due to RJ45?) to act as a 'WAN switch' (assuming it's an HA firewall pair). Not a big deal, it just looks a bit messy.
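On the VRF tip, a minimal NX-OS-style sketch (VRF names, VLANs and addresses all hypothetical): each VRF gets its own SVIs plus its own routed uplink into a matching FW zone, so the only path between DMZ and internal is through the firewall.

feature interface-vlan

vrf context INTERNAL
  ip route 0.0.0.0/0 10.0.10.1
vrf context DMZ
  ip route 0.0.0.0/0 10.0.20.1

interface Vlan100
  vrf member INTERNAL
  ip address 10.100.0.1/24
  no shutdown

interface Vlan200
  vrf member DMZ
  ip address 10.200.0.1/24
  no shutdown

! one routed link (or subinterface) per VRF towards the FW
interface Ethernet1/47
  no switchport
  vrf member INTERNAL
  ip address 10.0.10.2/30
  no shutdown

interface Ethernet1/46
  no switchport
  vrf member DMZ
  ip address 10.0.20.2/30
  no shutdown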

deanwebb

VRFs can do some amazing, stretchy things, that's for sure...

Dieselboy

Thank you :)

The 93180YC-EX seems to be a good option then. They also do VXLAN, so maybe I can rework my DR "solution" using it. Compare that to what I'm doing now, which is duplicating the L3 networks in the DR site with no switching between the sites; during DR I just enable the DR networks. It's a "DR" or "no DR" type of solution.
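As a rough idea of what that could look like, a minimal NX-OS-style VXLAN sketch (VNI, loopback and peer addresses hypothetical) that maps a VLAN to a VNI and stretches it to a single DR-site peer with static ingress replication; a production design would more likely use BGP EVPN as the control plane:

feature nv overlay
feature vn-segment-vlan-based

! map the local VLAN to a VXLAN network identifier
vlan 100
  vn-segment 10100

interface loopback0
  ip address 10.255.255.1/32

! VTEP: tunnel the VLAN across to the DR-site switch's loopback
interface nve1
  no shutdown
  source-interface loopback0
  member vni 10100
    ingress-replication protocol static
      peer-ip 10.255.255.2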

RJ45 SFPs are about $400 each, so I'll try to get non-Cisco ones. I do this with memory DIMMs and save thousands with no issues.

I have a meeting with PA next Friday, but I had a quick call with them this morning. I think I'm looking at a pair of 850s. PA said people usually go for either a pair of 850s or a single 3220.

Thanks for all your help :)

wintermute000

I would suggest you look at the throughput numbers carefully; ~2.1Gb app-ID FW / 1.1Gb IPS on the 850 is not great for a core firewall (as opposed to a purely WAN one).

The 3200s are a different weight class and more appropriate, and yep, they cost a bundle more.

Dieselboy

I'm with you; I took those numbers as individual, meaning 2.1Gb app-ID OR 1.1Gb IPS, but not both together. My client-to-server traffic is not that high, but even so, I expect it to increase going forward.

Would you install a single 3200 instead of a pair to save costs? PA said they tend to sell a single 3200. It has dual PSUs, but it's still a single box.

wintermute000

#12
Purely on functionality, I would settle for a single 3220 if you're settled on Palo. You might need HA later, but if you can't run >2Gb from day dot you will run into trouble.

Now I'm biased as you know... lol... but you should take a look at the Fortinet options as well, since both would be new to you, so it's not like there's an additional transition cost (heck, even Cisco would be new, since Firepower is in no way an ASA, unless you run ASA code, and friends don't let friends run ASAs in 2021). And to be completely fair, I have always put that option in the mix, even before I became... biased. In your market segment it's pretty dominant; all I'll say is that it's also in the magic quadrant and has been for years. I would suggest you ask your distie for a quote on a Fortinet 200F (vs the 850) or higher, look at both the technical and commercial figures, and then see if it's worth pursuing.

And yep, the vendors all quote individual figures. It's certainly not aggregate, lol.

Feel free to reach out via PM if you want a hand; I don't want to be overly kool-aiding. FWIW I feel this thread (and, to be blunt, reddit's overall consensus across multiple networking/vendor subreddits) is fair.
https://www.reddit.com/r/fortinet/comments/l5smy1/ssl_decryption_numbers/

deanwebb

DC numbers ALWAYS increase. You will want full 10G capability before long, because developers.

Otanx

I wouldn't do a single core firewall. That means any patching takes everything down. With a pair I can at least fail over and reduce the outage to a couple of minutes or less. Also, any failure requiring an RMA means an outage at minimum until FedEx shows up the next day.

I will second Fortinet. Firepower never seemed to work very well for us; there was always some limitation or bug. ASAs were rock solid, but Cisco failed on the follow-through.

-Otanx