Mgmt in a typical Enterprise

Started by NetworkGroover, November 10, 2021, 12:19:10 PM


NetworkGroover

In your experience, how does management in terms of VLANs and subnets look in a typical enterprise?  Are there separate management VLANs for different parts of the network (Campus vs. DC, etc.)?  Is it typically further segmented beyond this?  Like a dedicated VLAN for managing APs, versus managing other devices?

How many management VLANs do you believe typically exist in your experience?
Engineer by day, DJ by night, family first always

Otanx

For our larger campus we do L3 down to the access switch, and management is done in band using loopbacks for those switches. Each closet also has an Opengear console server with a fiber interface. The Opengear is patched all the way back to our data center. This was originally a little overkill because we all worked at that campus, but they were cheap, and with COVID they have helped us avoid driving in on a couple of occasions.
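If anyone wants to picture the loopback side of that, here's a rough sketch using Python's ipaddress module. The block and switch names are made up for illustration, not our actual plan:

import ipaddress

# Assumed, illustrative addressing: a dedicated block for access-switch loopbacks.
LOOPBACK_BLOCK = ipaddress.ip_network("10.255.0.0/24")

def assign_loopbacks(switch_names):
    """Give each access switch the next free host address as its /32 loopback."""
    return {
        name: ipaddress.ip_network(f"{addr}/32")
        for name, addr in zip(switch_names, LOOPBACK_BLOCK.hosts())
    }

plan = assign_loopbacks(["bldg1-idf1", "bldg1-idf2", "bldg2-idf1"])
for switch, loopback in plan.items():
    print(f"{switch}: {loopback}")   # e.g. bldg1-idf1: 10.255.0.1/32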

In the data center we have a management switch in each rack. It's a routed design, so each rack is a /24, and that covers any management interfaces for that rack. This includes dedicated switch management interfaces for the prod switches, iLO, IMM, iDRAC, whatever. The management switch itself uses a loopback. Those switches all terminate to a management core, and that core is linked to our core firewall. For all the network gear without dedicated management interfaces supporting "prod" connections, we have a VLAN that they get IPs in. A few management servers live here as well: TACACS, RANCID, jump box. That VLAN also terminates into the core firewall.
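Here's a rough sketch of the per-rack /24 idea, again in Python. The supernet and host offsets are placeholders I picked for the example, not the real numbering:

import ipaddress

# Illustrative numbers only: an assumed management supernet carved into one /24 per
# rack, with made-up host offsets for the kinds of interfaces listed above.
MGMT_SUPERNET = ipaddress.ip_network("10.250.0.0/16")
ROLE_OFFSETS = {"prod switch mgmt": 1, "mgmt switch": 2, "first iLO/iDRAC": 11}

def rack_plan(rack_index):
    """Return the /24 for a given rack plus a few well-known hosts inside it."""
    subnet = list(MGMT_SUPERNET.subnets(new_prefix=24))[rack_index]
    hosts = {role: subnet.network_address + offset for role, offset in ROLE_OFFSETS.items()}
    return subnet, hosts

subnet, hosts = rack_plan(7)
print(subnet)                      # 10.250.7.0/24
for role, addr in hosts.items():
    print(f"  {role}: {addr}")     # e.g. prod switch mgmt: 10.250.7.1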

At remote branches we just have a single firewall and switch, so everything is in band. Nothing fancy.

-Otanx

icecream-guy

For sites that are not L2 adjacent, everything across the enterprise has its own management VLAN. ACLs define which of us device managers can access the equipment. Different groups may have their own management networks for their own devices, based on security/paranoia, but in my place managing devices is managing devices, so we lump as much as we can into a single management network per site to save on networks, routing, route summarization, VRFs, etc.

VRFs bring up another issue: does one put the management networks into a VRF, or should one put everything else into a VRF and keep the management networks in the global routing table?

The number of management networks depends on the enterprise's size and site locations.

:professorcat:

My Moral Fibers have been cut.

NetworkGroover

Engineer by day, DJ by night, family first always

deanwebb

I see some tight segmentation in some customers, with the management IP range totally sequestered from prod. In other places, every .1 is the switch, and that's how you reach them, be it default gateway or management IP address. I've also seen where management is only done on a loopback address, and all the loopbacks for the enterprise are on a separate IP range (like 10.0.0.0/8 for prod and 172.16.0.0/16 for loopbacks) but the loopbacks themselves are reachable from the prod network.
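For anyone who wants to see that last pattern concretely, here's a minimal Python sketch using the example ranges above (they're just the examples from this post, not a recommendation):

import ipaddress

# Example ranges only, taken from the paragraph above.
PROD = ipaddress.ip_network("10.0.0.0/8")
LOOPBACKS = ipaddress.ip_network("172.16.0.0/16")

def classify(address):
    """Tell from the range alone whether an address is a loopback or a prod IP."""
    ip = ipaddress.ip_address(address)
    if ip in LOOPBACKS:
        return "device loopback (management)"
    if ip in PROD:
        return "prod"
    return "unknown"

print(classify("172.16.3.1"))   # device loopback (management)
print(classify("10.20.30.1"))   # prod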
Take a baseball bat and trash all the routers, shout out "IT'S A NETWORK PROBLEM NOW, SUCKERS!" and then peel out of the parking lot in your Ferrari.
"The world could perish if people only worked on things that were easy to handle." -- Vladimir Savchenko
Вопросы есть? Вопросов нет! | BCEB: Belkin Certified Expert Baffler | "Plan B is Plan A with an element of panic." -- John Clarke
Accounting is architecture, remember that!
Air gaps are high-latency Internet connections.

icecream-guy

Quote from: deanwebb on November 11, 2021, 01:23:11 PM
I see some tight segmentation in some customers, with the management IP range totally sequestered from prod. In other places, every .1 is the switch, and that's how you reach them, be it default gateway or management IP address. I've also seen where management is only done on a loopback address, and all the loopbacks for the enterprise are on a separate IP range (like 10.0.0.0/8 for prod and 172.16.0.0/16 for loopbacks) but the loopbacks themselves are reachable from the prod network.

Loopback is good; those interfaces never go down (unless the device itself goes down, and then you have bigger problems).
:professorcat:

My Moral Fibers have been cut.

deanwebb

Quote from: icecream-guy on November 12, 2021, 07:08:48 AM
Quote from: deanwebb on November 11, 2021, 01:23:11 PM
I see some tight segmentation in some customers, with the management IP range totally sequestered from prod. In other places, every .1 is the switch, and that's how you reach them, be it default gateway or management IP address. I've also seen where management is only done on a loopback address, and all the loopbacks for the enterprise are on a separate IP range (like 10.0.0.0/8 for prod and 172.16.0.0/16 for loopbacks) but the loopbacks themselves are reachable from the prod network.

Loopback is good; those interfaces never go down (unless the device itself goes down, and then you have bigger problems).


TRUTH. If I were head of networks, I'd go with loopbacks on every switch, and the loopbacks would be in their own distinct IP range so that when we see an address starting with 172.16, we all go, "hey, that's a loopback IP!"
Take a baseball bat and trash all the routers, shout out "IT'S A NETWORK PROBLEM NOW, SUCKERS!" and then peel out of the parking lot in your Ferrari.
"The world could perish if people only worked on things that were easy to handle." -- Vladimir Savchenko
Вопросы есть? Вопросов нет! | BCEB: Belkin Certified Expert Baffler | "Plan B is Plan A with an element of panic." -- John Clarke
Accounting is architecture, remember that!
Air gaps are high-latency Internet connections.