Networking-Forums.com

Professional Discussions => Everything Else in the Data Center => Topic started by: deanwebb on September 09, 2020, 07:06:54 AM

Title: Fiber vs Copper in the Datacenter
Post by: deanwebb on September 09, 2020, 07:06:54 AM
Normally, when I say that my $VENDOR devices have both copper and fiber interfaces, we only use the fiber for high-bandwidth functions, such as a SPAN port, and the customer gleefully plugs a 1G copper cable into a corresponding interface for normal traffic. The device does not use a lot of bandwidth anyway, so even 1G is overkill...

Yesterday, I had a customer that actually *objected* to using copper and wanted to manage the device on a 10G port. Is a lack of 1G density in the datacenter becoming a thing now? Or is this customer unique in having lots more 10G than 1G interfaces available in their datacenter access switches?

On a side note, he said that the 10G interface would be 10 times better than 1G... I had to disagree: if we weren't using all of the 1G pipe already, we were just going to not-use even more of the 10G pipe.  :smug:
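
Back-of-the-napkin math, assuming the appliance peaks at around 200 Mbps (a made-up number, pick your own):

# Link utilization at an assumed peak; the 200 Mbps figure is illustrative only.
peak_mbps = 200
for link_mbps in (1_000, 10_000):
    used = peak_mbps / link_mbps * 100
    print(f"{link_mbps} Mbps link: {used:.0f}% used, {100 - used:.0f}% idle")

Same traffic either way; the bigger pipe just sits even more idle (80% unused becomes 98% unused).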
Title: Re: Fiber vs Copper in the Datacenter
Post by: Otanx on September 09, 2020, 09:35:09 AM
We don't support 100M for production interfaces in our DC. Almost everything is 10G now (and fiber), but 1G copper is still common, especially on hardware appliances. We do handle 100M for the management network because some of the management cards don't do anything higher. There could be a few reasons not to want to support 1G, but designing your data center that way 100% would be a poor design choice today.

As an example, take something like the Arista 7280CR3-32D4, which has 32x100G and 4x400G interfaces. Say you have high bandwidth requirements and you are using these for top of rack. You can break out the 100G interfaces to 10G, but those breakouts are fiber or DAC, so the end device needs either a fiber or an SFP+ port. I don't think it would be possible to support a 1G copper device off of this switch. The thing is, though, if you are doing this for top of rack you are going to have another switch anyway to support the iLO, IMM, iDRAC, etc., which are all 1G or sometimes 100M, and I have not heard of an SFP iDRAC.
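
Quick sketch of the port math on that box (the 4x10G-per-100G breakout is my assumption; check the datasheet for the supported modes):

# Server-facing port count for a 7280CR3-32D4-style ToR,
# assuming each 100G port can break out to 4x10G over fiber or DAC.
ports_100g = 32
breakout_per_port = 4
print(f"10G ports via breakout: {ports_100g * breakout_per_port}")  # 128, none of them RJ45

All 128 of those land on fiber or DAC, so the 1G copper gear still ends up on that separate management switch.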

Now, I would ask if we could do a fiber interface for the management interface, just because I can get a lot more fiber density in a rack. With 52U racks holding 48 1U servers, each with 3 network cables and 2 power cables, that is a lot of cable. Doing fiber where I can helps keep those cable bundles smaller. However, if you said you couldn't, we would just use copper. Not a big deal.
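
To put rough numbers on the bundle size (the cable diameters below are my assumptions, not measured specs):

import math

# Cross-section of one rack's network cabling: 48 servers x 3 runs each.
runs = 48 * 3  # 144 network cables; power excluded
for name, od_mm in (("Cat6a copper", 7.0), ("duplex fiber", 3.0)):
    area_cm2 = runs * math.pi * (od_mm / 2) ** 2 / 100
    print(f"{name}: ~{area_cm2:.0f} cm^2 of cable cross-section")

Call it roughly 55 cm^2 of copper versus 10 cm^2 of fiber per rack, which is why the bundles shrink so much.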

-Otanx
Title: Re: Fiber vs Copper in the Datacenter
Post by: Nerm on September 11, 2020, 06:38:04 PM
We have moved our data centers to as much fiber as possible; it has multiple advantages over copper. We do, however, still support copper when required.
Title: Re: Fiber vs Copper in the Datacenter
Post by: deanwebb on September 14, 2020, 04:06:29 PM
Quote from: Nerm on September 11, 2020, 06:38:04 PM
We have moved our data centers to as much fiber as possible; it has multiple advantages over copper. We do, however, still support copper when required.

If there's a choice between the two, do you push for going with fiber?
Title: Re: Fiber vs Copper in the Datacenter
Post by: Nerm on September 17, 2020, 07:05:29 AM
Not a push but a mandate. If fiber is a choice, it is the choice, so to speak. The only exception is management interfaces, where fiber is rarely an option anyway.
Title: Re: Fiber vs Copper in the Datacenter
Post by: deanwebb on September 17, 2020, 10:22:33 AM
Quote from: Nerm on September 17, 2020, 07:05:29 AM
Not a push but a mandate. If fiber is a choice, it is the choice, so to speak. The only exception is management interfaces, where fiber is rarely an option anyway.

Lol, so that's what happened to us.

"Can you manage on your fiber?"
"Sure, but-"
"OK, we manage on fiber."
Title: Re: Fiber vs Copper in the Datacenter
Post by: Otanx on September 17, 2020, 11:43:49 AM
If the interface were free, then we would have done the same. If the box has four SFPs and we are feeding SPANs to two of them, we would use one of the spares for management, assuming the box supported it. We would also have asked about bonding the management interface for failover. However, we find it pretty uncommon that you can change the management interface at all; usually it is the RJ45 port, with no bonding.

-Otanx