Is there such a thing as a minimum distance on a cable that is needed to perform a CRC check?
Is it different between cable/fiber? I am not aware of any limitations for CRC.
So, shooting from the hip: I would think for full duplex copper there is no minimum distance. For half duplex there would be a minimum distance based on how long it takes a NIC to transition from sending data to listening for a collision. For optical cables, the minimum distance depends on the optics. The transmitter in some optics has a power level high enough to damage a receiver, and attenuation over distance brings the level down to something the receiver can tolerate. Even before you get to power levels that will damage the receiver, you can have a level high enough to blind the receiver without damaging it. There is a NANOG presentation that covers all the optical stuff somewhere. It is a very interesting read.
-Otanx
I've heard 3' as the minimum length on full duplex copper.
Based on this (http://en.wikipedia.org/wiki/Ethernet_physical_layer#Minimum_cable_lengths), not for 10BASE-T, 100BASE-T, and 1000BASE-T.
For fiber, what Otanx said....though I still wonder if there are optics out there that will tell the sender that its Tx power is too strong and to lower it...that's a question for javentre :)
Quote from: killabee on February 24, 2015, 07:20:29 PM
Based on this (http://en.wikipedia.org/wiki/Ethernet_physical_layer#Minimum_cable_lengths), not for 10BASE-T, 100BASE-T, and 1000BASE-T.
For fiber, what Otanx said....though I still wonder if there are optics out there that will tell the sender that its Tx power is too strong and to lower it...that's a question for javentre :)
LOL, it was an old guy that said it. :)
I had this discussion with a number of folks back in 2008, regarding copper cables. I thought it was 1', other folks agreed, but no one could point to something as proof. I was unable to find any standards based documentation which stipulated a minimum cable length.
With optical links, if you have high power optics (ER/ZR) with short spans, you often need to use pads to attenuate the signal so you don't burn out the receivers. Most of the Cisco 10G-SR transceivers have a maximum TX that is weaker than the maximum RX the receiver can handle, so you'll never need to attenuate them, assuming the optics are in spec. For example, most Cisco 10G-SR optics have a maximum TX of -1.2 dBm but a maximum RX of -1.0 dBm, so you're good. For 10G LR, the max TX and max RX are the same, so you're fine most of the time.
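The check described above boils down to simple arithmetic: worst-case received power is max TX minus span loss, and you need a pad whenever that exceeds the receiver's overload point. A minimal sketch (the function name and the ER/ZR example figure of +4 dBm are illustrative assumptions, not from a datasheet; the SR numbers are the ones quoted above):

```python
def needs_attenuation(tx_max_dbm, rx_overload_dbm, span_loss_db=0.0):
    """True if worst-case received power exceeds the receiver's overload point.

    Powers are in dBm, loss in dB. A very short patch cable is modeled
    as approximately zero span loss (the worst case for overload).
    """
    worst_case_rx_dbm = tx_max_dbm - span_loss_db
    return worst_case_rx_dbm > rx_overload_dbm

# 10G-SR from the figures above: max TX -1.2 dBm, max RX -1.0 dBm.
# Even a near-zero-length patch cable stays under the overload point.
print(needs_attenuation(-1.2, -1.0))  # False -> no pad needed

# A hypothetical high-power ER/ZR optic, say +4 dBm max TX into a
# receiver that overloads at -1.0 dBm, would need roughly 5 dB of padding.
print(needs_attenuation(4.0, -1.0))  # True -> pad needed
```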
Quote from: killabee on February 24, 2015, 07:20:29 PM
I still wonder if there are optics out there that will tell the sender that its Tx power is too strong and to lower it...that's a question for javentre :)
You'll want to read about active DWDM systems, specifically something called APC :)
http://www.cisco.com/c/en/us/td/docs/optical/15000r9_0/dwdm/reference/guide/454d90_ref/454d90_networkref.html#wp335800
What I am most concerned about is two ASAs running 1G with GLC-SX-MM transceivers over 62.5/125 fiber. The whole reason I am curious is that we have two ASA 5525-Xs using gi1/5 (1G fiber) as the failover interface between the two. Right now they are literally 2U apart from one another. Currently we have a 3m cable interconnecting the two, but if I could get a 0.3m cable and there would be no CRC/failover issues, I would prefer to go that route with no slack.
You're fine.
1000BASE-SX is even better than 10G: TX maximum is -3 dBm and RX maximum is 0 dBm.
NANOG fiber presentation - https://www.nanog.org/meetings/nanog48/presentations/Sunday/RAS_opticalnet_N48.pdf
-Otanx