Why does DOCSIS hate NTP?

Started by Uh-Oh, December 19, 2015, 11:35:14 AM

Previous topic - Next topic

Uh-Oh

I realize this is out of the realm of common discussion here, but maybe some of you have experience with it. DOCSIS uses the Time Protocol (RFC 868). Maybe there was a reason for that initially, but through all the years and revisions it has hung around. It works and does the job just fine, but it seems silly considering NTP is everywhere.
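For anyone who hasn't looked at RFC 868: the whole protocol is a 32-bit big-endian count of seconds since 1900-01-01, so converting it to Unix time is just subtracting the fixed 70-year offset. A minimal sketch in Python (the function name is mine, not from any library):

```python
import struct
from datetime import datetime, timezone

# RFC 868 counts seconds from 1900-01-01; Unix time counts from
# 1970-01-01. The gap between the two epochs is 2,208,988,800 s.
RFC868_TO_UNIX_OFFSET = 2208988800

def rfc868_to_unix(raw: bytes) -> int:
    """Convert the 4-byte big-endian RFC 868 payload to Unix time."""
    (seconds_since_1900,) = struct.unpack("!I", raw)
    return seconds_since_1900 - RFC868_TO_UNIX_OFFSET

# Example: 2,524,521,600 seconds since 1900 is 1980-01-01 00:00:00 UTC.
raw = struct.pack("!I", 2524521600)
print(datetime.fromtimestamp(rfc868_to_unix(raw), tz=timezone.utc))
# -> 1980-01-01 00:00:00+00:00
```

That's the entire job the protocol does — one-second granularity, no round-trip compensation — which is why it feels odd next to NTP.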

When DOCSIS 3.0 was introduced it came with the new (to me at least) option of moving the downstream RF ports from the line card to an external QAM modulator. Enter the conundrum: the QAM modulator must be in sync with the CMTS, so now we need a reference clock between the two devices. If both devices are in close proximity it's a trivial task, but if they are miles apart the problem does become more complicated. Some reportedly bright people came up with the idea of the DOCSIS Timing Interface (DTI): a GPS-referenced clock source (which existed long before these geniuses) using some asinine new protocol (as if there weren't enough protocols already).
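For context on why the EQAM and CMTS can't just free-run: the DOCSIS MAC stamps downstream SYNC messages with a 32-bit timestamp driven by the 10.24 MHz master clock, and modems use it to schedule upstream bursts. If the EQAM's counter drifts from the CMTS's, ranging falls apart. A rough back-of-envelope sketch (the 10.24 MHz master clock and 32-bit timestamp are per the DOCSIS spec; the drift figures are purely illustrative):

```python
MASTER_CLOCK_HZ = 10_240_000  # DOCSIS 10.24 MHz master clock
TIMESTAMP_BITS = 32

# The 32-bit timestamp counter wraps roughly every 7 minutes.
wrap_seconds = (1 << TIMESTAMP_BITS) / MASTER_CLOCK_HZ
print(f"timestamp wraps every {wrap_seconds:.1f} s")

# Illustrative: a free-running +/-1 ppm oscillator accumulates about
# 10 timestamp ticks of error per second against the CMTS clock --
# which adds up fast when upstream ranging expects tick-level agreement.
drift_ppm = 1
ticks_per_second_error = MASTER_CLOCK_HZ * drift_ppm / 1e6
print(f"{ticks_per_second_error:.2f} ticks/s of drift at {drift_ppm} ppm")
```

So the requirement for *some* common reference between the two boxes is real; the gripe below is about how that reference gets distributed.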

First, I have a hard time believing that NTP, along with some processing smarts, could not handle the problem. Maybe not your generic everyday NTP, but somewhere up the ladder it's gotta be pretty accurate and well synchronized for my generic NTP to work as well as it does. NTP already existed, is well tested, and is used everywhere.
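To be fair to that argument: the NTP wire format itself is not the bottleneck. Its 64-bit timestamp is 32 bits of seconds since 1900 plus a 32-bit binary fraction, so the representation resolves to roughly 233 picoseconds — the practical limits are network path asymmetry and jitter, not the format. A quick sketch (the packing helper is mine, for illustration):

```python
# NTP on-wire timestamps are 64-bit fixed point: 32 bits of seconds
# since 1900 plus a 32-bit fraction, so the format resolves ~233 ps.
FRACTION_BITS = 32

resolution_s = 2 ** -FRACTION_BITS
print(f"NTP timestamp resolution: {resolution_s * 1e12:.1f} ps")

def to_ntp_timestamp(seconds: float) -> int:
    """Pack seconds (since the NTP epoch) into a 64-bit NTP timestamp."""
    whole = int(seconds)
    frac = int(round((seconds - whole) * (1 << FRACTION_BITS)))
    return (whole << 32) | frac

# 1.5 s -> 1 second in the high word, 0x80000000/2^32 = 0.5 in the low.
print(hex(to_ntp_timestamp(1.5)))  # -> 0x180000000
```

Whether NTP over a real HFC backhaul could hold the tick-level agreement DOCSIS wants is a separate question — but "the protocol can't express the precision" isn't the reason.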

My other gripe is that the synchronization solution is the same whether the external modulator is in the same rack as the CMTS or 50 miles away. Instead of a couple of connectors and a length of wire to distribute the reference clock locally (the CMTS already generates a suitable reference clock, it just doesn't share it), I have to buy a DOCSIS Timing Server. Yay!

Now, I realize I've been poking fun at the designers/engineers who came up with this stuff, and maybe there is some reason that what they came up with is the best way to do it. From my vantage point, though, it looks like the requirement list went something like...

1: It's gotta be expensive.
2: Must be convoluted. Simple solutions cannot work, even if one would cover the vast majority of installations.
3: Must be new. Existing technology cannot be leveraged to solve this problem.
4: Probably needs to keep two devices in sync.

Sorry, that turned into a rant. I like the M-CMTS idea, but the timing solution absolutely sucks. We serve a lot of very small rural communities, and this gets on my nerves every time I have to look at one of these things.

Have a nice day!

srg

How do you practically locate the EQAM further down the network? You'll still have to terminate the upstreams back at the CMTS. We're heavily into M-CMTS, but our CMTSs and EQAMs are all located at the same site, with the external DTI server just locally connected.

CCAP will again see the QAM 'inside' the CMTS, as with I-CMTS, but there you have the notion of Remote PHY as well, which I'm guessing will suffer from the same problems you are describing.

as if the mind had darkened forever.

Uh-Oh

The remote downstream was the reason I have seen cited for the invention of the DTI server. I would never claim it to be practical. With the edge QAM local to the CMTS, there is no reason I can think of for needing a DTI server. The CMTS is already generating a clock that would be trivial to tap and distribute. If that was too much trouble, they could have left the DTI part out and just used plain old reference inputs that could take standard, off-the-shelf, relatively inexpensive reference sources. They took a 10 MHz reference requirement and made a complete mess of it.

Remote PHY may be different. I'm not very familiar with all the ins and outs of it, but my limited understanding is that the downstream and upstream chips both reside in the node. The critical timing portion is keeping the downstream and upstream in sync with each other. My (probably wrong) assumption is that having both in the node means they can share an oscillator there and be happy. The upside is that a timing server shouldn't be needed. The downside will be the price of nodes.