Access control & multicast (was RE: [AG-TECH] AG Security)

Markus Buchhorn Markus.Buchhorn at anu.edu.au
Thu Jul 25 19:45:23 CDT 2002


At 10:30 PM 24/07/2002 -0500, Bill Nickless wrote:
>(This is a very interesting discussion.  Please excuse my high latency on response; I plead infirm health!)

Sorry to hear that! Hope you get better soon!

My excuse is just "too much work" :-)

>As I understand it, you're protecting against a very limited threat model: to wit, keeping unauthorized joiners on their subnets from attaching to the distribution tree. It doesn't cover:
>
> - Local router compromise

Quite right, good point. Very few things at the network layer can cope with that. So it would mean that router-to-router traffic would also need some controlled-access mechanism. However, in terms of probabilities, solving the problem at the IGMP edge might take out a very large percentage of attacks?

Ok, so you'd also need some trust mechanism between ISPs and downstream customers who control their own routers, perhaps (just) at domain boundaries.... Hmmm. Whatever token is used/provided by the IGMP join could be sent upstream and verified at each router - so you'd want that to scale sensibly, which suggests an in-band authentication mechanism (e.g. a challenge packet embedded in the stream - just once, or regularly?). More memory and/or traffic involved, but they're both cheap now ;-) Does it scale sensibly...? Sorry - just mumbling to myself...
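
(To make the token idea concrete - a toy Python sketch only, not how any real IGMP implementation does it; the shared-secret scheme, names and addresses are all invented:)

import hmac, hashlib

# Hypothetical shared secret distributed to edge routers by the AAA service
SHARED_SECRET = b"example-only-not-a-real-key"

def make_join_token(receiver_id, group):
    # What an authorised receiver would attach to its IGMP join
    msg = ("%s|%s" % (receiver_id, group)).encode()
    return hmac.new(SHARED_SECRET, msg, hashlib.sha256).digest()

def verify_join(receiver_id, group, token):
    # What the edge router (or each upstream router) would check before
    # grafting the branch onto the distribution tree
    return hmac.compare_digest(make_join_token(receiver_id, group), token)

tok = make_join_token("host-42", "224.2.177.155")
print(verify_join("host-42", "224.2.177.155", tok))    # True  -> graft branch
print(verify_join("intruder", "224.2.177.155", tok))   # False -> drop the join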

> - Legitimate listener traffic sniffed by attacker on the local subnet

Ok. There you've already addressed the bandwidth-access issue, so that's not relevant. In terms of sniffing traffic, it's a case of making it as hard as it is for unicast (ignoring "just" content encryption for now). Using a switched LAN and multicast VLANs with AAA mechanisms should help there.

We can only make it hard for the bad guys; we can probably never totally prevent it :-( Content encryption will also only go so far - billion-bit encryption doesn't work well on real-time traffic with today's CPUs :-) Decoding it after the fact, though, is "much" easier.

>It also requires inter-domain AAA to be operating properly.  That's a pretty tough requirement; last I knew, inter-domain AAA was an IRTF topic.

If it were easy, we'd already have it :-). There's some interesting stuff coming out of the I2-VidMid group in terms of schemas and mechanisms for H.323 (and SIP, I think). It's not a total solution, but it's a start at defining something and trying it out.

<change topic to rate limit>
>Based on what I've seen in the reliable multicast working groups at the IETF, it appears that the Digital Fountain approach for rate control is the most dominant.  It's a very clever idea--let me see if I can summarize it:

[..and he does... Good stuff!]

>Now: consider a video codec that provided multiple streams of output, at different bandwidths.  The source would transmit the low-bandwidth packets to the group first, followed by the intermediate bandwidth packets, followed by the high-bandwidth packets. 

These are sent to different groups, right?

>As you can imagine, there are a number of variations on this basic theme.  You could dispense with the active joins and leaves, simply letting receivers choose which multicast group to receive from based on available bandwidth (as determined from loss statistics).  An integrated AG control system could watch the audio traffic loss statistics and reduce the incoming multicast bandwidth of video traffic as needed.

You'd need to be careful of "integrated" codecs like mpeg/qt/avi, which bundle the audio and video together. Or avoid them entirely :-)
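
To make the receiver-driven idea concrete, here's a toy Python sketch (the group addresses, port and loss thresholds are all invented, and real receiver-driven layered multicast is far more careful about when to probe up a layer):

import socket, struct

# Hypothetical layer groups, lowest bandwidth first
LAYER_GROUPS = ["224.2.1.1", "224.2.1.2", "224.2.1.3"]
PORT = 51000

def join(sock, group):
    mreq = struct.pack("4sl", socket.inet_aton(group), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

def leave(sock, group):
    mreq = struct.pack("4sl", socket.inet_aton(group), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_DROP_MEMBERSHIP, mreq)

def adapt(sock, subscribed, loss_rate):
    # Drop the top layer when loss is high, probe one layer up when clean
    if loss_rate > 0.05 and subscribed > 1:
        leave(sock, LAYER_GROUPS[subscribed - 1])
        subscribed -= 1
    elif loss_rate < 0.01 and subscribed < len(LAYER_GROUPS):
        join(sock, LAYER_GROUPS[subscribed])
        subscribed += 1
    return subscribed

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))
join(sock, LAYER_GROUPS[0])      # always take the base layer
subscribed = 1
# ...then each reporting interval:
#   subscribed = adapt(sock, subscribed, measured_loss)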

>The key idea for the AG, however, is for the video sources to encode their data at multiple levels of complexity, and transmit them all.  Receivers then should be able to choose what level of bandwidth they wish to receive from each source.

Some commercial (streaming) products do exactly that. It gets interesting in streaming environments where you have to store the original high-quality version as well as several compressed versions, and manage them all (from an archive perspective). In a real-time/live situation it gets simpler. There are a few layered-encoding techniques around (e.g. in DCT schemes you ditch the higher frequencies). A very simple mechanism, with RLE say, is to send a QCIF frame first, then the missing bits to make it CIF, then the missing bits to make it PAL, then ...
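
A toy numpy sketch of that spatial-layering idea (not any real codec's scheme - just a half-resolution base layer plus the residual that rebuilds the full frame):

import numpy as np

def split_layers(frame):
    # Base layer: every second pixel (the "QCIF" of a "CIF" frame)
    base = frame[::2, ::2]
    upsampled = np.repeat(np.repeat(base, 2, 0), 2, 1)
    upsampled = upsampled[:frame.shape[0], :frame.shape[1]]
    # Enhancement layer: the residual needed to rebuild the full frame
    enhancement = frame.astype(np.int16) - upsampled.astype(np.int16)
    return base, enhancement          # could go out on two multicast groups

def merge_layers(base, enhancement):
    upsampled = np.repeat(np.repeat(base, 2, 0), 2, 1)
    upsampled = upsampled[:enhancement.shape[0], :enhancement.shape[1]]
    return (upsampled.astype(np.int16) + enhancement).astype(np.uint8)

frame = (np.random.rand(288, 352) * 255).astype(np.uint8)   # CIF-ish test frame
base, enh = split_layers(frame)
assert np.array_equal(merge_layers(base, enh), frame)        # lossless rebuild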

Hmm - how would you handle recording/caching? It would have to collect all the streams, to be able to offer the same options downstream later in time...
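
Half an answer to my own question - a toy recorder (again Python, with invented group addresses/ports) would just subscribe to every layer group and dump each to its own file, so a later playback can still offer the same choices:

import socket, struct, select

LAYER_GROUPS = ["224.2.1.1", "224.2.1.2", "224.2.1.3"]   # invented, one per layer
BASE_PORT = 51000

socks = []
for i, group in enumerate(LAYER_GROUPS):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", BASE_PORT + i))
    mreq = struct.pack("4sl", socket.inet_aton(group), socket.INADDR_ANY)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    socks.append((s, open("layer%d.dump" % i, "wb")))

while True:                       # record every layer to its own file
    ready, _, _ = select.select([s for s, _ in socks], [], [])
    for s, f in socks:
        if s in ready:
            f.write(s.recv(65535))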

Cheers,
        Markus


Markus Buchhorn, ANU Internet Futures Project,        | Ph: +61 2 61258810
Markus.Buchhorn at anu.edu.au, mail: Bldg #108 - CS&IT   |Fax: +61 2 61259805
Australian National University, Canberra 0200, Aust.  |Mobile: 0417 281429



