FW: The return of the report

Ivan R. Judson judson at mcs.anl.gov
Thu Sep 19 11:47:53 CDT 2002


Here's my draft.
--Ivan
----------

Hi,

I'm going to try to address the points below as completely as I can, but
please feel free to ask questions if something is unclear.

As a preface, I think it's important to understand our perspective on
the Access Grid. The AG is an integrative technology; it isn't trying to
eliminate other systems like H.323 or VRVS, but rather to provide users
with rich collaborative resources. The AG provides these collaboration
resources through a different mechanism than other systems. Rather than
put a long explanation in here, I've summarized the comparison in the
attached table.

It is difficult to address the issues raised by Philippe precisely,
because the question "What is VRVS?" gets answered differently in
different places. If it is infrastructure, like the router software
mentioned below, then it's incomparable to applications; but if it's an
application, then it has to bear the comparison.

Alternatively, it would appear from the VRVS web pages, and from various
comments below, that VRVS is not a studio-based solution, so it could be
subsumed entirely under the non-studio-based solution section. It should
also be clear that desktop H.323 is non-studio based as well, but that's
probably assumed.

Comments appear below.

--Ivan

----------------
>     o 2.1: Technically speaking, I don't understand the
> planned usage of 
> openMCU to enable interoperability between H.323 and AG. 
> First of all, 
> VRVS has done it -- completely. Second, it is definitely not 
> the right 
> technical solution. I already informed the AG guys about it but looks 
> like they want to insist on using it without first understanding the 
> openMCU architecture and its limitations.. 

Interoperability between the AG and H.323 means much more than bridging
data streams. What we mean by interoperability is that there exists a
solution that lets H.323 endpoints connect to Access Grid Virtual Venues
and collaborate in the richest way the endpoint affords them.

Since the Virtual Venues decide the media configuration information
(which is analogous to the audio and video connections in H.323), the
OpenMCU needs to be able to find out the media configuration from the
Venue. Also, AG 2.0 does not require specific audio and video formats.
While it will probably still suggest some least common denominator, the
Venue will negotiate the capabilities of the users; if there is a
mismatch in capabilities, network services can be used to create a
match. These network services will be found through the Virtual Venues.
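To make the negotiation idea concrete, here is a hypothetical sketch of how a Venue might intersect client capabilities and fall back to a transcoding network service on a mismatch. The function and data shapes are illustrative only, not the actual AG 2.0 API:

```python
# Hypothetical sketch of Venue-side capability negotiation. Names and
# data shapes are illustrative, not the real AG 2.0 interfaces.

def negotiate(client_caps, services):
    """client_caps: list of sets of supported media formats, one per client.
    services: dict mapping (src_fmt, dst_fmt) -> name of a transcoder
    network service registered in the Venue.
    Returns (format_to_use, service_or_None)."""
    common = set.intersection(*client_caps)
    if common:
        # Everyone shares a format: no network service needed.
        return sorted(common)[0], None
    # Mismatch: look for a transcoder that can bridge two formats.
    all_fmts = sorted(set.union(*client_caps))
    for src in all_fmts:
        for dst in all_fmts:
            if (src, dst) in services:
                return src, services[(src, dst)]
    return None, None  # no common format and no bridging service


if __name__ == "__main__":
    # Shared format: negotiation succeeds directly.
    print(negotiate([{"h261", "mpeg4"}, {"h261"}], {}))
    # Mismatch: a transcoder service creates the match.
    print(negotiate([{"mpeg4"}, {"h261"}],
                    {("mpeg4", "h261"): "transcoder-1"}))
```

In the second call the clients share no format, so the Venue resolves the mismatch by routing through the registered transcoder, which is the role the network services above play.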

We've looked at the OpenMCU architecture and have come to the conclusion
that the best solution for the AG is to contribute to the OpenH323
project by building an AG-enhanced OpenMCU. Building this OpenMCU
involves:

1. Adding RTP/UDP multicast to the OpenH323 base library (this is part
of the H.323 standard);
2. Enabling the OpenMCU to find Virtual Venues, and the network services
in them; and
3. Building network services that allow the audio, video, and data to be
appropriately reformatted for the H.323 endpoint.
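Step 1 means the OpenMCU must speak RTP on multicast groups. As a small, self-contained illustration of the packet format involved (not OpenH323 code, which is C++), here is a parser for the 12-byte RTP fixed header defined in RFC 1889, the RTP specification current at the time:

```python
# Minimal parser for the RTP fixed header (RFC 1889). Illustrative only;
# the real work would happen inside the OpenH323 base library.
import struct

def parse_rtp_header(packet):
    """Return the RTP fixed-header fields from a raw packet (bytes)."""
    if len(packet) < 12:
        raise ValueError("packet too short for an RTP fixed header")
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version": b0 >> 6,          # RTP version; should be 2
        "padding": (b0 >> 5) & 1,
        "extension": (b0 >> 4) & 1,
        "csrc_count": b0 & 0x0F,
        "marker": b1 >> 7,
        "payload_type": b1 & 0x7F,   # e.g. 31 = H.261 video (RTP/AVP)
        "sequence": seq,
        "timestamp": ts,
        "ssrc": ssrc,
    }

if __name__ == "__main__":
    # Fabricated example packet: version 2, payload type 31 (H.261).
    pkt = struct.pack("!BBHII", 0x80, 31, 1000, 160, 0xDEADBEEF) + b"data"
    print(parse_rtp_header(pkt))
```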

The issue of closed versus open source seems tangential. We're building
the AG open source because we feel it has greater value that way. We
prefer the OpenMCU because it too is open, and therefore modifications
and extensions benefit a wider audience. It should be clear, however,
that these modifications could also be applied to the bridging that is
happening as part of VRVS.

>     o 2.4: The interoperability between AG and VRVS is
> already done. The 
> further deployment of AG/VRVS in UK and other countries could be done 
> easily, as well as full support for more AG virtual venues. 
> Something that people didn't get is that  the current AG/VRVS Gateway 
> has never been in the "Funded Tasks development list". We 
> simply did it: 
> because there was a need,  and because our architecture was flexible 
> enough so that we were able to do it with minimal resources. This is 
> just one of several examples where people do not quite realize the 
> potential of our realtime software infrastructure, and 
> conversely assume 
> limitations in the VRVS system that don't exist. 
> In this case if people think that VRVS/AG gateway improvements are 
> needed right now, then we could do it in return for a 
> moderate amount of 
> funding -- to improve the gateway GUI, to deploy it, and to provide a 
> bit of support. The funding is only needed because we would 
> have to push 
> other tasks on our long in-progress development list; or (better) we 
> would take on another engineer to keep all of our milestones 
> (including 
> the VRVS/AG ones) on schedule. 

Here again, the issue seems to be what interoperability means. The
Access Grid is more than just audio and video streams being sent to
everyone; there's an entire collaboration infrastructure centered around
the Virtual Venues. Very shortly (as part of 2.0) we will be providing
client specifications that would allow the VRVS team to write a client
for the Virtual Venues. However, there seems to be a fundamental
misunderstanding about how the AG works. The number of Venues is
expected to grow rapidly, and any tools that wish to take advantage of
the Venues' value will have to adhere to the interfaces.

Again, perhaps the document needs to address exactly what is meant by
AG-to-VRVS interoperability. It is the case that when VRVS is using the
MBone tools, not H.323, the interoperation works fine.

I just fired up VRVS in the Lobby; it shows each individual video stream
as a separate user. This is another interoperability difference: yes,
the streams show up, but they mean something a little different in the
AG. I also find that it is forwarding all the streams.

>    o 2.4: I don't understand "It is not possible for an
> Access Grid to 
> participate in a VRVS session". This is not true. And the solutions 
> proposed do not take in account the current technique used in the 
> gateway. 
> It would be better to ask us is there is a presumed  problem: (a) to 
> find out if it is already done, (b) to allow us to propose solutions, 
> which in the great majority of cases will be quite 
> straightforward for 
> us. 

I think perhaps what this section is addressing is the breakage of
video-follows-voice when the video is not emanating from the same host
as the audio.

>     o 3.4, H.323/H.320:  These are Not broadcast quality as
> mentioned 
> here. CIF (352 X 288) is half the resolution of broadcasts. In 
> continuous presence (4 videos on a screen) one has QCIF  for 
> each video 
> which is 4 times less resolution per video than  broadcast. 

This isn't a particularly AG-specific point, but considering the
original text says 'near broadcast quality', I think the original text
is true. However, it might be valuable to point out later in that
paragraph that when you are not using a single full-screen video
solution in H.323, you give up that 'near broadcast quality', since you
have to scale the video down to smaller resolutions. (This is an obvious
point to all of us, but it might be worth making explicit.)

As a side note, AG 2.0 has a new node design, which will allow nodes to
exist on a wider range of platforms. While the pull has been to allow
desktops to collaborate on the AG (which this will enable), the real
goal is to open the possibility of using much higher quality audio and
video solutions. When we deploy stereo, quad, or even 5.1 audio, things
will be much different; and when we test out HD video, we will be
creating a huge disparity in capabilities between users on WAP phones
and users with HD video. We want to enable all of them.

>     o 3.7 AG: A solution purely multicast based is surely not
> "the only 
> fully scalable solution for large-scale collaboration" as it is 
> mentioned in the report. It has been demonstrated since the  early 
> 1990's that the opposite is true. In contrast, it has been  
> recognized 
> (e.g. at high performance network meetings organized by DOE 
> this Summer) 
> that using unicast tunnels where needed to interconnect mulitcast 
> domains  is demonstrably scalable. This is why VRVS has  
> spread to 60+ 
> countries, and precisely why VRVS  is architected the way it is. 
>  The claim that pure multicast is scalable in this section is  in 
> contradiction with the following sentence, that mentions  
> that it will 
> take few years for having all UK academic LANs  with multicast 
> capability. We must point out that multicast at network level, with 
> large TTL is now considered dead by leading experts such as Linda 
> Winkler and Matt Matthis. VRVS's unicast tunnels inter-connecting 
> multicast domains  has been recognized as the "obvious" globally 
> scalable  solution; in contrast to what is said in this report. 

I think these points might be better stated under the 3.7 heading about
networking. The statement above begins reasonably, but the last part
about what is characterized in the report is misleading. The AG
currently relies on multicast infrastructure; this was a decision we
made early on to try to provide an application that could drive the
wide-scale deployment of multicast. AG 2.0 will still default to using
multicast, but it will incorporate technology for seamlessly connecting
separate multicast clouds, as well as for creating virtual network
topologies from unicast tunnel meshes. We plan to build these features
using technologies like QuickBridge (which could be the basis of a very
nice OGSA bridge service). The point is, AG 2.0 has to operate robustly
in the face of various network conditions, one of which is unreliable
multicast.
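The unicast-tunnel approach can be sketched as a framing problem: a bridge at each site picks up multicast packets locally, wraps each with its original group and port, relays the frame over a unicast link, and the peer unwraps and re-sends it to the same group. The following is a toy framing scheme for illustration only, not QuickBridge's actual wire format:

```python
# Toy framing for carrying multicast packets over a unicast tunnel.
# Illustrative only; QuickBridge's real protocol differs.
import socket
import struct

def frame(group, port, payload):
    """Wrap a multicast packet for transport over a unicast tunnel."""
    addr = socket.inet_aton(group)  # 4-byte packed group address
    return struct.pack("!4sHH", addr, port, len(payload)) + payload

def unframe(data):
    """Recover (group, port, payload) at the far tunnel endpoint,
    so the bridge there can re-multicast to the same group/port."""
    addr, port, length = struct.unpack("!4sHH", data[:8])
    return socket.inet_ntoa(addr), port, data[8:8 + length]

if __name__ == "__main__":
    # 224.2.127.254:9875 is the well-known SAP announcement address.
    f = frame("224.2.127.254", 9875, b"session announcement")
    print(unframe(f))
```

A pair of such bridges, one per multicast cloud, is the unicast tunnel; a mesh of them gives the virtual network topology described above.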

AG 1.0 had significant network requirements for many institutions,
multicast and bandwidth being the two major factors. As I've described,
we have to address not only multicast but the bandwidth issue as well.
Our plan to incorporate capability negotiation addresses both of these
issues: it provides ways to let every participant interact with the
richest collaboration available to them. Multicast tunnels, virtual
networks, VRVS bridges, transcoders, video selection services, and other
network services are possible in AG 2.0, and we don't plan on
implementing all of them ourselves. We plan on incorporating critical
services in the core, but having a set of standards and interfaces that
allow others to create services for their specific situations.

>     o 3.9 AG:  An ACL list do not provide a secure way to operate.
> Multicast is definitely not the best technique for providing security 
> since almost anyone on the network can sniff the packets. In this 
> particular case, the only solution is encryption from the 
> applications 
> and today this could be done via the Mbone  tools (Vic/Rat). 
> This is not 
> particular to the AG, but only related to the Mbone tools. In 
> addition, 
> the encryption   provided in the Mbone tools is very basic, 
> and anyone 
> enough competence will be able to decrypt it. 

It's difficult to address this point directly because we're on the verge
of AG 2.0, which incorporates the Globus Toolkit 2.0 and leverages the
security it provides. Globus' wide adoption, commercialization, and
standardization mean that by leveraging its security mechanisms we get a
system whose security has been scrutinized by security experts, which we
are not. Additionally, the data the MBone tools send is encrypted with
cryptographically sound algorithms, including DES and AES. We may need
to do analysis to see where the MBone tools are open to attack, but the
actual encryption is very sound.

>     o 3.13: I don't understand what is meant by "The AG configuration
> applied at the e-Sciences Centers makes them unsuitable for VRVS 
> conferences". Are the AG nodes there different from all the other AG 
> nodes elsewhere; if so, in which ways ? 

This is a hardware and software configuration issue: the audio and
display machines are separate, I believe. The issue is that VRVS is
really a non-studio solution, more closely comparable to NetMeeting than
to the AG; therefore it takes more work to make an AG Node (in 1.0)
support VRVS.

>     o 5.2: I am surprised by the cost for the AG. Platinum 30,000 LS
> (UK), One could equip a whole site with desktop cameras !. 

To make a cost comparison, one has to analyze the requirements. An
entire site of desktop cameras would not provide even a fraction of the
value of a single AG node, nor of a single H.323 box. The Access Grid
provides collaboration; it's not a desktop video conferencing system.

>     o 8.1:  Again, we see an incorrect comparison of  VRVS (a
> realtime 
> software infrastructure integrated with many clients) with the 
> application tools themselves: Netmeeting, QT, CUSeeMe,...etc..  Once 
> again this is a failure to understand that VRVS is not a client 
> application, but rather provide the infrastructure upon which  many 
> client applications can run, and interoperate. It is like  comparing 
> Cisco routers and the Cisco IOS software, with FTP  or Telnet. 
> ----------------------------------------------------------- 

This is a report about multisite videoconferencing (although I'd prefer
it were about collaboration). Rather than suggest removing VRVS
completely, I'd rather see its videoconferencing capabilities clearly
outlined so we can compare them directly with the other solutions
presented in the report.

--Ivan
-------------- next part --------------
A non-text attachment was scrubbed...
Name: comparison.xls
Type: application/vnd.ms-excel
Size: 20480 bytes
Desc: not available
URL: <http://lists.mcs.anl.gov/pipermail/ag-dev/attachments/20020919/e5a37188/attachment.xls>

