[AG-TECH] AG 2.0 and VRVS vic and rat and other ramblings
michael.daw at manchester.ac.uk
Fri Oct 22 08:47:49 CDT 2004
This might be a good opportunity to highlight some work we've done in the
UK. Firstly, we made a comparison of VRVS, AG, and H.323 in this report (now
two years old), which I chaired:
Next, this led to a number of other reports, one of which (report 4 in the
second list) examines what would be required to establish a common code base
for vic and rat for VRVS and the AG. There is a good chance that this will
be funded very soon by agencies within the UK:
I hope you find these interesting.
> -----Original Message-----
> From: owner-ag-tech at mcs.anl.gov
> [mailto:owner-ag-tech at mcs.anl.gov] On Behalf Of Ivan R. Judson
> Sent: 21 October 2004 21:14
> To: 'Douglas Baggett'
> Cc: ag-tech at mcs.anl.gov; 'Thompson, Kevin L.'; jkoss at nsf.gov;
> dgatchel at nsf.gov
> Subject: RE: [AG-TECH] AG 2.0 and VRVS vic and rat and other ramblings
> > Thanks for the good description! Of course I don't want you to get
> > the wrong idea from my post. I think the direction of the middleware
> > framework is the correct direction and it seems to be coming along.
> > As a grid operator I look at what is used every day. We all know that
> > video/audio conferencing is THE major application that is being used.
> > From the grid operator's perspective of trying to satisfy users with
> > a better system, the video and audio are THE most important
> > application. I understand completely that the audio and video are not
> > the thrust of the work being done; the middleware portion is.
> No problem; just wanted to make sure we're on the same page :-)
> > Please correct me, but it seems what you are saying in your post is
> > that the framework, when completed, should provide a consistent
> > interface where compatible video/audio applications could be plugged
> > in AND, assuming everybody in a particular meeting is using software,
> > we would be able to take advantage of the development that's being
> > done in video and audio.
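The plug-in arrangement described just above can be sketched roughly as follows. This is a hypothetical illustration only, not the actual AG 2.x service interface; all class and method names here are invented.

```python
# Hypothetical sketch of a pluggable media-tool interface, loosely in
# the spirit of what the thread describes for AG 2.x. This is NOT the
# real AG service API; all names are invented for illustration.

from abc import ABC, abstractmethod


class MediaTool(ABC):
    """Anything that can join a venue's media streams."""

    @abstractmethod
    def start(self, address: str, port: int) -> None:
        """Begin sending/receiving on the given multicast address."""

    @abstractmethod
    def stop(self) -> None:
        """Leave the session and release resources."""


class VicVideo(MediaTool):
    """A stand-in for a vic-style video tool behind the interface."""

    def __init__(self) -> None:
        self.running = False

    def start(self, address: str, port: int) -> None:
        self.running = True   # real code would launch the tool here

    def stop(self) -> None:
        self.running = False


tool = VicVideo()
tool.start("224.2.2.2", 9999)
assert tool.running
tool.stop()
```

The point of the interface is the one Douglas makes: any tool that implements it can be swapped in without changing the middleware.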
> Right. The plan would be to enable this environment, then
> "let the market/users" push the platforms to provide
> interoperable data. The protocols are available and in use,
> but the choice of codecs (mostly because of patent/IP issues)
> is still quicksand.
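The codec "quicksand" problem can be made concrete with a small sketch: a meeting can only avoid transcoding if the intersection of every participant's codec set is non-empty. This is a minimal illustration with invented codec lists, not code from any AG or VRVS release.

```python
# Hypothetical sketch: finding a codec that every participant in a
# meeting supports. Codec names and participant data are illustrative.

def common_codecs(participants):
    """Intersect the codec sets of all participants, returning the
    codecs every client can both send and receive."""
    codec_sets = [set(p["codecs"]) for p in participants]
    return set.intersection(*codec_sets)


participants = [
    {"name": "node-a", "codecs": ["h261", "h264", "mpeg4"]},
    {"name": "node-b", "codecs": ["h261", "mpeg4"]},
    {"name": "node-c", "codecs": ["h261", "h263"]},
]

print(common_codecs(participants))  # -> {'h261'}
```

When the intersection is empty, the only options are excluding a participant or transcoding, which is exactly why codec choice stays contentious.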
> > I stand corrected on the open source issue. I thought VRVS was open
> > source (which means I suppose that vic and rat must be under some
> > sort of license that is similar to the BSD license, possibly?).
> Right; vic, vat, and rat are BSD-licensed, from either Berkeley Lab,
> University College London, or both. VRVS is closed and owned
> by Caltech.
> > It looks like we have some smart people in related projects of
> > lesser scope (VRVS, OpenMash, etc.) that are concentrating on the
> > particular application of video and audio. It seems to me to only
> > make sense to work with those folks in improving the video/audio
> > aspect of the AG that is not necessarily being developed by the core
> > AG team, so that we can use them with the middleware (I don't want
> > to belittle ANYTHING that people like Bob Olson, yourself, and
> > others have done to fix vic and rat and add features where
> > necessary). :)
> We entirely agree here. I've been working with folks from
> UCL, LBL, OpenMash, I2, VRVS etc to try and converge on a set
> of tools. It seems like downward pressure from our middleware
> isn't sufficient motivation. I don't know what might be
> needed to encourage better convergence, but we'd be all for it :-)
> > I also don't want to imply that there isn't any talk between AG and
> > the other communities. It's just that as a node operator I don't see
> > much movement with audio/video. It's been almost 3 years since I set
> > up our node, and Vic and Rat work almost exactly the same as they
> > did back in 2001. There have been some great fixes... many thanks
> > for that (Really!)
> That code was *very* well written by the original authors.
> We've made very modest changes to it, and in fact we're
> working with UCL to "give back" our changes and use "standard
> distributions" of vic and rat.
> > What led me to ask the first question was how well the VRVS version
> > of Vic worked and the fact that they were working on adding MPEG4
> > and HDTV. And of course the fact that InSors also added support for
> > H.264 (but only between InSors clients). I'm sure we (the node
> > community) would love to see these improvements, but the question is
> > how to get the development process going so we can see the features
> > developed/deployed in a way that everybody can take advantage of
> > them. Especially in a way that can take advantage of the AG 2.x
> > framework in the way you've described.
> I haven't had time to grab the latest vrvs tools (3.x+) and
> look at what's been done; I would love to see them offer
> their modifications back to the "home" source maintainers
> (like we're planning on doing). The structure is in place for
> us all to improve the one code base and then all benefit from
> the new features and bug fixes. I think we're seeing enough
> activity to try and make that model work again. Pressure from
> your side (end-user and of course large important institution
> :-) on various projects to "give back"
> modifications might go a long way.
> > Do you think that once the middleware portion is complete and robust
> > we will see the community dump vic and rat for something better
> > (like some of the software you mentioned)? Should the community talk
> > about what we should move to when the middleware is ready for it, so
> > there is some common video denominator during meetings?
> I think a "common denominator" will always be an elusive target.
> Market forces cause the big two platforms to differentiate themselves
> by locking each other out of "high value" IP. Codecs are one space
> where that happens a lot; software patents might make it worse, who
> knows. I think as time goes on we'll have a "majority denominator"
> and smart middleware that transcodes the streams of "minority users"
> (who might be on via VoIP, or handheld, or something more
> interesting) into something they can handle. The transcoding will
> cost latency, but if it's designed correctly, only the users who
> require special services pay the latency bill. This encourages them
> to join the majority :-).
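The "majority denominator" idea above can be sketched in a few lines: streams already in the majority codec pass straight through, while minority senders alone pay a transcoding-latency cost. The codec names and latency figure are assumptions for illustration, not measurements from any real gateway.

```python
# Hypothetical sketch of the "majority denominator" routing idea:
# majority-codec streams pass through untouched; minority streams are
# transcoded, and only those senders pay the latency bill.

MAJORITY_CODEC = "h261"      # illustrative choice of majority codec
TRANSCODE_LATENCY_MS = 40    # assumed per-stream transcoding cost


def route(stream):
    """Return (codec delivered to the meeting, extra latency in ms)."""
    if stream["codec"] == MAJORITY_CODEC:
        return MAJORITY_CODEC, 0               # no transcoding needed
    # Minority sender: transcode into the majority codec.
    return MAJORITY_CODEC, TRANSCODE_LATENCY_MS


print(route({"codec": "h261"}))       # -> ('h261', 0)
print(route({"codec": "voip-g711"}))  # -> ('h261', 40)
```

Keeping the latency penalty on the minority path is the incentive structure described above: joining the majority makes the cost disappear.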
> I'm not sure the collaboration community should really steer
> codec choices, since the scenarios are so wildly different
> for each possible use. I think keeping the situation in mind
> and taking every opportunity to encourage "consolidation of
> media tools work" is an excellent plan.