[AG-TECH] DVTS demo Fri 3:30 Eastern Time
Gurcharan S. Khanna
gurcharan.khanna at rit.edu
Wed Apr 15 15:23:47 CDT 2009
I am dubious about being able to avoid echo just by mic/speaker placement,
but I guess I should try it myself first.
But my real question is how to use the multiplexed DV audio across
multiple DVTS sessions. If I have ten DVTS streams going to ten
addresses, I can start up ten instances of DVTS to render them, but I'm
unsure what happens to the audio. How do I aggregate that into a common
audio conversation?
One reason I am asking is that I notice, when using DVTS to do audio,
the video is often behind the audio and the lack of lip sync is
noticeable. I assume this is due to how the operating system processes
audio and video separately. If so, what can be done? Can I introduce an
artificial delay into the RAT audio to bring it closer to sync?
Thanks for any thoughts,
PS: Also, has anyone scripted starting up multiple DVTS instances
pointing at hardwired multicast addresses (outside of using the AG)? -gsk
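On the PS question, a minimal shell sketch of what such a script might look like is below. Everything in it is an assumption to adapt locally: the `dvts` command name and its arguments are placeholders (some DVTS builds ship separate `dvsend`/`dvrecv` binaries), and the multicast group list and port are made up. As written it only prints each command (a dry run) rather than launching anything:

```shell
#!/bin/sh
# Dry-run sketch: build one DVTS receiver command per hardwired
# multicast address. "dvts -r" is a placeholder invocation -- substitute
# the receiver binary and options your DVTS build actually uses.

ADDRS="224.2.1.1 224.2.1.2 224.2.1.3"   # hypothetical multicast groups
PORT=8000                                # hypothetical common port

CMDS=""
for addr in $ADDRS; do
    cmd="dvts -r $addr $PORT"
    CMDS="$CMDS$cmd
"
    # Print instead of launching; to really start the receivers,
    # replace the echo with:  eval "$cmd &"  and add a final  wait
    echo "$cmd"
done
```

To actually launch the instances, swap the `echo` for a backgrounded invocation as noted in the comment; each receiver then renders one stream, which circles back to the audio-aggregation question above.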
William Alberto Romero Ramirez wrote:
> I use a sound mixer and unidirectional (cardioid) mics (Shure SM58, Sennheiser e901) on a specialized computer for audio through the AG Node Service.
> You can send the camera's audio to the mixer in order to forward audio to an AG session. A unidirectional cardioid mic rejects sound from the rear, so you can avoid feedback (the cause of echo) if you place the mic behind the speakers.
> If you need any additional information, please feel free to contact me with any questions you may have.
> William A. Romero R.
> AG-MOX system manager
> Universidad de los Andes
> Bogotá, COLOMBIA
> ----- Original message -----
> From: "Gurcharan S. Khanna" <gurcharan.khanna at rit.edu>
> Date: Thursday, April 9, 2009, 2:24 pm
> Subject: [AG-TECH] DVTS demo Fri 3:30 Eastern Time
> To: "ag-tech at mcs.anl.gov" <ag-tech at mcs.anl.gov>, wg-multicast <wg-multicast at internet2.edu>
>> I will be conducting a demo using DVTS inside of Access Grid
>> tomorrow for Deaf/HoH faculty here plus other Deaf/HoH users. The
>> goal is partly to show several high-quality videos from a single
>> location where each person can be clearly seen, especially close-up,
>> using sign language.
>> The other part is showing many sites connected with the same
>> quality videos. So, we will be simulating 4 users in one room on
>> campus viewing 5 users in another room on campus, using DVTS in AG.
>> If anyone from a remote site would like to drop in to demonstrate
>> remote viewing quality using DVTS, please do so!
>> Since we're using AG, we will be using RAT for audio.
>> Question for DVTS users: what do you usually do for audio in a DVTS session?
>> Are there ways to split off the DV audio stream and feed it to RAT?
>> How does echo cancelling work with DV cameras as sources?
>> Thanks for your feedback,
>> PS: This demo will be held primarily in the RIT Venue of the ANL
>> Server. -gsk
>> Gurcharan S. Khanna, Ph.D.
>> Director of Research Computing
>> Office of the Vice President for Research
>> Assistant Research Professor, Ph.D. Program
>> Golisano College of Computing and Information Sciences
>> Founding Director, Interactive Collaboration Environments Laboratory
>> Center for Advancing the Study of Cyberinfrastructure
>> Rochester Institute of Technology
>> IT Collaboratory Bldg. 17, Room 3119
>> Rochester, New York 14623-5603
>> Phone: 585-475-7504 ~ Cell: 585-451-8370
>> Email: gurcharan.khanna at rit.edu