[AG-TECH] Real Player

Lawrence A. Rowe Rowe at bmrc.berkeley.edu
Fri Dec 27 16:12:49 CST 2002


"Jonathan C. Humfrey" wrote:
> 
> Does anyone have experience webcasting live using Real Player?
> 
> Thanks,
> Jonathan

(Jonathan, this is a lot more than I suspect you were expecting, but
there were several messages on the ag-tech list recently that relate
to similar things, so I have pretty much done a memory dump of a lot of
ideas and resource links.)
---

Hi -

We have a lot of experience producing webcasts using Real Networks.
First, let me introduce you to Peter Pletcher who is our systems
manager. Peter knows how everything works in detail, and he can fill
you in.  There is also a lot of information on our website. In
particular, you should check out 
	http://bmrc.berkeley.edu/papers/bibs-report.html
which has a report on the lecture webcasting system we developed and
that the Berkeley campus continues to run.  There are a lot of details
about the system in the paper - particularly the electronic program
guide, the program database, and the software and hardware architecture
we use to produce the webcasts.  

This system is used to produce over 35 hours of lecture webcasts per
week - that's approximately 15 classes. These webcasts are viewed over
40K times per month.  The vast majority of the viewings are on-demand
replays, that is, students watch the material asynchronously when
studying for exams. Moreover, relatively few people watch the videos all
the way through - only 10% of the plays cover the entire lecture. The
remaining 90% break down as follows: 50% are 10 mins or less, 10% are
10-20 mins, 10% are 20-30 mins, 10% are 30-40 mins, and 10% are 40-50
mins. (Note: most
Berkeley lectures are 50 minutes.) This result confirms similar results
observed by Yong Rui at MS Research when they analyzed a similar seminar
webcasting system.  More details on the educational use of this
technology by students were presented in a Berkeley MIG Seminar this
past semester - see
   http://bmrc.berkeley.edu/bibs/instance?prog=1&group=25&inst=853
for the replay.

You can do the single stream productions described below using any
commercially available webcasting software (e.g., Apple QuickTime, MS
Windows Media, or RealNetworks RealOne).

Now to the answers.  I have divided the discussion into the following
topics: 1) basic single stream production, 2) sophisticated single
stream production, 3) multiple stream production, 4) multiple
transmission production, and 5) automation and control.  

BASIC SINGLE STREAM PRODUCTION

The simplest thing to do is:

1. have a camera and a wireless mic
2. [capture machine] have a linux box with RN producer - you need a good
video capture card in this machine.  Peter can suggest the best boards
to buy today - they change all the time.
3. [server machine] have a linux box with a RN server - typically in a
machine room
4. you probably should have a cheap audio mixer connected to the
wireless mic base station.  put output of mixer into audio input on
capture machine.
5. connect video output of camera to video input on capture machine.
6. run RN Producer with appropriate stream settings. We typically use
the multiple rate settings with higher bit rates (e.g., broadband - 250+
Kb/s, ISDN - 128 Kb/s, and modem - 50 Kb/s).  Peter can tell you which
codec we use - the older one is better than the newer ones - at least
until the most recent release.

Now, users connect to the live stream using a URL to the server machine.
We also archive the compressed a/v - typically on the capture machine
although you can also do it on the server machine.  For a variety of
reasons, we also have a VCR at the capture machine and record the a/v
program on videotape.  The videotape is good back-up in case something
goes wrong with the computers.  In the old days, we would lose 20-30% of
the live webcasts due to a variety of reasons (e.g., software crash,
network problem, a/v signal problems, phase of the moon, etc.).  These
days we rarely have that many problems - maybe 1-2% of the time.  The
Real folks have fixed a lot of bugs in their software.

You can physically locate equipment and people in any of the following
ways:

1. put operator and computers in the room with the event
2. put the computers in the room, but operate remotely - just use remote
login.
3. put computers in machine room and run a/v wires to the computer.
operator can be in room or remote.

The major advantages of putting the equipment outside the room are:

1. noise - if the machines stay in the room, you'll need an a/v closet
or a quiet rack (Peter can give you details on cheap small ones and big
ones that are more expensive); putting the machines in another room
eliminates this problem entirely
2. distractions - operator in the room uses space that might be scarce
and may distract the other participants
3. reduced cost - one operator should be able to produce several
webcasts - depends on camera switching and movement.

Needless to say, you can easily set up a roll-around cart with all the
required material - and you can even use a wireless network link to
transmit since the data rates are pretty low.
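(Back-of-the-envelope: the three encoding rates mentioned above total
roughly 250 + 128 + 50 = ~430 Kb/s, so even a single 802.11b link at 11
Mb/s nominal has an order of magnitude of headroom for one webcast.)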

SOPHISTICATED SINGLE STREAM PRODUCTION

The production above gives you one stream from one camera.  Most times
you want to show different images (e.g., presentation material,
different camera views of the speaker and the audience, etc.)  To do
this, you will need a pan/tilt camera (or camera operator) and possibly
an audio/video routing switcher.  Many vendors sell pan/tilt cameras at
a variety of price points - see Sony, Panasonic, Canon, etc.  All of
these cameras have RS-232 interfaces which you can connect to the
capture computer and control remotely with simple software - we have
Tcl/Tk apps that use a lightweight text-oriented RPC package called
TclDP to control all of the equipment remotely. (TclDP is similar to
XML-RPC, but we developed it over 10 years ago and have been using it in
all of our research since then.  You could modify our software to use
XML-RPC with modest effort.) Our software for controlling Canon VCC
cameras is available at the Open Mash source repository.
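To give a feel for how small the serial side of the problem is, here is
a rough sketch in Python rather than our Tcl/TclDP code.  The pyserial
package, the device path, and especially the command bytes are
assumptions - every camera family (Canon VCC, Sony EVI, etc.) defines
its own RS-232 protocol, so consult the protocol manual for the real
byte sequences.

    # Rough sketch of RS-232 pan/tilt control; NOT our Tcl/TclDP software.
    # Command strings are placeholders - substitute your camera's protocol.
    import serial  # assumes the pyserial package is installed

    port = serial.Serial("/dev/ttyS0", baudrate=9600, timeout=1)

    def send_command(cmd_bytes):
        """Write one command to the camera and return its short status reply."""
        port.write(cmd_bytes)
        return port.read(16)

    # hypothetical preset commands an operator GUI might bind to buttons
    PRESETS = {
        "podium":   b"PLACEHOLDER-PAN-TILT-1\r",
        "audience": b"PLACEHOLDER-PAN-TILT-2\r",
    }

    def recall_preset(name):
        send_command(PRESETS[name])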

You can incorporate multiple cameras by using a video routing switcher
(essentially, a cross-bar switch that connects multiple audio/video
sources to your capture card).  This also allows you to incorporate
other a/v devices (e.g., VCR, scan converter, etc.).  You should get an
a/v routing switcher that does frame-accurate switching (i.e., cuts
cleanly from one video frame to another with no breakup of the video)
and that has an RS-232 interface.  Again, most routing switchers have
RS-232 interfaces, and frame-accurate switchers are getting cheaper all
the time.  We also have Tcl/Tk/TclDP software for controlling routing
switchers (Knox and AMX).
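The switcher control software looks much like the camera control
software.  A sketch of a "take" operation, again in Python rather than
our Tcl, with an invented ASCII command format - Knox, AMX, and the
other vendors each document their own RS-232 syntax:

    # Sketch of routing-switcher control.  The command syntax is made up;
    # substitute the vendor's documented RS-232 protocol (Knox, AMX, ...).
    import serial

    switcher = serial.Serial("/dev/ttyS1", baudrate=9600, timeout=1)

    def route(input_num, output_num):
        """Connect one crossbar input to one output (e.g., camera 2 -> capture card)."""
        switcher.write(("ROUTE %d %d\r" % (input_num, output_num)).encode("ascii"))

    # cut the capture feed (output 1) from the wide shot (input 1)
    # to the close-up camera (input 2)
    route(1, 1)
    route(2, 1)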

The idea now is that the producer/director - fancy video broadcasting
terms for the operator - switches between cameras and controls the
pan/tilt cameras.  This makes for a much more complex production
environment and, given current technology, requires several people to
operate - or a very sophisticated and experienced producer/director. 
We have done experiments with an automated producer/director, see the
paper on the Virtual Director written by one of my students
    http://bmrc.berkeley.edu/papers/2001/158/
or similar work by Mukhopadhyay and Smith at Cornell, Yong Rui and
colleagues at MS Research, or the AutoAuditorium work.

Typically, these productions use multiple people to set up, tear down,
and produce a webcast, or they use permanently installed equipment with
a traditional control booth connected to the room.  I have seen some
very interesting two-camera productions done with a cart and a single
operator (one camera was permanently positioned for a wide-angle stage
view and one camera was manually operated to produce closeups and follow
the action; both cameras were connected to a small production switcher,
and the operator used presets to transition between them - either jump
cuts or fades through black).

MULTIPLE STREAM PRODUCTIONS

The BIG PROBLEM with both of these solutions is that you get only one
stream.  To show multiple images - typically the presentation material
and the speaker/audience - you either have to send the slides separately
using something like distributed PowerPoint or VNC, or composite the
images in the video domain using a special-effects processor.  These
processors aren't too expensive (e.g., $3-5K), but they are yet another
piece of equipment that must be acquired, set up, maintained, and
operated.

The other problem is that scan converting computer-produced images
produces very poor quality video streams.  It's pretty easy to see why:
you take a typical RGB image (e.g., 1024x768), scan convert it to NTSC
video (i.e., 720x480, 640x480, or thereabouts), shrink the image to CIF
size (i.e., 320x240, 352x288, etc.), and then compress it.  You're
throwing away a lot of data and it shows.
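(To put numbers on it: 1024x768 is about 786K pixels, 720x480 is about
346K, and 320x240 is only 77K - roughly a tenth of the original - and
the codec then throws away still more.)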

Sadly, the only way to produce multiple streams is to use a different
webcasting technology (e.g., Access Grid, Ncast Telepresenter, etc.).
One advantage of the Telepresenter is that it has an RGB digitizing
board incorporated into the product, so the RGB image is captured and
compressed directly rather than first being converted to NTSC, which
yields much better image quality.

MULTIPLE TRANSMISSION PRODUCTIONS

Jennifer Teig von Hoffman asked in an earlier email about producing
multiple transmissions.  By this I mean producing several versions of
the same program content using different technologies (e.g., AG, RN, and
H.323 for example).  We mix and match several technologies and have
worked hard to simplify the production in order to reduce the cost and
avoid problems.  Generally speaking, you can do it by splitting the a/v
signals in the analog domain (or the digital domain if you are using a
digital signal) and feeding the signal into different computers using
different
capture and coding technology.  The nice thing about a routing switcher
is that it can send one source to multiple destinations so you can setup
your broadcast center to slave several capture machines to the same
signal.

The other technologies we use or have used in the past include: copying
RTP streams between different multicast sessions using vgw, transcoding
Mbone streams to RN streams using a software transcoding process (tgw)
that we wrote, and decoding a stream and passing the decoded signal to
another capture machine.
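To make the vgw idea concrete, here is a stripped-down sketch of the
core loop - receive packets from one multicast session and retransmit
them into another.  This is an illustration in Python, not vgw itself;
the real gateway also deals with RTCP, SSRC rewriting, TTL scoping, and
so on, and the addresses and ports below are made up.

    # Minimal multicast packet relay - the kernel of what a gateway like vgw does.
    import socket
    import struct

    SRC_GROUP, SRC_PORT = "224.2.1.1", 51482    # session to listen to (example)
    DST_GROUP, DST_PORT = "224.2.2.2", 53000    # session to copy packets into (example)

    # receiving socket: bind to the source port and join the source group
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    rx.bind(("", SRC_PORT))
    mreq = struct.pack("4s4s", socket.inet_aton(SRC_GROUP), socket.inet_aton("0.0.0.0"))
    rx.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    # sending socket: plain UDP socket with a multicast TTL
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 16)

    while True:
        packet, _src = rx.recvfrom(65535)
        tx.sendto(packet, (DST_GROUP, DST_PORT))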

Setting up, maintaining, and operating all of this equipment and
software is a challenge as anyone reading this list who operates a
moderately complex a/v facility, like an AG node, knows all too well.

CONTROL AND AUTOMATION

Now, if anyone is still reading, the problem with these complex
environments - whether it be a multiple-camera, multiple-transmission
webcast or an AG conference with multiple rooms operated by one
producer/director - is controlling all the equipment and automating as
many of the decisions as possible using intelligent software.  Our AG
room uses the Presentation Machine console to control all a/v equipment
and computers in the node.  Our node is a bit more complex than some
since we have 6 cameras, composite video and RGB routing switches,
several capture/decode computers (e.g., 2 dual-processor Linux boxes and
an RTPtv box), and a DVD/VCR combo player and scan converter. We can
produce 4 concurrent video streams plus mixed audio from the room. All
of this equipment is controlled by Tcl/Tk GUI interfaces including the
projector input (e.g., video versus RGB) and the echo canceller/mixer.
Since it is all accessed using TclDP, we can operate the node remotely -
essentially from any computer on the Internet - and we can script any
operation (i.e., a program can drive the pan/tilt cameras, switch video
sources using the routing switchers, and set the sound levels in the
mixer).
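As a simplified illustration of what "script any operation" means, a
preset for a question-and-answer segment might read roughly like the
sketch below.  The actual system is Tcl procedures invoked over TclDP;
every name and command string here is an invented stand-in.

    # Hypothetical scripted preset - the real system is Tcl/Tk procedures
    # invoked over TclDP; the names and commands are invented stand-ins.
    class Device:
        """Stand-in for a TclDP connection to one piece of equipment."""
        def __init__(self, name):
            self.name = name
        def send(self, command):
            print("%s: %s" % (self.name, command))   # real version would do an RPC

    camera, router, mixer = Device("audience-cam"), Device("router"), Device("mixer")

    def qa_preset():
        """One scripted operation: reconfigure the room for a Q&A segment."""
        camera.send("recall preset center-section")    # aim pan/tilt camera
        router.send("route input 4 to output 1")       # audience camera -> encoder
        mixer.send("set level audience-mics -6dB")     # bring up audience mics
        mixer.send("set level podium-mic -20dB")       # duck the podium mic

    qa_preset()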

The webcasting infrastructure we operate is a bit more complex. (Note:
this is different than the campus webcasting system described above.) We
have three classrooms that produce four video signals and one stereo
audio signal that are transported using a variety of a/v technologies
(e.g., an extremely low-cost CATV system - roughly $200 per end for a
signal, NTSC over Cat5, video modulated over fiber, etc.) from the
classrooms to a master control routing switcher in a broadcast center
(think machine room).  Sometimes we have a routing switch in the
classroom and sometimes not.  So, the automation software has to handle
routing of a virtual circuit network (i.e., the routing switchers), the
computers connected to the a/v sources which might be located in the
classrooms or in the broadcast center, and a variety of a/v equipment
including VCRs (actually VTRs - frame-accurate broadcast recorders),
video effects processors, cameras, etc.  Our biggest problem was writing
software to control this mess.  (Aside: some would no doubt argue that
we should have installed everything using Internet compressed video -
i.e., RTP, etc.  We could have done that, but it would have been a lot
more expensive and likely less reliable than the mixture of traditional
a/v equipment and computer-based solutions.)  
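The "virtual circuit" part is essentially a path-finding problem: given
which ports are tied to which other ports, find a chain of crossbar
routes from a source to a destination.  A toy sketch of the idea - not
INDIVA code, and every device name in the graph is invented:

    # Toy path search over the a/v plant.  Edges mean "this port can feed
    # that port", either through a fixed tie line or a switcher crossbar.
    from collections import deque

    PLANT = {
        "room275.camera1":       ["room275.switcher.in3"],
        "room275.switcher.in3":  ["room275.switcher.out1"],   # crossbar route
        "room275.switcher.out1": ["master.in7"],              # tie line to broadcast center
        "master.in7":            ["master.out2"],             # crossbar route
        "master.out2":           ["capture-host-a.video-in"], # tie line to capture machine
    }

    def find_path(src, dst):
        """Breadth-first search for a chain of routes from src to dst."""
        queue, seen = deque([[src]]), {src}
        while queue:
            path = queue.popleft()
            if path[-1] == dst:
                return path
            for nxt in PLANT.get(path[-1], []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None

    print(find_path("room275.camera1", "capture-host-a.video-in"))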

So, this is why we began development of the INDIVA middleware to
simplify the management and operation of these sorts of mixed a/v and
streaming media environments.  We have completed a very rough prototype
and written a couple of papers and proposals about this work - see
http://bmrc.berkeley.edu/projects for more details.  I also did a MIG
Seminar on this work at the beginning of the fall semester - see
  http://bmrc.berkeley.edu/bibs/instance?prog=1&group=25&inst=849

BUT, the future of this work is in doubt due to funding issues and my
pending retirement.

Another approach to automation and control is to use one of the
commercial a/v control systems (e.g., AMX, Crestron, etc.).  These
systems have some nice equipment, but they are fundamentally PCs with a
proprietary programming and signalling interface.  We have interfaced to
AMX systems but find it easier to write and control the interface
directly.  We have even used a board with IR senders/receivers to
control a satellite set-top box from a Linux system.

THE END

Oops, this is way too long, but I hope it answers questions that several
people might have.  We have written papers about everything we've been
doing for the past 10 years.  These are available on the BMRC web site
(http://bmrc.berkeley.edu/) and/or the Open Mash web site
(http://www.openmash.org/).
	Larry
p.s. We publish source code to all of our software and distribute it for
free.  You're welcome to take any of it and do whatever you want.
-- 
Professor Lawrence A. Rowe          Internet: Rowe at BMRC.Berkeley.EDU
Computer Science Division - EECS       Phone: 510-642-5117
University of California, Berkeley       Fax: 510-642-5615
Berkeley, CA 94720-1776            URL: http://bmrc.berkeley.edu/~larry


