[AG-TECH] Idea For Closed Captioning for the Hearing-Impaired on the AG

Allan Spale aspale at evl.uic.edu
Tue Oct 2 13:34:13 CDT 2001


Bob,

The software and hardware items that we are using are the following:


- Microsoft PowerPoint for graphics and titling (although any presentation
software will do)

- Control machine (as long as the levels for the microphones are set, you
could use the control machine mostly for titling and graphics) 

- Video card with video out (this is how the signal is getting out of the
control machine...S-Video is preferable to composite/RCA video) 

- Video mixer (we are using a Videonics Digital Video Mixer MX-1, but any
mixer with chroma-key and composite functions will do; you can buy
dedicated character generators, but if you have a computer that can do
fonts and animations, it is better, in my opinion, to consolidate on that
side than to buy separate units)


The signal from the video card on the control machine is sent to the
digital video mixer.  Once it is in the mixer, you can composite it over
any other video stream plugged into the mixer.  To make it easier to
configure which video signals go where, EVL uses a video patch panel.
That way, you do not have to keep track of where wires are physically
routed, and rerouting video signals becomes much simpler.

Now, for graphics and character generation.  Create a slide show that
contains all of your titles.  For text areas, use WordArt.  It is scalable
and lets you change slides more easily and quickly than guessing font
sizes and switching font types.  When you use WordArt for multiple text
lines, type them in the notes window below the slide first so that you
know how far to "stretch" each text line.  This reduces the chance of a
line looking "stretched".

One neat trick that I learned involves the chroma key.  If you are doing
animated graphics (which can also be done in PowerPoint), use chroma-key
effects instead of composite effects.  For chroma key, you create the
"blue screen" (or whatever solid-color screen) on the slide, then put
your animation "in front of" the blue screen.  This actually works
fairly well, so you can have animated titles.  Or, as mentioned before,
set a word processor background to blue, split the screen (as in
Microsoft Word), type in the lower portion of the screen, and you get
real-time closed captioning or language translation.  You can also
download a ticker program from http://www.cooltick.com and share
scrolling information with people over the AG.
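For anyone curious what the mixer's chroma key is doing internally, here is
a minimal sketch in Python with NumPy.  This is just an illustration, not
anything from the MX-1: the function name, the tolerance value, and the
use of pure blue as the key color are all my own assumptions.

```python
import numpy as np

def chroma_key(foreground, background, key_color=(0, 0, 255), tolerance=60):
    """Composite `foreground` over `background`, treating foreground pixels
    near `key_color` (pure blue by default) as transparent.

    Both frames are H x W x 3 uint8 RGB arrays of the same shape;
    `tolerance` is a made-up threshold on the per-pixel color distance.
    """
    fg = foreground.astype(np.int16)
    # Sum of absolute channel differences from the key color, per pixel
    dist = np.abs(fg - np.array(key_color, dtype=np.int16)).sum(axis=2)
    # True wherever the foreground pixel is "close enough" to the key color
    keyed = dist < tolerance
    out = foreground.copy()
    out[keyed] = background[keyed]   # show the live video through the key
    return out

# Tiny demo: a 2x2 all-blue "slide" with one white title pixel,
# composited over a gray frame standing in for the live video.
fg = np.full((2, 2, 3), (0, 0, 255), dtype=np.uint8)
fg[0, 0] = (255, 255, 255)                      # the "title"
bg = np.full((2, 2, 3), 128, dtype=np.uint8)    # the live video
result = chroma_key(fg, bg)
```

In the demo, the white title pixel survives and every blue pixel is
replaced by the video frame behind it, which is exactly the effect the
mixer gives you with a blue PowerPoint background.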

Let your imagination wander with the possibilities.

Hope this helps.


Allan
evl at uic
node-op


On Tue, 2 Oct 2001, Bob Huebert wrote:

> Hi Allan,
> 
>    Kudos on the graphic overlay work! We would very much like to 
> incorporate this within our node at ARSC/UAF. Could you provide some 
> pointers on where to begin? From what I can tell, you are running 
> this on your vidcap machine? Any help in figuring this out will be 
> greatly appreciated.
> 
> thanks
> 
> -bob
> 
> At 4:45 PM -0500 9/25/01, Allan Spale wrote:
> >Hello,
> >
> >EVL has been using its digital video mixer to do many things.  EVL has
> >used the video mixer's chroma key function and video output from the
> >control machine to do a message crawl (the type of thing you would see at
> >the bottom of the screen when severe weather occurs).  EVL has also been
> >using a compose function of the video mixer to put various items on a
> >video window like someone's name, where they are from, and maybe a "logo"
> >of the university.  To do this, Microsoft PowerPoint and a ticker program
> >called Cool Tick have been used.
> >
> >I have a new idea for improving on the usefulness of a video mixer and
> >video output from a computer: real-time closed-captioning for people with
> >hearing impairments.  All that you need is a split screen in Microsoft
> >Word, a colored background (like blue), and the video mixer.  Someone can
> >just type what they hear, and it will appear in the bottom portion of the
> >video window.
> >
> >This could even be expanded to do real-time language translation.
> >
> >Of course, having the proper tools is half the battle, so a person with
> >some sort of stenographer's machine would be better suited to do this work
> >than someone typing on a keyboard.  Another step would be to have some
> >sort of voice recognition software that would "type" the words for the
> >speaker (assuming the usual limitations of voice-recognition software
> >to date).
> >
> >The video containing this idea will remain in the lobby for as long as
> >possible so that people can view how this is being done.
> >
> >
> >Allan
> >evl at uic
> >node-op
> 
> -- 
> _______________________________________________________________________
> Bob Huebert                                 email: huebert at arsc.edu
> HPC Visualization Systems Analyst           voice: (907) 474-5751
> Arctic Region Supercomputing Center           FAX: (907) 474-5494
> University of Alaska Fairbanks                WWW: http://www.arsc.edu/
> 