Sub-spaces and Re: [AG-TECH] AG Lobby Policies

Chris Greenhalgh cmg at Cs.Nott.AC.UK
Mon Mar 18 05:26:03 CST 2002


Tony Rimovsky wrote:

>Culturally, the AG follows a mud paradigm, using virtual geography and
>the concept of space as opposed to the TV channel model that IRC or
>even SDR use.
>
>I always thought it would be cool to extend the venues concept along
>the same path as moos.  Give people the tools to create their own
>spaces, then let them set context based on what they create. 
>
I think AG has a crisis of modality ;-)

MOOs support concurrent conversations in the same room because people can 
selectively read the text.
Realtime audio combines sources differently than text does - streams get 
overlapped, and we are not so good at listening to two completely 
different conversations at once. In the physical world we use space to 
'tune in' to one conversation, i.e. move away so that energy dispersion 
reduces subjective volume, and use binaural listening skills to 
preferentially attend to particular audio source(s).

In the AG it's all or nothing, with no 'distance attenuation' and no 
spatialisation (and the echo cancellers would choke if you tried it at 
the s/w level).
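
To make the 'distance attenuation' point concrete, here is a minimal 
sketch of the kind of inverse-distance gain curve a physical-world-style 
model would apply - this is just an illustration, not anything in the AG 
toolkit, and the reference distance and rolloff exponent are assumed 
parameters:

import math

def distance_gain(distance_m, reference_m=1.0, rolloff=1.0, floor=0.0):
    """Simple inverse-distance attenuation: gain falls off as 1/d beyond
    a reference distance; 'rolloff' steepens or softens the curve and
    'floor' keeps far sources faintly audible."""
    if distance_m <= reference_m:
        return 1.0
    return max((reference_m / distance_m) ** rolloff, floor)

print(distance_gain(4.0))   # a source 4 m away plays at 25% amplitude
print(distance_gain(20.0))  # one 20 m away is down to 5%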

Option 1: jump to a new room to talk => the lobby is quiet and empty

Option 2: talk in the lobby, current audio management => (roughly) only 
one active conversation per space, relatively intrusive
Option 2b: add a button to RAT to make it easy to temporarily turn 
down the volume a lot so you just get a background buzz when not paying 
attention rather than turning it off completely :-)
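
As a rough sketch of what 2b might look like, assuming only that the 
audio tool exposes some way to set its overall output gain (the 
set_output_gain callback below is hypothetical, not an existing RAT 
interface):

class AttendToggle:
    def __init__(self, set_output_gain, background_level=0.1):
        self.set_output_gain = set_output_gain    # callback into the audio tool
        self.background_level = background_level  # e.g. 10% when not attending
        self.attending = True

    def toggle(self):
        """One button: full volume when attending, a quiet background
        buzz otherwise, without leaving or muting the session."""
        self.attending = not self.attending
        self.set_output_gain(1.0 if self.attending else self.background_level)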

Option 3: introduce further relationships between spaces and/or internal 
structure within spaces (e.g. 'lobby', incorporating a set of 'tables') 
and manage audio in an overlapping but volume-managed way => e.g. 
'adjacent' spaces receive audio but play it at a much reduced level 
(e.g. 5%) => if your local space is quiet you can hear that stuff is 
going on 'nearby', but conversation in your local space is easily 
attended to as primary.
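
A sketch of the gain policy in option 3, with assumed space names and 
the assumed 5% level for 'adjacent' spaces:

ADJACENCY_GAIN = {
    "local": 1.0,      # your current sub-space: full volume
    "adjacent": 0.05,  # audible as background activity
    "elsewhere": 0.0,  # unrelated spaces: silent
}

def playout_gain(my_space, source_space, adjacency):
    """adjacency maps a space name to the set of spaces 'next to' it."""
    if source_space == my_space:
        return ADJACENCY_GAIN["local"]
    if source_space in adjacency.get(my_space, set()):
        return ADJACENCY_GAIN["adjacent"]
    return ADJACENCY_GAIN["elsewhere"]

# e.g. tables inside the lobby are adjacent to the lobby but not to each other:
adjacency = {"lobby": {"table-1", "table-2"},
             "table-1": {"lobby"}, "table-2": {"lobby"}}
print(playout_gain("table-1", "lobby", adjacency))    # 0.05
print(playout_gain("table-1", "table-2", adjacency))  # 0.0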

Of course I like option 3, but the devil is in the (session management 
and data distribution) details, e.g.:
 * is each 'local space' a separate multicast session? If so, the audio 
tool must join multiple sessions simultaneously, and be told to turn 
down playout on all but one. The controlling app needs to know which 
sessions these are... (see the sketch after this list)
 * are the local spaces all part of the main space session? If so, there 
must be more communication going on to indicate which sources are in 
which subspaces, and the audio tool must be told to turn down all 
sources in other sub-spaces. And of course people need to be able to 
navigate between subspaces.
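
For the first variant, a sketch of the controlling app's side; the 
multicast addresses are made up and duck_session() is a placeholder for 
whatever per-session volume control the audio tool would have to grow:

SESSIONS = {  # sub-space name -> (multicast address, port), all invented
    "lobby":   ("224.2.1.1", 54000),
    "table-1": ("224.2.1.2", 54002),
    "table-2": ("224.2.1.3", 54004),
}

def enter_subspace(current, duck_session):
    """Keep the audio tool joined to every session, but play the one the
    user is 'in' at full volume and duck all the others."""
    for name, (addr, port) in SESSIONS.items():
        duck_session(addr, port, 1.0 if name == current else 0.05)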

And if that wasn't bad enough, how do you map from hearing a 'distant' 
conversation to finding that conversation? I.e. which subspace is it in 
and how do you go there (quickly)?
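
One speculative answer: if the venue client tracked recent speech per 
sub-space (however that information gets distributed), it could list the 
'active nearby' spaces and offer a one-click move. A sketch, assuming 
activity reports arrive from somewhere:

import time

last_spoke = {}  # sub-space name -> timestamp of the most recent active source

def note_activity(subspace):
    last_spoke[subspace] = time.time()

def nearby_activity(my_space, adjacency, window_s=10.0):
    """Adjacent sub-spaces with speech in the last few seconds, most recent first."""
    now = time.time()
    recent = [(s, now - last_spoke[s])
              for s in adjacency.get(my_space, set())
              if s in last_spoke and now - last_spoke[s] < window_s]
    return sorted(recent, key=lambda x: x[1])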

In Collaborative Virtual Environments this kind of local navigation is 
all tied up with a 3D Cartesian spatial navigation metaphor rather than 
the DAG model of a Moo; when you add audio spatialisation (at least 
stereo panning) and visual indications of speaking (e.g. virtual speech 
balloons) then navigation to audio sources becomes more natural. We have 
worked quite a lot on this kind of thing [e.g. MASSIVE-1, MASSIVE-2, 
Spatial model of interaction - http://www.crg.cs.nott.ac.uk/~cmg], but 
even in CVEs it can feel like a lot of work just to manage 
conversation/interaction.
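
For reference, the minimal spatialisation step mentioned above (stereo 
panning from listener-relative bearing) can be sketched like this - the 
coordinate and heading conventions are my assumptions, not how MASSIVE 
does it:

import math

def stereo_pan(listener_xy, listener_heading, source_xy):
    """Constant-power stereo pan from the listener-relative bearing of a
    source; returns (left_gain, right_gain)."""
    dx = source_xy[0] - listener_xy[0]
    dy = source_xy[1] - listener_xy[1]
    # bearing measured counter-clockwise from the facing direction,
    # so a source to the listener's left has a positive bearing
    bearing = math.atan2(dy, dx) - listener_heading
    pan = -math.sin(bearing)             # -1 = hard left, +1 = hard right
    angle = (pan + 1.0) * math.pi / 4.0  # map [-1, 1] onto [0, pi/2]
    return math.cos(angle), math.sin(angle)

# A source dead ahead plays equally in both channels (~0.71, ~0.71);
# one directly to the listener's left plays only in the left channel (1.0, 0.0).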

Cheers,
Chris




