[AG-DEV] UCL common lib rtp_recv & frame timing
Derek Piper
dcpiper at indiana.edu
Thu Jul 29 12:27:01 CDT 2010
Hi,
I've been reading the discussion about this. It reminds me of my
development of AGVCR, since I had to be able to process a lot of UDP
packets with that program. One thing I found is that the duration of a
sleep is often inaccurate, that is to say it is possible to over- or
under-sleep. So, for playback to timing-sensitive programs like vic,
there is a calibration function that runs to determine a corrective
factor to apply to the intermediate packet delays, keeping playback as
close to the originally recorded timing (microsecond resolution on
Linux) as possible.
You might want to glance over the AGVCR source code. I'm not sure it
will solve your problem, but AGVCR certainly handles UDP packets with
sensitivity to timing. The playerSleep() function in agvcr-funcs.c
handles the intermediate packet delays: for every packet sent, it keeps
track of what the timing SHOULD be compared to what was actually slept.
That helps keep things on track and accounts for the variability of
usleep().
Derek
Andrew Ford wrote:
> Hi,
>
> After some further investigating (mostly timing rtp_recv, rtp_recv_data
> and process_rtp) it seems like there is some correlation between the
> time udp_select takes and the frame delay (though it's not completely
> consistent). Confusingly, if I set the timeout to 0 rtp_recv predictably
> takes very little time but the frame delay is still there, and I
> couldn't find any other places where the time went (path seems to be
> rtp_recv -> rtp_recv_data -> process_rtp -> callback). Is anyone
> familiar with performance issues related to select(), or how any other
> applications do fast UDP reading?
>
> Thanks,
> --Andrew
>
> 2010/7/28 Andrew Ford <andrew.ford at rit.edu <mailto:andrew.ford at rit.edu>>
>
> Hi Rhys,
>
> I've tried calling rtp_recv once with both 10ms and 1ms timeout
> times (as well as no timeout at all) and it looks like it doesn't
> make a difference. I also tried changing the system UDP buffer sizes
> to see if that had any effect, but it doesn't seem to.
>
> --Andrew
>
> 2010/7/27 Rhys Hawkins <rhys.hawkins at anu.edu.au
> <mailto:rhys.hawkins at anu.edu.au>>
>
> On Tue, 27 Jul 2010 14:12:37 -0400
> Andrew Ford <andrew.ford at rit.edu <mailto:andrew.ford at rit.edu>>
> wrote:
>
> >
> > timeout.tv_sec = 0;
> > timeout.tv_usec = 10000;
> > for (int i = 0; i < 1000 && rtp_recv(session, &timeout, _timestamp); i++) {
> >     timeout.tv_sec = 0;
> >     timeout.tv_usec = 10;
> > }
> >
>
> Hi Andrew,
>
> All that code is doing is a crude version of:
>
> while (elapsed_time < 10ms) {
> process incoming packets
> }
>
>         You could try replacing the for loop with a single call to
>         the rtp_recv function, i.e.:
>
> timeout.tv_sec = 0;
> timeout.tv_usec = 10000;
> rtp_recv(session, &timeout, _timestamp);
>
> and see if that fixes your problem.
>
>         The code above was to handle an issue with DV decoding:
>         decoding an entire frame of DV took a certain length of time,
>         and during that time the UDP buffer (kernel side) could
>         overflow, causing loss of data. I don't think it should be
>         the cause of your problems, but things are pretty foggy that
>         far back in time, so it may have other ramifications I've
>         forgotten about.
>
> Cheers,
> Rhys
>
>
>
--
Derek Piper - dcpiper at indiana.edu - (812) 855 5560
System Administrator / Informatics Specialist
Molecular Structure Center (IUMSC), Chemistry
Indiana University, Bloomington, Indiana