* Add debug log for RTCP sender report.
Temporary, to collect more data. Hitting scenarios under congestion
where the sender report gets out of sync. Need some data to pore over
to understand the issue and implement changes.
* Debugw
* WIP commit
* WIP commit
* WIP commit
* Some clean up
- Removed a chatty debug log
- some spelling and punctuation corrections in comments
- add a missing `Abs` in a check
* Don't update dependency info if unordered packet received
* Trace all active svc chains for downtrack
* Try to keep lower decode target decodable
* remove comments
* Test case
* clean code
* address review comments
It is possible that the publisher paces the media.
So, the RTCP sender report from the publisher could be ahead of
what is being forwarded by a good amount (have seen up to 2 seconds
ahead). Using the forwarded timestamp for the RTCP sender report
downstream leads to jumps back and forth in the down track
RTCP sender report.
So, look at the publisher's RTCP sender report to check whether it is
ahead, and use the publisher's rate as a guide.
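A minimal sketch of that idea (the type and function names here are
illustrative assumptions, not the actual livekit-server code):

```go
package sfu

import "time"

// srGuide tracks the publisher's last sender report so the down stream
// sender report timestamp can be extrapolated at the publisher's rate
// instead of jumping to the last forwarded timestamp.
type srGuide struct {
	clockRate     uint32    // e.g. 48000 for audio, 90000 for video
	lastSRRTPTime uint32    // RTP timestamp in the publisher's last sender report
	lastSRAt      time.Time // local (monotonic) receive time of that report
}

// rtpTimestampForSR returns the RTP timestamp for the next down track
// sender report: the publisher's reported timestamp advanced at the
// publisher's rate, unless the forwarded position is already ahead.
func (g *srGuide) rtpTimestampForSR(now time.Time, lastForwarded uint32) uint32 {
	elapsed := now.Sub(g.lastSRAt).Seconds()
	extrapolated := g.lastSRRTPTime + uint32(elapsed*float64(g.clockRate))

	// Signed difference handles 32-bit RTP timestamp wraparound.
	if int32(extrapolated-lastForwarded) > 0 {
		return extrapolated
	}
	return lastForwarded
}
```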
Also, the `no last sender report` error was not being filtered before;
the previous PR did that. This PR restores the old `no last sender
report` filter. Both are filterable errors.
1. Completely removing RTT and jitter from the score calculation.
   Need to do more work there.
   a. Jitter is slow moving (the RFC 3550 formula is designed that
      way). But, we still get high values at times. Ideally, that
      should penalize the score, but due to the jitter buffer, the
      effect may not be too bad.
   b. Need to smooth RTT. It is based on the receiver report, and if
      one sample produces a high number, the score could be penalized
      (this was being used in the down track direction only). One
      option is to smooth it like the jitter formula above and try
      using it. But, for now, disabling that also.
2. When receiving fewer packets (for example, with DTX), reduce the
   weight of packet loss quadratically in the ratio of packets
   received to the nominal expected count (see the sketch after this
   list). Previously a square root was used, which potentially
   weighted it too high. For example, if only 5 packets were received
   due to DTX instead of 50, we were still giving 30% weight
   (sqrt(0.1)). Now, it gets 1% weight. So, if one of those 5 packets
   is lost (20% packet loss ratio), it still does not get much weight,
   as the number of packets is low.
3. Slightly slower decrease in score (in the EWMA).
4. When using RED, increase the packet loss weight thresholds to be
   able to take more loss before penalizing the score.
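For item 2, a minimal sketch of the weighting change (function and
parameter names are assumptions):

```go
package quality

import "math"

// lossWeight scales how much packet loss counts toward the quality
// score. With few packets (e.g. DTX), one lost packet should not
// dominate the score.
func lossWeight(packetsReceived, packetsExpected int) float64 {
	if packetsExpected == 0 {
		return 0
	}
	ratio := math.Min(float64(packetsReceived)/float64(packetsExpected), 1.0)
	// Old: math.Sqrt(ratio) -> 5 of 50 packets still weighted ~30%.
	// New: ratio * ratio   -> 5 of 50 packets weighted (0.1)^2 = 1%.
	return ratio * ratio
}
```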
Two things:
- Somehow the publisher's RTCP sender report timestamp goes backwards
  at times. Log it differently. Also, use a signed type for logging so
  that negative values are easy to see.
- On the down track, because of silence frame injection on mute, the
  RTCP sender report timestamp might be ahead of the timestamp we will
  use on unmute. If so, ensure that the next timestamp is also not
  before what was sent in the RTCP sender report (see the sketch
  below).
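A hypothetical sketch of the unmute guard (names are illustrative):

```go
package sfu

// nextTimestampOnUnmute ensures the next outgoing RTP timestamp does
// not fall behind what was already reported in an RTCP sender report.
func nextTimestampOnUnmute(candidate, lastReportedTS uint32) uint32 {
	// Signed difference handles 32-bit RTP timestamp wraparound.
	if int32(candidate-lastReportedTS) < 0 {
		// Silence frame injection during mute pushed the sender
		// report timestamp past the candidate; resume just past
		// the reported value.
		return lastReportedTS + 1
	}
	return candidate
}
```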
The PID controller seems to be working well. But, it is unclear where
it can be applied, as some of the data shows significant jumps
(either caused by BT devices or possibly noise cancellation/CPU
constraints), and although the PID controller slowly pulls things
toward the expected sample rate, it can be a bit slow.
Unfortunately, cannot munge too much in a middle box.
However, leaving the controller in there, as it is doing its job
for cases where things slip slowly.
Changing things to log significant jumps (more than 200 ms away
from expected) at Infow level.
Also, recording drift and sample rate in the RTP stats proto and its
string representation.
With short term measurements, the adjustment itself was causing
some oscillations; drift tended to settle at some small value and
oscillate around it due to push/pull affecting the small window
measurement.
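A hedged sketch of that logging change (the threshold handling and
field names are assumptions):

```go
package sfu

import (
	"time"

	"github.com/livekit/protocol/logger"
)

// logDriftIfSignificant logs jumps of more than 200 ms from the
// expected timestamp at Infow level so they stand out from routine
// drift.
func logDriftIfSignificant(drift time.Duration, sampleRate float64) {
	if drift > 200*time.Millisecond || drift < -200*time.Millisecond {
		logger.Infow("significant timestamp jump",
			"driftMs", drift.Milliseconds(), // signed, so negative drift is easy to see
			"sampleRate", sampleRate,
		)
	}
}
```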
* Experimental flag to try timestamp adjustment to control drift.
There is a config to enable this.
Using a PID controller to try and keep the sample rate at the expected
value. It remains to be seen whether this works well. Adjustments are
limited to 25 ms max at a time to ensure there are no large jumps.
The adjustment is applied when generating the RTCP sender report,
which currently happens once every 5 seconds for both audio and video
tracks.
A nice introduction to PID controllers - https://alphaville.github.io/qub/pid-101/#/
Implementation borrowed from - https://github.com/pms67/PID
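A trimmed Go adaptation of the borrowed controller, with the 25 ms
output clamp described above (gains and field names are placeholders;
tuning is still open, per the TODOs below):

```go
package drift

// pidController is a minimal PID loop in the spirit of pms67/PID.
type pidController struct {
	kp, ki, kd float64
	integrator float64
	prevError  float64
	outMin     float64 // e.g. -0.025 s for the 25 ms limit
	outMax     float64 // e.g. +0.025 s
}

// update runs one PID step: err is the deviation from the expected
// sample rate progress (in seconds) and dt is the interval since the
// last run (~5 s, since it runs at RTCP sender report generation).
func (p *pidController) update(err, dt float64) float64 {
	p.integrator += err * dt
	derivative := (err - p.prevError) / dt
	p.prevError = err

	out := p.kp*err + p.ki*p.integrator + p.kd*derivative

	// Clamp so a single sender report never shifts the timestamp by
	// more than 25 ms, avoiding large jumps.
	if out > p.outMax {
		out = p.outMax
	} else if out < p.outMin {
		out = p.outMin
	}
	return out
}
```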
A few TODOs:
1. PID controller tuning is a process. Have picked values from tests
   of the implementation above. They may not be the best. Need to
   experiment.
2. Can potentially run this more often. Rather than running it only
   when generating the RTCP sender report (which is once every
   5 seconds now), it could run every second with the amount of change
   limited to something like 10 ms max.
* remove unused variable
* debug log a bit more
* Keep track of the expected RTP timestamp and control drift (see the
sketch below).
- Use the monotonic clock in RTCP sender report and packet times
- Keep the timestamp close to the expected timestamp on layer/SSRC
switches
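A small sketch of the expected-timestamp bookkeeping (names are
illustrative). Go's time.Time carries a monotonic reading, so
time.Since is immune to wall clock steps:

```go
package sfu

import "time"

// expectedRTPTimestamp computes where the RTP timestamp should be at
// the current instant; on a layer/SSRC switch, the incoming stream's
// timestamp can be remapped to stay close to this value.
func expectedRTPTimestamp(streamStart time.Time, firstTS, clockRate uint32) uint32 {
	elapsed := time.Since(streamStart) // monotonic clock
	return firstTS + uint32(elapsed.Seconds()*float64(clockRate))
}
```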
* clean up
* fix test compile
* more test compile failures
* anticipatory clean up
* further clean up
* add received sender report logging
data.
Without the check, it was getting tripped by the publisher not
publishing any data. Both conditions returned nil, but in one case,
the receiver report should have been received, yet there was no
movement in the number of packets.
* Decode chains
* clean up
* clean up
* decode targets only on publisher side
* comment out supported codecs
* fix test compile
* fix another test compile
* Adding TODO notes
* chainID -> chainIdx
* no need to check for the switch-up point when using chains; as long as chain integrity is good, we can switch
* more comments
* address comments
Hopefully temporary, until we find a better solution.
Adds 36 KB per SSRC. So, if a node can handle 10K SSRCs (roughly 10K
tracks), that will be 360 MB of extra memory.
* Update go deps
Generated by renovateBot
* use generics with Deque
---------
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: David Zhao <dz@livekit.io>