* Perform unsubscribe in parallel to avoid blocking
When unsubscribing from tracks, we flush a blank frame to prepare
the transceivers for re-use. This process blocks for ~200ms. If
the unsubscribes were performed serially, they would prevent other
subscribe operations from continuing.
This PR parallelizes that operation and ensures subsequent subscribe
operations can reuse the existing transceivers.
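A minimal sketch of the parallel close, assuming a `CloseWithFlush`-style per-track helper is the ~200ms blocking step (names here are illustrative, not necessarily the exact ones in the codebase):

```go
package sfu

import "sync"

// closeAllInParallel fans out the per-track cleanup so one blocking
// blank-frame flush does not serialize behind another.
func closeAllInParallel(tracks []interface{ CloseWithFlush(flush bool) }) {
	var wg sync.WaitGroup
	for _, t := range tracks {
		t := t // capture loop variable
		wg.Add(1)
		go func() {
			defer wg.Done()
			// blocking blank-frame flush; prepares the transceiver for re-use
			t.CloseWithFlush(true)
		}()
	}
	wg.Wait()
}
```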
* also perform in parallel when uptrack closes
* fix a few log fields
* Avoid reconnect loop for unsupported downtrack
If the client subscribes to a track whose codec it does not support,
the SFU fails negotiation and issues a full reconnect after receiving
the client's answer. If the client then subscribes to that track
again, it gets another full reconnect, causing an infinite reconnect
loop until the client stops subscribing to that track. This PR
unsubscribes the erroring track for the client and sends a
SubscriptionResponse whose reason indicates that the track's codec is
unsupported, avoiding the reconnect loop.
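A hedged sketch of the fix; the message and enum names below are modeled on the livekit protocol definitions but should be treated as assumptions:

```go
package sfu

import "github.com/livekit/protocol/livekit"

// notifyUnsupportedCodec is a sketch: unsubscribe the failing track and
// tell the client why, so it does not retry into a reconnect loop.
// unsubscribe and send are hypothetical hooks into the participant.
func notifyUnsupportedCodec(trackSid string, unsubscribe func(string), send func(*livekit.SubscriptionResponse)) {
	unsubscribe(trackSid)
	send(&livekit.SubscriptionResponse{
		TrackSid: trackSid,
		// SE_CODEC_UNSUPPORTED is assumed here; check the protocol
		// definition for the exact error value.
		Err: livekit.SubscriptionError_SE_CODEC_UNSUPPORTED,
	})
}
```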
* Return max spatial layer from selectors.
With the differing requirements of SVC and the allowance for overshoot
in Simulcast, the selectors themselves are best placed to report the
max spatial layer when they indicate a switch to it.
* fix test
* prevent race
It is possible that the publisher paces its media.
So, the RTCP sender report from the publisher could be ahead of
what is being forwarded by a good amount (have seen up to 2 seconds
ahead). Using the forwarded timestamp for the RTCP sender report
in the downstream leads to jumps back and forth in the down track
RTCP sender report.
So, look at the publisher's RTCP sender report to check whether it is
ahead and use the publisher's rate as a guide.
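A simplified sketch of the idea: extrapolate the last forwarded RTP timestamp at the track's clock rate rather than copying the publisher's (possibly paced-ahead) SR timestamp verbatim. Function and parameter names are illustrative:

```go
package sfu

import "time"

// downstreamSRTimestamp derives the downstream sender-report RTP
// timestamp from the last forwarded timestamp plus elapsed time at the
// nominal clock rate, so paced publisher reports (observed up to ~2s
// ahead) do not make successive reports jump back and forth.
func downstreamSRTimestamp(lastForwardedTS uint32, sinceLastForwarded time.Duration, clockRate uint32) uint32 {
	advance := uint32(sinceLastForwarded.Seconds() * float64(clockRate))
	return lastForwardedTS + advance // uint32 arithmetic wraps as RTP timestamps do
}
```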
Active TCP was added in pion/ice v2.3.4. This is causing a couple of issues for us.
Active TCP does not make sense for an SFU. Clients are expected to be behind NAT and we should not be dialing them. Instead, LiveKit exposes a TCP port so clients can dial in.
Active TCP is causing all iOS clients to become disconnected immediately. This impacts all versions of libwebrtc-based iOS clients (tested from M104 to M111).
* RTCP sender reports every three seconds.
Ideally, we should be sending these based on data rate.
But, increasing the frequency a little, as a lost sender report
means the client may not have a sender report for 10 seconds,
and that could affect sync. We send receiver reports once a second.
Thought of setting this to that level too, but not making a big change
from the existing rate.
Also, simplifying the RTCP send loop. There is no need to hold the
reports and do the processing after collecting them all.
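A sketch of the simplified loop: build and send each report as it is produced instead of collecting everything first. `buildReport` and `send` are hypothetical hooks:

```go
package sfu

import "time"

// rtcpSendLoop sends a sender report on a fixed interval, writing each
// report as soon as it is built, with no intermediate collection step.
func rtcpSendLoop(done <-chan struct{}, buildReport func() []byte, send func([]byte)) {
	ticker := time.NewTicker(3 * time.Second) // sender report interval
	defer ticker.Stop()
	for {
		select {
		case <-done:
			return
		case <-ticker.C:
			if report := buildReport(); report != nil {
				send(report)
			}
		}
	}
}
```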
* consistent use of GetSubscribedTracks
* Experimental flag to try timestamp adjustment to control drift.
There is a config option to enable this.
Using a PID controller to try to keep the sample rate at the expected
value. It remains to be seen whether this works well. Adjustments are
limited to 25 ms max at a time to ensure there are no large jumps.
The adjustment is applied when sending the RTCP sender report, which
currently happens once every 5 seconds for both audio and video tracks.
A nice introduction to PID controllers - https://alphaville.github.io/qub/pid-101/#/
Implementation borrowed from - https://github.com/pms67/PID
A few things TODO (a sketch of the controller follows this list):
1. PID controller tuning is a process. Have picked values from tests of
the implementation above. They may not be the best. Need to experiment.
2. Can potentially run this more often. Rather than running it only when
sending the RTCP sender report (which is once every 5 seconds now), can
potentially run it every second and limit the amount of change to
something like 10 ms max.
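A simplified Go sketch of the controller, loosely modeled on the pms67/PID C implementation referenced above (that implementation also has anti-windup and derivative filtering, omitted here), with the output clamped so one adjustment never exceeds 25 ms:

```go
package sfu

// pidController tracks drift between the expected and measured sample
// rates and emits a bounded timestamp adjustment.
type pidController struct {
	kp, ki, kd float64 // tuning gains; values need experimentation
	integrator float64
	prevError  float64
}

const maxAdjustmentSeconds = 0.025 // 25 ms cap per update

// update returns the adjustment (in seconds) for the current drift.
// setpoint is the expected sample rate, measured the observed one, and
// dt the seconds since the last update (must be > 0).
func (c *pidController) update(setpoint, measured, dt float64) float64 {
	err := setpoint - measured
	c.integrator += err * dt
	derivative := (err - c.prevError) / dt
	c.prevError = err

	out := c.kp*err + c.ki*c.integrator + c.kd*derivative
	// clamp so a single adjustment cannot cause a large timestamp jump
	if out > maxAdjustmentSeconds {
		out = maxAdjustmentSeconds
	} else if out < -maxAdjustmentSeconds {
		out = -maxAdjustmentSeconds
	}
	return out
}
```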
* remove unused variable
* debug log a bit more
* Keep track of the expected RTP timestamp and control drift.
- Use the monotonic clock in RTCP sender reports and packet times
- Keep the timestamp close to the expected timestamp on layer/SSRC
switches
* clean up
* fix test compile
* more test compile failures
* anticipatory clean up
* further clean up
* add received sender report logging
With the subscription manager, there is no need to tell a publisher
about a subscriber going away. Before the subscription manager,
the up track manager of a participant (i.e. the publisher side)
held a list of pending subscriptions for its published tracks,
and that had to be cleaned up if one of the subscribers went away.
That is no longer the case.
Also set publisherID early so that the subscription permission update
has the right publisherID. In fact, saw an empty ID in the logs and
noticed that we still had the disallowed-subscription handling, which
is no longer necessary.
* hopefully more stable tests
* do eventual checks as some callbacks happen in goroutines.
Needs a bit more work to ensure that some conditions do not happen.
But, with goroutines, the amount of wait is always tricky.
* Support simulating subscriber bandwidth.
When non-zero, a full allocation is triggered.
Also, probes are stopped.
When set to zero, the normal probing mechanism should catch up.
Adding an `allowPause` override which can be a connection option.
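A sketch of the override with hypothetical hooks (`stopProbe`, `allocateAll`, `resumeProbe` stand in for the allocator's real methods):

```go
package sfu

// applySimulatedBandwidth pins the estimate when bps is non-zero:
// probing stops and a full allocation runs at the simulated rate.
// A zero value hands control back to the probe-driven estimator.
func applySimulatedBandwidth(bps int64, stopProbe func(), allocateAll func(int64), resumeProbe func()) {
	if bps > 0 {
		stopProbe()
		allocateAll(bps) // full allocation against the simulated estimate
		return
	}
	resumeProbe() // normal probing should catch the channel back up
}
```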
* fix log
* allowPause in participant params
* Use bandwidth requested from last allocation.
With overshoot/opportunistic forwarding, it is possible that the
bitrate at the target layers is 0. So, use the bandwidth requested
from the last allocation, which should have a correct value.
Still need to think about using the latest bitrates to get
the requested bandwidth. It is possible that bitrates have
changed since the last allocation. That was the idea behind using
the latest bitrates, but they could return 0, and accounting for that
runs into a few scenarios. The last allocation carries the bandwidth
it requested and is a good indicator of the need.
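The fallback itself is small; a sketch with illustrative names (both inputs stand in for hypothetical accessors):

```go
package sfu

// requestedBandwidth prefers the live bitrate at the target layers,
// but falls back to the bandwidth recorded at the last allocation when
// overshoot/opportunistic forwarding makes the live measurement read 0.
func requestedBandwidth(bitrateAtTargetLayers, lastAllocationBandwidth int64) int64 {
	if bitrateAtTargetLayers > 0 {
		return bitrateAtTargetLayers
	}
	return lastAllocationBandwidth
}
```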
* race
* Update go deps
Generated by renovateBot
* use generics with Deque
---------
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: David Zhao <dz@livekit.io>