* Do not block on down track close with flush.
When the publisher removes all subscribers, the publisher side should
not be blocked for long. With close with flush, that could happen
if there are a lot of subscribers.
So, when a flush is expected, run it in a goroutine like it is done in
the subscription manager (see the sketch after this item).
Not moving the entire `RemoveSubscriber` bit to subscription manager as
there are two bits which are not tracked now
- mime type
- willBeResumed
Those two would have to be tracked in track manager and notified to
subscription manager so that it can act for that mime and know if the
track will be resumed or not. As that touches more parts and could get
complicated, doing the simpler thing of cloning behaviour from
subscription manager for now.
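A minimal sketch of the idea, not the actual livekit-server code; `downTrack`, `CloseWithFlush`, and `closeDownTrack` are illustrative names:
```go
package sfu

// downTrack is a stand-in for the real down track type.
type downTrack struct{}

// CloseWithFlush may block while queued packets are flushed to a slow subscriber.
func (d *downTrack) CloseWithFlush(flush bool) {}

// closeDownTrack keeps the publisher path non-blocking: the flushing close is
// pushed into a goroutine, mirroring the subscription manager's behaviour,
// while a non-flushing close can stay synchronous since it returns quickly.
func closeDownTrack(dt *downTrack, flush bool) {
	if flush {
		go dt.CloseWithFlush(true)
		return
	}
	dt.CloseWithFlush(false)
}
```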
* clean up
* code readability
* Integrate logger components
Dividing into the following components
* pub - publisher
* pub.sfu
* sub - subscriber
* transport
* transport.pion
* transport.cc
* api
* webhook
* update go modules
* Add control of playout delay
Add config to enable playout delay. The delay is calculated from the
upstream & downstream RTT and limited to the [min, max] range given in
the config option.
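A minimal sketch of the clamping, assuming hypothetical config fields (`Enabled`, `Min`, `Max`); the real option names and the exact RTT combination may differ:
```go
package sfu

import "time"

// PlayoutDelayConfig is an illustrative stand-in for the real config option.
type PlayoutDelayConfig struct {
	Enabled bool
	Min     time.Duration
	Max     time.Duration
}

// playoutDelay derives a target delay from upstream and downstream RTT and
// clamps it to the configured [Min, Max] range.
func playoutDelay(cfg PlayoutDelayConfig, upstreamRTT, downstreamRTT time.Duration) time.Duration {
	if !cfg.Enabled {
		return 0
	}
	d := upstreamRTT + downstreamRTT
	if d < cfg.Min {
		d = cfg.Min
	}
	if d > cfg.Max {
		d = cfg.Max
	}
	return d
}
```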
* check protocol version to enable playout delay
* Move config to room, limit playout-delay update interval, resolve comments
* Remove adaptive playout-delay
* Remove unused config
* Ability to use trailer with server injected frames
A 32-byte trailer generated per room.
Trailer appended when track encryption is enabled.
* E2EE trailer for server injected packets.
- Generate a 32-byte per-room trailer. Two reasons for the longer length
o Laziness: utils generates a 32 byte string.
o A longer random string reduces the chance of colliding with real data.
- Trailer sent in JoinResponse
- Trailer added to server injected frames (not to padding only packets)
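A minimal sketch of the two pieces described above; `newRoomTrailer` and `appendTrailer` are illustrative helpers, not the actual functions:
```go
package sfu

import (
	"bytes"
	"crypto/rand"
)

// newRoomTrailer returns a 32-byte random trailer, generated once per room.
func newRoomTrailer() ([]byte, error) {
	trailer := make([]byte, 32)
	if _, err := rand.Read(trailer); err != nil {
		return nil, err
	}
	return trailer, nil
}

// appendTrailer marks a server-injected payload so an encrypting client can
// recognize it; padding-only packets are skipped by the caller.
func appendTrailer(payload, trailer []byte) []byte {
	if len(trailer) == 0 || bytes.HasSuffix(payload, trailer) {
		return payload
	}
	return append(payload, trailer...)
}
```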
* generate
* add a length check
* pass trailer in as an argument
* Pacer interface to send packets
* notify outside lock
* use select
* use pass through pacer
* add error to OnSent
* Remove log which could get noisy
* Starting TWCC work (#1727)
* add packet time
* WIP commit
* WIP commit
* WIP commit
* minor comments
* Some measurements (#1736)
* WIP commit
* some notes
* WIP commit
* variable name change and do not post to closed channel
* unlock
* clean up
* comment
* Hooking up some more bits for TWCC (#1752)
* wake under lock
* Pacer in downstream path.
Splitting out only the pacer from a feature branch to
introduce the concept of a pacer.
Currently, there should be no difference in functionality
as a pass-through pacer is used.
Another implementation exists which just puts packets in a queue and
sends them from one goroutine.
A potential implementation to try would be data paced by bandwidth
estimate. That could include priority queues and such.
But, the main goal here is to introduce the notion of a pacer in the
downstream path and prepare for more congestion control possibilities
down the line.
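A minimal sketch of the concept; the real interface likely differs, this only shows a pass-through implementation that keeps behaviour identical to writing directly:
```go
package sfu

// Packet is an illustrative stand-in for whatever the pacer hands to the wire.
type Packet struct {
	Header  []byte
	Payload []byte
}

// Pacer decides when enqueued packets are actually sent.
type Pacer interface {
	Enqueue(p Packet)
	Close()
}

// PassThrough sends each packet as soon as it is enqueued, so introducing the
// pacer does not change downstream behaviour.
type PassThrough struct {
	send func(Packet) error
}

func NewPassThrough(send func(Packet) error) *PassThrough {
	return &PassThrough{send: send}
}

func (p *PassThrough) Enqueue(pkt Packet) {
	// no queueing, no pacing - identical to writing directly
	_ = p.send(pkt)
}

func (p *PassThrough) Close() {}
```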
* Don't need peak detector
* remove throttling of write IO errors
* Avoid reconnect loop for unsupported downtrack
If the client subscribes to a track whose codec is unsupported by the
client, the SFU will trigger a negotiation failure and issue a full
reconnect after receiving the client answer. If the client then tries to
subscribe to that track again, it will get another full reconnect,
causing an infinite reconnect loop until the client stops subscribing to
that track. This PR unsubscribes the failing track for the client and
sends a SubscriptionResponse containing the reason, indicating the
track's codec is not supported, to avoid the reconnect loop.
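A minimal sketch of the flow under assumed types; `subscriptionResponse` and the error value are illustrative, not the actual protocol messages:
```go
package sfu

import "fmt"

// subscriptionError is an illustrative error code, not the real protocol enum.
type subscriptionError int

const subscriptionErrorCodecUnsupported subscriptionError = 1

type subscriptionResponse struct {
	TrackID string
	Err     subscriptionError
}

// handleUnsupportedDownTrack drops the failing subscription and tells the
// client why, so it does not resubscribe and loop on full reconnects.
func handleUnsupportedDownTrack(trackID string, unsubscribe func(string) error, send func(subscriptionResponse)) error {
	if err := unsubscribe(trackID); err != nil {
		return fmt.Errorf("unsubscribe %s: %w", trackID, err)
	}
	send(subscriptionResponse{TrackID: trackID, Err: subscriptionErrorCodecUnsupported})
	return nil
}
```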
* Experimental flag to try timestamp adjustment to control drift.
There is a config to enable this.
Using a PID controller to try and keep the sample rate at the expected
value. It remains to be seen if this works well. Adjustments are limited
to 25 ms max at a time to ensure there are no large jumps
(see the sketch after the TODO list).
It is applied when generating the RTCP sender report, which currently
happens once in 5 seconds for both audio and video tracks.
A nice introduction to PID controllers - https://alphaville.github.io/qub/pid-101/#/
Implementation borrowed from - https://github.com/pms67/PID
A few things TODO
1. PID controller tuning is a process. Have picked values from tests of
the implementation above. They may not be the best. Need to experiment.
2. Can potentially run this more often. Rather than running it only when
generating the RTCP sender report (which is once in 5 seconds now), can
potentially run it every second and limit the amount of change to
something like 10 ms max.
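A minimal sketch of the controller shape with the 25 ms clamp; gains and field names are illustrative, not the tuned values used:
```go
package sfu

// pid is a textbook PID controller; gains here are placeholders, not the
// values picked from the referenced implementation.
type pid struct {
	kp, ki, kd float64
	integral   float64
	prevErr    float64
}

// update returns the controller output for the given error (expected minus
// observed sample rate) over an interval of dt seconds.
func (c *pid) update(err, dt float64) float64 {
	c.integral += err * dt
	derivative := (err - c.prevErr) / dt
	c.prevErr = err
	return c.kp*err + c.ki*c.integral + c.kd*derivative
}

// clampAdjustment limits a single timestamp correction to +/- 25 ms; it is
// applied when the RTCP sender report is generated.
func clampAdjustment(adjustmentMs float64) float64 {
	const maxMs = 25.0
	if adjustmentMs > maxMs {
		return maxMs
	}
	if adjustmentMs < -maxMs {
		return -maxMs
	}
	return adjustmentMs
}
```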
* remove unused variable
* debug log a bit more
Added a new manager to handle all subscription needs. Implemented using the reconciler pattern. The goals are:
- improve subscription resilience by separating desired state and current state
- reduce the complexity of synchronous processing
- better detect failures, with the ability to trigger a full reconnect
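A minimal sketch of the reconciler pattern as described, with illustrative types and no claim to match the real manager's structure:
```go
package sfu

import "time"

type trackID string

// subscriptionManager tracks desired and current subscriptions separately and
// reconciles them, instead of acting synchronously on each request.
type subscriptionManager struct {
	desired map[trackID]bool // what the participant wants
	current map[trackID]bool // what has actually been set up
}

func (m *subscriptionManager) reconcile(subscribe, unsubscribe func(trackID) error) {
	for id, want := range m.desired {
		have := m.current[id]
		switch {
		case want && !have:
			if err := subscribe(id); err == nil {
				m.current[id] = true
			} // else: retried on the next pass, or escalated to a full reconnect
		case !want && have:
			if err := unsubscribe(id); err == nil {
				delete(m.current, id)
			}
		}
	}
}

// run keeps retrying until desired and current state converge.
func (m *subscriptionManager) run(stop <-chan struct{}, subscribe, unsubscribe func(trackID) error) {
	ticker := time.NewTicker(2 * time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-stop:
			return
		case <-ticker.C:
			m.reconcile(subscribe, unsubscribe)
		}
	}
}
```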
* add prometheus stats for rtt/jitter/packet loss
* add track source to metrics
* better packet loss bins
* add track type to metrics
* remove source from AnalyticsStat
* regenerate telemetry service fake
* compute loss from per stream packet count
Related to livekit/protocol#273
This PR adds:
- ParticipantResumed - for when an ICE restart or migration has occurred
- TrackPublishRequested - when we initiate a publication
- TrackSubscribeRequested - when we initiate a subscription
- TrackMuted - publisher muted track
- TrackUnmuted - publisher unmuted track
- TrackPublish/TrackSubscribe events will indicate when those actions have been successful, to differentiate.
* Fix RTCP loss for downtrack that used an incorrect buffer factory
In the buffer factory change (#1173), every participant has its own
buffer factory, so the publisher's buffer factory can't be used to create
a DownTrack
* clean code
* Cache RTPStats and seed on re-use
When a cached down track is re-used, RTPStats was not cached.
This caused sender reports to get out of sync with the remote side.
Cache RTPStats and seed it on re-use.
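A minimal sketch of the cache-and-seed idea; the snapshot fields and method names are illustrative:
```go
package sfu

// rtpStatsSnapshot holds the counters needed to keep sender reports continuous
// when a down track is cached for resume.
type rtpStatsSnapshot struct {
	extHighestSeqNo uint32
	rtpTimestamp    uint32
	packetCount     uint32
	octetCount      uint32
}

type rtpStats struct {
	snapshot rtpStatsSnapshot
}

// Snapshot captures the current counters when the down track is cached.
func (s *rtpStats) Snapshot() rtpStatsSnapshot { return s.snapshot }

// Seed primes a fresh RTPStats from a cached snapshot on re-use, keeping
// subsequent sender reports consistent with what the remote side has seen.
func (s *rtpStats) Seed(from rtpStatsSnapshot) { s.snapshot = from }

type cachedDownTrack struct {
	stats rtpStatsSnapshot
}

// restore shows the order of operations on resume: create new stats, then seed.
func restore(c cachedDownTrack) *rtpStats {
	fresh := &rtpStats{}
	fresh.Seed(c.stats)
	return fresh
}
```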
* staticcheck
* Start RTCP workers after peer connection connects
* Move more things into transport module
* Start RTCP workers only on connected
* Test needs PeerConnection() method
* adjust comment
* Prevent track subscriptions/adding receivers after close
With subscribe/unsubscribe queuing, a subscribe may be
attempted after a call to `RemoveAllSubscribers`.
So, renaming `RemoveAllSubscribers` to `InitiateClose`
and maintaining state that the track is in the process of closing.
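A minimal sketch of the closing-state guard; names are illustrative:
```go
package sfu

import (
	"errors"
	"sync"
)

var errTrackClosing = errors.New("track is closing")

type mediaTrack struct {
	mu      sync.Mutex
	closing bool
}

// InitiateClose replaces the old RemoveAllSubscribers entry point and records
// that the track is shutting down.
func (t *mediaTrack) InitiateClose() {
	t.mu.Lock()
	t.closing = true
	t.mu.Unlock()
	// ... remove existing subscribers ...
}

// AddReceiver rejects late subscriptions queued after close was initiated.
func (t *mediaTrack) AddReceiver() error {
	t.mu.Lock()
	defer t.mu.Unlock()
	if t.closing {
		return errTrackClosing
	}
	// ... add receiver ...
	return nil
}
```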
* Mime specific remove
* Remove unused error
* do not add receiver when closing
- Do not update jitter on padding-only packets (see the sketch after this item).
A padding-only packet may not have a proper timestamp.
If it does, it probably has the timestamp of the
last packet with payload. That would also affect the
jitter calculation, i.e. the wall clock time is moving,
but the RTP time is the same.
- Do not send `onMaxLayer` changed on bind.
It was probably racing with the update when the max layer
is set while adaptive stream is off. There is
no need to send that update as the default would
be OFF. It will be enabled when adaptive stream
subscription turns it on or when the max layer is
set when down track bind happens and adaptive stream
is off.
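A minimal sketch of the first fix (skipping the jitter update for padding-only packets), using the RFC 3550 inter-arrival jitter estimate; field names are illustrative:
```go
package sfu

type rtpPacketMeta struct {
	payloadSize int
	timestamp   uint32
	arrivalMs   int64
}

type jitterTracker struct {
	jitter        float64
	seen          bool
	lastTimestamp uint32
	lastArrivalMs int64
	clockRate     float64
}

func (j *jitterTracker) update(p rtpPacketMeta) {
	if p.payloadSize == 0 {
		// padding-only: timestamp is not meaningful for inter-arrival jitter
		return
	}
	if j.seen {
		// RFC 3550 inter-arrival jitter estimate
		transit := float64(p.arrivalMs)*j.clockRate/1000 - float64(p.timestamp)
		lastTransit := float64(j.lastArrivalMs)*j.clockRate/1000 - float64(j.lastTimestamp)
		d := transit - lastTransit
		if d < 0 {
			d = -d
		}
		j.jitter += (d - j.jitter) / 16
	}
	j.seen = true
	j.lastTimestamp = p.timestamp
	j.lastArrivalMs = p.arrivalMs
}
```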
* WIP commit
* Refactor media loss proxy
* Use DynacastQuality and MediaLossProxy from MediaTrack
* fix test
* Remove unused param
* Remove unused interfaces
* Move interface methods to local
* Split out DynacastManager
* have to add codec to dynacast manager
* RUnlock
* fix restart
* Adding API to force quality and also maintain closed state
* Address PR comments
With rapid changes to subscription settings, use of a goroutine
could end up processing dynacast needs for that subscriber in
a different order. So, record the subscription needs of a subscriber
in the callback and process the data in a goroutine.
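A minimal sketch of the ordering fix with illustrative types: the needs are recorded synchronously in the callback, and only the recomputation runs in a goroutine:
```go
package sfu

import "sync"

type subscriberNeeds struct {
	maxQuality int32
	enabled    bool
}

type dynacastQuality struct {
	mu       sync.Mutex
	needs    map[string]subscriberNeeds // keyed by subscriber ID
	onUpdate func(maxQuality int32)     // e.g. sends the dynacast update
}

func (d *dynacastQuality) onSettingsChanged(subscriberID string, n subscriberNeeds) {
	// record the latest needs synchronously, in callback order
	d.mu.Lock()
	d.needs[subscriberID] = n
	d.mu.Unlock()

	// the recomputation reads the recorded state, so running it in a
	// goroutine cannot apply stale settings out of order
	go d.recompute()
}

func (d *dynacastQuality) recompute() {
	d.mu.Lock()
	max := int32(-1) // -1: no active subscriber
	for _, n := range d.needs {
		if n.enabled && n.maxQuality > max {
			max = n.maxQuality
		}
	}
	d.mu.Unlock()

	if d.onUpdate != nil {
		d.onUpdate(max)
	}
}
```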
* Keep track of pending subscriber operations.
This is required to determine if a receiver does not have
any subscription.
* correct spelling of queuing
* lock around hasPermission
* Move subscribe/unsubscribe queue to participant.
As subscribe/unsubscribe operation can come from both
local media track or remote media track, participant
needs to have it.
* Remove comment
* Stop reneg timer on close
* address comments
As there is a queue to send dynacast updates, forcing
an update on unmute should be fine. That will send
the current state. If the subscribers change it,
an update will be sent as necessary.
This addresses the case of subscription changes happening
when the published track is muted. Dynacast updates are
not sent when the publisher track is muted. On unmute,
if subscribers do not have any changes, an update would be missed
(i.e. the changes that happened while the publisher track was muted
would not be sent).
* Use a queue for add/remove subscribe operations.
If subscribe/unsubscribe happens very quickly, the subscription
state gets mixed up as things are keyed off of subscriberID.
Use a queue of subscribe operations and process it serially.
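A minimal sketch of serializing the operations; types are illustrative:
```go
package sfu

import "sync"

type subscribeOp struct {
	subscriberID string
	subscribe    bool
}

// opsQueue drains subscribe/unsubscribe requests from a single goroutine so
// rapid toggles keyed by subscriberID cannot interleave.
type opsQueue struct {
	mu      sync.Mutex
	ops     []subscribeOp
	running bool
	apply   func(subscribeOp)
}

func (q *opsQueue) enqueue(op subscribeOp) {
	q.mu.Lock()
	q.ops = append(q.ops, op)
	if q.running {
		q.mu.Unlock()
		return
	}
	q.running = true
	q.mu.Unlock()

	go q.drain()
}

func (q *opsQueue) drain() {
	for {
		q.mu.Lock()
		if len(q.ops) == 0 {
			q.running = false
			q.mu.Unlock()
			return
		}
		op := q.ops[0]
		q.ops = q.ops[1:]
		q.mu.Unlock()

		q.apply(op) // processed strictly in arrival order
	}
}
```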
* set up callback for down track added
* move the queue on unexpected type
* move the queue if removeSubscriber does not have a subscribed track
* WIP commit
* Clean up
* spelling mistake
* Run subscribed track onBind in a goroutine
* Address comments and more safety net
* Cache and restore forwarder state on resume
* conflicts
* mage generate
* WIP commit
* WIP commit
* Remove debug
* Revert to reduce diff
* Fix tests
* Determine spatial layer from track info quality if non-simulcast
* Adjust for invalid layer when there is no rid; previously that function was returning 0 for the no-rid case
* Fall back to top level width/height if there are no layers
* Use duration from RTPDeltaInfo