* Make congestion controller probe config
* Wait for enough estimate samples
* fixes
* format
* limit number of times a packet is ACKed
* ramp up probe duration
* go format
* correct comment
* restore default
* add float64 type to generated CLI
* Use net.JoinHostPort to build "host:port" strings for `net.Listen`
net.JoinHostPort provides a unified way of building strings of the form
"Host:Port", abstracting the particular syntax requirements of some
methods in the `net` package (namely, that IPv4 addresses can be given
as-is to `net.Listen`, but IPv6 addresses must be given enclosed in
square brackets).
This change makes sense because an address such as `[::1]` is *not* a
valid IPv6 address; the square brackets are just a detail particular to
the Go `net` library. As such, this syntax shouldn't be exposed to the
user, and configuration should just accept valid IPv6 addresses and
convert them as needed for usage within the code.
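The behavior described above can be sketched with a small, hypothetical helper (`buildListenAddr` is illustrative, not the project's actual code):

```go
package main

import (
	"fmt"
	"net"
)

// buildListenAddr shows why net.JoinHostPort is the right tool here:
// it adds the square brackets that net.Listen requires around IPv6
// addresses, so configuration can accept plain addresses like "::1".
func buildListenAddr(host string, port int) string {
	return net.JoinHostPort(host, fmt.Sprintf("%d", port))
}

func main() {
	fmt.Println(buildListenAddr("127.0.0.1", 7880)) // 127.0.0.1:7880
	fmt.Println(buildListenAddr("::1", 7880))       // [::1]:7880
}
```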
* Use '--bind' CLI flag to also filter RTC bind address
The local address passed to a command such as
livekit-server --dev --bind 127.0.0.1
was being used as the binding address for the TCP WebSocket port, but was
being ignored for RTC connections.
With `--dev`, the conf.RTC.UDPPort config is set to 7882, which enables
the "UDP muxing" mechanism. Without interface or address filtering, Pion
would try to bind to port 7882 on *all* interfaces.
This was failing on a system with IPv6 enabled, when trying to bind to
an IPv6 address of the `docker0` interface. It seems to make sense that
the user-passed bind addresses are also honored for the RTC port
bindings.
* Pacer interface to send packets
* notify outside lock
* use select
* use pass through pacer
* add error to OnSent
* Remove log which could get noisy
* Starting TWCC work (#1727)
* add packet time
* WIP commit
* WIP commit
* WIP commit
* minor comments
* Some measurements (#1736)
* WIP commit
* some notes
* WIP commit
* variable name change and do not post to closed channel
* unlock
* clean up
* comment
* Hooking up some more bits for TWCC (#1752)
* wake under lock
* Pacer in downstream path.
Splitting out only the pacer from a feature branch to
introduce the concept of pacer.
Currently, there should be no difference in functionality
as a pass through pacer is used.
Another implementation exists which just puts packets in a queue and
sends them from one goroutine.
A potential implementation to try would be data paced by bandwidth
estimate. That could include priority queues and such.
But, the main goal here is to introduce the notion of a pacer in the
downstream path and prepare for more congestion control possibilities
down the line.
* Don't need peak detector
* remove throttling of write IO errors
- Increase max interval between probes to 2 minutes.
- Use a minimum probe rate of 200 kbps. This is to ensure that
the probe rate is decent and can produce a stronger signal.
* Don't update dependency info if unordered packet received
* Trace all active svc chains for downtrack
* Try to keep lower decode target decodable
* remove comments
* Test case
* clean code
* solve comments
When migrating a muted track, the potential codecs need to be set.
For audio, there may not be `simulcast_codecs` in `AddTrack`.
Hence, when migrating a muted audio track, the potential codecs are not
set. That results in no receivers in the relay up track (because all of
this could happen before the audio track is unmuted).
So, look at MimeType in TrackInfo (this will be set in OnTrack) and
use that as potential codec.
* Do not process events after participant close.
Avoid processing transport events after participant/transport close.
It causes error logs which are not really errors, but distracting noise.
* correct comment
* Full reconnect on publication mismatch on resume.
It is possible that publications mismatch on resume. An example sequence
- Client sends `AddTrack` for `trackA`
- Server never receives it due to signalling connection breakage.
- Client could do a resume (reconnect=1) noticing signalling connection
breakage.
- Client's view thinks that `trackA` is known to server, but server does
not know about it.
- A subsequent offer containing `trackA` triggers `trackInfo not
available before track publish` and the track does not get published.
Detect the case of missing track and issue a full reconnect.
* UpdateSubscriptions from sync state a la cloud
* add missing shouldReconnect
* Close participant on full reconnect.
A full reconnect == irrecoverable error. Participant cannot continue.
So, close the participant when issuing a full reconnect.
That should prevent subscription manager reconcile until the participant
is finally closed down when it goes stale.
* format
A subscription in subscription manager could live until the source
track goes away, even though the participant with that subscription
is long gone. Handle it by using trackID to look up the subscription
on source track removal.
Also, logging SDPs when a negotiation failure happens to check
if there are any mismatches.
* Simplify sliding window collapse.
Keep the same-value collapsing simple.
Add it to the sliding window as long as the same value has been
received for longer than the collapse threshold.
But, add a prune with three conditions to process the sliding window
to ensure only valid samples are kept.
* flip the order of validity window and same value pruning
* increase collapse threshold to 0.5 seconds during non-probe
1. Probe end time needs to include the probe cluster running time also.
2. Apply collapse window only within the sliding window. This is to
prevent cases of some old data declaring congestion. For example,
an estimate could have fallen 15 seconds ago and there might have
been a bunch of estimates at that fallen value. And the whole
sliding window could have that value at some point. But, a further
drop may trigger congestion detection. But, that might be acting too
fast, i.e. on one instance of the value falling. Change it so that we
detect whether there is a fall within the sliding window and apply
collapse based on that.
Till now, only video was using simulated publish when migrating on mute.
But, with `pauseUpstream() + replaceTrack(null)`, it is possible that
the client does not send any data when muted.
I do not think there is a problem doing this (even when the client is
actually using mute, which sends silence frames).
If ref is coming in slow (due to pacing), it is possible that
expected is ahead. Pulling next too far towards expected causes
warps in a subsequent report. Keep switches closer to ref.
On a state change, it was possible an aborted probe was pending
finalize. When probe controller is reset, the probe channel
observer was not reset. Create a new non-probe channel observer
on state change to get a fresh start.
Also limit probe finalize wait to 10 seconds max. It is possible
that the estimate is very low and we have sent a bunch of probes.
Calculating wait based on that could lead to finalize waiting for
a long time (could be minutes).
* Simplify probe done handling.
Seeing a case where the channel observer is not re-created after
an aborted probe. Simplifying probe done handling (no callbacks, making
it synchronous).
* log more
* Use receiver report stats for loss/rtt/jitter.
Reversing a bit of https://github.com/livekit/livekit/pull/1664.
That PR did two snapshots (one based on what SFU is sending
and one based on combination of what SFU is sending reconciled with
stats reported from client via RTCP Receiver Report). That PR
reported SFU only view to analytics. But, that view does not have
information about loss seen by client in the downstream.
Also, that does not have RTT/jitter information. The rationale behind
using the SFU-only view is that the SFU should report what it sends
irrespective of whether the client is receiving or not. But, that view
did not have proper loss/RTT/jitter.
So, switch back to reporting SFU + receiver report reconciled view.
The downside is that when receiver reports are not received,
packets sent/bytes sent will not be reported to analytics.
An option is to report SFU only view if there are no receiver reports.
But, it becomes complex because of the offset. Receiver report would
acknowledge certain range whereas SFU only view could be different
because of propagation delay. To simplify, just using the reconciled
view to report to analytics. Using the available view will require
a bunch more work to produce accurate data.
(NOTE: all this started due to a bug where RTCP was not restarted on
a track resume which killed receiver reports and we went on this path
to distinguish between publisher stopping vs RTCP receiver report not
happening)
One optimisation to note here concerns the check to see if the publisher
is sending data. Using a full DeltaInfo for that is overkill. Can do a
lighter weight check later.
* return available streams
* fix test
* Perform unsubscribe in parallel to avoid blocking
When unsubscribing from tracks, we flush a blank frame in order to prepare
the transceivers for re-use. This process is blocking for ~200ms. If
the unsubscribes are performed serially, it would prevent other subscribe
operations from continuing.
This PR parallelizes that operation, and ensures subsequent subscribe
operations could reuse the existing transceivers.
* also perform in parallel when uptrack close
* fix a few log fields