* Based on user feedback, clarify that the LiveKit CLI is a separate repo and recommend installing it. Make it clearer that the commands below install the LiveKit server.
* Ignore `disabled` when adaptive stream is enabled.
Due to the interplay of adaptive stream, visibility, and dynacast, when
adaptive stream is enabled, a subscribed track forces visibility and
starts streaming at low quality. This triggers a render on the client,
which in turn triggers a visibility update.
So, even if a migration disables a track, upon migration completion and
subscription bind, ignore the disable and stream.
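The bind-time rule described above can be sketched as follows. This is a minimal illustration, not the actual server internals; the type `downTrack` and the field/quality names are assumptions for the example.

```go
package main

import "fmt"

type VideoQuality int

const (
	QualityOff VideoQuality = iota
	QualityLow
	QualityHigh
)

// downTrack models only the per-subscriber state relevant here.
type downTrack struct {
	adaptiveStream bool
	disabled       bool // e.g. set by a migration before the bind
	quality        VideoQuality
}

// onBind applies the rule from the commit: when adaptive stream is
// enabled, ignore `disabled` and start at low quality so the client
// renders a frame and reports visibility, which then drives the real
// quality selection.
func (d *downTrack) onBind() {
	if d.adaptiveStream {
		d.quality = QualityLow // ignore d.disabled
		return
	}
	if d.disabled {
		d.quality = QualityOff
		return
	}
	d.quality = QualityHigh
}

func main() {
	dt := &downTrack{adaptiveStream: true, disabled: true}
	dt.onBind()
	fmt.Println(dt.quality == QualityLow) // adaptive stream overrides disabled
}
```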
* don't hold lock during callback
* don't need to store pubMuted
* don't need to hold settings lock for pub muted
* Log receiver close.
This is going to increase log volume, but we want to check whether peer
connection close trickles back into receiver close.
* log final close
The min was at 20 when LOST was introduced, but it was going to 20 even
under non-LOST conditions. When there are packets, we want the min to be
at 30. Going down to 20 resulted in reporting LOST quality even when
packets were flowing (they were experiencing heavy loss and quality
would have been very bad, but they were not lost).
Also, sample the warning about adding a packet to the bucket even more.
* Add debug to understand VP9 freezes.
There are reports of VP9 freezing in some rooms.
Some data indicates that NACKs are received by the SFU, but the RTP
packet cannot be found when that happens. It is possible that the NACKs
are all for dropped packets. Adding some debug to understand drops/NACKs
better.
* enable DD debug
* comment out DD debug
* markers
* add back log about diff length mismatch
* add back key frame mismatch logging
* log skipped drops also
* Do not synthesise DISCONNECT on session change.
v12 clients can handle session change based on identity.
* change for test
* Squelch participant update if close reason is DUPLICATE_IDENTITY.
* fix test
* comment
* Clean up participant close reason a bit
* fix test
* test
Cannot send an old-style leave request during migration and other
scenarios where the client is expected to resume. The old style can only
do a full reconnect or disconnect: with `CanReconnect: false`, which
will be the case for resume, the client will disconnect.
Add a parameter to selectively send the leave request to older clients.
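A sketch of the selective send described above. The function name, the version threshold, and the message shapes are illustrative assumptions, not the actual protocol code:

```go
package main

import "fmt"

type LeaveAction int

const (
	ActionDisconnect LeaveAction = iota
	ActionResume
)

// sendLeave sketches the idea: older clients only understand the
// old-style leave (reconnect-or-disconnect), so it is sent to them only
// when the caller opts in; newer clients get an action-carrying leave
// and can be told to resume.
func sendLeave(protocolVersion int, action LeaveAction, sendToOldClients bool) (sent bool, msg string) {
	const actionAwareVersion = 12 // illustrative threshold
	if protocolVersion >= actionAwareVersion {
		if action == ActionResume {
			return true, "leave{action: RESUME}"
		}
		return true, "leave{action: DISCONNECT}"
	}
	// Old client: the old-style leave with CanReconnect: false would
	// force a disconnect, so suppress it unless the caller asked for it.
	if !sendToOldClients {
		return false, ""
	}
	return true, "leave{canReconnect: false}"
}

func main() {
	sent, _ := sendLeave(11, ActionResume, false)
	fmt.Println(sent) // false: leave suppressed for an old client during resume
}
```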
* Reverting participant worker.
Reverts https://github.com/livekit/livekit/pull/2420 partially.
This did not revert cleanly. So, reverting manually. Also, keeping the
drive-by clean-up bits.
* fix test
It is possible that the state of the underlying object has changed
between event posting and event processing. So, cache the data
synchronously and use it during event processing.
This is still not perfect, as things like `hidden` and `IsClosed` are
accessed in the worker. Ideally, a snapshot of the current state of all
required values could be posted to the worker, and the worker would just
operate on that data.
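The snapshot idea can be sketched as below. The types and field names are illustrative, not the server's actual participant model:

```go
package main

import "fmt"

// participant is a stand-in for the mutable object.
type participant struct {
	identity string
	hidden   bool
}

// participantSnapshot captures the fields the worker needs at the time
// the event is posted, so later mutations of the participant do not
// change what the worker observes.
type participantSnapshot struct {
	identity string
	hidden   bool
}

func main() {
	p := &participant{identity: "alice", hidden: false}
	events := make(chan participantSnapshot, 1)

	// Post: snapshot synchronously, at the point the event happens.
	events <- participantSnapshot{identity: p.identity, hidden: p.hidden}

	// State changes between posting and processing.
	p.hidden = true

	// Process: the worker only uses the cached data.
	snap := <-events
	fmt.Println(snap.hidden) // false: worker sees the state at post time
}
```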
* Use a participant worker queue in room.
Removes the need to selectively call things in a goroutine from the
participant.
Also, a bit of drive-by clean up.
* spelling
* prevent race
* don't need to remove in a goroutine as it is already running in the worker
* worker will get cleaned up in state change callback
* create participant worker only if not created already
* ref count participant worker
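The two bullets above (create-if-absent plus ref counting) can be sketched together. This is a simplified illustration under assumed names, not the server's actual worker code:

```go
package main

import (
	"fmt"
	"sync"
)

type worker struct{ refs int }

type workerRegistry struct {
	mu      sync.Mutex
	workers map[string]*worker
}

// acquire creates the participant worker only if one does not already
// exist for the identity, and bumps its ref count.
func (r *workerRegistry) acquire(identity string) *worker {
	r.mu.Lock()
	defer r.mu.Unlock()
	w, ok := r.workers[identity]
	if !ok {
		w = &worker{}
		r.workers[identity] = w
	}
	w.refs++
	return w
}

// release drops the ref count and cleans the worker up at zero.
func (r *workerRegistry) release(identity string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	w, ok := r.workers[identity]
	if !ok {
		return
	}
	w.refs--
	if w.refs == 0 {
		delete(r.workers, identity)
	}
}

func main() {
	r := &workerRegistry{workers: map[string]*worker{}}
	w1 := r.acquire("alice")
	w2 := r.acquire("alice") // same identity: reuse, do not recreate
	fmt.Println(w1 == w2)    // true
	r.release("alice")
	r.release("alice")
	_, alive := r.workers["alice"]
	fmt.Println(alive) // false: cleaned up at zero refs
}
```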
* maintain participant list
* clean up oldState
* Use Deque in ops queue.
Standardizing some uses:
- Change OpsQueue to use Deque so that it can grow/shrink as necessary
and need not worry about the channel getting full and dropping events.
- Change StreamAllocator and TelemetryService to use OpsQueue so that
they also need not worry about channel size and overflows.
* Address feedback
* delete obvious comment
* clean up
* Augment LeaveRequest with alternate regions to connect.
* update protocol and issue resume action on close if expected to resume
* use current protocol in tests
* address feedback
Used the full TrackInfo in my previous PR, but telemetry might be
relying on the top-level Width/Height. So, make a pared-down TrackInfo
to report to telemetry.
Also, correct some spelling/comments.
* Unify muted and unmuted migration paths.
If dynacast had disabled all layers, after a migration, the client did
not restart publishing (it is akin to a muted track). That failed
migration, because the migration state machine waits for unmuted tracks
to be published (i.e., the server has to receive packets).
If a migrating track is in the muted state, the server does not wait for
packets. It synthesises the published event and catches up later when
packets actually come in.
Just treat all migrations as the erstwhile muted case: synthesise
publish whether the track is muted or not. In the unmuted case, packets
might arrive soon after, whereas in the muted case, it will depend on
when unmute happens.
This is tricky stuff. So, it will need good testing.
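The unification described above can be reduced to a small sketch. The types and the function are assumptions for illustration; the point is only that the publish is synthesised for every migrated track, muted or not, instead of waiting for RTP:

```go
package main

import "fmt"

// migratedTrack is a stand-in for a track carried across a migration.
type migratedTrack struct {
	muted     bool
	published bool
}

// completeMigration applies the unified rule: synthesise the publish
// for every migrated track rather than waiting for packets. For unmuted
// tracks, packets are expected shortly; for muted (or dynacast-paused)
// tracks, they arrive whenever the client resumes sending.
func completeMigration(tracks []*migratedTrack) {
	for _, t := range tracks {
		t.published = true // do not wait for RTP
	}
}

func main() {
	tracks := []*migratedTrack{{muted: true}, {muted: false}}
	completeMigration(tracks)
	fmt.Println(tracks[0].published && tracks[1].published) // true
}
```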
* use muted from track info