* Remove some logs.
Also, change Errorw -> Warnw in a bunch of places.
Moving towards using `Errorw` only for functionally unexpected
conditions, i.e., scenarios where, by design, a condition should not
happen, yet it was triggered.
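A minimal sketch of the intended split, using a zap-style sugared logger; the type and the conditions here are illustrative, not code from this repo:
```go
package example

import "go.uber.org/zap"

// Illustrative type: the point is the Warnw/Errorw convention.
type trackSink struct {
	logger   *zap.SugaredLogger
	maxLayer int32
	bound    bool
}

func (t *trackSink) setLayer(layer int32) {
	if layer > t.maxLayer {
		// bad input can legitimately occur (e.g. a misbehaving client): Warnw
		t.logger.Warnw("requested layer exceeds max, clamping",
			"layer", layer, "max", t.maxLayer)
		layer = t.maxLayer
	}
	if !t.bound {
		// by design this method is only reachable on a bound track,
		// so hitting this branch is functionally unexpected: Errorw
		t.logger.Errorw("setLayer called on unbound track", "layer", layer)
	}
}
```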
* log error
Unless there are published tracks, declare connected when the primary PC
connects.
Streamlining this a bit. A bit of history:
- With the original migration, migration complete was declared when all
tracks were published.
- When muted tracks had to be migrated, a publish was synthesised for
them, but migration complete did not wait for the publisher peer
connection to be connected.
- A few weeks back, those paths were merged and all cases were changed
to use the synthesised publish.
- Previously, the completion point differed between muted and unmuted
tracks. With the change to treat everything like a muted track, the
completion point changed.
Change it so that, if the publisher PC is expected to be active, wait
for it to be connected before declaring migration complete.
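A minimal sketch of the new completion rule; all names here are hypothetical:
```go
package example

type participant struct {
	publishedTracks     int
	primaryConnected    bool
	publisherConnected  bool
	onMigrationComplete func()
}

// publisherExpected reports whether the publisher PC should become
// active: true whenever there are published tracks, including muted
// tracks for which a publish was synthesised.
func (p *participant) publisherExpected() bool {
	return p.publishedTracks > 0
}

// maybeDeclareMigrationComplete applies the rule: the primary PC must be
// connected, and if a publisher PC is expected, it must be connected too.
func (p *participant) maybeDeclareMigrationComplete() {
	if !p.primaryConnected {
		return
	}
	if p.publisherExpected() && !p.publisherConnected {
		return
	}
	p.onMigrationComplete()
}
```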
Firefox on Windows 10 seems to produce simulcast tracks with duplicate
RIDs. That causes a leak, as only one buffer is processed.
Ignore duplicate RIDs.
NOTE: This is not perfect, as the actual layer -> RID mapping is
indeterminable at addition time. Figuring out which one is correct and
which one is the duplicate would require looking at packets to determine
the video dimensions and matching those to RID/layer.
To simplify, take the first one and drop later ones.
This could mean the correct resolution is not streamed, but that should
be okay; the leak is far more destructive.
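A minimal sketch of the first-wins dedup at track-addition time; the receiver shape is hypothetical:
```go
package example

type buffer struct{}

type receiver struct {
	buffersByRID map[string]*buffer
}

func newReceiver() *receiver {
	return &receiver{buffersByRID: make(map[string]*buffer)}
}

func (r *receiver) addUpTrack(rid string) {
	if _, ok := r.buffersByRID[rid]; ok {
		// duplicate RID (seen from Firefox on Windows 10): drop it so a
		// buffer that would never be processed is not leaked
		return
	}
	r.buffersByRID[rid] = &buffer{}
}
```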
Based on user feedback, clarifying that the LiveKit CLI lives in a separate repo and we recommend installing it. Making it clearer that the commands below install the LiveKit server.
* Ignore `disabled` when adaptive stream is enabled.
Due to the interplay of adaptive stream/visibility/dynacast, when
adaptive stream is enabled, a subscribed track forces visibility and
starts streaming at low quality. That triggers a render on the client,
which in turn triggers a visibility update.
So, even if a migration disables a track, ignore the disable and stream
upon migration complete and subscription bind.
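A minimal sketch of the bind-time decision; names are hypothetical:
```go
package example

type subscribedTrack struct {
	adaptiveStream bool
	disabled       bool
}

// onBind applies a migrated `disabled` state. With adaptive stream, the
// client will issue a visibility update as soon as it renders, so the
// stale disable is ignored and streaming starts immediately.
func (t *subscribedTrack) onBind(migratedDisabled bool) {
	if t.adaptiveStream {
		t.disabled = false
		return
	}
	t.disabled = migratedDisabled
}
```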
* don't hold lock during callback
* don't need to store pubMuted
* don't need to hold settings lock for pub muted
* Log receiver close.
This is going to increase log volume, but want to check whether peer
connection close trickles back into receiver close.
* log final close
The minimum was 20 when LOST was introduced, but it was going to 20 even
under non-LOST conditions. When there are packets, the minimum should be
30. Going down to 20 resulted in reporting LOST quality even when
packets were flowing (they were experiencing heavy loss and quality
would have been very bad, yet they were not lost).
Also, sample the warning about adding a packet to the bucket even more.
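A minimal sketch of the floor change; the function name and shape are hypothetical, the 20/30 values come from the note above:
```go
package example

// minPacketsForScoring keeps the 20-packet floor only for the
// nothing-flowing case; once packets are flowing, a larger sample is
// required before quality can be scored as LOST.
func minPacketsForScoring(packetsReceived uint32) uint32 {
	if packetsReceived == 0 {
		return 20 // no packets at all: LOST territory
	}
	return 30 // packets flowing: heavy loss is bad quality, not LOST
}
```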
* Add debug to understand VP9 freezes.
Have reports of VP9 freezing in some rooms.
Some data indicates that NACKs are received by the SFU, but the RTP
packet cannot be retrieved when that happens. It is possible that the
NACKs are all for dropped packets. Adding some debug to understand
drops/NACKs better.
* enable DD debug
* comment out DD debug
* markers
* add back log about diff length mismatch
* add back key frame mismatch logging
* log skipped drops also
* Do not synthesise DISCONNECT on session change.
v12 clients can handle session change based on identity.
* change for test
* Squelch participant update if close reason is DUPLICATE_IDENTITY.
* fix test
* comment
* Clean up participant close reason a bit
* fix test
* test
Cannot send the old style leave request during migration and other
scenarios where the client is expected to resume. The old style can only
do a full reconnect or disconnect: with `CanReconnect: false`, which
will be the case for resume, the client will disconnect.
Add a parameter to selectively send the leave request to older clients.
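A minimal sketch of the gating; the helper and parameter names are hypothetical, only `CanReconnect` comes from the note above:
```go
package example

type leaveRequest struct {
	CanReconnect bool
}

// sendLeave gates the old style leave request: when the client is
// expected to resume, `CanReconnect: false` would make it disconnect,
// so the caller decides per scenario whether older clients get it.
func sendLeave(canReconnect, sendToOldClient bool, send func(leaveRequest)) {
	if !sendToOldClient {
		// resume/migration paths skip the old style request entirely
		return
	}
	send(leaveRequest{CanReconnect: canReconnect})
}
```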
* Reverting participant worker.
Reverts https://github.com/livekit/livekit/pull/2420 partially.
This did not revert cleanly, so reverting manually. Also keeping the
drive-by clean up bits.
* fix test
It is possible that the state of the underlying object has changed
between event posting and event processing. So, cache data synchronously
and use it during event processing.
This is still not perfect, as things like `hidden` and `IsClosed` are
still accessed in the worker. Ideally, a snapshot of the current state
of all required values would be posted to the worker and the worker
would just operate on that data.
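A minimal sketch of that snapshot idea (names illustrative): capture the values at posting time and hand the worker a copy, so later mutations cannot race with event processing:
```go
package example

// participantState is a by-value snapshot of everything the handler needs.
type participantState struct {
	hidden   bool
	isClosed bool
}

type worker struct {
	ops chan func()
}

// postStateChange captures the snapshot synchronously; the closure never
// touches the live participant afterwards.
func (w *worker) postStateChange(snapshot participantState, handle func(participantState)) {
	w.ops <- func() { handle(snapshot) }
}
```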
* Use a participant worker queue in room.
Removes the need to selectively call things in a goroutine from the
participant (see the sketch below).
Also, a bit of drive-by clean up.
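A minimal sketch of such a worker queue: one goroutine drains closures in posted order, so callers post work instead of spawning ad-hoc goroutines. Illustrative only, not the actual implementation:
```go
package example

type participantWorker struct {
	ops  chan func()
	done chan struct{}
}

func newParticipantWorker() *participantWorker {
	w := &participantWorker{ops: make(chan func(), 64), done: make(chan struct{})}
	go func() {
		defer close(w.done)
		for op := range w.ops {
			op() // one op at a time, in posted order
		}
	}()
	return w
}

func (w *participantWorker) Post(op func()) { w.ops <- op }

func (w *participantWorker) Close() {
	close(w.ops)
	<-w.done
}
```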
* spelling
* prevent race
* don't need to remove in goroutine as it is already running in the worker
* worker will get cleaned up in state change callback
* create participant worker only if not created already
* ref count participant worker
* maintain participant list
* clean up oldState
* Use Deque in ops queue.
Standardizing some uses:
- Change OpsQueue to use a Deque so that it can grow/shrink as necessary
and need not worry about the channel getting full and dropping events
(see the sketch below).
- Change StreamAllocator and TelemetryService to use OpsQueue so that
they also need not worry about channel size and overflows.
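A minimal sketch of an unbounded ops queue; a slice stands in for a proper deque (a real one would reuse storage), and the names are illustrative:
```go
package example

import "sync"

type opsQueue struct {
	mu     sync.Mutex
	cond   *sync.Cond
	ops    []func()
	closed bool
}

func newOpsQueue() *opsQueue {
	q := &opsQueue{}
	q.cond = sync.NewCond(&q.mu)
	go q.run()
	return q
}

// Enqueue never blocks and never drops: the backing store grows as needed.
func (q *opsQueue) Enqueue(op func()) {
	q.mu.Lock()
	q.ops = append(q.ops, op)
	q.mu.Unlock()
	q.cond.Signal()
}

func (q *opsQueue) Close() {
	q.mu.Lock()
	q.closed = true
	q.mu.Unlock()
	q.cond.Broadcast()
}

func (q *opsQueue) run() {
	for {
		q.mu.Lock()
		for len(q.ops) == 0 && !q.closed {
			q.cond.Wait()
		}
		if len(q.ops) == 0 && q.closed {
			q.mu.Unlock()
			return
		}
		op := q.ops[0]
		q.ops = q.ops[1:]
		q.mu.Unlock()
		op()
	}
}
```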
* Address feedback
* delete obvious comment
* clean up