Currently, signal requests are counted on the media side and signal
responses are counted on the controller side. This does not provide the
granularity to check how many response messages each media node is
sending.
We are seeing some cases where track subscriptions are slow under load.
Counting responses on the media side would make it possible to see
whether a media node is sending a lot of signal response messages.
* WIP
* check using protocol version
* revert
* clean up
* sdp cid argument
* WIP
* WIP
* test
* clean up
* clean up
* fixes
* clean up
* clean up
* clean up
* conditional checks
* tests for both dual and single peer connection
* test
* test
* test
* type check
* test
* todo
* munges
* combined config
* populate mid
* limit to receive only
* clean up
* clean up
* clean up
* older test
* clean up
* alternative audio codec
* dtx
* don't need to copy
* Anunay feedback
* use the available peer connection
* publisher check
* WIP
* WIP
* WIP
* no mid
* media sections requirement
* mage generate
* WIP
* WIP
* set data channel receive size for test
* handle early media better
* WIP
* do not do ICERestart if no subscriber
* WIP
* WIP
* WIP
* WIP
* WIP
* WIP
* start up subscriber RTCP worker
* WIP
* WIP
* clean up
* clean up
* flag to indicate use of single peer connection
* remove unused interface method
* clean up
* clean up
* Jie feedback #1
* deps
* do not access subscriber in one shot mode
* more places for one shot mode
* more one shot fixes
* deps
* deps
* test
* Rework node stats a bit.
Related protocol PR - https://github.com/livekit/protocol/pull/1023
- Make a config for node stats measurements. Wanted to put the config in
the `routing` package, but a circular dependency forced me to put it in
config.go
- Make rate calculations explicit, i.e. requested via config.
Previously, it had some odd checks to decide when to calculate rates,
and it would have been calculating them over different windows.
- Report signal/data channel bytes every 5 seconds to the stats
collection module. Previously, it was doing it every 30 seconds, which
meant some windows could have had a large spike.
NOTE: Still need to think about this for load calculations, as a large
number of participants leaving could flush in a small window and that
could report a large spike in bytes/packets. Maybe we need to ignore
signal bytes for load calculation?
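For illustration, a minimal sketch of what such a config could look like; the struct and field names here are hypothetical, not the ones that ended up in config.go:

```go
package config

import "time"

// Hypothetical node stats config; names are illustrative only.
type NodeStatsConfig struct {
	// How often signal/data channel bytes are flushed to the stats
	// collection module (the 5-second reporting described above).
	MeasurementInterval time.Duration `yaml:"measurement_interval"`
	// Rates are only calculated when requested via config, so every
	// rate is computed over the same, explicit window.
	RateInterval time.Duration `yaml:"rate_interval"`
}

// Defaults used when no config is given (see "use default node stats
// config if given config is nil" below).
func DefaultNodeStatsConfig() NodeStatsConfig {
	return NodeStatsConfig{
		MeasurementInterval: 5 * time.Second,
		RateInterval:        30 * time.Second,
	}
}
```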
* deps
* use default node stats config if given config is nil
* split out node stats into a struct for re-use
* update config
* WIP
* comment
* Verify method on LocalParticipant
* cleanup
* clean up
* pass in one-shot-mode to StartSession
* null message source and sink
feedback; also remove the one-shot-mode filtering check in ParticipantImpl, as a null sink can be used for that
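As a sketch of the null-sink idea, assuming a hypothetical message-sink interface (the real interface shape may differ):

```go
package routing

// Hypothetical message sink interface; the real shape may differ.
type MessageSink interface {
	WriteMessage(msg interface{}) error
	Close()
}

// NullMessageSink drops everything, so in one-shot mode it can replace
// filtering checks inside ParticipantImpl.
type NullMessageSink struct{}

func (NullMessageSink) WriteMessage(msg interface{}) error { return nil }
func (NullMessageSink) Close()                             {}
```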
* Handle room configuration that's set in the grant itself
* ensure refresh token contains updates
* deps
* dep
---------
Co-authored-by: Paul Wells <paulwe@gmail.com>
* Log cleanup pass
Demoted a bunch of logs to DEBUG, consolidated logs.
* use context logger and fix context var usage
* moved common error types, fixed tests
Sometimes the initially selected node can fail. In that case, we'll give it a few more attempts to locate a media node for the session instead of failing after the first try.
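The retry shape, as a minimal sketch; `selectNode` and the attempt count are illustrative assumptions, not the server's actual API:

```go
package routing

import (
	"context"
	"fmt"
)

// Hypothetical retry around node selection; selectNode and maxAttempts
// are illustrative, not livekit-server's actual implementation.
const maxAttempts = 3

func selectNodeWithRetry(ctx context.Context, selectNode func(context.Context) (string, error)) (string, error) {
	var lastErr error
	for attempt := 0; attempt < maxAttempts; attempt++ {
		nodeID, err := selectNode(ctx)
		if err == nil {
			return nodeID, nil
		}
		lastErr = err
	}
	return "", fmt.Errorf("no media node after %d attempts: %w", maxAttempts, lastErr)
}
```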
* feat: unpublish tracks after publish permissions are revoked.
Uses protocol 7 to indicate client support; otherwise it attempts to
mute the tracks.
Also sends back permission objects of all participants, and cleans up
our handling of various permission attributes.
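A minimal sketch of that version gate; the function and callback names are hypothetical, while the protocol 7 threshold comes from the change itself:

```go
package rtc

// Hypothetical handling when publish permission is revoked; names are
// illustrative. Protocol 7+ clients understand server-initiated track
// unpublication, older clients only get a mute.
func handlePublishPermissionRevoked(
	protocolVersion int,
	trackIDs []string,
	unpublish func(trackID string),
	mute func(trackID string),
) {
	for _, id := range trackIDs {
		if protocolVersion >= 7 {
			unpublish(id) // client removes the track entirely
		} else {
			mute(id) // best effort for pre-7 clients
		}
	}
}
```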
* fix static check
This ensures client reconnect attempts succeed for long-running rooms. It also fixes permissions that were set incorrectly when full reconnections take place.
* Stream Allocator Try 3
Making an intermediate PR to do:
- Special treatment for screen share tracks
- When allocating all tracks:
  o try to stream all tracks by starting with the lowest layer
  o multi-pass across tracks to get a more even distribution (see the
    sketch below)
Not yet done:
-------------
In deficient state:
o Allocate a specific track on a change
o Steal from other tracks
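A minimal sketch of the multi-pass idea under hypothetical types; it illustrates "lowest layer first, one step up per pass" and is not the actual stream allocator:

```go
package alloc

// Hypothetical multi-pass allocation: give every track its lowest layer
// first, then raise layers one step per pass so the available budget is
// spread evenly instead of one track consuming all of it.

type track struct {
	layerBitrates []int64 // bitrate per spatial layer, lowest first
	allocated     int     // index of highest allocated layer, -1 = none
}

func allocate(tracks []*track, budget int64) {
	for _, t := range tracks {
		t.allocated = -1
	}
	for raised := true; raised; {
		raised = false
		for _, t := range tracks {
			next := t.allocated + 1
			if next >= len(t.layerBitrates) {
				continue // already at the highest layer
			}
			cost := t.layerBitrates[next]
			if t.allocated >= 0 {
				cost -= t.layerBitrates[t.allocated] // incremental step-up cost
			}
			if cost > budget {
				continue // cannot afford the step up
			}
			budget -= cost
			t.allocated = next
			raised = true
		}
	}
}
```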
* Correct sense of managed track
* have to range to copy
* generate
* fix VideoLayers compare
* Use t.simulcasted
* small refactor
* extra line
* fix room allocator test
* selector fakes not used
* keep decisions out of router
* put nodeId logic back
* fix room allocator test
* Disallow AddTrack from participants that don't have the permission
* Support protocol 3 speaker updates, client info
* update protocol
* Disallow AddTrack from participants that don't have the permission
* increase wait time for GH to pass
* prestop and hasparticipants in interface
* add prestop function to existing routers
* fakerouter prestop
* update protocol version
* read lock
* redis router graceful stop
* test fix
* force stop