* Use rtcscore-go to calculate audio/video score
Signed-off-by: shishir gowda <shishir@livekit.io>
* Get max expected layer and find max actual layer from stream
Signed-off-by: shishir gowda <shishir@livekit.io>
* Cleanup unused methods
Signed-off-by: shishir gowda <shishir@livekit.io>
* Cleanup code - address review comments
Signed-off-by: shishir gowda <shishir@livekit.io>
* get expected layer info instead of just quality
Signed-off-by: shishir gowda <shishir@livekit.io>
* Move SpatialLayerForQuality to utils/helpers
method is required in the rtc, sfu, and connectionstats packages.
Moved to utils/helpers.go to remove cyclic deps
Signed-off-by: shishir gowda <shishir@livekit.io>
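A rough sketch of the shared helper (the actual code in utils/helpers.go may differ slightly), assuming the livekit-protocol VideoQuality enum:

```go
package utils

import "github.com/livekit/protocol/livekit"

// SpatialLayerForQuality maps a requested video quality to a spatial layer
// index so rtc, sfu, and connection-stats code can share the mapping
// without importing each other.
func SpatialLayerForQuality(quality livekit.VideoQuality) int32 {
	switch quality {
	case livekit.VideoQuality_LOW:
		return 0
	case livekit.VideoQuality_MEDIUM:
		return 1
	case livekit.VideoQuality_HIGH:
		return 2
	default:
		return -1
	}
}
```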
* update tests
Signed-off-by: shishir gowda <shishir@livekit.io>
* Pick stream stats with max layer
Signed-off-by: shishir gowda <shishir@livekit.io>
* Update rtcscore-go pkg to make rtt/jitter optional
when passing 0, rtcscore-go was setting default values
Signed-off-by: shishir gowda <shishir@livekit.io>
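Illustration of the optional-parameter idea only (these are not the actual rtcscore-go types): pointer fields let the caller leave rtt/jitter unset instead of passing 0, so defaults apply only when the value is truly unknown.

```go
// Hypothetical stat shape; nil means "not measured".
type audioStat struct {
	PacketLoss float64
	RTT        *uint32
	Jitter     *float64
}

// rttOrDefault applies a default only when RTT was never measured,
// rather than whenever it happens to be 0.
func rttOrDefault(s audioStat, def uint32) uint32 {
	if s.RTT == nil {
		return def
	}
	return *s.RTT
}
```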
* update score to rating
Signed-off-by: shishir gowda <shishir@livekit.io>
* Update rtcscore-go pkg to use simulcast layer info for score
Signed-off-by: shishir gowda <shishir@livekit.io>
* Update score ratings to reflect rtcscore range
Signed-off-by: shishir gowda <shishir@livekit.io>
* update test params for new rtcscore
Signed-off-by: shishir gowda <shishir@livekit.io>
* Delay sending scores to connections until full data is available
The first interval can have partial data, leading to lower scores
Signed-off-by: shishir gowda <shishir@livekit.io>
* Check for inf values in quality params
Signed-off-by: shishir gowda <shishir@livekit.io>
* Clean up initial score calculation. Default to 5
Signed-off-by: shishir gowda <shishir@livekit.io>
Co-authored-by: David Zhao <dz@livekit.io>
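A minimal guard illustrating the two fixes above (package and function name assumed): reject non-finite quality params and fall back to the maximum score of 5 until a full window of data is available.

```go
package connectionquality

import "math"

// scoreOrDefault is an illustrative helper, not the actual implementation:
// delta-based quality params can be NaN/Inf in the first interval, so fall
// back to the maximum score of 5 until valid data is available.
func scoreOrDefault(raw float64) float64 {
	if math.IsNaN(raw) || math.IsInf(raw, 0) {
		return 5.0 // best score until valid data arrives
	}
	return raw
}
```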
* Use grants clone
* Fix a couple more races
Use a shadow copy of down tracks in DownTrackSpreader.
Reads always use the shadow.
On Add/Delete of a down track, make a new copy.
Copying is done only on add/delete.
If somebody is holding a reference to a shadow, it stays intact, as Add/Delete create a new slice.
With this, not seeing any more races in test. So, enabling CI tests with `-race`.
Also fixing another race reported in #603.
There are a couple more races in that bug report that need to be
chased down.
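A condensed sketch of the copy-on-write scheme described above (field names simplified, within the sfu package):

```go
package sfu

import "sync"

// DownTrackSpreader keeps a shadow slice of down tracks. Readers grab the
// shadow and iterate without holding the lock; Add/Delete build a fresh
// slice, so anyone holding an older shadow still sees a consistent snapshot.
type DownTrackSpreader struct {
	lock       sync.RWMutex
	downTracks []*DownTrack
}

func (d *DownTrackSpreader) Store(dt *DownTrack) {
	d.lock.Lock()
	defer d.lock.Unlock()

	// copy-on-write: never mutate a slice readers may be iterating
	downTracks := make([]*DownTrack, len(d.downTracks), len(d.downTracks)+1)
	copy(downTracks, d.downTracks)
	d.downTracks = append(downTracks, dt)
}

func (d *DownTrackSpreader) Broadcast(f func(*DownTrack)) {
	d.lock.RLock()
	shadow := d.downTracks
	d.lock.RUnlock()

	for _, dt := range shadow {
		f(dt)
	}
}
```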
* Use env suggested in https://lifesaver.codes/answer/runtime-race-detector-sigabrt-or-sigsegv-on-macos-monterey-49138
* staticcheck fix; did not fail locally, but reported by CI
* use API to get down tracks
* Inject silence opus frames on mute.
If not, under certain circumstances, residual noise
seems to trigger comfort noise generation at the decoder
which is not all that comfortable.
* inject silence only when muting
* Add comment
* augment comment
* Delete blank line
* Adjust TS offset on blank frames
* Remove debug
* Do not modify `lastTS` as it affects timestamp on next switch.
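Roughly what the injection looks like (payload and helper are illustrative; `{0xf8, 0xff, 0xfe}` is a commonly used Opus silence frame):

```go
package sfu

import "github.com/pion/rtp"

// A commonly used 3-byte Opus silence payload.
var opusSilenceFrame = []byte{0xf8, 0xff, 0xfe}

// buildSilencePacket clones the header of the last forwarded packet, bumps
// SN/TS for this blank frame, and swaps in the silence payload. The TS
// offset is applied to the packet only; `lastTS` itself is left untouched
// so the timestamp on the next switch is unaffected.
func buildSilencePacket(last rtp.Header, snOffset uint16, tsOffset uint32) rtp.Packet {
	hdr := last
	hdr.SequenceNumber = last.SequenceNumber + snOffset
	hdr.Timestamp = last.Timestamp + tsOffset
	hdr.Marker = false
	return rtp.Packet{Header: hdr, Payload: opusSilenceFrame}
}
```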
* Trying more stuff for DTX
* - Use a goroutine to send blank frames
- Use duration instead of number of frames and calculate the number of
frames from it
* augment comment
* Remove debug
* Return a chan from writeBlankFrameRTP
* Use a long duration for mute
* add comment
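Sketch of the duration-based variant (the per-frame writer and logger field are assumed helpers): compute the number of 20 ms Opus frames from the mute duration, send them from a goroutine, and close the returned channel when done.

```go
func (d *DownTrack) writeBlankFrameRTP(duration time.Duration) <-chan struct{} {
	done := make(chan struct{})
	go func() {
		defer close(done)
		const frameDuration = 20 * time.Millisecond
		numFrames := int(duration / frameDuration)
		for i := 1; i <= numFrames; i++ {
			// writeBlankFrame is a hypothetical per-packet helper
			if err := d.writeBlankFrame(uint16(i)); err != nil {
				d.logger.Errorw("could not write blank frame", err)
				return
			}
			time.Sleep(frameDuration)
		}
	}()
	return done
}
```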
* Incorporate suggestion from Jie
Should be okay as that is usually only three elements max.
Stream tracker + manager send available layers. As stream trackers
run in different goroutines, need to think some more about chances
of out-of-order operations. But, making a copy in the forwarder so
that it is not referencing moving data.
Also, when setting up provisional allocation, make a copy
(that was the intention, i.e. a provisional allocation should have
stable data to work with).
Improve scalability by batching subscriber updates on an interval.
When lots of subscribers join, the server ends up spending 20% CPU
sending each state change to everyone. There's a non-trivial amount of
overhead with each send operation.
For publishers, updates are sent right away.
This probably needs more work
- Maybe some events should not be queued
- Reduce number of events
Bump up the size anyway so that the queue does not overflow
when subscribing to a lot of tracks.
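A sketch of what the batching could look like (types and names simplified): queue updates keyed by identity so the latest state wins, then flush on a ticker instead of per state change.

```go
package rtc

import (
	"sync"
	"time"

	"github.com/livekit/protocol/livekit"
)

type updateBatcher struct {
	mu      sync.Mutex
	pending map[string]*livekit.ParticipantInfo // latest update per identity
}

func newUpdateBatcher() *updateBatcher {
	return &updateBatcher{pending: make(map[string]*livekit.ParticipantInfo)}
}

func (b *updateBatcher) Queue(pi *livekit.ParticipantInfo) {
	b.mu.Lock()
	b.pending[pi.Identity] = pi
	b.mu.Unlock()
}

// Start flushes queued updates on an interval; send is the existing
// broadcast path that pushes updates to all subscribers.
func (b *updateBatcher) Start(interval time.Duration, send func([]*livekit.ParticipantInfo)) {
	go func() {
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for range ticker.C {
			b.mu.Lock()
			if len(b.pending) == 0 {
				b.mu.Unlock()
				continue
			}
			updates := make([]*livekit.ParticipantInfo, 0, len(b.pending))
			for _, pi := range b.pending {
				updates = append(updates, pi)
			}
			b.pending = make(map[string]*livekit.ParticipantInfo)
			b.mu.Unlock()
			send(updates)
		}
	}()
}
```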
* use duration to calculate number of blank frames
* log errors in blank frames write path
* Tweak screen share stream tracker.
Screen share uses an extremely low frame rate when content is static.
This leads to stream tracker changes and layer switches.
Use a longer measurement interval for screen share tracks.
Setting it to 4 seconds. The screen share frame rate is
approx. 1 fps and with two temporal layers, it should get
a frame every 2 seconds or so in every layer. But, that
does not work in practice: even a three-second measurement window was
reporting 0 rate, i.e. frames are getting clumped. A 4-second window
works fine.
Also modifying the bit rate availability change to trigger
on a temporal layer becoming available or going away.
With these changes, on a local test, not seeing any unnecessary
PLIs from the client and hence no unnecessary key frames from the publisher.
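For illustration, the measurement window could be keyed off the track source (the non-screen-share default shown here is assumed):

```go
package sfu

import (
	"time"

	"github.com/livekit/protocol/livekit"
)

// streamTrackerCycleDuration: screen share can drop to ~1 fps with static
// content, so it gets a much longer measurement cycle than camera video to
// avoid false "layer inactive" signals and the resulting PLIs/key frames.
func streamTrackerCycleDuration(source livekit.TrackSource) time.Duration {
	if source == livekit.TrackSource_SCREEN_SHARE {
		return 4 * time.Second
	}
	return 500 * time.Millisecond // assumed default for camera video
}
```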
* Fix test
A newly joining participant does not get information about
currently active speakers until there is a speaker state change.
This addresses it by sending a speaker update on subscription
if the subscribed-to participant is actively speaking.
* Improve frequency of stats update
Prometheus stats are updated as the data becomes available, instead of
being aggregated along with telemetry batches. Node availability decisions
can now react much faster to these stats.
* use the same intervals for connection quality updates
* Do not count padding packets in stream tracker.
There are cases where the publisher uses padding-only packets
in a layer to probe the channel. Treating those as valid
layer packets makes the stream tracker declare that the layer
is active when, in reality, it is not.
Also, removing the check for out-of-order packets. Out-of-order
packets can happen and should be counted in the bitrate calculation.
There is the extreme case of heavy loss which might skew it, but
that is an edge case.
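A hypothetical fragment of the tracker's packet path (field names assumed) showing both changes:

```go
import "sync/atomic"

func (s *StreamTracker) Observe(payloadSize int, pktSize int) {
	if payloadSize == 0 {
		// padding-only packet used by the publisher to probe the channel;
		// do not let it mark the layer as active
		return
	}
	// out-of-order packets are not filtered; they still count for bitrate
	atomic.AddUint32(&s.packetCount, 1)
	atomic.AddUint64(&s.bytesForBitrate, uint64(pktSize))
}
```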
* Fix test
* Stats of NACKs acked and number of repeated NACKs.
Also making a change in the delta stat to clamp negative packet loss
counts to 0. Because of windowing, this is a legitimate case:
the receiver could have seen a loss in the window we are measuring
and, in the subsequent window, gotten a retransmission that reduced
the packet loss count, resulting in a negative delta. When we report
a negative delta, it could get dropped by the analytics validator and
that data would be lost. Avoid that.
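The clamp itself is tiny; a sketch:

```go
// packetsLostDelta: due to windowing, a retransmission arriving in a later
// window can make the cumulative loss count go down; report 0 instead of a
// negative delta so the analytics validator does not drop the report.
func packetsLostDelta(prev, curr uint32) uint32 {
	if curr < prev {
		return 0
	}
	return curr - prev
}
```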
* Remove unused code
* Pick up latest protocol
* Support participant identity in permissions
It is harder for clients to update permissions by SID as a remote
reconnect means a new SID for that participant. Using participant
identity is a better option.
For now, participant SID is also supported. Internally, it will
get mapped to identity. Server code uses identity throughout after
doing any necessary conversion from SID -> Identity.
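A rough sketch of the normalization (method names assumed): accept either identity or SID from the client, resolve SID to identity once, and use identity everywhere after that.

```go
func (r *Room) resolvePermissionTarget(identity, sid string) (string, error) {
	if identity != "" {
		return identity, nil
	}
	// legacy path: map SID to identity
	for _, p := range r.GetParticipants() {
		if string(p.ID()) == sid {
			return string(p.Identity()), nil
		}
	}
	return "", errors.New("participant not found")
}
```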
* Address comments
* Remove `Head` field from `ExtPacket` structure.
Although we do not intend to, if packets get out-of-order
in the forwarding path (maybe reading in multiple goroutines
or using some worker pool to distribute packets), the `Head`
indicator could lead to wrong behaviour. It is possible that
at the receiver, the order is
- Seq Num N, Head = true
- N + 1, Head = true
If the forwarding path sees `N + 1` first, the `Head` flag
when it sees packet `N` is incorrect and will lead to incorrect
behaviour.
The alternative check is very simple. So, remove the `Head` flag.
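The replacement check is a plain sequence-number comparison with wraparound handling, independent of arrival order upstream:

```go
// isHead reports whether sn is ahead of the highest sequence number seen.
// Casting the difference to int16 handles 16-bit wraparound correctly.
func isHead(sn, highestSN uint16) bool {
	return int16(sn-highestSN) > 0
}
```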
* Remove unused field
* Use seq num offsets cache instead of missing seq num map.
Map operations can be costly. Use a fixed size array to store offsets.
4096 sequence numbers should be more than 16 seconds of 720p video,
which should be plenty to look up the offset of out-of-order packets.
Packets out-of-order more than that will probably be useless anyway.
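A sketch of the fixed-size cache (a real implementation also needs to track which slots hold valid entries):

```go
const offsetCacheSize = 4096 // ~16 seconds of 720p video

type seqOffsetCache struct {
	offsets [offsetCacheSize]uint16
}

func (c *seqOffsetCache) set(sn uint16, offset uint16) {
	c.offsets[int(sn)%offsetCacheSize] = offset
}

func (c *seqOffsetCache) get(sn uint16) uint16 {
	return c.offsets[int(sn)%offsetCacheSize]
}
```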
* Move offset cache population to only when we are forwarding
* add some debug logs
* Remove debug
* Remove callbacks queue from sfu/DownTrack
- Connection stats callback was happening in the connection stats goroutine
- RTT update launches a goroutine on the receive side as it affects the
subscriber. So, no need to queue it.
- Changed two things
o Move close handler to a goroutine. It is better that way as it touches
the subscriber as well
o Move max layer handling into a goroutine also so that the callback
does minimal work.
With this, all the send-side callback queues are removed.
* small clean up
In some implementations of the Store interface, it's possible that the
media node has not yet persisted the participant when we are trying to
send it messages. In that case, we should not error when loading the
participant.
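A sketch of the tolerant load (the store interface and error value here are assumptions, not the actual names):

```go
func loadParticipantIfPresent(ctx context.Context, s ServiceStore, room, identity string) (*livekit.ParticipantInfo, error) {
	pi, err := s.LoadParticipant(ctx, room, identity)
	if errors.Is(err, ErrParticipantNotFound) {
		// the media node may not have persisted the participant yet;
		// treat as "nothing to send" rather than an error
		return nil, nil
	}
	return pi, err
}
```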