Happens when converting quality in subscribed settings to layer.
Looks like it can happen only if the provided quality is OFF.
Don't know of any client that does that. Anyhow, prevent the out-of-range
access which causes a panic.
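A minimal sketch of such a guard, assuming a hypothetical quality-to-layer lookup table; the type and constant names are illustrative, not the repo's actual definitions:

```go
package example

// VideoQuality mirrors a subscriber-requested quality; values are illustrative.
type VideoQuality int32

const (
	VideoQualityLow VideoQuality = iota
	VideoQualityMedium
	VideoQualityHigh
	VideoQualityOff
)

const InvalidLayerSpatial = int32(-1)

// qualityToLayer is indexed by quality; OFF (or any unexpected value) must be
// rejected before indexing, otherwise the lookup panics with an out-of-range access.
var qualityToLayer = [...]int32{0, 1, 2}

// SpatialLayerForQuality converts a subscribed quality into a spatial layer index.
func SpatialLayerForQuality(q VideoQuality) int32 {
	if q < VideoQualityLow || int(q) >= len(qualityToLayer) {
		return InvalidLayerSpatial
	}
	return qualityToLayer[q]
}
```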
Switching to using session-specific TURN credentials instead of shared
credentials per Room. Also eliminates the need to load the Room from Redis
during TURN authentication.
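As a rough illustration of the idea, here is a sketch of per-session, time-limited TURN credentials derived with an HMAC over an expiring username (the shared-secret scheme popularized by coturn's "REST API"); the function and parameter names are assumptions, not LiveKit's actual code:

```go
package example

import (
	"crypto/hmac"
	"crypto/sha1"
	"encoding/base64"
	"fmt"
	"time"
)

// NewSessionTURNCredentials derives credentials bound to a participant session.
// The TURN server can verify them from the username and the shared secret
// alone, so no Room lookup (e.g. from Redis) is needed at authentication time.
func NewSessionTURNCredentials(sharedSecret, participantID string, ttl time.Duration) (username, password string) {
	expiry := time.Now().Add(ttl).Unix()
	username = fmt.Sprintf("%d:%s", expiry, participantID)

	mac := hmac.New(sha1.New, []byte(sharedSecret))
	mac.Write([]byte(username))
	password = base64.StdEncoding.EncodeToString(mac.Sum(nil))
	return username, password
}
```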
It's been reported that "ghost" participants, those that did not terminate
cleanly, hang around the room for too long after they disappear.
Evaluating our timeouts a bit, it seems that we are really conservative
in waiting for participants to disconnect. This PR cuts down the disconnect
timeout from 50s to 20s, a 30s reduction.
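For illustration only, a sketch of how such a disconnect timeout might be wired up with a per-participant timer; the types are hypothetical, only the 20s value comes from the description above:

```go
package example

import "time"

// disconnectCleanupDuration reflects the reduced timeout (was 50s).
const disconnectCleanupDuration = 20 * time.Second

type participant struct {
	closeFn func()
	timer   *time.Timer
}

// onSignalingClosed starts the countdown; a successful resume/reconnect
// should call stopCleanup to cancel it.
func (p *participant) onSignalingClosed() {
	p.timer = time.AfterFunc(disconnectCleanupDuration, p.closeFn)
}

func (p *participant) stopCleanup() {
	if p.timer != nil {
		p.timer.Stop()
	}
}
```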
* Use a bit map.
Also, duplicate packet detection is important for dropping padding-only
packets at the publisher side itself; in the last PR, it was mentioned
that it is only for stats.
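A minimal sketch of sequence-number duplicate detection with a bit map, assuming 64-bit extended sequence numbers; this is illustrative and much narrower (a 64-packet window) than a production structure would be:

```go
package example

// dedup tracks recently seen extended sequence numbers in a sliding bit map.
type dedup struct {
	initialized bool
	highest     uint64 // highest extended sequence number seen
	mask        uint64 // bit i set => (highest - i) has been seen
}

// IsDuplicate records sn and reports whether it was already seen.
func (d *dedup) IsDuplicate(sn uint64) bool {
	if !d.initialized {
		d.initialized = true
		d.highest = sn
		d.mask = 1
		return false
	}
	switch {
	case sn > d.highest:
		// Slide the window forward and mark the new highest.
		shift := sn - d.highest
		if shift >= 64 {
			d.mask = 0
		} else {
			d.mask <<= shift
		}
		d.mask |= 1
		d.highest = sn
		return false
	case d.highest-sn >= 64:
		// Too old to know; treat as not a duplicate.
		return false
	default:
		bit := uint64(1) << (d.highest - sn)
		if d.mask&bit != 0 {
			return true
		}
		d.mask |= bit
		return false
	}
}
```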
* clean up
* Update deps
* Reduce packet metadata cache - part 1
Packet metadata cache takes a good amount of space.
That cache is 8K entries deep and each entry is 8 bytes,
so it takes 64KB per RTP stream.
It is mostly needed on the downstream side to line up with receiver reports.
So, removing the cache from the upstream side (RTPStatsReceiver) as part 1.
Will look at optimising the downstream side in part 2.
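A back-of-the-envelope illustration of the cost being removed; the struct layout is hypothetical, but the sizes match the numbers above:

```go
package example

// Hypothetical 8-byte packet meta entry; the exact fields differ in the repo,
// but the arithmetic matches: 8K entries x 8 bytes = 64KB per RTP stream.
type packetMeta struct {
	extSeq    uint32
	sizeBytes uint16
	flags     uint16
}

const cacheEntries = 8 * 1024

// One such cache per RTP stream adds up quickly across publishers and layers,
// which is why the upstream (receiver) side drops it first.
type packetMetaCache struct {
	entries [cacheEntries]packetMeta // 65536 bytes
}
```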
* Remove caching from RTPStatsReceiver
* clean up a bit more
* maintain history and fix test
* Throttle packet errors/warns a bit.
In very bad network conditions, it is possible that packets
arrive out-of-order or exhibit choppy behaviour.
Use some counters and temper the logs.
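A minimal sketch of counter-based log throttling, assuming a generic logger and an arbitrary sampling interval:

```go
package example

import "log"

// throttledLogger counts occurrences and only emits every Nth warning,
// keeping logs readable under very bad network conditions.
type throttledLogger struct {
	outOfOrderCount uint64
}

func (t *throttledLogger) logOutOfOrder(sn uint16) {
	t.outOfOrderCount++
	if t.outOfOrderCount%100 == 1 {
		log.Printf("out-of-order packet, sn=%d, occurrences so far=%d", sn, t.outOfOrderCount)
	}
}
```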
* slight change in comment
Using a time passed in from outside makes for anachronistic samples in
the expected distance/bitrate measurement. So, the time has to be
snapshotted within the scorer's lock scope.
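An illustrative sketch of taking the time snapshot under the scorer's lock; the type and field names are assumptions:

```go
package example

import (
	"sync"
	"time"
)

type scorer struct {
	mu         sync.Mutex
	lastSample time.Time
	lastBytes  uint64
}

// AddBytes snapshots the clock inside the lock. Accepting a caller-supplied
// time could interleave samples out of order (anachronistic samples) and
// skew the expected distance/bitrate measurement.
func (s *scorer) AddBytes(n uint64) {
	s.mu.Lock()
	defer s.mu.Unlock()

	now := time.Now() // snapshotted within lock scope
	_ = now.Sub(s.lastSample)
	s.lastSample = now
	s.lastBytes += n
}
```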
Streaming could start after the 16-bit sequence number has rolled over.
So, that rollover base has to be added back to what is received in the
receiver report. Otherwise, it looks like no packets were received in
the window, leading to poor quality.
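A hedged sketch of adding the rollover base back when interpreting the 16-bit sequence value from a receiver report; names and widths are illustrative:

```go
package example

// extendReportedSeq rebuilds a 32-bit extended sequence number from the
// 16-bit value carried in a receiver report plus the current rollover base
// of the extended highest-sent value. Without adding the base back, a stream
// that started after a rollover appears to have received no packets in the
// window, which reads as poor quality.
func extendReportedSeq(reported uint16, extHighestSent uint32) uint32 {
	base := extHighestSent &^ 0xFFFF // current 16-bit rollover base
	ext := base | uint32(reported)
	// If the report refers to just before a rollover boundary, step back one cycle.
	if ext > extHighestSent && ext-extHighestSent > 0x8000 && base >= 1<<16 {
		ext -= 1 << 16
	}
	return ext
}
```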
Need to pass in the correct time. Previously, streaming start was
determined by another delta snapshot, which was removed for efficiency.
Did not realise that we were passing in zero time for stats.
Also, revert of the change (the part which did not re-pause) from this
PR (https://github.com/livekit/livekit/pull/2037). That change affects
other paths. The edge case it was trying to fix is rarer. Need to think
about a way which covers all cases.
* Split RTPStats into receiver and sender.
For the receiver, short types are the input and the extended type has to
be calculated. The sender (subscriber) can operate solely in the
extended type.
This makes the subscriber side a little simpler and should make it more
efficient, as it can do simple comparisons in the extended type space.
There was also an issue with the subscriber using the shorter type and
calculating the extended type: when the subscriber starts after the
publisher has already rolled over in sequence number OR timestamp, and
subsequent publisher-side sender reports are used to adjust subscriber
timestamps, they were out of whack. Using the extended type on the
subscriber side does not face that.
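A minimal sketch of what operating purely in the extended space looks like on the sender side; types and names are illustrative, not the actual sender-side implementation:

```go
package example

// senderStats operates purely on extended (64-bit) sequence numbers handed
// over by the sequencer, so no 16-bit unwrapping happens here.
type senderStats struct {
	initialized bool
	extStart    uint64 // extended sequence number of the first packet sent
	extHighest  uint64 // extended highest sequence number sent
}

func (s *senderStats) Update(extSeq uint64) {
	if !s.initialized {
		s.initialized = true
		s.extStart = extSeq
		s.extHighest = extSeq
		return
	}
	if extSeq > s.extHighest {
		s.extHighest = extSeq
	}
}

// PacketsExpected is a plain subtraction in the extended space.
func (s *senderStats) PacketsExpected() uint64 {
	if !s.initialized {
		return 0
	}
	return s.extHighest - s.extStart + 1
}
```

With everything already extended, comparisons and packet-count math are plain integer operations, and there is no unwrapping step that can go wrong when the subscriber starts after a rollover.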
* fix test
* extended types from sequencer
* log
* Fix timestamp adjustment when starting with dummy packets.
- Populate extended values in ExtPacket on the dummy packet.
- Have to pass the reference timestamp offset to the first packet time
  adjustment.
* display participant version info
* Sequencer small optimisations
1. Use a range map to exclude padding-only packets. Should take less
   space as we are not using a slice to hold pointers to actual data
   (sketched below).
2. Avoid `time.Now()` when adding each packet. Just use the arrival time
   as it should be close enough. `time.Now()` was showing up in the
   profile.
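A sketch of optimisation 1 with hypothetical types: padding-only packets recorded as contiguous sequence ranges rather than per-packet entries:

```go
package example

// seqRange is a half-open range [start, end) of extended sequence numbers.
type seqRange struct{ start, end uint64 }

// paddingRanges records padding-only packets as contiguous ranges, so a long
// padding burst costs two integers instead of one pointer per packet.
type paddingRanges struct {
	ranges []seqRange
}

func (p *paddingRanges) ExcludePadding(extSeq uint64) {
	if n := len(p.ranges); n > 0 && p.ranges[n-1].end == extSeq {
		p.ranges[n-1].end = extSeq + 1 // extend the last range when contiguous
		return
	}
	p.ranges = append(p.ranges, seqRange{start: extSeq, end: extSeq + 1})
}

func (p *paddingRanges) IsPadding(extSeq uint64) bool {
	for _, r := range p.ranges {
		if extSeq >= r.start && extSeq < r.end {
			return true
		}
	}
	return false
}
```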
* remove debug
* correct comment
Profiling showed jitter updates going through the snapshot maps.
With one of them removed, there should be only one snapshot,
and hopefully that gains some cycles back.
* Cache extended highest.
Prevents calculating extended highest on every update to populate
PreExtendedHighest in the result.
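For illustration, a sketch of carrying the cached value forward and reporting it as the pre-update value; apart from PreExtendedHighest, the names are hypothetical:

```go
package example

type updateResult struct {
	PreExtendedHighest uint64
}

type rtpStats struct {
	extHighest uint64 // cached extended highest, maintained incrementally
}

// Update reports the cached pre-update value instead of recomputing the
// extended highest from scratch on every packet.
func (r *rtpStats) Update(extSeq uint64) updateResult {
	res := updateResult{PreExtendedHighest: r.extHighest}
	if extSeq > r.extHighest {
		r.extHighest = extSeq
	}
	return res
}
```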
* remove incorrect comment