* Introducing OpsQueue
Creating a PR to get feedback on standardizing on this concept.
It can be used for callbacks. A couple of places already use this
construct, so wondering if we should standardize on it across the
board. This PR changes only one place to use the new struct; another
place I know of that uses this pattern is the telemetry package.
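For reference, a minimal sketch of the pattern, using hypothetical names (`OpsQueue`, `Enqueue`, `Stop`) rather than the actual struct: callbacks are serialized onto a single goroutine so callers never block and ops run in order.

```go
package utils

// OpsQueue serializes callbacks onto a single goroutine so callers
// never block and operations run in the order they were enqueued.
type OpsQueue struct {
	ops  chan func()
	stop chan struct{}
}

func NewOpsQueue(size int) *OpsQueue {
	oq := &OpsQueue{
		ops:  make(chan func(), size),
		stop: make(chan struct{}),
	}
	go oq.process()
	return oq
}

// Enqueue queues an op; it drops the op if the queue is full so the
// caller is never blocked. Dropping vs. blocking is a policy choice.
func (oq *OpsQueue) Enqueue(op func()) {
	select {
	case oq.ops <- op:
	default:
	}
}

func (oq *OpsQueue) Stop() {
	close(oq.stop)
}

func (oq *OpsQueue) process() {
	for {
		select {
		case op := <-oq.ops:
			op()
		case <-oq.stop:
			return
		}
	}
}
```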
* atomic flag -> bool
* Fix no-video with adaptive streaming
With a recent change to initialize max quality for subscriber
synchronously, a subsequent update at the same quality was
getting ignored. So, there was no message back to publisher
to start up video layers. Reproducible every time the subscriber
joined after all the layers of publishers was turned off.
While not pretty, for now, disable the check for quality match
on subscriber update. That disabling itself is fine as there is
another check for consolidated quality match before sending
a message to the publisher, but in general this area has shown
some shakiness and needs some work.
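To make the reasoning concrete, an illustrative sketch (all names hypothetical, not the actual code) of why dropping the per-subscriber short-circuit is safe: the consolidated check still deduplicates messages to the publisher.

```go
package sfu

type VideoQuality int32

type publisher struct {
	subs         []*subscriber
	lastNotified VideoQuality
}

type subscriber struct {
	maxQuality VideoQuality
	pub        *publisher
}

// Previously the update was short-circuited when the quality matched,
// which swallowed the very first update after synchronous init:
//
//     if q == s.maxQuality { return }
//
func (s *subscriber) UpdateMaxQuality(q VideoQuality) {
	s.maxQuality = q
	s.pub.onSubscriberUpdate()
}

// The consolidated check still prevents redundant publisher messages.
func (p *publisher) onSubscriberUpdate() {
	consolidated := VideoQuality(0)
	for _, sub := range p.subs {
		if sub.maxQuality > consolidated {
			consolidated = sub.maxQuality
		}
	}
	if consolidated == p.lastNotified {
		return // consolidated quality match: nothing to send
	}
	p.lastNotified = consolidated
	// send quality update to the publisher here
}
```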
* Use notify function to set initial quality also
With screen sharing in Chrome 97, static content sends one packet
a second. The layer 0 stream tracker was configured to expect 2 packets
per second. Make it very relaxed so that one packet in two seconds
declares layer 0 active.
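As an illustration (the real tracker's knobs may be named differently), the change amounts to relaxing the layer-0 parameters roughly like this:

```go
package sfu

import "time"

// StreamTrackerParams is illustrative; the actual tracker's fields
// may differ.
type StreamTrackerParams struct {
	SamplesRequired uint32        // packets required within a cycle
	CyclesRequired  uint64        // cycles needed to declare the layer active
	CycleDuration   time.Duration
}

// Before: layer 0 expected 2 packets per second.
var layer0Old = StreamTrackerParams{
	SamplesRequired: 2,
	CyclesRequired:  1,
	CycleDuration:   time.Second,
}

// After: one packet in two seconds declares layer 0 active, which
// tolerates Chrome 97 screen share sending ~1 packet/second for
// static content.
var layer0New = StreamTrackerParams{
	SamplesRequired: 1,
	CyclesRequired:  1,
	CycleDuration:   2 * time.Second,
}
```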
* Consolidating PLI throttle
Use the throttler in `sfu.WebRTCReceiver`.
This changes the shape of the config object.
* Move PLIThrottleConfig to sfu.WebRTCReceiver
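A sketch of what a consolidated per-SSRC throttle might look like inside the receiver; the `PLIThrottleConfig` field names here are assumptions about the config shape, not the verified one:

```go
package sfu

import (
	"sync"
	"time"
)

// PLIThrottleConfig: minimum interval between PLIs per quality tier.
// Field names are illustrative.
type PLIThrottleConfig struct {
	LowQuality  time.Duration `yaml:"low_quality"`
	MidQuality  time.Duration `yaml:"mid_quality"`
	HighQuality time.Duration `yaml:"high_quality"`
}

type pliThrottle struct {
	mu      sync.Mutex
	lastPLI map[uint32]time.Time // keyed by SSRC
}

// canSend reports whether a PLI may be sent for this SSRC now,
// recording the send time when it is allowed.
func (t *pliThrottle) canSend(ssrc uint32, minInterval time.Duration) bool {
	t.mu.Lock()
	defer t.mu.Unlock()
	if t.lastPLI == nil {
		t.lastPLI = make(map[uint32]time.Time)
	}
	if last, ok := t.lastPLI[ssrc]; ok && time.Since(last) < minInterval {
		return false
	}
	t.lastPLI[ssrc] = time.Now()
	return true
}
```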
* fix test compile
* Cleaning up unused stuff
* improve readability
* RTT
- Calculate down track RTT using RTCP receiver reports
- Surface it back to the participant
- The participant updates all of its published tracks
(throttled to limit updates to once every 5 seconds)
- That propagates to all the upstream sfu.Buffer instances and the
nacker. So, we will have RTT-throttled NACKs.
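The RTT calculation from a receiver report is the standard RFC 3550 one: RTT = arrival − LSR − DLSR, all as 16.16 fixed-point seconds. A self-contained sketch (function names are illustrative; the formula itself is from the RFC):

```go
package sfu

import "time"

// toNtpMiddle32 returns the middle 32 bits of the 64-bit NTP
// timestamp for t (seconds.fraction in 16.16 fixed point).
func toNtpMiddle32(t time.Time) uint32 {
	const ntpEpochOffset = 2208988800 // seconds from 1900 to 1970
	secs := uint64(t.Unix()) + ntpEpochOffset
	frac := (uint64(t.Nanosecond()) << 32) / uint64(time.Second)
	return uint32((secs<<32 | frac) >> 16)
}

// rttFromReceiverReport computes RTT per RFC 3550 from the LastSR and
// DLSR fields of a receiver report block and the report's arrival time.
func rttFromReceiverReport(lastSR, dlsr uint32, arrival time.Time) time.Duration {
	if lastSR == 0 {
		return 0 // remote end has not seen a sender report yet
	}
	delta := toNtpMiddle32(arrival) - lastSR - dlsr
	seconds := float64(delta) / 65536.0 // 16.16 fixed point -> seconds
	return time.Duration(seconds * float64(time.Second))
}
```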
* rtt callback
* readability improvement
RTCP messages now go through two channel hops. Maybe we don't need
that anymore now that the original problem has been diagnosed but,
for now, pushing all RTCP via the callbackOps channel to keep it
consistent.
SRTP replay detection was disabled recently, but the retransmitted
packets were still effectively getting dropped in `sfu.bucket`.
This PR does two things with RTX packets (sketched after this list):
1. Update stats - add to packet count and bytes
2. Process the header extension - to feed TWCC
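A hedged sketch of what those two steps could look like in the RTX path; the stats fields, `rtxHandler` type, and the TWCC hookup are assumptions for illustration, not the actual buffer code:

```go
package sfu

import (
	"encoding/binary"
	"time"

	"github.com/pion/rtp"
)

type twccResponder interface {
	Push(sn uint16, timeNS int64, marker bool)
}

type rtxStats struct {
	packets uint64
	bytes   uint64
}

type rtxHandler struct {
	stats     rtxStats
	twccExtID uint8
	twcc      twccResponder
}

// handleRTX counts the retransmission in the stats and feeds its
// transport-wide sequence number to TWCC instead of dropping it.
func (h *rtxHandler) handleRTX(pkt *rtp.Packet, size int) {
	// 1. update stats: packet count and bytes
	h.stats.packets++
	h.stats.bytes += uint64(size)

	// 2. process the transport-wide CC header extension for TWCC
	if ext := pkt.GetExtension(h.twccExtID); len(ext) >= 2 {
		sn := binary.BigEndian.Uint16(ext[:2])
		h.twcc.Push(sn, time.Now().UnixNano(), pkt.Marker)
	}
}
```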
* WIP commit
* fix test
* More clean up
* cast to right size
* use local variable
* set a default RTT
* Do not log RTX, although we still need to figure out the source of the RTX
* fix test
* Cleaning/simplifying some buffer bits
1. NACKs are always inserted in order. So, get rid of a
bunch of out-of-order handling in there and simplify.
2. For now, remove triggering a key frame from NACKs.
Let subscribers drive it.
3. Move to 16-bit sequence numbers except for receiver
report handling. Simplify the bits that unwrap sequence
numbers on all packets (see the sketch after this list).
4. Remove unused code.
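Sequence number unwrapping is now kept only where ordering across wrap-around matters (receiver report handling). A generic sketch of the technique, with illustrative names:

```go
package sfu

// seqUnwrapper extends 16-bit RTP sequence numbers into a
// monotonically comparable 64-bit value across wrap-arounds.
type seqUnwrapper struct {
	initialized bool
	last        uint16
	cycles      int64
}

func (u *seqUnwrapper) Unwrap(sn uint16) int64 {
	if !u.initialized {
		u.initialized = true
		u.last = sn
		return int64(sn)
	}
	gap := int16(sn - u.last) // signed gap handles wrap-around
	if gap > 0 && sn < u.last {
		u.cycles++ // wrapped forward past 65535
	} else if gap < 0 && sn > u.last {
		u.cycles-- // out-of-order packet from before a wrap
	}
	u.last = sn
	return u.cycles<<16 + int64(sn)
}
```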
* remove unused field
* Random clean up
* Split out StreamTrackerManager for re-use
* Reset the correct tracker
* use generation counter to exit goroutine
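The generation-counter pattern in a nutshell (names here are illustrative): each start bumps a counter, and the worker goroutine exits as soon as its generation is stale, which avoids juggling per-goroutine stop channels.

```go
package sfu

import (
	"sync/atomic"
	"time"
)

type tracker struct {
	generation uint32 // accessed atomically
}

// restart invalidates any running worker and starts a new one.
func (t *tracker) restart() {
	gen := atomic.AddUint32(&t.generation, 1)
	go t.worker(gen)
}

// stop invalidates the running worker without starting a new one.
func (t *tracker) stop() {
	atomic.AddUint32(&t.generation, 1)
}

func (t *tracker) worker(gen uint32) {
	ticker := time.NewTicker(100 * time.Millisecond)
	defer ticker.Stop()
	for range ticker.C {
		if atomic.LoadUint32(&t.generation) != gen {
			return // superseded or stopped
		}
		// ... periodic work ...
	}
}
```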
* start only for video and when enabled
* Add RemoveAllTrackers method
* Use an atomic flag to stop stream allocator
* use a mutex for event channel
* RLock while posting event
* lock isStopped flag to prevent posting to closed channel
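Taken together, the last few commits converge on a classic Go pattern, sketched here with hypothetical names: posters take the read lock and check the flag, while Stop takes the write lock before flipping the flag and closing the channel, so a post can never race with the close.

```go
package sfu

import "sync"

type event struct{ /* ... */ }

type streamAllocator struct {
	lock      sync.RWMutex
	isStopped bool
	events    chan event
}

// postEvent holds the read lock so Stop cannot close the channel
// while a send is in flight.
func (s *streamAllocator) postEvent(e event) {
	s.lock.RLock()
	defer s.lock.RUnlock()
	if s.isStopped {
		return // channel closed (or closing); drop the event
	}
	select {
	case s.events <- e:
	default:
		// queue full; dropping avoids blocking under the lock
	}
}

func (s *streamAllocator) Stop() {
	s.lock.Lock()
	defer s.lock.Unlock()
	if s.isStopped {
		return
	}
	s.isStopped = true
	close(s.events)
}
```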