Seeing a time stamp jump that I am not able to explain.
It looks like the time stamp doubles at some point, but there
is no code that doubles it. An erroneous roll over/wrap around
would be understandable, but doubling is very strange.
So, logging only audio packets. Will disable as soon
as I have some samples from canary.
Seeing some unexplained jumps in sender report time stamp
in canary. Wonder if the calculated clock rate is way off
during some interval. Logging clock deviations to understand
better.
* More fine-grained filtering of NACKs after a key frame.
There are applications with periodic key frames.
With filtering, a packet lost before a key frame will not be retransmitted.
But, the decoder could wait on it (jitter buffer, play out time) and cause
a stutter.
The idea behind disabling NACKs after a key frame was another knob to
throttle retransmission bit rate. But, with spaced out retransmissions
and max retransmissions per sequence number, there are already throttles.
This would provide more throttling, but affects some applications.
So, disabling filtering of NACKs after a key frame.
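Roughly, the check being disabled looks like this (a minimal sketch with
hypothetical names; the 16-bit wrap-around comparison is an assumption,
not the actual implementation):

```go
package sketch

// nackFilter decides whether to honour a NACK for a given sequence
// number. All names here are hypothetical.
type nackFilter struct {
	lastKeyFrameSeq     uint16
	filterAfterKeyFrame bool // the knob this change disables
}

// shouldRetransmit returns false only when key frame filtering is on
// and the NACKed packet pre-dates the last key frame.
func (f *nackFilter) shouldRetransmit(seqNum uint16) bool {
	if !f.filterAfterKeyFrame {
		return true
	}
	// seqNum - lastKeyFrameSeq < 1<<15 means seqNum is at or after
	// the key frame modulo 16-bit wrap-around
	return seqNum-f.lastKeyFrameSeq < 1<<15
}
```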
Introducing another flag to disallow higher layer retransmissions.
This would still be quite useful, i.e. under congestion the stream
allocator would move the target lower. But, because of congestion, the
higher layer would have lost a bunch of packets. The client would NACK
those, and retransmitting those higher layer packets would congest the
channel further. The new flag (default enabled) disallows higher layer
retransmission. This was happening before this change also; just
splitting out the flag for more control.
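A minimal sketch of the new gate (hypothetical names; where the
allocator-driven target layer plugs in is an assumption):

```go
package sketch

// retransmitGate gates retransmission on the packet's layer relative
// to the currently allocated target layer.
type retransmitGate struct {
	disallowHigherLayers bool // the new flag, default enabled
	targetLayer          int  // moved lower by the stream allocator under congestion
}

func (g *retransmitGate) allow(packetLayer int) bool {
	if g.disallowHigherLayers && packetLayer > g.targetLayer {
		// retransmitting packets from layers above the target would
		// only congest the channel further
		return false
	}
	return true
}
```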
* split flag
* Log skew in clock rate.
Remember seeing the sender report time stamp moving backward
across a mute with replaceTrack(null). Not able to reproduce
it in the JS sample app, but have seen it elsewhere.
Logging to understand it better. Wondering if the sender report
should be reset on the time stamp moving backward or if we should
drop backward moving reports.
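The second option might look something like this (a sketch with
hypothetical names, assuming a half-range comparison to detect backward
movement):

```go
package sketch

// srGuard drops sender reports whose RTP time stamp moves backward.
type srGuard struct {
	lastRTPTimestamp uint32
	initialized      bool
}

func (g *srGuard) accept(rtpTimestamp uint32) bool {
	if g.initialized && rtpTimestamp-g.lastRTPTimestamp >= 1<<31 {
		// a difference of half the 32-bit range or more is treated
		// as backward movement modulo wrap-around: drop the report
		return false
	}
	g.lastRTPTimestamp = rtpTimestamp
	g.initialized = true
	return true
}
```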
* set threshold at 20%
* Error log of padding updating highest time to get backtrace.
* Do not update highest time on padding packet.
Padding packets use time stamp of last packet sent.
Padding packets could be sent when probing much after last packet
was sent. Updating highest time on that screws up sender report
calculations. We have ways of making sure sender reports do not
get too out-of-whack, but it logs during that repair.
That repair should be unnecessary unless the source is behaving weird
(things like publisher sending all packets at the same time, publisher
sample rate is incorrect, etc.)
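A minimal sketch of the rule, hypothetical names throughout:

```go
package sketch

type streamState struct {
	highestTimestamp uint32
	started          bool
}

// onPacketSent updates the highest sent time stamp, skipping padding
// packets since they reuse the time stamp of the last media packet
// and may be sent (e.g. by a prober) long after it.
func (s *streamState) onPacketSent(timestamp uint32, isPadding bool) {
	if isPadding {
		return // do not let stale padding time stamps skew sender reports
	}
	if !s.started || timestamp-s.highestTimestamp < 1<<31 {
		s.highestTimestamp = timestamp
		s.started = true
	}
}
```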
* Log highest time update on padding packet.
Seeing a strange case of what looks like highest time getting
updated on a padding packet. Can't see how it happens in the code.
So, logging to check. Will remove the log after checking.
* log sequence number also
* Use 32-bit time stamp to get reference time stamp on a switch.
With relay and dyncast and migration, it is possible that different
layers of a simulcast get out of sync in terms of the extended
timestamp, i.e. layer 0 could keep running and its timestamp could have
wrapped around and bumped the extended timestamp, while another layer
could start and stop.
One possible solution is sending the extended timestamp across the relay.
But, that breaks down during migration if the publisher has started afresh
while the subscriber is still using the extended range.
So, use the 32-bit timestamp to infer the reference timestamp and patch it
with the expected extended time stamp to derive the extended reference.
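In code, the patching could look roughly like this (a sketch; the
half-range adjustment is an assumed way of picking the nearest cycle):

```go
package sketch

// extendTimestamp combines the 32-bit time stamp from the wire with
// the expected extended time stamp to derive an extended reference.
func extendTimestamp(ts32 uint32, expectedExt uint64) uint64 {
	cycles := expectedExt &^ uint64(0xFFFFFFFF) // high bits = wrap count
	ext := cycles | uint64(ts32)
	// if the naive combination lands more than half a wrap away from
	// the expected value, a wrap boundary was crossed: adjust a cycle
	if ext > expectedExt && ext-expectedExt > 1<<31 {
		ext -= 1 << 32
	} else if expectedExt > ext && expectedExt-ext > 1<<31 {
		ext += 1 << 32
	}
	return ext
}
```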
* use calculated value
* make it test friendly
When a room is created via room service and `StartSession`
runs, it sees a closed request source, returns an error,
and that gets logged. It is not a real error.
Defer the sink and source close so that room creation can finish without
errors.
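A minimal sketch of the shape of the fix (hypothetical signature):

```go
package sketch

import "io"

// serveRoomRequest defers closing the request sink and source so that
// StartSession does not observe a closed source mid-flight.
func serveRoomRequest(sink, source io.Closer, startSession func() error) error {
	defer source.Close() // closed only after the session has finished starting
	defer sink.Close()
	return startSession()
}
```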
* Fix ICE connection fallback
Short connection detection relied on iceFailedTimeout, which previously
had been misinterpreted. Since we've reduced iceFailedTimeout, it is
creating false negatives.
We'll instead use PingTimeout since clients are expected to keep the
signal connection active.
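Conceptually (a sketch; names are assumptions):

```go
package sketch

import "time"

// isShortConnection classifies a connection using the signalling ping
// timeout rather than the (now much shorter) ICE failed timeout, since
// clients are expected to keep the signal connection alive with pings.
func isShortConnection(connectedAt time.Time, pingTimeout time.Duration) bool {
	return time.Since(connectedAt) < pingTimeout
}
```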
* reduce ping interval to align with total ice failure timeout
Seeing a large positive gap which I am not able to explain.
Wondering if a large negative gap is happening at some other time
and the large positive one is just a correction.
* Prevent old packets resolution.
With range map, we are just looking up ranges and not exactly
which packets were missing. This caused the case of old packets
being resolved after layer switch.
For example,
- Packet 10 is layer switch, range map gets reset
- Packet 11, 12, 13 are forwarded
- Packet 9 comes; ideally it should be dropped as a pre-layer-switch old
packet. But, when looking up the range map, it gets an offset and hence
gets re-mapped to something before the layer switch. This was probably
okay, as decoders would have had a key frame at the switch point and
moved ahead, but it is technically incorrect.
Fix is to reset the start point in the range map to the switch point
and not 0. So, when packet 9 comes, range map will return "key too old"
error and that packet will be dropped as missing from cache.
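A sketch of the fixed behaviour (hypothetical names; the real range map
tracks multiple ranges, simplified here to a single one):

```go
package sketch

import "errors"

var errKeyTooOld = errors.New("key too old")

// rangeMap resolves incoming sequence numbers to outgoing ones.
type rangeMap struct {
	start  uint64 // reset to the switch point, not 0
	offset uint64
}

// reset is called on a layer switch with the switch packet's number;
// anything older must no longer resolve.
func (r *rangeMap) reset(switchPoint, offset uint64) {
	r.start = switchPoint
	r.offset = offset
}

func (r *rangeMap) lookup(key uint64) (uint64, error) {
	if key < r.start {
		// e.g. packet 9 arriving after the switch at packet 10:
		// report it as too old so it is dropped as missing from cache
		return 0, errKeyTooOld
	}
	return key + r.offset, nil
}
```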
* fix tests
* Log potential sequence number de-sync in receiver reports.
Seeing some cases of a roll over being missed. That ends up
as a largish range to search in an interval and reports missing packets
in the packet metadata cache.
Logging some details.
* just log in one place
Ideally, can remove the nil return when there are too many packets,
as we have more information with extended sequence numbers, but
logging the duration first to better understand what is happening.
When converting from RED -> Opus, if there is a loss, the SFU recovers
that loss if it can, using a subsequent redundant packet. That path
was not setting the extended sequence number properly.
Also, ensuring use of the monotonic clock for the first packet time
adjustment.
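For the sequence number part, the derivation is roughly (a sketch with
hypothetical names):

```go
package sketch

// recoveredExtSeq derives the extended sequence number of a packet
// recovered from RED redundancy: the redundant block sits `distance`
// packets behind the carrier packet.
func recoveredExtSeq(carrierExtSeq uint64, distance uint16) uint64 {
	return carrierExtSeq - uint64(distance)
}
```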
* Cap expected packets to padding diff.
On the receiver, no longer using packet metadata cache to calculate
interval stats. An optimisation to get rid of packet metadata cache
on receiver side.
Because of that, padding packets in an interval could outnumber
expected packets. As padding packets are tracked with just a counter,
out-of-order padding packets can make the diff look larger than the
expected packets in a window. Cap the expected count at 0.
NOTE: This makes the count inaccurate in such a window,
but that is okay occasionally. It will affect reported stats and quality
calculations, but it should be rare. For example, if 30 packets were
received in a window and 60 out-of-order padding packets were received,
it would be reported as 0 packets received. One option is to not
increment the padding packet counter when they are out-of-order, but
that would mess up overall stats. Will make that change if we see this
happen a lot.
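A minimal sketch of the cap (hypothetical names):

```go
package sketch

// expectedMediaPackets removes the padding counter diff from the
// expected packet count in a window, flooring at zero because
// out-of-order padding can make the diff exceed the expected count.
func expectedMediaPackets(expected, paddingDiff uint32) uint32 {
	if paddingDiff > expected {
		return 0
	}
	return expected - paddingDiff
}
```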
* log unexpected padding packets
* Log resync at Infow.
Seeing potentially large sequence number jumps on a resync.
And it seems to happen around a lot of subscribe/unsubscribe activity.
Logging at Infow to understand better.
Probably need to find a way to avoid resync. But, logging for now to
check if I can catch one.
* Remove resync and log large sequence number jumps
Sending a single PLI on connected & bound meant that the upstream
throttler may not have sent it, leaving downstream without a key frame
to lock onto. This caused some e2e test failures due to the limited
lifetime of the track.
There are cases that experience the signalling channel timeout
and disconnect, with no logs of the ICE state at all.
Log ICE candidates when closing the transport so that there is some
visibility in those cases.
Logging expected WS close at Infow to understand reasons for closure.
Moving "read from ws" to Debugw as it happens when signalling closes.
Also filter out a data channel abort chunk log as it shows a bunch of
errors that are expected.
* Use marshal + unmarshal to ensure unmarshallable fields are not copied.
Need to ensure that config structs/fields are marshallable.
There was an `a = b` struct copy, and some of the embedded structs
had locks; copying those was not good.
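A sketch of the pattern, assuming JSON marshalling (field names
hypothetical):

```go
package sketch

import (
	"encoding/json"
	"sync"
)

type config struct {
	mu    sync.Mutex // unexported, never marshalled, so never copied
	Limit int        `json:"limit"`
}

// copyConfig deep-copies via marshal + unmarshal; unlike `dst = *src`,
// the lock in dst stays a fresh zero value.
func copyConfig(src *config) (*config, error) {
	data, err := json.Marshal(src)
	if err != nil {
		return nil, err
	}
	dst := &config{}
	if err := json.Unmarshal(data, dst); err != nil {
		return nil, err
	}
	return dst, nil
}
```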
* update protocol
* Update deps
Happens when converting quality in subscribed settings to a layer.
Looks like it can happen only if the provided quality is OFF.
Don't know of any client that does that. Anyhow, prevent the
out-of-range access, which causes a panic.
Switching to using session specific TURN credentials instead of shared
credentials per Room. This also eliminates the need to load the Room
from Redis during TURN authentication.
It's been reported that "ghost" participants, those that did not terminate
cleanly, hang around the room for too long after they disappear.
Evaluating our timeouts a bit, it seems that we are really conservative
in waiting for participants to disconnect. This PR cuts down the disconnect
timeout from 50s to 20s, a 30s reduction.