* Rework receiver restart.
- Protect against concurrent restarts (see the sketch after this list)
- Clean up and consolidate code around restarts
- Use the buffer's `RestartStream` rather than creating new buffers.
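A rough sketch of the concurrent-restart guard (type, field and function
names here are hypothetical, not the actual receiver code):

```go
package main

import (
	"fmt"
	"sync"
)

// streamReceiver is a hypothetical receiver holding a buffer that can be
// restarted; only the guard pattern matters here.
type streamReceiver struct {
	mu         sync.Mutex
	restarting bool
}

// restart runs at most one restart at a time; concurrent callers are coalesced.
func (r *streamReceiver) restart() {
	r.mu.Lock()
	if r.restarting {
		// a restart is already in flight; do not stack another one
		r.mu.Unlock()
		return
	}
	r.restarting = true
	r.mu.Unlock()

	defer func() {
		r.mu.Lock()
		r.restarting = false
		r.mu.Unlock()
	}()

	// restart the existing buffer in place rather than allocating a new one,
	// i.e. the buffer's RestartStream in the real code
	fmt.Println("restarting stream")
}

func main() {
	r := &streamReceiver{}
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func() { defer wg.Done(); r.restart() }()
	}
	wg.Wait()
}
```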
* fix test
* Minor refactor in buffer base and audio level
- Add a `restartStream` function. It will be useful when an external
signal needs to restart a stream. It also restarts all the components
(audio level, DD parser and frame rate calculator); see the sketch
after this list.
- Add an audio level mode that uses the RTP timestamp so that some
state can be moved out of the buffer base.
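A minimal sketch of what `restartStream` consolidates, assuming
hypothetical component types (the real audio level, DD parser and frame
rate calculator have different APIs):

```go
package main

import "fmt"

// Hypothetical component types; the real buffer base tracks an audio level
// meter, a dependency descriptor (DD) parser and a frame rate calculator.
type audioLevel struct{ lastRTPTimestamp uint32 }
type ddParser struct{}
type frameRateCalculator struct{}

// The audio level keeping its own RTP timestamp is what lets that state move
// out of the buffer base.
func (a *audioLevel) Reset()          { a.lastRTPTimestamp = 0 }
func (d *ddParser) Reset()            {}
func (f *frameRateCalculator) Reset() {}

type bufferBase struct {
	audioLevel *audioLevel
	ddParser   *ddParser
	frameRate  *frameRateCalculator
}

// restartStream resets every per-stream component in one place so that an
// external signal (or the unhandled-packet path) can trigger a clean restart.
func (b *bufferBase) restartStream() {
	b.audioLevel.Reset()
	b.ddParser.Reset()
	b.frameRate.Reset()
	fmt.Println("stream restarted")
}

func main() {
	b := &bufferBase{&audioLevel{}, &ddParser{}, &frameRateCalculator{}}
	b.restartStream()
}
```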
* clean up
* log restart
This is not all of it, as it is not possible (or at least I do not
know of a way) to get all suggestions for a repo/project. Did this
mainly by searching in a loop and taking the modernize suggestions.
* Resolve RTX pair via OnTrack also.
In the simulcast probing path, the interceptor chain is not invoked
for the primary stream. Not sure if this is a recent change. Due to
this, the RTX pair does not get resolved.
Use the OnTrack callback to resolve the pair.
* remove debug
* Return extended sequence number only and not packet.
Callers need only the extended sequence number.
The extended packet could get released if the forwarder processes it
before the caller accesses it, causing a data race.
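A sketch of the change, with hypothetical names; the point is that only
the extended sequence number crosses the call boundary, never the pooled
packet itself:

```go
package main

import "fmt"

// extPacket is a hypothetical pooled packet wrapper; it may be released back
// to its pool as soon as the forwarder is done with it.
type extPacket struct {
	extSequenceNumber uint64
}

// Before: returning the packet itself invites a use-after-release data race.
// After: return only the extended sequence number, which is all callers need.
func processPacket(pkt *extPacket) uint64 {
	extSN := pkt.extSequenceNumber
	// pkt can now be handed to the forwarder / released safely
	return extSN
}

func main() {
	fmt.Println(processPacket(&extPacket{extSequenceNumber: 1 << 17}))
}
```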
* grow bucket in a goroutine
* Refactor receiver and buffer into Base and higher layer.
To be able to share code/functionality with relay.
* WIP
* WIP
* WIP
* WIP
* WIP
* WIP
* WIP
* WIP
* clean up
* deps
* fix test
* fix test
* Store buffer after creating it.
Also change the signature of the creator function as it could call
TrackInfo() and get into a deadlock.
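A sketch of the signature change, with hypothetical types; the creator
receives the track info as an argument instead of calling a locked
getter:

```go
package main

import (
	"fmt"
	"sync"
)

// Hypothetical types; the real code stores buffers per track on a receiver.
type trackInfo struct{ ID string }
type buffer struct{ info trackInfo }

type receiver struct {
	mu      sync.Mutex
	info    trackInfo
	buffers []*buffer
}

// addBuffer calls the creator with r.mu held. Passing the track info in as an
// argument (instead of letting the creator call a locked TrackInfo() getter)
// avoids re-entering the mutex and deadlocking.
func (r *receiver) addBuffer(create func(trackInfo) *buffer) {
	r.mu.Lock()
	defer r.mu.Unlock()

	b := create(r.info)              // creator never calls back into the locked receiver
	r.buffers = append(r.buffers, b) // store the buffer only after creating it
}

func main() {
	r := &receiver{info: trackInfo{ID: "TR_demo"}}
	r.addBuffer(func(ti trackInfo) *buffer { return &buffer{info: ti} })
	fmt.Println(len(r.buffers))
}
```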
* fix double unlock
* add some more debug logging
* Some logging changes.
Trying to chase a case of large sequence number gap on subscriber side
where packets are sent after a long time.
* return values instead of logging
* Add support for RTP stream restart.
When an unhandled packet is encountered, try a restart sequence.
A restart happens when 5 packets with contiguous sequence numbers and
the same or increasing timestamps are received. Note that this does
not work for B-frame type scenarios, but that was true for the receive
path handling even before this. As WebRTC does not use B-frames, it is
fine. But this needs to be looked at again if B-frames become
necessary.
It is controlled by a config that is disabled by default.
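A sketch of the detection, assuming hypothetical names and a simplified
timestamp comparison (the real code would use wrap-safe comparisons):

```go
package main

import "fmt"

const restartThreshold = 5 // contiguous packets needed to declare a restart

// restartDetector sketches the restart sequence: once an unhandled packet is
// seen, count packets whose sequence numbers are contiguous and whose
// timestamps do not decrease; after 5 such packets, treat the stream as
// restarted.
type restartDetector struct {
	count  int
	lastSN uint16
	lastTS uint32
}

// observe returns true when enough contiguous packets have been seen.
func (d *restartDetector) observe(sn uint16, ts uint32) bool {
	if d.count == 0 || sn != d.lastSN+1 || ts < d.lastTS {
		// not contiguous, or timestamp went backwards (B-frame style
		// reordering, which this detection does not handle) - start over
		d.count = 1
	} else {
		d.count++
	}
	d.lastSN, d.lastTS = sn, ts
	return d.count >= restartThreshold
}

func main() {
	d := &restartDetector{}
	restarted := false
	for i := 0; i < restartThreshold; i++ {
		restarted = d.observe(uint16(1000+i), 90000)
	}
	fmt.Println("restart detected:", restarted)
}
```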
* clean up
* debug log
It is possible that the stream stops just after starting and restarts
much later, introducing a large gap in sequence numbers. That could
look like an unhandled case because the wrap back handler does not
have enough packets yet.
Let other checks based on the timestamp gap take effect, and only if
those also leave the sequence number unhandled, drop the packet.
* switch participant-to-room callbacks to a listener interface
* mage generate
* clean up
* clear listener
* clean up
* use interface in up data track manager
* tweaks
* Paul feedback - should reduce the diff as this keeps the room handlers as-is, except for making methods out of a couple of anonymous handlers
* clean up
When a participant is closing, RTCP readers should be cleaned up from
the factory even if the participant is expected to resume. The resumed
participant will be a new participant session with new peer
connection(s), and everything will be set up again.
- New bucket API to pass in the max packet size, the sequence number
offset and a generic type for the sequence number size (see the sketch
after this list).
- Move OWD estimator to mediatransportutil.
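A sketch of what the generic bucket API could look like (names and the
map-based storage are illustrative, not the actual bucket
implementation):

```go
package main

import "fmt"

// sequenceNumber covers the sequence number sizes the bucket may be keyed on:
// uint16 for plain RTP, uint32/uint64 for extended sequence numbers.
type sequenceNumber interface {
	~uint16 | ~uint32 | ~uint64
}

// Bucket is parameterized on the sequence number type, with the max packet
// size and sequence number offset passed in at construction.
type Bucket[SN sequenceNumber] struct {
	maxPacketSize int
	snOffset      SN
	packets       map[SN][]byte
}

func NewBucket[SN sequenceNumber](maxPacketSize int, snOffset SN) *Bucket[SN] {
	return &Bucket[SN]{
		maxPacketSize: maxPacketSize,
		snOffset:      snOffset,
		packets:       make(map[SN][]byte),
	}
}

func (b *Bucket[SN]) Add(sn SN, payload []byte) {
	if len(payload) > b.maxPacketSize {
		payload = payload[:b.maxPacketSize]
	}
	b.packets[sn-b.snOffset] = payload
}

func main() {
	b := NewBucket[uint16](1500, 100)
	b.Add(101, []byte{0x80})
	fmt.Println(len(b.packets))
}
```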
* Use sync.Pool for objects in packet path.
Seeing cases of forwarding latency spikes that align with GC.
This might be a bit of overkill, but use sync.Pool for small,
short-lived objects in the packet path.
Before this, all of these were increasing in alloc_space heap profile
samples over time. With these changes, there is no increase (actually,
the lines corresponding to getting from the pool do not even show up
in heap accounting when doing `list` in `pprof`).
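A minimal sketch of the pooling pattern used here (the object type is
hypothetical):

```go
package main

import (
	"fmt"
	"sync"
)

// Hypothetical small, short-lived object allocated once per packet in the
// forwarding path; pooling it keeps it out of the GC's alloc_space profile.
type extPacket struct {
	payload [1500]byte
	size    int
}

var extPacketPool = sync.Pool{
	New: func() any { return &extPacket{} },
}

func getExtPacket() *extPacket  { return extPacketPool.Get().(*extPacket) }
func putExtPacket(p *extPacket) { p.size = 0; extPacketPool.Put(p) }

func forwardOne(data []byte) {
	pkt := getExtPacket()
	defer putExtPacket(pkt) // return to the pool as soon as forwarding is done
	pkt.size = copy(pkt.payload[:], data)
	fmt.Println("forwarded", pkt.size, "bytes")
}

func main() {
	forwardOne([]byte{0x80, 0x60, 0x00, 0x01})
}
```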
* merge
* Paul feedback
* Forwarding latency measurement tweaks.
- Make the prom transmission type public.
- Do not measure short-term values as they are not used; this
potentially saves some lock contention time in the packet path.
Add a separate method for that.
- Change latency/jitter summary reporting to `ns` as well, to match
the histogram.
* add GetShortStats
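A sketch of the idea, with hypothetical names: the per-packet path only
accumulates running totals, and GetShortStats derives the short-term
values on demand:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// Hypothetical latency recorder: Record does the minimum under the lock, and
// the short-term (since last read) stats are computed only when asked for.
type latencyStats struct {
	mu         sync.Mutex
	totalNanos int64
	totalCount int64
	lastNanos  int64 // snapshot taken at the previous GetShortStats call
	lastCount  int64
}

func (s *latencyStats) Record(latency time.Duration) {
	s.mu.Lock()
	s.totalNanos += latency.Nanoseconds()
	s.totalCount++
	s.mu.Unlock()
}

// GetShortStats returns the average latency since the previous call; callers
// that never ask for it never pay for the computation.
func (s *latencyStats) GetShortStats() (avg time.Duration, count int64) {
	s.mu.Lock()
	defer s.mu.Unlock()
	dn := s.totalNanos - s.lastNanos
	dc := s.totalCount - s.lastCount
	s.lastNanos, s.lastCount = s.totalNanos, s.totalCount
	if dc > 0 {
		avg = time.Duration(dn / dc)
	}
	return avg, dc
}

func main() {
	s := &latencyStats{}
	s.Record(250 * time.Microsecond)
	fmt.Println(s.GetShortStats())
}
```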
* Higher resolution forwarding latency histogram.
Was using the average latency/jitter of the last second to populate
the forwarding latency/jitter histogram. But it is too coarse, i.e.
the average value of latency/jitter is very low and those summarised
samples always end up in the lowest bucket.
A few things to address it:
- record per-packet forwarding latency in the histogram
- adjust histogram bins to include smaller values
- drop the jitter histogram
This is a per-packet call, but a Prometheus histogram is supposedly
fast/lightweight. Would be good to get better resolution histograms,
hence doing this. Please let me know if there are performance
concerns.
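A sketch using the real prometheus client API but illustrative metric
names and bucket boundaries:

```go
package main

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

// Hypothetical per-packet forwarding latency histogram. Observing every packet
// (rather than a once-per-second average) and using buckets that start in the
// microsecond range keeps samples out of the lowest bucket.
var forwardLatency = prometheus.NewHistogram(prometheus.HistogramOpts{
	Namespace: "livekit",
	Name:      "forward_latency_ns",
	// 1us .. ~65ms expressed in nanoseconds, doubling each bucket
	Buckets: prometheus.ExponentialBuckets(1_000, 2, 17),
})

func init() {
	prometheus.MustRegister(forwardLatency)
}

// recordForwardLatency is called once per forwarded packet; prometheus
// histograms are cheap enough for this rate.
func recordForwardLatency(start time.Time) {
	forwardLatency.Observe(float64(time.Since(start).Nanoseconds()))
}

func main() {
	start := time.Now()
	recordForwardLatency(start)
}
```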
* typo
* one more typo
* Debug high forwarding latency missing.
* log highest
* log condition
* update log
* log
* log
* change log
* Track start up delay.
Digging into forwarding latency, there are a few things
1. Seems to be caused by forwarding packets queued before bind. They
would be in the queue till bind. There are two ways it shows up
a. Bind itself is delayed and releasing the queued packets causes the
high forwarding latency.
b. There is a significant gap between bind and the first packet being
pulled off the queue to be forwarded, in one example 100ms.
(a) is understandable if signalling delays things. Can drop these
packets without forwarding or indicate in the packet that it is a
queued packet and drop it from the forwarding latency calculation.
Dropping is probably better as downstream components like egress will
see a burst in these situations.
(b) looks like Go scheduling latency? Unsure.
Logging more to understand this better.
* log start
* Use buffered indicator to exclude from forwarding latency.
Buffered packets live in the queue for a while before Bind releases
them. They have high(ish) queuing latency and are not a true
representation of forwarding latency.
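A sketch of the exclusion, with hypothetical names:

```go
package main

import (
	"fmt"
	"time"
)

// Hypothetical packet metadata: packets queued before Bind are marked as
// buffered, and buffered packets are skipped when measuring forwarding
// latency since their queuing delay is not representative.
type queuedPacket struct {
	arrivedAt time.Time
	buffered  bool // true if it sat in the pre-Bind queue
}

func measureForwardLatency(p queuedPacket, observe func(time.Duration)) {
	if p.buffered {
		return // exclude pre-Bind packets from the latency histogram
	}
	observe(time.Since(p.arrivedAt))
}

func main() {
	live := queuedPacket{arrivedAt: time.Now()}
	queued := queuedPacket{arrivedAt: time.Now().Add(-100 * time.Millisecond), buffered: true}
	report := func(d time.Duration) { fmt.Println("forwarding latency:", d) }
	measureForwardLatency(queued, report) // skipped
	measureForwardLatency(live, report)   // measured
}
```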
* Log write count atomic.
* Return write count from WriteRTP.
Apologies for the frequent changes on this. With relays, the down
track could write to several targets. So, use a count to have an
accurate indication of how many subscribers were written to.
Packets not being forwarded were getting included in the forwarding
stats calculation and skewing the measurement towards a smaller
number.
The latency measurement does not include the batch IO of packets on
send. With 2ms batching, that will add an average latency of 1ms.
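A sketch of the new return value, with hypothetical types:

```go
package main

import "fmt"

// Hypothetical downtrack fan-out: with relays, one WriteRTP call may write to
// several targets, so returning the number of successful writes gives an
// accurate count of how many subscribers were actually written to.
type target struct{ ok bool }

func (t *target) write(payload []byte) error {
	if !t.ok {
		return fmt.Errorf("write failed")
	}
	return nil
}

type downTrack struct{ targets []*target }

// WriteRTP returns how many targets the packet was written to.
func (d *downTrack) WriteRTP(payload []byte) (int, error) {
	written := 0
	var lastErr error
	for _, t := range d.targets {
		if err := t.write(payload); err != nil {
			lastErr = err
			continue
		}
		written++
	}
	return written, lastErr
}

func main() {
	dt := &downTrack{targets: []*target{{ok: true}, {ok: false}, {ok: true}}}
	n, _ := dt.WriteRTP([]byte{0x80})
	fmt.Println("written to", n, "subscribers")
}
```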
* Broadcast cond var on RTX write.
High forwarding latency logs all show high queuing delay so far. From
code inspection, RTX writes were not signaling the cond var. Not sure if
that is the reason, but adding a signal there for further tests.
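A sketch of the fix, assuming a hypothetical send queue built on
sync.Cond:

```go
package main

import (
	"fmt"
	"sync"
)

// Hypothetical send queue: both primary and RTX writes append to the queue,
// and both must wake the sender, otherwise an RTX-only burst can sit queued
// until the next primary packet arrives (showing up as high queuing delay).
type sendQueue struct {
	mu      sync.Mutex
	cond    *sync.Cond
	packets [][]byte
	closed  bool
}

func newSendQueue() *sendQueue {
	q := &sendQueue{}
	q.cond = sync.NewCond(&q.mu)
	return q
}

func (q *sendQueue) enqueue(pkt []byte) {
	q.mu.Lock()
	q.packets = append(q.packets, pkt)
	q.mu.Unlock()
	q.cond.Broadcast() // wake the sender for RTX writes too
}

func (q *sendQueue) dequeue() ([]byte, bool) {
	q.mu.Lock()
	defer q.mu.Unlock()
	for len(q.packets) == 0 && !q.closed {
		q.cond.Wait()
	}
	if len(q.packets) == 0 {
		return nil, false
	}
	pkt := q.packets[0]
	q.packets = q.packets[1:]
	return pkt, true
}

func main() {
	q := newSendQueue()
	done := make(chan struct{})
	go func() {
		pkt, _ := q.dequeue()
		fmt.Println("sent", len(pkt), "bytes")
		close(done)
	}()
	q.enqueue([]byte{0x80, 0x60}) // an RTX-style write also broadcasts
	<-done
}
```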
* Remove return values from writeRTX as they are not used
* Prevent leakage of previous codec after codec regression.
In the window between the forwarder restart and determining the codec,
packets of the old codec could leak through. Prevent that by doing the
restart and codec determination atomically on a codec regression.
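A sketch of making the two steps atomic, with hypothetical names:

```go
package main

import (
	"fmt"
	"sync"
)

// Hypothetical forwarder: on a codec regression, the restart and the codec
// determination happen under the same lock so a packet of the old codec
// cannot slip through the window between the two steps.
type forwarder struct {
	mu    sync.Mutex
	codec string
}

func (f *forwarder) handleCodecRegression(newCodec string) {
	f.mu.Lock()
	defer f.mu.Unlock()
	// restart state and switch codec atomically with respect to forwardPacket
	f.codec = newCodec
}

func (f *forwarder) forwardPacket(codec string) bool {
	f.mu.Lock()
	defer f.mu.Unlock()
	if codec != f.codec {
		return false // drop packets of the previous codec
	}
	return true
}

func main() {
	f := &forwarder{codec: "vp8"}
	f.handleCodecRegression("h264")
	fmt.Println(f.forwardPacket("vp8"))  // false: old codec dropped
	fmt.Println(f.forwardPacket("h264")) // true
}
```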
* tidy
* use locked function