* Seed snapshots
- For one cycle after seeding, the delta snapshot can show a huge gap
because the snapshot is initialized from the start if not present. Not
a huge deal as it should not affect functionality, but saving/restoring
the (at least down track) snapshot is a big deal. So just do it.
- Have been seeing a number of cases of delta stats reporting a lot of
packets due to what looks like an out-of-order receiver report. So,
save the receiver report and log it when out-of-order is detected
to understand whether the reports are closely spaced or something else
is happening (a small sketch follows this list).
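
A minimal sketch of that idea, assuming pion/rtcp report structs; `rrTracker` and its fields are hypothetical names, not the actual RTPStats code:

```go
package main

import (
	"log"
	"time"

	"github.com/pion/rtcp"
)

type rrTracker struct {
	lastRR   rtcp.ReceptionReport
	lastRRAt time.Time
}

// onReceiverReport saves every in-order report and, when a report arrives with
// a lower extended highest sequence number than the saved one, logs both
// reports and the current time so their spacing can be inspected later.
func (t *rrTracker) onReceiverReport(rr rtcp.ReceptionReport) {
	now := time.Now()
	if !t.lastRRAt.IsZero() && rr.LastSequenceNumber < t.lastRR.LastSequenceNumber {
		log.Printf("out-of-order receiver report: now=%v, sinceLast=%v, last=%+v, current=%+v",
			now, now.Sub(t.lastRRAt), t.lastRR, rr)
		return
	}
	t.lastRR = rr
	t.lastRRAt = now
}
```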
* Remove comment that does not apply anymore
* log current time and RR
Have been seeing a few instances of "too many packets expected in delta"
when trying to generate an RTCP SR on a down track. The actual sequence
numbers indicate that the start is after the end.
As down track RTPStats are driven by receiver reports, wondering if we
are getting RTCP_RR out-of-order, somehow causing this to happen.
Cannot find any other reason for this.
So, accept an RTCP_RR based update only if the sequence number is higher
than the existing one, and also log a warning with the sequence numbers
if they look out-of-order.
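
A minimal sketch of that guard; the field and method names are illustrative, not the actual RTPStats API:

```go
package main

import "log"

type rrDrivenStats struct {
	initialized         bool
	extHighestSeqFromRR uint32 // extended highest sequence number from the last accepted RR
}

// maybeUpdateFromRR accepts an RTCP_RR driven update only if the reported
// extended highest sequence number is ahead of what was seen before, and
// logs a warning otherwise.
func (s *rrDrivenStats) maybeUpdateFromRR(extHighestSeq uint32) bool {
	if s.initialized && extHighestSeq <= s.extHighestSeqFromRR {
		log.Printf("ignoring out-of-order RTCP_RR: existing=%d, received=%d",
			s.extHighestSeqFromRR, extHighestSeq)
		return false
	}
	s.initialized = true
	s.extHighestSeqFromRR = extHighestSeq
	return true
}
```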
* Cache RTPStats and seed on re-use
When a cached down track is re-used, RTPStats was not cached.
This caused sender reports to get out of sync with the remote side.
Cache RTPStats and seed it on re-use.
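
A rough sketch of that flow, under the assumption that RTPStats can export and re-import its state; the types and method names are placeholders:

```go
package main

type rtpStatsState struct {
	highestSeq  uint32
	highestTS   uint32
	packetsSent uint64
	octetsSent  uint64
}

type rtpStats struct {
	state rtpStatsState
}

// getState captures what is needed to seed another instance.
func (r *rtpStats) getState() rtpStatsState { return r.state }

// seed initializes this instance from previously saved state so sender
// reports continue from where the cached down track left off.
func (r *rtpStats) seed(s rtpStatsState) { r.state = s }

type cachedDownTrack struct {
	statsState rtpStatsState
}

// reuse creates fresh stats for the re-used down track and seeds them so
// SR sequence numbers and timestamps stay in sync with the remote side.
func reuse(entry *cachedDownTrack) *rtpStats {
	stats := &rtpStats{}
	stats.seed(entry.statsState)
	return stats
}
```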
* staticcheck
- Do not update jitter on padding only packets.
A padding only packet may not have a proper timestamp.
If it does, it probably carries the timestamp of the
last packet with payload. That will also skew the
jitter calculation, i.e. wall clock time is moving,
but RTP time is the same (see the sketch after this list).
- Do not send `onMaxLayer` changed on bind.
It was probably racing with the update that happens when
max layer is set while adaptive stream is off. There is
no need to send that update as the default would
be OFF. It will be enabled when an adaptive stream
subscription turns it on or when max layer is
set on down track bind while adaptive stream
is off.
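
A minimal sketch (not the actual RTPStats code) of skipping jitter updates for padding-only packets, using the RFC 3550 interarrival jitter formula:

```go
package main

import "math"

type jitterState struct {
	initialized    bool
	lastArrivalRTP float64 // arrival time converted to RTP timestamp units
	lastPacketTS   float64 // RTP timestamp of the last packet with payload
	jitter         float64
}

// updateJitter updates jitter only for packets that carry media payload.
// A padding-only packet usually repeats the timestamp of the last payload
// packet, so including it would inflate jitter: wall clock time moves while
// the RTP timestamp does not.
func (j *jitterState) updateJitter(arrivalRTP, packetTS float64, payloadSize int) {
	if payloadSize == 0 {
		return // padding-only packet, skip
	}
	if j.initialized {
		transit := arrivalRTP - packetTS
		prevTransit := j.lastArrivalRTP - j.lastPacketTS
		d := math.Abs(transit - prevTransit)
		j.jitter += (d - j.jitter) / 16 // RFC 3550, section 6.4.1
	}
	j.initialized = true
	j.lastArrivalRTP = arrivalRTP
	j.lastPacketTS = packetTS
}
```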
* Use media payload size in scoring.
Subtract out header bytes when calculating the score.
This does not seem to affect the score (under perfect conditions),
but using header bytes will inflate the bit rate and
affect scoring.
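
Illustrative only: derive the bitrate used for scoring from media payload bytes, excluding RTP header bytes, so the rate is not inflated.

```go
package main

import "time"

// payloadBitrate returns bits per second over the measurement window using
// only payload bytes (total minus header bytes).
func payloadBitrate(totalBytes, headerBytes uint64, window time.Duration) float64 {
	if window <= 0 || headerBytes > totalBytes {
		return 0
	}
	return float64((totalBytes-headerBytes)*8) / window.Seconds()
}
```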
* Add header bytes to ToProto
* protocol pointer
* fix test
With rapid changes to subscription settings, use of a goroutine
could end up processing dynacast needs for that subscriber in
a different order. So, record the subscription needs of a subscriber
in the callback and process the data in a goroutine.
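
A sketch of that pattern, with hypothetical names: record the subscriber's dynacast needs synchronously in the callback (so ordering is preserved), then hand a snapshot to a goroutine for processing.

```go
package main

import "sync"

type videoQuality int

type dynacastNeeds struct {
	lock              sync.Mutex
	subscribedQuality map[string]videoQuality // subscriberID -> requested quality
}

// onSubscribedQualityChange is the callback. The needs are recorded under the
// lock before spawning the goroutine, so rapid successive changes cannot be
// applied out of order.
func (d *dynacastNeeds) onSubscribedQualityChange(subscriberID string, quality videoQuality) {
	d.lock.Lock()
	d.subscribedQuality[subscriberID] = quality
	// snapshot the recorded needs to process asynchronously
	snapshot := make(map[string]videoQuality, len(d.subscribedQuality))
	for id, q := range d.subscribedQuality {
		snapshot[id] = q
	}
	d.lock.Unlock()

	go d.processNeeds(snapshot)
}

func (d *dynacastNeeds) processNeeds(needs map[string]videoQuality) {
	// aggregate per-subscriber needs and notify the publisher (omitted)
	_ = needs
}
```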
* WIP commit
* WIP commit
* Remove debug
* Revert to reduce diff
* Fix tests
* Determine spatial layer from track info quality if non-simulcast
* Adjust for invalid layer when there is no rid; previously that function was returning 0 for the no-rid case
* Fall back to top level width/height if there are no layers (a sketch follows)
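
Illustrative sketch of that fallback; the types are local stand-ins, not the protocol structs.

```go
package main

type videoQuality int32

type videoLayer struct {
	quality videoQuality
	width   uint32
	height  uint32
}

type trackInfo struct {
	width  uint32
	height uint32
	layers []*videoLayer
}

// dimensionsForQuality picks layer dimensions by quality and falls back to the
// track's top level width/height when there are no layers.
func dimensionsForQuality(ti *trackInfo, quality videoQuality) (uint32, uint32) {
	for _, layer := range ti.layers {
		if layer.quality == quality {
			return layer.width, layer.height
		}
	}
	return ti.width, ti.height
}
```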
* Use duration from RTPDeltaInfo
- SenderSSRC was not set for NACK and RTCP_RR, so the SRTP context
was using SSRC = 0, which is not bad, but let us set the SSRC properly.
- PLI was using a random SSRC on every PLI. That would have
created a new SRTP stream (not bad as that stream context is small)
on every PLI, which is wasteful. So, set the SenderSSRC to the media SSRC
(see the sketch after this list).
- Reduce the re-start interval of higher layers to 10 seconds. That is long
enough to declare that a stream layer has restarted.
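
A sketch using pion/rtcp of populating SenderSSRC instead of leaving it as zero (NACK) or a random value (PLI); the SSRC parameters are placeholders.

```go
package main

import "github.com/pion/rtcp"

func buildFeedback(senderSSRC, mediaSSRC uint32, nackPairs []rtcp.NackPair) []rtcp.Packet {
	return []rtcp.Packet{
		&rtcp.TransportLayerNack{
			SenderSSRC: senderSSRC, // was previously left as 0
			MediaSSRC:  mediaSSRC,
			Nacks:      nackPairs,
		},
		&rtcp.PictureLossIndication{
			SenderSSRC: mediaSSRC, // was previously a random SSRC on every PLI
			MediaSSRC:  mediaSSRC,
		},
	}
}
```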
* Use grants clone
* Fix a couple more races
Use a shadow copy of the down tracks in DownTrackSpreader.
Reads always use the shadow.
On Add/Delete of a down track, make a new copy.
Copying is done only on add/delete.
If somebody is holding a reference to a shadow, it stays intact as Add/Delete create a new slice.
With this, not seeing any more races in tests. So, enabling CI tests with `-race`.
Also fixing another race reported in #603.
There are a couple more races in that bug report that need to be
chased down.
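
A condensed sketch of that copy-on-write ("shadow") pattern; the real DownTrackSpreader differs, and `DownTrack` here is a stand-in type.

```go
package main

import "sync"

type DownTrack struct{ id string }

type downTrackSpreader struct {
	mu         sync.Mutex
	downTracks map[string]*DownTrack
	shadow     []*DownTrack // read-only snapshot, replaced on every Add/Delete
}

func newDownTrackSpreader() *downTrackSpreader {
	return &downTrackSpreader{downTracks: make(map[string]*DownTrack)}
}

func (s *downTrackSpreader) add(dt *DownTrack) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.downTracks[dt.id] = dt
	s.rebuildShadow()
}

func (s *downTrackSpreader) delete(id string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	delete(s.downTracks, id)
	s.rebuildShadow()
}

// rebuildShadow allocates a new slice; readers holding the old slice keep a
// consistent view because the old slice is never mutated in place.
func (s *downTrackSpreader) rebuildShadow() {
	shadow := make([]*DownTrack, 0, len(s.downTracks))
	for _, dt := range s.downTracks {
		shadow = append(shadow, dt)
	}
	s.shadow = shadow
}

// broadcast grabs the current shadow under the lock and iterates without
// holding the lock for the whole walk.
func (s *downTrackSpreader) broadcast(f func(dt *DownTrack)) {
	s.mu.Lock()
	shadow := s.shadow
	s.mu.Unlock()
	for _, dt := range shadow {
		f(dt)
	}
}
```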
* Use env suggested in https://lifesaver.codes/answer/runtime-race-detector-sigabrt-or-sigsegv-on-macos-monterey-49138
* staticcheck, did not fail locally, but reported by CI
* use API to get down tracks
* Stats of NACKs acked and number of repeated NACKs.
Also making a change in delta stats to clamp negative packet loss
counts to 0. Because of windowing, it is a legitimate case.
The receiver could have seen a loss in the window we are measuring,
and in the subsequent window, the receiver could have gotten
a retransmission and reduced the packet loss count, resulting
in a negative delta. When we report a negative delta, it could
get dropped by the analytics validator. That would be lost data.
Avoid that.
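
Illustrative only: clamp a negative packet loss delta, which can legitimately happen with windowing and late retransmissions, to zero before reporting.

```go
package main

func packetsLostDelta(currentCumulativeLost, previousCumulativeLost int64) uint32 {
	delta := currentCumulativeLost - previousCumulativeLost
	if delta < 0 {
		// a retransmission in this window reduced the cumulative loss count;
		// report zero instead of a negative value the analytics validator would drop
		delta = 0
	}
	return uint32(delta)
}
```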
* Remove unused code
* Pick up latest protocol
* Remove `Head` field from `ExtPacket` structure.
Although we do not intend to, if packets get out-of-order
in the forwarding path (maybe reading in multiple goroutines
or using some worker pool to distribute packets), the `Head`
indicator could lead to wrong behaviour. It is possible that
at the receiver, the order is
- Seq Num N, Head = true
- N + 1, Head = true
If the forwarding path sees `N + 1` first, the `Head` flag
when it sees the `N` packet is incorrect and will lead to incorrect
behaviour.
The alternative check is very simple, so remove the `Head` flag.
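
A hedged sketch of that alternative check: a wraparound-aware sequence number comparison each consumer can do locally instead of trusting a `Head` flag computed upstream.

```go
package main

// isNewerSeq reports whether seq is ahead of highestSeq, accounting for
// 16-bit sequence number wraparound (RFC 3550 style half-range comparison).
func isNewerSeq(seq, highestSeq uint16) bool {
	return seq != highestSeq && seq-highestSeq < 1<<15
}
```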
* Remove unused field
* Use delta stats throughout and avoid calculating deltas in telemetry
* Fix a few things after testing
* Remove debug
* Fix tests
* delete instead of setting to nil
* Point to the latest protocol