- ticker.Stop always (see the sketch after this list)
- clean up timer funcs (if any were added) on participant close
- sequencer test enhancement to add a real packet after a padding packet
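A minimal sketch of the pattern behind the first item, assuming a plain `time.Ticker` loop (`runLoop` and `tick` are illustrative names): defer `Stop` right after creating the ticker so it is released on every exit path.

```go
import "time"

// runLoop defers ticker.Stop() immediately after creation, so the ticker is
// stopped on every return path, not just the happy one.
func runLoop(done <-chan struct{}, interval time.Duration, tick func()) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-done:
			return
		case <-ticker.C:
			tick()
		}
	}
}
```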
Have been seeing a few instances of "too many packets expected in delta"
when trying to generate an RTCP SR on a down track. The actual sequence
numbers indicate that the start is after the end.
As down track RTPStats are driven by receiver reports, the suspicion is
that an RTCP_RR arriving out-of-order is somehow causing this.
Cannot find any other reason for this.
So, accepting an RTCP_RR based update only if the sequence number is
higher than the existing one, and also logging a warning with the
sequence numbers if they look out-of-order.
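A minimal sketch of that guard, assuming pion/rtcp report types; `rrState` and `maybeApplyRR` are illustrative names, not the actual RTPStats API.

```go
import (
	"log"

	"github.com/pion/rtcp"
)

// rrState holds the last accepted extended highest sequence number from a
// receiver report (the real RTPStats tracks much more state).
type rrState struct {
	lastRRHighestSeq uint32
}

// maybeApplyRR accepts a report only if its extended highest sequence number
// advances past the last accepted one; otherwise it warns and drops it.
func (s *rrState) maybeApplyRR(rr rtcp.ReceptionReport) bool {
	if s.lastRRHighestSeq != 0 && rr.LastSequenceNumber <= s.lastRRHighestSeq {
		log.Printf("ignoring out-of-order RTCP_RR: existing=%d, received=%d",
			s.lastRRHighestSeq, rr.LastSequenceNumber)
		return false
	}
	s.lastRRHighestSeq = rr.LastSequenceNumber
	return true
}
```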
Padding packets do not need the full structure. They just
need a placeholder in the sequencer array. So, use pointers
(with padding slots filled by nil) to save some memory.
Also, audio does not need padding (yet), as padding packets
are used only for probing and we do not probe using audio tracks (yet).
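A minimal sketch of the idea; `packetMeta` and `sequencer` are illustrative stand-ins for the actual sequencer types.

```go
// packetMeta stands in for the per-packet bookkeeping kept for real media
// packets (illustrative fields only).
type packetMeta struct {
	sourceSeqNo uint16
	targetSeqNo uint16
	timestamp   uint32
	layer       int8
}

// sequencer stores pointers so a padding packet occupies its slot as nil,
// costing one pointer instead of a full packetMeta allocation.
type sequencer struct {
	meta []*packetMeta
}

func (s *sequencer) push(m *packetMeta, isPadding bool) {
	if isPadding {
		s.meta = append(s.meta, nil) // placeholder only
		return
	}
	s.meta = append(s.meta, m)
}
```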
Now only set the mapping when use_external_ip is enabled or node_ip is
explicitly set. If multiple local addresses resolve to the same external
IP, only the first one will be mapped to the external IP, avoiding
candidate conflicts between different clients.
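A minimal sketch of the first-wins rule; `buildNATMapping` and `resolveExternal` are hypothetical names.

```go
// buildNATMapping maps each local IP to its external IP, letting only the
// first local address claim any given external IP.
func buildNATMapping(localIPs []string, resolveExternal func(string) string) map[string]string {
	mapping := make(map[string]string) // local IP -> external IP
	taken := make(map[string]bool)     // external IPs already claimed
	for _, local := range localIPs {
		external := resolveExternal(local)
		if external == "" || taken[external] {
			continue // unresolved, or another local address got here first
		}
		taken[external] = true
		mapping[local] = external
	}
	return mapping
}
```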
* Cache RTPStats and seed on re-use
When a cached down track is re-used, its RTPStats were not cached.
This caused sender reports to get out of sync with the remote side.
Cache RTPStats and seed them on re-use.
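A minimal sketch of the seeding, with illustrative names; the real `DownTrack`/`RTPStats` types carry much more state.

```go
// RTPStats stands in for sender-report state (sequence numbers, timestamps,
// packet/byte counts, ...).
type RTPStats struct{}

type DownTrack struct {
	stats *RTPStats
}

var statsCache = map[string]*RTPStats{} // keyed by track ID

// cacheOnClose keeps the stats when the down track itself is cached.
func (d *DownTrack) cacheOnClose(trackID string) {
	statsCache[trackID] = d.stats
}

// seedOnReuse restores cached stats so sender reports continue from where
// they left off instead of drifting from the remote side.
func (d *DownTrack) seedOnReuse(trackID string) {
	if cached, ok := statsCache[trackID]; ok {
		d.stats = cached
	}
}
```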
* staticcheck
If/when users have a typo or incorrect indentation in their config file, we
would have silently failed to assign the value and moved on.
With this change it will error out, alerting the user to the problem.
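One way to get this behaviour, assuming the config is YAML and gopkg.in/yaml.v3 is available, is `Decoder.KnownFields(true)`; the `Config` fields below are illustrative.

```go
import (
	"bytes"
	"fmt"

	"gopkg.in/yaml.v3"
)

type Config struct {
	Port   int    `yaml:"port"`
	NodeIP string `yaml:"node_ip"`
}

// parseConfig rejects unknown keys, so a typo like "node_ipp" or a
// mis-indented key surfaces as an error instead of being silently dropped.
func parseConfig(data []byte) (*Config, error) {
	dec := yaml.NewDecoder(bytes.NewReader(data))
	dec.KnownFields(true)
	var c Config
	if err := dec.Decode(&c); err != nil {
		return nil, fmt.Errorf("config: %w", err)
	}
	return &c, nil
}
```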
Seeing some supervisor error logs under two conditions
- Issuing a full reconnect - the client should close this session and
form a new one. So, supervisor errors on the to-be-closed session
are not useful.
- Sometimes it takes a long time for the publisher PC to establish.
If the publish monitor timer starts when a pending track is added,
the timeout fires before ICE/DTLS is established. So, include
a condition to start the timer on the publication monitor only after
the peer connection is connected (see the sketch after this list).
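A minimal sketch of the gating, assuming pion/webrtc v3; `startMonitor` is a hypothetical hook into the publication monitor.

```go
import "github.com/pion/webrtc/v3"

// armMonitorWhenConnected starts the publication monitor only once the
// publisher peer connection reaches Connected, so slow ICE/DTLS setup does
// not trip the publish timeout.
func armMonitorWhenConnected(pc *webrtc.PeerConnection, startMonitor func()) {
	pc.OnConnectionStateChange(func(state webrtc.PeerConnectionState) {
		if state == webrtc.PeerConnectionStateConnected {
			startMonitor()
		}
	})
}
```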
This allows deleting and updating an ingress even if the ingress server that was handling it died. It does, however, mean that if the ingress responds again later, its state will be inconsistent. To make this somewhat less likely, also keep trying to contact the ingress for 1 min in the background (a sketch follows below).
Also fixes a race where an active, deleted ingress would get recreated on delete because of the update triggered by the ingress session shutdown.
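A minimal sketch of that background retry; `contactIngress` is a hypothetical RPC call.

```go
import (
	"context"
	"time"
)

// retryContact keeps trying to reach the ingress for up to a minute on a
// fixed ticker, then gives up.
func retryContact(ctx context.Context, contactIngress func(context.Context) error) {
	ctx, cancel := context.WithTimeout(ctx, time.Minute)
	defer cancel()
	ticker := time.NewTicker(5 * time.Second)
	defer ticker.Stop()
	for {
		if err := contactIngress(ctx); err == nil {
			return
		}
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
		}
	}
}
```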
With migration in, once the local track is published, the
remote track should be closed. Add a flag to `RemovePublishedTrack`
to control the close behaviour. Invoke `Close` if specified.
Without it, the remote track is not closed if it is waiting to resolve,
i.e. not yet attached, and that remote track is left hanging.
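A hypothetical sketch of the flag; the actual `RemovePublishedTrack` signature and surrounding types differ.

```go
// MediaTrack is an illustrative interface for a published remote track.
type MediaTrack interface {
	Close()
}

type Participant struct {
	published []MediaTrack
}

// RemovePublishedTrack removes the track and, when shouldClose is set, also
// closes it, so a track still waiting to resolve is not left hanging.
func (p *Participant) RemovePublishedTrack(t MediaTrack, shouldClose bool) {
	for i, pt := range p.published {
		if pt == t {
			p.published = append(p.published[:i], p.published[i+1:]...)
			break
		}
	}
	if shouldClose {
		t.Close()
	}
}
```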
It's my understanding that the nodeIP config can be set to ensure that a
specific IP is provided for the host candidate. The code being changed
here was added as a convenience so that:
| By giving it STUN servers, it should be
| connectable even without passing in --node-ip explicitly
We'd prefer to be able to specify a nodeIP and then, as a side effect,
have a STUN server added.
In extreme cases, media nodes can take more than 5s to spin up the
session. Increasing this timeout to 10s reduces the number of
disconnections due to these edge cases.