Clean up the tracks in the synchronous path and remove the track from
the track manager. This is not strictly required in the single-node
case, but multi-node needs it, so doing it here for consistency.
There are two very rare edge-case scenarios this is trying to address.
Scenario 1:
-----------
- both pA and pB migrating
- pA migrates first and subscribes to pB via remote track of pB
- while the above subscribe is happening, pB also migrates and
closes the remote track
- by the time the subscribe setup completes, it realises that
the remote track is no longer open and removes itself as a
subscriber
- but that removal uses the wrong `isExpectedToResume`, because
clearing all receivers, which is what caches
`isExpectedToResume`, has not run yet
- that means the down track transceiver is not cached and hence not
re-used when re-subscribing via pB's local track
- Fix it by caching the expected to resume when changing receiver state
to `closing`.
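The fix above can be sketched roughly like this (all type and field
names here are illustrative, not the actual livekit types):

```go
package main

import "fmt"

// receiverState is a hypothetical enum for the remote track receiver's lifecycle.
type receiverState int

const (
	stateOpen receiverState = iota
	stateClosing
	stateClosed
)

// receiver sketches only the relevant state.
type receiver struct {
	state receiverState
	// isExpectedToResume was previously derived only when all receivers
	// were cleared; the fix caches it earlier, at the transition to closing.
	isExpectedToResume bool
}

// setClosing caches isExpectedToResume at the moment the state flips to
// closing, so a subscriber removing itself mid-subscribe sees the correct
// value and the down track transceiver gets cached for re-use.
func (r *receiver) setClosing(expectedToResume bool) {
	r.state = stateClosing
	r.isExpectedToResume = expectedToResume
}

func main() {
	r := &receiver{state: stateOpen}
	// migration: the track is expected to resume on the new node
	r.setClosing(true)
	fmt.Println(r.state == stateClosing, r.isExpectedToResume)
}
```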
Scenario 2:
-----------
- both pA and pB migrating
- pA migrates first and subscribes to pB via remote track of pB
- while the above subscribe is happening, pB also migrates and
closes the remote track
- pB's local track is published before the remote track can be fully
closed and all the subscribers removed. That local track gets added
to track manager.
- While the remote track is cleaning up, the subscription manager
triggers again for pA to subscribe to pB's track. The track manager
now resolves to the local track.
- The local track subscription progresses. As the remote track clean up
is not finished, the transceiver is not cached. So, the local track
based subscription creates a new transceiver and that ends up causing
duplicate tracks in the SDP offer.
- Fix it by creating a FIFO in the track manager and only resolving
using the first entry. So, in the above case, till the remote track is
fully cleaned up, the track manager will resolve to it. Yes, the
subscription itself will fail as the track is not in open state (i.e.
it might be in `closing` state), but that is fine as the subscription
manager will eventually resolve to the local track and proper
transceiver re-use can happen.
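A minimal sketch of the FIFO resolution idea (names are illustrative,
not the actual track manager API):

```go
package main

import "fmt"

type track struct {
	id string
}

// trackManager keeps tracks published under one identity in FIFO order
// and always resolves to the first (oldest) entry.
type trackManager struct {
	tracks []*track // index 0 is the oldest
}

func (tm *trackManager) add(t *track) {
	tm.tracks = append(tm.tracks, t)
}

// resolve returns the head of the FIFO; while a migrating remote track is
// still closing, subscriptions resolve to it (and fail) until it is
// removed and the newly published local track becomes the head.
func (tm *trackManager) resolve() *track {
	if len(tm.tracks) == 0 {
		return nil
	}
	return tm.tracks[0]
}

func (tm *trackManager) remove(t *track) {
	for i, cur := range tm.tracks {
		if cur == t {
			tm.tracks = append(tm.tracks[:i], tm.tracks[i+1:]...)
			return
		}
	}
}

func main() {
	tm := &trackManager{}
	remote := &track{id: "remote"}
	tm.add(remote)
	// pB's local track is published before remote track cleanup finishes
	local := &track{id: "local"}
	tm.add(local)
	fmt.Println(tm.resolve().id) // still the remote track
	tm.remove(remote)            // remote track fully cleaned up
	fmt.Println(tm.resolve().id) // now the local track
}
```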
Seeing an error in an e2e test where, after migration, no packets are
forwarded. The only plausible reason seems to be a payload type mismatch
(assuming there are no errors in the forwarding loop pulling packets
from the buffer). So, logging some packet stats in the forwarding loop.
* Use atomic to store codec.
It can change on an upstream codec change, but not seeing any racy
behaviour with atomic access.
Reverting the previous change to mute with this change.
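A sketch of what atomic codec storage could look like, assuming Go's
generic `sync/atomic.Pointer` (the `downTrack`/`codec` shapes here are
illustrative, not the actual types):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// codec holds the negotiated codec parameters.
type codec struct {
	mimeType    string
	payloadType uint8
}

// downTrack stores the codec in an atomic pointer so the forwarding loop
// can read it without taking the bind lock.
type downTrack struct {
	codec atomic.Pointer[codec]
}

func (d *downTrack) setCodec(c *codec) { d.codec.Store(c) }
func (d *downTrack) getCodec() *codec  { return d.codec.Load() }

func main() {
	d := &downTrack{}
	d.setCodec(&codec{mimeType: "audio/opus", payloadType: 111})
	fmt.Println(d.getCodec().mimeType)
	// an upstream codec change swaps the pointer atomically
	d.setCodec(&codec{mimeType: "audio/red", payloadType: 63})
	fmt.Println(d.getCodec().payloadType)
}
```

Storing an immutable struct behind an atomic pointer keeps reads
lock-free; writers replace the whole value rather than mutating in place.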
* no mime arg
Need to re-visit the bind lock scope and maybe make the codec/mime
atomic and access them without the bind lock. But, doing a bit of
whack-a-mole first to move things forward. Will look at making them
atomics.
With RED published and Opus subscribed, the RTCP sender reports were not
sent to the down track as the publisher's sender reports were not being
forwarded to it.
* Dependent participants should not count towards FirstJoinedAt
According to the API, emptyTimeout should be honored as long as no
independent participant joins the room. If we counted Agents and Egress
as part of FirstJoinedAt, it would have the side effect of using
departureTimeout instead of emptyTimeout for idle calculations.
* use Room logger
- With probing, the packet rate can suddenly get high and the remote may
not have sent a receiver report yet, as it might be reporting for the
non-spiky rate. That causes metadata cache overflows. So, give RTX more
cache.
- Don't need a large cache for primary as either reports come in
regularly, or they are missing for a long time and a bigger cache is
not the solution for that. So, reduce the primary cache size.
- Check for the receiver report falling back by exactly (1 << 16). Had
done that change in the inner for loop, but missed the top level
check :-(
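The wrap check in the last bullet could look like this (a sketch; the
real code operates on extended sequence numbers in the receiver report
path, and the function name here is hypothetical):

```go
package main

import "fmt"

// adjustExtHighest sketches the check: if a receiver report's extended
// highest sequence number falls back by exactly one cycle (1 << 16)
// relative to the last one, treat the cycle count as stale and correct it.
func adjustExtHighest(last, current uint32) uint32 {
	if last >= (1<<16) && current == last-(1<<16) {
		// report lagged a full sequence number cycle behind; bump it up
		return current + (1 << 16)
	}
	return current
}

func main() {
	// exactly one cycle back: corrected to the last value
	fmt.Println(adjustExtHighest(70000, 70000-(1<<16)))
	// normal small regression: left unchanged
	fmt.Println(adjustExtHighest(70000, 69990))
}
```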
This is mostly to clean up the forwarder state cache for already
started tracks.
A scenario like the following could apply the seed twice and end up with
an incorrect state, resulting in a large jump:
- Say Participant A is the one showing the problem.
- Participant A migrates first. So, it tries to restore its down track
states by querying state from the previous node.
- But, its down tracks start before the response can be received.
However, the response remains in the cache.
- Participant B migrates from a different node to the node where
Participant A is. So, the down track of Participant A gets switched
from relay up track publisher -> local up track publisher.
- I am guessing the seeding gets applied twice in this case and the
cached value from the restore above causes the huge jump.
In those cases, the cache needs to be cleaned up.
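The cleanup idea can be sketched as applying the cached seed at most
once and clearing it regardless (all names here are hypothetical):

```go
package main

import "fmt"

// forwarderState is a stand-in for the state restored from the previous node.
type forwarderState struct {
	extHighestSN uint64
}

type downTrack struct {
	started    bool
	cachedSeed *forwarderState
	sn         uint64
}

// seedState caches the restored state and tries to apply it.
func (d *downTrack) seedState(s *forwarderState) {
	d.cachedSeed = s
	d.maybeApplySeed()
}

// maybeApplySeed applies the seed only if the track has not started, and
// clears the cache unconditionally so an already-started track cannot pick
// the stale seed up again on a later publisher switch (relay -> local).
func (d *downTrack) maybeApplySeed() {
	if d.cachedSeed == nil {
		return
	}
	if !d.started {
		d.sn = d.cachedSeed.extHighestSN
	}
	d.cachedSeed = nil
}

func main() {
	// the down track started before the restore response arrived
	d := &downTrack{started: true, sn: 10}
	d.seedState(&forwarderState{extHighestSN: 99999})
	fmt.Println(d.sn)                // unchanged: seed not applied
	fmt.Println(d.cachedSeed == nil) // cache cleaned up
}
```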
(NOTE: I think this seeding of the down track on migration is not
necessary as the SSRC of the down track changes and the remote side
seems to treat it as a fresh start because of that. But, doing this
step first; will remove the related parts after observing for a bit
more.)
Also, moving the fetching of forwarder state into a goroutine as it
involves a network call to the previous node via the Director.
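Moving the fetch off the hot path could look like this (a sketch;
`fetchForwarderState` is a stand-in for the Director RPC):

```go
package main

import (
	"fmt"
	"time"
)

// fetchForwarderState stands in for the network call to the previous node
// via the Director; here it just simulates latency.
func fetchForwarderState(node string) map[string]string {
	time.Sleep(10 * time.Millisecond)
	return map[string]string{"track-1": "restored-state"}
}

func main() {
	// Run the fetch in a goroutine so participant setup is not blocked on
	// a cross-node RPC; the state is applied when (and if) it arrives.
	done := make(chan map[string]string, 1)
	go func() {
		done <- fetchForwarderState("previous-node")
	}()

	// ... participant setup continues here; down tracks may even start
	// before the response arrives, which is exactly the scenario above ...

	state := <-done
	fmt.Println(len(state))
}
```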