This is mostly to clean up the forwarder state cache for already started tracks. A scenario like the following could apply the seed twice and end up with an incorrect state, resulting in a large jump (Participant A, say, is the one showing the problem):

1. Participant A migrates first, so it tries to restore its down track states by querying state from the previous node.
2. But its down tracks start before the response can be received. The response still lands in the cache.
3. Participant B migrates from a different node to the node Participant A is on, so Participant A's down track gets switched from the relay up track publisher to the local up track publisher.
4. I am guessing the seeding gets applied twice in this case, and the cached value from step 2 above causes the huge jump.

In those cases, the cache needs to be cleaned up.

(NOTE: I think this seeding of the down track on migration is not necessary, as the SSRC of the down track changes and the remote side seems to treat it as a fresh start because of that. But doing this step first; will remove the related parts after observing for a bit more.)

Also, fetching forwarder state is moved to a goroutine, as it involves a network call to the previous node via the Director.