Sometimes the initially selected node can fail. In that case, we'll give it a few more attempts to locate a media node for the session instead of failing it after the first try.
(cherry picked from commit 2fa46e2df4)
Added a flag to throttle dynacast messages to the publisher.
Added a helper method to check if a client supports setting `active`
field in RTP sender encoding. This can be used to disable dynacast based
on some other feature flag/config settings.
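A minimal sketch of what such a capability check could look like. The struct fields, SDK names, and version cutoffs below are all illustrative placeholders, not the actual support matrix or the real livekit/protocol types.

```go
package main

import "fmt"

// ClientInfo is a simplified stand-in for the client info carried in the
// join request; the real struct has more fields.
type ClientInfo struct {
	SDK     string // e.g. "js", "swift" (illustrative)
	Version int    // simplified major version
}

// SupportsSetActive reports whether the client can honor the `active`
// field on an RTPSender encoding. The cutoffs are placeholders; a real
// check would consult the actual client support matrix.
func SupportsSetActive(c ClientInfo) bool {
	switch c.SDK {
	case "js":
		return c.Version >= 1 // assumption, not the real cutoff
	case "swift":
		return c.Version >= 2 // assumption, not the real cutoff
	default:
		return false
	}
}

func main() {
	fmt.Println(SupportsSetActive(ClientInfo{SDK: "js", Version: 1}))
}
```

A gate like this lets dynacast be disabled for clients that cannot toggle encodings, independent of any other feature flag.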
* some additional logging
* Do not use local timestamp when sending RTCP Sender Report
Since local time does not account for the transmission delay of the
publisher-side sender report, using local time to calculate the
offset is not accurate.
Calculate the NTP timestamp based on the difference in RTP time.
Notes in code about some shortcomings of this, but it should
produce better RTT numbers. I think RTT numbers were inflated because of
using the local timestamp.
Related to livekit/protocol#273
This PR adds:
- ParticipantResumed - for when an ICE restart or migration has occurred
- TrackPublishRequested - when we initiate a publication
- TrackSubscribeRequested - when we initiate a subscription
- TrackMuted - publisher muted track
- TrackUnmuted - publisher unmuted track
- TrackPublish/TrackSubscribe events will indicate when those actions have succeeded, to differentiate.
* Fix handling of non-monotonic timestamps
The timed version is inspired by a hybrid logical clock. We used to have mixed
behavior by using time.Time:
* during local comparisons, it increments monotonically
* when deserializing remote timestamps, we lose that property
So it was possible for two requests to be sent in the same microsecond, and
for the latter one to be dropped.
To fix that, I'm switching to keeping raw timestamps to consolidate the
behavior, and accepting multiple updates in the same microsecond by incrementing ticks.
Also using @paulwe's idea of a version generator.
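The version-generator idea can be sketched like this: a (timestamp, ticks) pair where the ticks counter disambiguates versions issued within the same microsecond. Type and method names are illustrative, not the actual implementation.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// Version orders by timestamp first, then by ticks, so two versions
// created in the same microsecond still compare strictly.
type Version struct {
	UnixMicro int64
	Ticks     int32
}

func (v Version) After(o Version) bool {
	if v.UnixMicro == o.UnixMicro {
		return v.Ticks > o.Ticks
	}
	return v.UnixMicro > o.UnixMicro
}

// VersionGenerator issues strictly increasing versions: if the clock has
// not advanced since the last version (same or earlier microsecond), it
// bumps ticks instead of reusing an equal timestamp.
type VersionGenerator struct {
	mu   sync.Mutex
	last Version
}

func (g *VersionGenerator) Next() Version {
	g.mu.Lock()
	defer g.mu.Unlock()
	now := time.Now().UnixMicro()
	if now <= g.last.UnixMicro {
		g.last.Ticks++ // same microsecond: disambiguate via ticks
	} else {
		g.last = Version{UnixMicro: now}
	}
	return g.last
}

func main() {
	var g VersionGenerator
	a, b := g.Next(), g.Next()
	fmt.Println(b.After(a))
}
```

With this scheme, two updates in the same microsecond produce distinct, ordered versions instead of the later one being dropped as equal.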
* initial commit
* add correct label
* clean up
* more cleanup on adding stats
* cleanup
* move things to pub and sub monitors, ensure stats are correctly updated
* fix merge conflict
* Fix panic on MacOS (#1296)
* fixing last feedback
Co-authored-by: Raja Subramanian <raja.gobi@tutanota.com>
* WIP commit
* comment
* clean up
* remove unused stuff
* cleaner comment
* remove unused stuff
* remove unused stuff
* more comments
* TrackSender method to handle RTCP sender report data
* fix test
* push rtcp sender report data to down tracks
* Need payload type for codec id mapping in relay protocol
* rename variable a bit
* Do not get tripped up by default values.
The following scenario incorrectly declared the second message a dupe:
- UpdateSubscription{subscribe: true}: this message initialized quality
to the default, which is livekit.VideoQuality_LOW
- UpdateTrackSettings{quality: livekit.VideoQuality_LOW}: this one got
flagged as a duplicate because the previous message had already initialized
quality to LOW.
Fix it by recording whether track settings have been seen.
The no-auto-subscribe + quality-set-to-LOW test failed before this
change and passes with this change.
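The "record whether settings have been seen" fix can be sketched as below. The types are simplified stand-ins for the protobuf messages, and the field names are illustrative.

```go
package main

import "fmt"

// trackSettings is a simplified stand-in for livekit.UpdateTrackSettings.
type trackSettings struct {
	Quality int // 0 corresponds to the proto default, VideoQuality_LOW
}

// settingsDeduper only declares a duplicate once a real settings message
// has been observed, so the first message carrying default values (e.g.
// quality LOW) is never confused with zero-value initialization done by
// an earlier UpdateSubscription.
type settingsDeduper struct {
	seen bool
	last trackSettings
}

func (d *settingsDeduper) IsDupe(s trackSettings) bool {
	if d.seen && d.last == s {
		return true
	}
	d.seen = true
	d.last = s
	return false
}

func main() {
	var d settingsDeduper
	fmt.Println(d.IsDupe(trackSettings{Quality: 0})) // first real message: not a dupe
	fmt.Println(d.IsDupe(trackSettings{Quality: 0})) // actual repeat: dupe
}
```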
* patch all track setting fields
* FPS based stream tracker tweaks
- Cleaning up code
- Two tweaks
  o A layer is declared active on receiving the first packet (when starting fresh).
    But if there are no frames after that (no packets after the first packet, or
    there is only one frame), the layer would not have been declared stopped, as
    the previous version waited for a second frame. Now, if there are no more
    frames in the eval interval, declare the layer stopped.
  o When the frame rate goes to 0, reset the FPS calculator. Otherwise, a layer
    starting after a long time will have frames spaced too far apart, which would
    result in a very low frame rate. Reset the calculator and let it pick up after
    the layer restarts.
- Also changing from lowest FPS to estimated FPS, and updating up slowly and down fast.
  There are cases where frames are too far apart, resulting in a really low FPS. This
  seems to happen with NLC kinds of cases where bandwidth changes rapidly and the
  estimator in the browser probably gets a bit confused and starts/stops layers a bit
  erratically. So, update the estimate periodically to ensure the eval interval is
  tracking the current rate.
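The "up slowly, down fast" update rule can be sketched as an asymmetric blend. The function name and the 0.25 blend factor are illustrative assumptions, not the tracker's actual constants.

```go
package main

import "fmt"

// updateFPSEstimate moves the running estimate toward a new measurement,
// adjusting downward immediately but upward only fractionally. A brief
// burst of closely spaced frames therefore cannot inflate the estimate,
// while a real drop (e.g. a layer pausing) is reflected right away.
func updateFPSEstimate(estimate, measured float64) float64 {
	if measured <= estimate {
		return measured // down fast: trust drops immediately
	}
	return estimate + 0.25*(measured-estimate) // up slowly: blend increases
}

func main() {
	fmt.Println(updateFPSEstimate(30, 10)) // drop is taken as-is: 10
	fmt.Println(updateFPSEstimate(10, 30)) // rise is partial: 15
}
```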
* fix factor
* spelling fix
* Add interface and ipfilter to udpmux option
* validate external ip is accessible by client
* add context
* use external ip only for firefox
* fix mapping error
* Update pion/ice and use external ip only for firefox
* Use single external ip for NAT1To1Ips if validate failed
* update pion/ice
* Split stream tracker impl from base
* slight re-arrangement of code
* fps based stream tracker
* MinFPS config
* switch back to packet based tracker
* use video config by default to handle sources without type
GetSelectedICECandidatePair can return nil for the candidate pair if one is not
available, even when the error is nil. Protect against the nil
de-reference panic.
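The guard pattern is simply to check the pair for nil independently of the error. The types and function names below are simplified stand-ins, not the actual pion or LiveKit API.

```go
package main

import (
	"errors"
	"fmt"
)

type candidatePair struct{ local, remote string }

// selectedPair simulates an API that may return (nil, nil) before a pair
// has been nominated: checking only the error is not enough.
func selectedPair(connected bool) (*candidatePair, error) {
	if !connected {
		return nil, nil // no pair yet, but also no error
	}
	return &candidatePair{local: "host", remote: "srflx"}, nil
}

func describePair(connected bool) (string, error) {
	pair, err := selectedPair(connected)
	if err != nil {
		return "", err
	}
	if pair == nil { // the guard this change adds: nil pair, nil error
		return "", errors.New("no selected candidate pair yet")
	}
	return pair.local + "->" + pair.remote, nil
}

func main() {
	_, err := describePair(false)
	fmt.Println(err)
	s, _ := describePair(true)
	fmt.Println(s)
}
```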
There is a sequence where a dupe could be detected due to patching,
which could lead to issues.
The sequence is
- UpdateTrackSettings with some values
- UpdateSubscription with Subscribe: false - this will patch from the above
track settings
- UpdateSubscription with Subscribe: true - this will continue patching
- UpdateTrackSettings with the same settings as in the first step - this
will be declared a dupe because the track is enabled and the patched
settings will show no change in settings.
This is okay in the current code, as subscription settings are cached at
the participant level and applied when somebody re-subscribes. But that
downstream processing can change at any time.
So, when processing the `UpdateSubscription` message, just do not patch.
If a later `UpdateTrackSettings` comes along, let it pass even if it
does not change anything.
* Initial commit of signal deduper.
The idea is to protect against a signal storm from misbehaving clients.
Design:
- SignalDeduper interface with one method that handles a SignalRequest and
  returns whether it is a dupe.
- Signal-specific dedupers. Could have made a single deduper that handles
  all signal message types, but making it per type keeps the code cleaner.
- Some module (like the router) can instantiate whichever signal types
  it wants to dedupe. When a signal message is received, that module
  can run the message through the list of dedupers and potentially drop
  it if any of the dedupers declares it a dupe. Making it a list is a
  little inefficient, but keeps things cleaner. Hopefully, not many
  dedupers will be needed, so the inefficiency is not pronounced.
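The design above can be sketched as a one-method interface plus a list walk. The request type, deduper implementation, and method names are illustrative stand-ins for the real protocol types.

```go
package main

import "fmt"

// signalRequest is a simplified stand-in for livekit.SignalRequest.
type signalRequest struct {
	kind    string
	payload string
}

// SignalDeduper is the one-method interface described above: it reports
// whether a request is a duplicate that should be dropped.
type SignalDeduper interface {
	Dedupe(req signalRequest) bool // true == drop as dupe
}

// muteDeduper handles one signal type, passing all others through.
type muteDeduper struct{ last string }

func (d *muteDeduper) Dedupe(req signalRequest) bool {
	if req.kind != "mute" {
		return false // not this deduper's type
	}
	if req.payload == d.last {
		return true
	}
	d.last = req.payload
	return false
}

// isDupe runs a request through the list of dedupers; any one of them
// can declare it a duplicate.
func isDupe(dedupers []SignalDeduper, req signalRequest) bool {
	for _, d := range dedupers {
		if d.Dedupe(req) {
			return true
		}
	}
	return false
}

func main() {
	dedupers := []SignalDeduper{&muteDeduper{}}
	fmt.Println(isDupe(dedupers, signalRequest{"mute", "track-a"})) // first: pass
	fmt.Println(isDupe(dedupers, signalRequest{"mute", "track-a"})) // repeat: drop
}
```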
* re-arrange comments
* helper function
* add ParticipantClosed