It's not always possible for WebSocket clients to obtain the status code or
error message returned during the WS upgrade. Move auto-creation validation
to an explicit interface in the validation step so that /rtc/validate
can return an appropriate message.
When AdaptiveStream is enabled, default the subscriber to the LOW quality stream.
We want LOW instead of OFF for a couple of reasons:
1. When a subscriber unsubscribes from a track, we would forget their previously defined settings.
Depending on the client implementation, subscription on/off is kept separately from adaptive stream.
So when there are no changes to the desired resolution but the user re-subscribes, we may leave the stream at OFF.
2. When interacting with dynacast *and* adaptive stream: if the publisher was not publishing at the
time of subscription, we might not be able to trigger adaptive stream updates on the client side
(since there aren't any video frames coming through). This would leave the stream stuck at OFF, without
a trigger to re-enable it.
Sometimes the initially selected node could fail. In that case, we'll make a few more attempts to locate a media node for the session instead of failing it after the first try.
Added a new manager to handle all subscription needs, implemented using the reconciler pattern. The goals are:
- improve subscription resilience by separating desired state from current state
- reduce the complexity of synchronous processing
- better detect failures, with the ability to trigger a full reconnect
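The reconciler goals above can be sketched as follows. This is an illustrative example of the pattern (desired state vs. current state, with a diff step that drives retries), not LiveKit's actual implementation; all names are hypothetical.

```go
package main

import "fmt"

// subState tracks desired vs. current subscription state per track.
// Separating the two lets a periodic reconcile loop retry failed
// operations instead of handling every failure synchronously.
type subState struct {
	desired map[string]bool // trackID -> want subscribed
	current map[string]bool // trackID -> actually subscribed
}

// reconcile returns the track IDs whose current state must change to
// match the desired state. A real implementation would issue the
// subscribe/unsubscribe operations and count repeated failures to
// decide when to trigger a full reconnect.
func (s *subState) reconcile() (toSubscribe, toUnsubscribe []string) {
	for id, want := range s.desired {
		if want && !s.current[id] {
			toSubscribe = append(toSubscribe, id)
		}
	}
	for id, have := range s.current {
		if have && !s.desired[id] {
			toUnsubscribe = append(toUnsubscribe, id)
		}
	}
	return
}

func main() {
	s := &subState{
		desired: map[string]bool{"trackA": true, "trackB": true},
		current: map[string]bool{"trackB": true, "trackC": true},
	}
	sub, unsub := s.reconcile()
	fmt.Println(sub, unsub) // trackA needs subscribing, trackC needs unsubscribing
}
```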
* Use local time base for NTP in RTCP Sender Report for downtracks.
More details in comments in code.
* Remove debug
* RTCPSenderReportInfo -> RTCPSenderReportDataExt
* Get rid of sender report data pointer checks
From logs, we can see that when the reference layer is out of order, it is
out of order by a small amount most of the time (a maximum of 5 ms was
seen in a sampling of logs). When using arrival time in those
cases, the offset sometimes comes out to 10 ms or so. In most
cases, the time-based diff is a lot higher (by several ms).
Just use the default of a +1 diff on layer switch in the case of an
out-of-order timestamp. That will allow playback to move
forward and keep the timing close to the actual frame time.
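A minimal sketch of that fallback rule, with hypothetical names (the real code derives the diff from the reference layer's timing, which is elided here):

```go
package main

import "fmt"

// switchPointTimestamp picks the RTP timestamp to use at a layer switch.
// Normally the timestamp advances by the diff computed against the
// reference layer; when the reference timestamp is out of order (the diff
// is zero or negative), fall back to a minimal +1 advance so playback
// keeps moving forward rather than using an arrival-time based offset
// that can be off by ~10 ms.
func switchPointTimestamp(lastTS uint32, refDiff int64) uint32 {
	if refDiff <= 0 {
		return lastTS + 1 // out-of-order reference: minimal forward step
	}
	return lastTS + uint32(refDiff)
}

func main() {
	fmt.Println(switchPointTimestamp(1000, -90)) // out of order: 1001
	fmt.Println(switchPointTimestamp(1000, 2700)) // normal case: 3700
}
```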
* add prometheus stats for rtt/jitter/packet loss
* add track source to metrics
* better packet loss bins
* add track type to metrics
* remove source from AnalyticsStat
* regenerate telemetry service fake
* compute loss from per stream packet count
Added a flag to throttle dynacast messages to the publisher.
Added a helper method to check if a client supports setting `active`
field in RTP sender encoding. This can be used to disable dynacast based
on some other feature flag/config settings.
* some additional logging
* Do not use local timestamp when sending RTCP Sender Report
As local time does not take into account the transmission delay
of the publisher-side sender report, using local time to calculate
the offset is not accurate.
Calculate the NTP timestamp based on the difference in RTP time.
Notes in code about some shortcomings of this, but it should
yield better RTT numbers. I think RTT numbers were inflated because of
using the local timestamp.
Related to livekit/protocol#273
This PR adds:
- ParticipantResumed - for when an ICE restart or migration has occurred
- TrackPublishRequested - when we initiate a publication
- TrackSubscribeRequested - when we initiate a subscription
- TrackMuted - publisher muted track
- TrackUnmuted - publisher unmuted track
- TrackPublish/TrackSubscribe events will indicate when those actions have been successful, to differentiate.
* Fix handling of non-monotonic timestamps
The timed version is inspired by the Hybrid Clock. We used to have mixed behavior
by using time.Time:
* during local comparisons, it does increment monotonically
* when deserializing remote timestamps, we lose that attribute
So it's possible for two requests to be sent in the same microsecond, and
for the latter one to be dropped.
To fix that, I'm switching to keeping timestamps to consolidate
the behavior, and accepting multiple updates in the same microsecond by incrementing ticks.
Also using @paulwe's idea of a version generator.
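A sketch of that hybrid-clock-style version generator (names and field widths are illustrative assumptions): a version pairs a timestamp with a tick counter, so updates landing in the same time unit still compare as strictly increasing instead of the later one being dropped.

```go
package main

import "fmt"

// version orders updates by wall-clock time first, then by a tick counter
// that disambiguates updates generated within the same microsecond.
type version struct {
	unixMicro int64
	ticks     int32
}

type generator struct{ last version }

// next returns a version strictly greater than any previously issued one,
// even if the clock has not advanced (or has gone backwards).
func (g *generator) next(nowMicro int64) version {
	if nowMicro <= g.last.unixMicro {
		g.last.ticks++ // same (or earlier) microsecond: bump ticks
	} else {
		g.last = version{unixMicro: nowMicro}
	}
	return g.last
}

func (v version) greaterThan(o version) bool {
	if v.unixMicro != o.unixMicro {
		return v.unixMicro > o.unixMicro
	}
	return v.ticks > o.ticks
}

func main() {
	g := &generator{}
	a := g.next(100)
	b := g.next(100) // same microsecond: ticks keep versions ordered
	fmt.Println(b.greaterThan(a))
}
```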
* initial commit
* add correct label
* clean up
* more cleanup on adding stats
* cleanup
* move things to pub and sub monitors, ensure stats are correctly updated
* fix merge conflict
* Fix panic on MacOS (#1296)
* fixing last feedback
Co-authored-by: Raja Subramanian <raja.gobi@tutanota.com>
* WIP commit
* comment
* clean up
* remove unused stuff
* cleaner comment
* remove unused stuff
* remove unused stuff
* more comments
* TrackSender method to handle RTCP sender report data
* fix test
* push rtcp sender report data to down tracks
* Need payload type for codec id mapping in relay protocol
* rename variable a bit
* Do not get tripped up by default values.
The following scenario incorrectly declared the second message a dupe:
- UpdateSubscription{subscribe: true}: this message initialized quality
to the default, which is livekit.VideoQuality_LOW
- UpdateTrackSettings{quality: livekit.VideoQuality_LOW}: this one got
flagged as a duplicate because the previous message had initialized quality
to LOW
Fix it by recording whether track settings have been seen.
The "no auto subscribe + quality set to LOW" test failed before this
change and passes with it.
* patch all track setting fields
* FPS based stream tracker tweaks
- Cleaning up code
- Two tweaks
o A layer is declared active on receiving the first packet (when starting fresh).
But if there are no frames after that (no packets after the first packet, or
there is only one frame), the layer would not have been declared stopped, as
the previous version waited for a second frame. Now, if there are no more
frames in the eval interval, declare the layer stopped.
o When the frame rate goes to 0, reset the FPS calculator. Otherwise, a layer
starting after a long time will have frames spaced too far apart, which would
result in a very low frame rate. Reset the calculator and let it pick up after
the layer restarts.
- Also changing from lowest FPS -> estimated FPS, updating up slowly and down fast.
There are cases where frames are too far apart, resulting in a really low FPS. This seems
to happen in NLC kind of cases where bandwidth changes rapidly and the estimator
on the browser probably gets a bit confused and starts/stops layers a bit erratically.
So, update the estimate periodically to ensure the eval interval is tracking the current rate.
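The asymmetric "up slowly, down fast" update can be sketched as below. The blend factor is an illustrative choice, not the tracker's actual constant:

```go
package main

import "fmt"

// updateEstimate applies the asymmetric rule: a drop in the measured
// frame rate is adopted immediately (down fast), while an increase moves
// the estimate only a fraction of the way toward the measurement
// (up slowly), smoothing out erratic layer starts/stops.
func updateEstimate(estimate, measured float64) float64 {
	if measured < estimate {
		return measured // down fast
	}
	return estimate + 0.3*(measured-estimate) // up slowly
}

func main() {
	est := 30.0
	est = updateEstimate(est, 10.0) // drop adopted immediately
	fmt.Println(est)
	est = updateEstimate(est, 30.0) // recovery adopted gradually
	fmt.Println(est)
}
```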
* fix factor
* spelling fix
* Add interface and ipfilter to udpmux option
* validate that the external IP is accessible by the client
* add context
* use external ip only for firefox
* fix mapping error
* Update pion/ice and use external ip only for firefox
* Use a single external IP for NAT1To1Ips if validation failed
* update pion/ice