It's my understanding that the nodeIP config can be set to ensure that a
specific IP is provided for the host candidate. The code being changed
here was added as a convenience so that:
| By giving it STUN servers, it should be
| connectable even without passing in --node-ip explicitly
We'd prefer to be able to specify a nodeIP and then as a side effect
have a STUN server added.
With removal of a subscription, an unsubscribe could happen twice:
once when the subscriber actually unsubscribes and once when the
publisher removes all subscribers. The second unsubscribe was never
removed from the event list and eventually timed out.
So, process events as long as the condition is satisfied.
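A rough sketch of that change, assuming the pending operations are kept in a simple event queue (the `subscriptionEvents` type and its fields are illustrative, not the actual code):

```go
package rtc

import "sync"

type subEvent struct {
	isSubscribe bool
}

type subscriptionEvents struct {
	lock   sync.Mutex
	events []subEvent
	track  interface{} // non-nil while a track is subscribed
}

func (s *subscriptionEvents) satisfied(e subEvent) bool {
	if e.isSubscribe {
		return s.track != nil
	}
	return s.track == nil
}

// processEvents drains every event at the head of the queue whose
// condition is already satisfied, instead of stopping after the first.
// With a duplicate unsubscribe (subscriber unsubscribing and publisher
// removing all subscribers), the second event is cleared here rather
// than lingering until it times out.
func (s *subscriptionEvents) processEvents() {
	s.lock.Lock()
	defer s.lock.Unlock()
	for len(s.events) > 0 && s.satisfied(s.events[0]) {
		s.events = s.events[1:]
	}
}
```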
* Remove VP9 from media engine setup.
* Remove vp9 from config sample
* Supervisor beginnings
The eventual goal is to have a reconciler which moves state from
actual -> desired. The first step along the way is to observe/monitor.
Even that starts with an initial implementation to get feedback
on the direction.
This PR is a start in that direction
- Concept of a supervisor at local participant level
- This supervisor will be responsible for periodically monitoring
actual vs desired (this is the one which will eventually trigger
other things to reconcile, but for now it just logs on error)
- A new interface `OperationMonitor` which requires two methods
(see the sketch after this list)
o Check() returns an error based on actual vs desired state.
o IsIdle() returns bool. Returns true if the monitor is idle.
- The supervisor maintains a list of monitors and runs periodic checks.
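A minimal sketch of that interface and the periodic check loop, assuming a `Supervisor` type roughly like the one described (only `OperationMonitor`, `Check()` and `IsIdle()` come from the list above; the rest is illustrative):

```go
package supervisor

import "time"

// OperationMonitor compares actual vs desired state.
type OperationMonitor interface {
	// Check returns an error if actual state has not converged to desired state.
	Check() error
	// IsIdle returns true if the monitor has nothing pending.
	IsIdle() bool
}

// Supervisor runs Check() on all registered monitors periodically and,
// for now, only logs errors instead of reconciling.
type Supervisor struct {
	monitors []OperationMonitor
	stop     chan struct{}
}

func (s *Supervisor) Start(interval time.Duration, logErr func(error)) {
	go func() {
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			select {
			case <-s.stop:
				return
			case <-ticker.C:
				for _, m := range s.monitors {
					if err := m.Check(); err != nil {
						logErr(err) // eventually: trigger reconciliation
					}
				}
			}
		}
	}()
}
```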
In the above framework, the starting point is
subscriptions/unsubscriptions. There is a new module
`SubscriptionMonitor` which checks subscription transitions.
A subscription transition is queued on subscribe/unsubscribe.
The transition is satisfied when a subscribedTrack is added OR
removed. The error condition is a transition not satisfied for
10 seconds. Idle is when the transition queue is empty and
subscribedTrack is nil, i.e., the last transition would have been
an unsubscribe and the subscribed track removed (unsubscribe satisfied).
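A rough illustration of those rules (the struct layout, field names and the placeholder track type are assumptions, not the actual implementation):

```go
package supervisor

import (
	"errors"
	"sync"
	"time"
)

const transitionWaitDuration = 10 * time.Second

type transition struct {
	isSubscribe bool
	queuedAt    time.Time
}

// SubscriptionMonitor queues a transition on subscribe/unsubscribe and
// satisfies it when a subscribed track is added or removed.
type SubscriptionMonitor struct {
	lock            sync.Mutex
	transitions     []transition
	subscribedTrack interface{} // placeholder for the real subscribed track type
}

// Check returns an error when the oldest pending transition has not
// been satisfied within 10 seconds.
func (s *SubscriptionMonitor) Check() error {
	s.lock.Lock()
	defer s.lock.Unlock()
	if len(s.transitions) == 0 {
		return nil
	}
	if time.Since(s.transitions[0].queuedAt) > transitionWaitDuration {
		return errors.New("subscription transition not satisfied within 10 seconds")
	}
	return nil
}

// IsIdle is true when the transition queue is empty and there is no
// subscribed track, i.e. the last satisfied transition was an unsubscribe.
func (s *SubscriptionMonitor) IsIdle() bool {
	s.lock.Lock()
	defer s.lock.Unlock()
	return len(s.transitions) == 0 && s.subscribedTrack == nil
}
```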
The idea is that individual monitors can check on different things.
Some more things I am thinking about are:
- PublishedTrackMonitor - started when an add track happens,
satisfied when OnTrack happens, error if `OnTrack` does not
fire for a while and track is not muted, idle when there is
nothing pending.
- PublishedTrackStreamingMonitor - to ensure that a published track
is receiving media at the server (accounting for dynacast, mute, etc)
- SubscribedTrackStreamingMonitor - to ensure down track is sending
data unless muted.
* Remove debug
* Protect against early casting errors
* Adding PublicationMonitor
* Send connection type to telemetry
When connected, determine how the participant's primary peer
connection is established and report it in the ParticipantActive event.
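A sketch of one way to derive that with pion/webrtc (the mapping and the helper name are assumptions; the actual classification in the PR may differ, e.g. around the prflx-vs-relay case fixed below):

```go
package telemetry

import "github.com/pion/webrtc/v3"

// connectionType classifies the selected ICE candidate pair of the
// primary peer connection as relay (TURN), tcp or udp.
func connectionType(pc *webrtc.PeerConnection) string {
	pair, err := pc.SCTP().Transport().ICETransport().GetSelectedCandidatePair()
	if err != nil || pair == nil {
		return "unknown"
	}
	// A relay candidate on either side means the media path goes through TURN.
	if pair.Local.Typ == webrtc.ICECandidateTypeRelay || pair.Remote.Typ == webrtc.ICECandidateTypeRelay {
		return "relay"
	}
	if pair.Local.Protocol == webrtc.ICEProtocolTCP {
		return "tcp"
	}
	return "udp"
}
```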
* address feedback
* fixed case where prflx is reported instead of relay
* incorporate comments
I don't like using `Target` as direction.
There is one place in code that depends on it.
I am thinking we should add a param like `IsOfferer` or something to
make it explicit.
* WIP commit
* WIP commit
* fix copy pasta
* setting PC with previous answer has to happen synchronously
* static check
* WIP commit
* WIP commit
* fixing transport tests
* fix tests and clean up
* minor renaming
* Fix test race
* log event when channel is full
* Export CloseSignalConnection
There are a few places where that close pattern is repeated.
Export it and use that function in other places directly.
* fix test
* Clear disconnect timer on ICERestart
The disconnect timer is set up when a transport fails.
But it is possible that the connection is resumed.
So, clear the disconnect timer on resume.
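A minimal sketch of that timer handling, with illustrative names and timeout:

```go
package rtc

import (
	"sync"
	"time"
)

const disconnectTimeout = 10 * time.Second // illustrative value

type transportHandler struct {
	lock            sync.Mutex
	disconnectTimer *time.Timer
}

// onFailed arms the disconnect timer when the transport fails.
func (t *transportHandler) onFailed(onTimeout func()) {
	t.lock.Lock()
	defer t.lock.Unlock()
	t.disconnectTimer = time.AfterFunc(disconnectTimeout, onTimeout)
}

// onResumed clears the timer when the connection comes back,
// e.g. after an ICE restart.
func (t *transportHandler) onResumed() {
	t.lock.Lock()
	defer t.lock.Unlock()
	if t.disconnectTimer != nil {
		t.disconnectTimer.Stop()
		t.disconnectTimer = nil
	}
}
```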
* clean up
* Start RTCP workers after peer connection connects
* Move more things into transport module
* Start RTCP workers only on connected
* Test needs PeerConnection() method
* adjust comment
* Prevent track subscriptions/adding receivers after close
With subscribe/unsubscribe queuing, a subscribe may be
attempted after a call to `RemoveAllSubscribers`.
So, rename `RemoveAllSubscribers` to `InitiateClose`
and maintain state that the track is in the process of closing.
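A rough sketch of the guard, assuming an `isClosing` flag on the track (names are illustrative, not the actual MediaTrack code):

```go
package rtc

import (
	"errors"
	"sync"
)

type mediaTrack struct {
	lock      sync.RWMutex
	isClosing bool
}

// InitiateClose (previously RemoveAllSubscribers) marks the track as
// closing before tearing down existing subscribers.
func (t *mediaTrack) InitiateClose() {
	t.lock.Lock()
	t.isClosing = true
	t.lock.Unlock()
	// ... remove all existing subscribers ...
}

// AddSubscriber rejects queued subscribes that arrive after close has
// been initiated, so no new receivers are added to a closing track.
func (t *mediaTrack) AddSubscriber(participantID string) error {
	t.lock.RLock()
	closing := t.isClosing
	t.lock.RUnlock()
	if closing {
		return errors.New("track is closing, not accepting new subscribers")
	}
	// ... create the receiver / down track for this subscriber ...
	return nil
}
```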
* Mime specific remove
* Remove unused error
* do not add receiver when closing
* Promoting a few logs to Info
Also, adding a couple more info logs which I will remove later
after some debugging.
* mime type
* Protect pause/max layer
* notify even if not bound
- Do not update jitter on padding-only packets
(see the sketch after this list).
A padding-only packet may not have a proper timestamp.
If it does, it probably has the timestamp of the
last packet with payload. That would also skew the
jitter calculation, i.e., wall clock time is moving,
but RTP time is the same.
- Do not send `onMaxLayer` changed on bind.
It was probably racing with the update that happens when
the max layer is updated while adaptive stream is off. There is
no need to send that update as the default would
be OFF. It will be enabled when an adaptive stream
subscription turns it on, or when the max layer is
set on down track bind while adaptive stream is off.
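A sketch of the padding-only guard around the RFC 3550 interarrival jitter update (the stats struct and its fields are illustrative; arrival time is assumed to already be converted to RTP timestamp units):

```go
package buffer

import "github.com/pion/rtp"

type rtpStats struct {
	lastTransit uint32
	hasTransit  bool
	jitter      float64
}

// updateJitter applies J(i) = J(i-1) + (|D(i-1,i)| - J(i-1))/16 from
// RFC 3550, but skips padding-only packets: their RTP timestamp is
// missing or copied from the last payload packet, so wall clock time
// advances while RTP time does not, inflating the jitter.
func (s *rtpStats) updateJitter(pkt *rtp.Packet, arrivalRTP uint32) {
	if len(pkt.Payload) == 0 {
		return // padding-only packet
	}
	transit := arrivalRTP - pkt.Timestamp
	if s.hasTransit {
		d := int32(transit - s.lastTransit)
		if d < 0 {
			d = -d
		}
		s.jitter += (float64(d) - s.jitter) / 16
	}
	s.lastTransit = transit
	s.hasTransit = true
}
```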
* WIP commit
* Connection quality changes
- Fix Firefox showing poor quality
o The issue was that we were calculating quality using the max
available layer. The rationale was that even if the
server sends dynacast messages, a client may not implement
dynacast and still stream all layers. But, with Firefox
(maybe a Firefox bug), it sends a small amount of
data on layer 2 even when that layer is disabled.
It is probably probing (or Firefox might be encoding the
higher layers at some small bitrate as it cannot turn off
layers). That higher layer gets used in the quality calculation.
As the bit rate on that layer is extremely low, it yields a low
score.
Fixed by considering the max expected layer. That is of most
interest. Yes, clients may ignore dynacast and stream all layers,
but max expected is the one of interest. So, look for
quality at the max expected layer and not the max available layer.
- Lots of clean up around connection quality stuff
o Use dynamic scaling to ensure that we do not get bitten
by absolute values (see the sketch after this list). Calculate the
best-possible-scenario score and map that to the maximum MOS score.
This ensures that different codecs and different settings do not
mess up the scoring. For example, a client might use 1 Mbps for
720p, but a different client could use 2 Mbps for 720p. As an
SFU/infrastructure middlebox, we do not have control over quality
at those rates. We can only ensure that streaming happens smoothly
at those rates. So, in that example, for client 1, 1 Mbps will map
to MOS 5.0 and for client 2, 2 Mbps will map to MOS 5.0. Any
impairments after that will be reflected in the score.
o Penalise missing the target layer by one quality level per layer missed.
o Move tests to connection quality directory. The participant test
was not super useful.
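A simplified sketch of the normalisation and layer-penalty ideas (the formulas and constants here are illustrative, not the exact rtcscore computation):

```go
package connectionquality

// normalizedBitrateScore maps whatever bitrate the publisher chose for a
// layer to the maximum MOS, so that 1 Mbps for one client and 2 Mbps for
// another both score 5.0 when delivered intact. Impairments below the
// expected rate reduce the score proportionally.
func normalizedBitrateScore(actualBps, expectedBps int64) float64 {
	const (
		maxMOS = 5.0
		minMOS = 1.0
	)
	if expectedBps <= 0 {
		return maxMOS
	}
	ratio := float64(actualBps) / float64(expectedBps)
	if ratio > 1.0 {
		ratio = 1.0 // at or above the expected rate is the best we can do
	}
	return minMOS + (maxMOS-minMOS)*ratio
}

// layerPenalty subtracts one quality level for every spatial layer the
// stream is below the max expected layer.
func layerPenalty(score float64, expectedLayer, actualLayer int32) float64 {
	if actualLayer >= expectedLayer {
		return score
	}
	score -= float64(expectedLayer - actualLayer)
	if score < 1.0 {
		score = 1.0
	}
	return score
}
```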
* Add missed file
* Remove debug code
* use more constants and initialise normalisation factor
* rtcscore pointer