* Simplify sliding window collapse.
Keep the same-value collapsing simple: add a repeated value to the
sliding window only once the same value has been received for longer
than the collapse threshold. Then run a prune with three conditions
over the sliding window to ensure only valid samples are kept.
* flip the order of validity window and same value pruning
* increase collapse threshold to 0.5 seconds during non-probe
1. Probe end time needs to include the probe cluster running time also.
2. Apply the collapse window only within the sliding window. This
prevents old data from declaring congestion. For example, the
estimate could have fallen 15 seconds ago, leaving a bunch of
estimates at that fallen value, and at some point the whole sliding
window could hold that value. A further drop would then trigger
congestion detection, but that might be acting too fast, i.e. on a
single instance of a value fall. Change it so that a fall is
detected within the sliding window and collapse is applied based
on that.
On a state change, it was possible that an aborted probe was pending
finalization. When the probe controller was reset, the probe channel
observer was not reset. Create a new non-probe channel observer
on state change to get a fresh start.
Also limit the probe finalize wait to a maximum of 10 seconds. It is
possible that the estimate is very low and a bunch of probes have
been sent; calculating the wait based on that could leave finalization
waiting for a long time (could be minutes).
* Simplify probe done handling.
Seeing a case where the channel observer is not re-created after
an aborted probe. Simplify probe done handling (no callbacks, making
it synchronous).
* log more
* Split probe controller from StreamAllocator.
With TWCC, probe status needs to be checked in a separate
goroutine, so probe-specific state needs locking. Split out the
probe controller to make that cleaner.
* remove defer
* A couple of stream allocator tweaks
- Do not overshoot on catch up. It so happens that during a probe
the next higher layer is at a bit rate much lower than the normal
bit rate for that layer, but by the time the probe ends, the
publisher has climbed up to the normal bit rate.
So the probe goal, although achieved, is not enough.
Allowing overshoot latches on to the next layer, which might exceed
the channel capacity.
- Use a collapse window to record values in the case of only one
or two changes in an evaluation window. Sometimes the estimate
falls once or twice and stays there; by collapsing repeated values,
it could be a long time before that fall in the estimate is
processed. Introduce a collapse window and record a duplicate value
if no value has been recorded for the collapse window duration.
This ensures those isolated falls in the estimate are still
processed, albeit with some delay.
* minor clean up
* add a probe max rate
* fix max
* use max of committed, expected for max limiting
* have to probe at goal
When detecting congestion based on loss, it is possible that
the loss-based signal triggers earlier while the estimate-based
signal lags. In those cases, check against the last received
estimate and, if that is lower than the loss-based throttling
value, use it.
Without this, it was possible for current usage to stay high:
loss-based throttling may not dial things back far enough to pause
the stream. Ideally, congestion should hit again and usage should
be dialled down further until an eventual pause, but there are
situations where it never dials back far enough to pause.
* Support simulating subscriber bandwidth.
When non-zero, a full allocation is triggered and probes are
stopped.
When set to zero, the normal probing mechanism should catch up.
Also add an `allowPause` override which can be a connection option.
* fix log
* allowPause in participant params