This repository has been archived by the owner on Jan 12, 2019. It is now read-only.

handle dynamic buffer sizes (4k support) #1416

Closed

Conversation

@squarebracket (Contributor) commented May 7, 2018

Description

This is a PR to allow for streams with large segment sizes, such as with 4K media. When a QuotaExceededError happens, the forward and back buffers are dynamically resized, and we keep track of how much data is in the buffer. We then use these as checks within the logic for determining whether a segment should be downloaded, or whether we should just chill.

Note that this PR doesn't use any GOP info for determining what's safe or not. The reason is that we'd have to parse out the GOP info for fMP4 streams. I figure the best time to add that functionality is in VHS, where a) media-sources is part of the code and b) we'll have to parse fMP4 stuff anyway for things like captions. So that's planned for when this is ported to VHS.

Requires the code in videojs/videojs-contrib-media-sources#178 to work.

Specific Changes proposed

  • Add BACK_BUFFER_LENGTH config
  • Make goalBufferLength, maxGoalBufferLength, and backBufferLength getters+setters on the master playlist controller, and pass them through to the segment loaders
  • Set the aforementioned values when we get a QuotaExceededError based on the amount of time and data in the buffers when it occurs
  • Make safeBackBufferTrimTime take a backBufferLength argument to use instead of the default 30s
  • Calculate a nextSegmentSize in the segment loader to determine if we might bust the buffer, either based on the current playlist's BANDWIDTH attribute if it exists or a running average of segment sizes if not
  • checkBuffer_ checks to see if we might bust the buffer, and trims the back buffer and returns null if that's the case
  • Only remove data from a buffer if we can remove a whole segment (to keep estimates of data in buffer accurate)
  • Slice up the segment into 5MB chunks if we get a QuotaExceededError on the first append
  • Wait until we can append the segment if we get a QuotaExceededError after the first append
  • Add functions for calculating the min and max bytes in the buffer based on segment metadata cues (playhead is rarely at a segment boundary so we need both min and max functions for maximum safety)
  • Add function for removing non-buffered segment metadata cues
  • In the SourceUpdater, when we get a QuotaExceededError, move the pendingCallback to another variable (deferredCallback) and null out pendingCallback so that we can process a remove() and try to reappend the segment that caused the error
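As a rough illustration of the chunked-append fallback in the list above (the helper name and the exact byte value of the 5MB figure are assumptions, not the PR's code):

```javascript
// Illustrative sketch of slicing a segment into ~5MB chunks after a
// QuotaExceededError on the first append. Names and the exact chunk size
// are assumptions for illustration, not the PR's actual code.
const CHUNK_SIZE = 5 * 1024 * 1024;

const sliceIntoChunks = (bytes, chunkSize = CHUNK_SIZE) => {
  const chunks = [];

  for (let offset = 0; offset < bytes.byteLength; offset += chunkSize) {
    // subarray clamps past the end, so the last chunk may be smaller
    chunks.push(bytes.subarray(offset, offset + chunkSize));
  }
  return chunks;
};

// A 12MB segment appends as three chunks: 5MB, 5MB, and 2MB
const segment = new Uint8Array(12 * 1024 * 1024);
const chunks = sliceIntoChunks(segment);
```

Each chunk would then be appended sequentially, waiting for updateend between appends.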

Requirements Checklist

  • Feature implemented / Bug fixed
  • If necessary, more likely in a feature request than a bug fix
  • Reviewed by Two Core Contributors

@squarebracket (Contributor Author)

Note that tests will fail until #1412 is merged in and I can rebase against it.

@squarebracket (Contributor Author) commented May 17, 2018

Rebased against master to make (most) tests pass. Only failing test now is related to VTT cue removal, which will keep failing until videojs/videojs-contrib-media-sources#178 is merged in; I coded the tests for the new media-sources version, and all tests pass locally when using the patched media-sources.

@genelamarellis commented May 18, 2018

Amazing! Thank you so much! Just double checking the implementation steps to test this. I understand how to fetch the pull request for #1416. Then do I fetch the pull request for videojs/videojs-contrib-media-sources#178 as well? Then rebuild both, upload the dist directories, reference the proper js files, and call it a day?

@squarebracket (Contributor Author)

@genelamarellis pretty much, though you have to make sure contrib-hls is building with the patched version of contrib-media-sources, which it will not do by default. The easiest way I know of to do this:

git clone https://github.com/videojs/videojs-contrib-media-sources.git
git clone https://github.com/videojs/videojs-contrib-hls.git
cd videojs-contrib-media-sources
# <apply PR changes>
npm link
cd ../videojs-contrib-hls
npm link videojs-contrib-media-sources
# <apply PR changes>
npm install
npm run build

Once you've got that done, as you say, upload the dist directory and reference the appropriate file.

@genelamarellis

Ok thank you so much! I have run all of the commands and I'm waiting for my team to give me an update to see if it worked. Thanks!

@squarebracket (Contributor Author)

Excellent! I'm excited to hear the results.

@genelamarellis

Hero! This has solved our issues! Thank you so much! I hope this gets merged into the main branch. Life saver! Thanks again!

@squarebracket (Contributor Author)

Great, glad to hear it! It's a fair bit of code, so it might be a bit until it's merged in. Hopefully it won't be too long.


@ldayananda

> If I understand correctly, you're saying that they could be static for a single playback? If so, then maxGoalBufferLength could be a statically configured value (and essentially is that), but goalBufferLength and backBufferLength must be dynamic to support dynamic buffer sizes.

heh, sorry, I definitely was not being specific enough in my last comment. I think we're on the same page insofar as the need to dynamically change the buffer sizes. What I meant was that instead of having getters/setters for the goalBufferLength, maxGoalBufferLength, and backBufferLength at the masterPlaylistController level, they could be getters/setters at the segmentLoader level. That would allow different values for different loader types, though admittedly I don't see an immediate need for that level of flexibility.

> That's true that data is the important factor here, but the underlying MSE functions (like removing from the buffer) work with time, not data. So we need at least rough estimations of the equivalent time.

Oh interesting, didn't actually realize that remove used times 🤔 I suppose a potential alternative would be to try and match a TimeRanges object returned from buffered(). Not sure if that would make a significant difference though, depending on how remove actually goes about removing data from a sourceBuffer.

> It is. checkBuffer_ is simply an algorithm to see whether or not a new segment should be downloaded, based on what is currently in the buffer. In this case, the answer is no, but it also tries to free up stuff in the back buffer since the check for "do we have more than goalBufferLength in the buffer?" returns false.

What I meant was that the order of execution could be:

  • trim back buffer if needed and adjust other buffers to match
  • checkBuffer_ to see if we need to download the next segment

In this scenario the functions of (i) trimming the back buffer and (ii) checking if we should download the next segment are clearly separated.

> As you'll see in the code, the way things work out, in certain situations the safest thing to do is take the max, and in other situations the safest thing to do is take the min.

Interesting 🔍

> It removes all cues within a time range. removeNonBufferedSegmentCues loops through the buffers and removes cues wholly outside the buffered data. It could use removeCuesFromTrack for the actual removal, but you would still need to determine the time ranges to remove.

Gotcha

> Can you clarify what you mean? Every append has a callback that needs to be run after the append succeeds. If the buffer is full, we need to remove data from the buffer, then append the original data to the buffer, then run the original callback. However, since the SourceUpdater needs to act like an abstraction of a native SourceBuffer, the SegmentLoader needs to make sure that sourceBuffer.updating === false, otherwise any remove or append operations will fail. In SourceUpdater, updating will be true if pendingCallback_ is not null. I experimented with other ways of having updating be false, but I could never get them working.

Interesting 🤔 Sounds like you hit a wall with how the code is currently set up. I'll have to take a more in-depth look today to see if there's another way to do this that doesn't involve nulling out pendingCallback_. I'll let you know if I come up with something half-usable 😄
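The deferral mechanism described in the quoted reply can be sketched roughly like this; SourceUpdaterSketch and its members are illustrative stand-ins for the PR's SourceUpdater, not its actual code:

```javascript
// Illustrative sketch of the deferral described above (not the PR's exact
// code). When an append throws QuotaExceededError, the pending callback is
// parked in deferredCallback_ and pendingCallback_ is nulled, so updating()
// reports false and a remove() followed by a re-append can proceed.
class SourceUpdaterSketch {
  constructor(sourceBuffer) {
    this.sourceBuffer_ = sourceBuffer;
    this.pendingCallback_ = null;
    this.deferredCallback_ = null;
  }

  updating() {
    // updating is true while an operation's callback is still pending
    return this.pendingCallback_ !== null;
  }

  appendBuffer(bytes, done) {
    this.pendingCallback_ = done;
    try {
      this.sourceBuffer_.appendBuffer(bytes);
    } catch (error) {
      if (error.name === 'QuotaExceededError') {
        // park the callback so updating() returns false, allowing a remove()
        // and a later re-append of the same bytes
        this.deferredCallback_ = this.pendingCallback_;
        this.pendingCallback_ = null;
      } else {
        throw error;
      }
    }
  }
}
```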

@ldayananda left a comment

As a reminder to myself, I only got as far as reviewing up to src/master-playlist-controller.js.

I will return to the rest of the changes ASAP

src/config.js Outdated
@@ -8,5 +8,6 @@ export default {
   // How much of the buffer must be filled before we consider upswitching
   BUFFER_LOW_WATER_LINE: 0,
   MAX_BUFFER_LOW_WATER_LINE: 30,
-  BUFFER_LOW_WATER_LINE_RATE: 1
+  BUFFER_LOW_WATER_LINE_RATE: 1,
+  BACK_BUFFER_LENGTH: 30


does this mean we'll have a total buffer length of 60 (GOAL_BUFFER_LENGTH being the forward buffer)?

@squarebracket (Contributor Author)

The forward buffer scales with time, so at the beginning it would be 60 (GOAL_BUFFER_LENGTH + BACK_BUFFER_LENGTH) and after playing for a bit it would be 90 (MAX_GOAL_BUFFER_LENGTH + BACK_BUFFER_LENGTH).

The constants could probably use some renaming to make them more clear :)
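The scaling described above can be sketched as follows; the constant names mirror src/config.js, and the values are the ones mentioned in this thread (treat both as assumptions):

```javascript
// Sketch of the forward-buffer scaling described above. Constant names
// mirror src/config.js; the values here are the ones quoted in this thread.
const Config = {
  GOAL_BUFFER_LENGTH: 30,
  MAX_GOAL_BUFFER_LENGTH: 60,
  GOAL_BUFFER_LENGTH_RATE: 1,
  BACK_BUFFER_LENGTH: 30
};

// the forward buffer goal grows with playback time, capped at the max
const goalBufferLength = (currentTime) =>
  Math.min(
    Config.GOAL_BUFFER_LENGTH + currentTime * Config.GOAL_BUFFER_LENGTH_RATE,
    Config.MAX_GOAL_BUFFER_LENGTH
  );

// total target = forward goal + back buffer:
// at t=0 it's 30 + 30 = 60; after enough playback it's 60 + 30 = 90
```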


hmm. Ok, curious if we should consider tweaking these values, but that's just curiosity... 🙃

@squarebracket (Contributor Author) commented Jul 20, 2018

Indeed! I just used the values that were already in the code.


👍 probably fine for now

*
* @return {Number} Desired forward buffer length in seconds
* @param {Number=} Desired forward buffer length in seconds


@@ -584,6 +589,8 @@ export class MasterPlaylistController extends videojs.EventTarget {
if (!currentPlaylist.endList ||
// For the same reason as LIVE, we ignore the low water line when the VOD
// duration is below the max potential low water line
// TODO: This probably needs changing? Not sure what to change it to though.
// Maybe just this.bufferLowWaterLine() ?


I think this should technically remain the same as the intention was to allow rendition switching on shorter videos (those shorter than the max low water line). Also, we should remove this TODO since we didn't actually change the value

@squarebracket (Contributor Author)

Right, I guess since the bufferLowWaterLine is never used for media that's shorter than the max, a smaller water line due to a QuotaExceededError would be irrelevant.


Yep, exactly

return Math.min(initial + currentTime * rate, max);
}
this.goalBufferLength_ = Math.min(this.goalBufferLength_, value);
this.maxGoalBufferLength_ = Math.min(this.maxGoalBufferLength_, value);



return Math.min(initial + currentTime * rate, max);
}
this.goalBufferLength_ = Math.min(this.goalBufferLength_, value);


if (value === undefined) {
return this.backBufferLength_;
}
this.backBufferLength_ = Math.min(this.backBufferLength_, value);


@squarebracket (Contributor Author) commented Jul 25, 2018

Even if another part of the code tries to make the buffers larger, the Math.min will prevent it. This is what should be happening. While it's true that you'd expect an actual setter to always set the value, this is not an actual property setter.

If your argument is semantic -- that a thing that looks and sounds like a setter should behave like a setter -- then I understand where you're coming from. They could, I suppose, be split into functions like getGoalBufferLength and safelySetGoalBufferLength (or better named versions) to break the semantic paradigm, if you think that would avoid any confusion. However, I don't think changing the actual underlying behaviour would be beneficial -- that safety is important for proper functioning.
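The shrink-only behaviour described above can be sketched as a small accessor; makeClampedLength is a hypothetical helper for illustration, not code from the PR:

```javascript
// Sketch of the clamping accessor pattern discussed above: calling it with a
// value can only shrink the stored length, never grow it. Names illustrative.
const makeClampedLength = (initial) => {
  let stored = initial;

  return (value) => {
    if (value === undefined) {
      return stored;
    }
    // Math.min means a larger value is ignored; only reductions stick
    stored = Math.min(stored, value);
    return stored;
  };
};

const backBufferLength = makeClampedLength(30);
backBufferLength(10); // shrinks to 10
backBufferLength(45); // ignored; stays 10
```

An explicit name like safelySetGoalBufferLength would make the shrink-only behaviour obvious at call sites.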


I believe only goalBufferLength() already exists in master, and backBufferLength() is new in this PR, right? If so, then we could probably rename backBufferLength() to be closer to its purpose?

@@ -1259,6 +1284,6 @@ export class MasterPlaylistController extends videojs.EventTarget {
   const rate = Config.BUFFER_LOW_WATER_LINE_RATE;
   const max = Math.max(initial, Config.MAX_BUFFER_LOW_WATER_LINE);

-  return Math.min(initial + currentTime * rate, max);
+  return Math.min(initial + currentTime * rate, max, this.goalBufferLength());


@ldayananda left a comment

Note to self: got as far as src/source-updater.js

@@ -83,10 +83,12 @@ export const illegalMediaSwitch = (loaderType, startingMedia, newSegmentMedia) =
* The current time of the player
* @param {Number} targetDuration
* The target duration of the current playlist
* @param {Number} backBufferLength


@squarebracket (Contributor Author)

Just a note here that after some discussions on slack, forward buffer should indeed be prioritized. I would say that is done as much as possible here, without having access to the GOP timing info.


Agreed

@@ -95,8 +97,8 @@ export const safeBackBufferTrimTime = (seekable, currentTime, targetDuration) =>
// If we have a seekable range use that as the limit for what can be removed safely


this could potentially leave us with a buffer greater than the backBufferLength passed in though, right?

@squarebracket (Contributor Author)

full disclosure: I have no idea what the seekable range means. I don't think I ever entered this code path when dealing with this PR.


I think the seekable range only really comes into play when you have a live stream with a sliding window (in other words you might be disallowed from seeking to the beginning of the live stream). To be fair, it seems unlikely that you'll have a 4K live stream 😆

@squarebracket (Contributor Author) commented Jul 25, 2018

Ah, ok. I'm not sure it's reasonable to assume there will never be 4K live streams. I guess I should try to test with one.

-// otherwise remove anything older than 30 seconds before the current play head
-removeToTime = currentTime - 30;
+// otherwise remove anything older than the back buffer
+removeToTime = currentTime - backBufferLength;


// Safari doesn't throw QuotaExceededErrors. Instead it just drops frames
// from the buffer. Here we set a 250MB cap, which _should_ prevent any
// dropped frames
this.maxBytes_ = 250000000;


Is there a way to log a warning in Safari too? I suppose it'll be more difficult to tell that the reason why frames are being dropped is due to the browser buffer though 🤔

@squarebracket (Contributor Author)

I don't think it's possible to know that frames are being dropped, but I'd have to research it more.

@@ -372,6 +385,9 @@ export default class SegmentLoader extends videojs.EventTarget {
init_() {
this.state = 'READY';
this.sourceUpdater_ = new SourceUpdater(this.mediaSource_, this.mimeType_);
this.sourceUpdater_.on('adjustBuffers', () => this.adjustBuffers_(true));


if there's a reset method for the segment loaders, we should remove the event handlers there

@squarebracket (Contributor Author)

Oops, good catch; it should also be added to dispose.

@squarebracket (Contributor Author)

Oh actually, I'm not sure the segment loaders are re-initialized after they're reset. In any case, I'll do some checks and make sure the event handlers are being removed when appropriate.

for (i = 0; i < this.segmentMetadataTrack_.cues.length; i++) {
cue = this.segmentMetadataTrack_.cues[i];
for (j = 0; j < buffered.length; j++) {
cmpStart = (start > buffered.start(j)) ? start : buffered.start(j);


if (!this.segmentMetadataTrack_) {
return 0;
}
if (end < start) {


cmpStart = start > buffered.start(j) ? start : buffered.start(j);
cmpEnd = end < buffered.end(j) ? end : buffered.end(j);

if (cue.endTime <= cmpEnd && cue.startTime >= cmpStart - 0.1) {


why cue.startTime >= cmpStart - 0.1? Could this just be cue.startTime > cmpStart?

cmpEnd = buffered.end(j);

if (cue.startTime <= cmpEnd && cue.endTime >= cmpStart) {
keep = true;


couldn't you just call remove on each cue as you decide to remove it? I think the keep flag is a bit confusing

@squarebracket (Contributor Author)

Can't modify an array while iterating over it.


Ah, I think I meant to say couldn't you push a cue onto removeCues at this line?
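The push-then-remove idea from this exchange, sketched with a stub track (the names and shapes here are illustrative, not the PR's code):

```javascript
// Sketch of the collect-then-remove pattern from the exchange above: cues
// are gathered into removeCues while iterating, then removed in a second
// pass, since removing mid-iteration would skip entries.
const removeNonBufferedCues = (track, isBuffered) => {
  const removeCues = [];

  for (let i = 0; i < track.cues.length; i++) {
    const cue = track.cues[i];

    if (!isBuffered(cue.startTime, cue.endTime)) {
      removeCues.push(cue);
    }
  }
  // safe to mutate now that iteration is finished
  removeCues.forEach((cue) => track.removeCue(cue));
  return removeCues.length;
};
```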

}
}
matchingCues.forEach((matchingCue) => {
this.remove(0, matchingCue.endTime);


couldn't you just find the first cue that is outside the buffer and call remove from 0 to that cue instead of repeatedly removing from 0?

@squarebracket (Contributor Author)

Probably.

@squarebracket (Contributor Author)

> they could be getters/setters at the segmentLoader level. That would allow different values for different loader types, though admittedly I don't see an immediate need for that level of flexibility.

Right, ok, I understand what you're getting at now. That was actually the original design I implemented, but I found that I needed to change it to what you see now. That's probably a design decision I should have documented in the PR description.

IIRC, it was something related to switching rates. I may just be misremembering, but I feel like there was something else that caused me to pull that functionality into the master playlist controller.

If you'd like I can try pushing them down to the SegmentLoader. This code did go through several heavy rewrites and redesigns, so it's entirely possible that the requirements at the time are no longer needed.

> I suppose a potential alternative would be to try and match a TimeRanges object returned from buffered().

I'm not sure I follow here? We want to know the amount of time in the buffer when we get a QuotaExceededError so that we can
a) know how much forward buffer (in s) we should have before we stop downloading chunks (because we don't want to put too many bytes into the buffer)
b) pass time-based parameters to MSE functions (like remove(0, currentTime - backBufferLength))

It's annoying, but everything uses time, except for the error which is triggered by bytes.
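One rough way to bridge the two units, purely as an illustration (none of these names are from the PR): estimate an average byte rate from what's tracked in the buffer, then convert a byte target into the seconds that remove() needs.

```javascript
// Rough bytes-to-time bridge for the mismatch described above: MSE's remove()
// takes times, while QuotaExceededError is fundamentally about bytes. All
// names here are illustrative assumptions, not the PR's code.
const secondsToFreeForBytes = (bytesNeeded, bufferedBytes, bufferedSeconds) => {
  if (bufferedBytes === 0 || bufferedSeconds === 0) {
    return 0;
  }
  // average byte rate of what is currently buffered
  const bytesPerSecond = bufferedBytes / bufferedSeconds;

  return bytesNeeded / bytesPerSecond;
};

// e.g. 60s of buffer holding 120MB averages 2MB/s, so freeing 10MB means
// trimming roughly 5s from the back buffer
```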

> What I meant was that the order of execution could be:
>
>   • trim back buffer if needed and adjust other buffers to match
>   • checkBuffer_ to see if we need to download the next segment

True, we could trim the back buffer on every loop, and not only when it looks like the next segment will bust the buffer. That way, checkBuffer_ (mostly) remains a series of yes/no questions that take no actions.

There is kind of an open question to me here, which is how we should prioritize the various goals (and thus how/when we should trim or not download). For instance, say you've got a back buffer goal of 10s, a forward buffer goal of 30s, and GOPs every 2s. Imagine that your forward buffer falls below 30s, but you see that the next segment will bust your buffer. Should you slice a couple of GOPs off your back buffer, even if it brings it down to 6s? Or should you just wait until you can download that segment and not trim off any more than 10s?

This is a part of the PR that I very much want feedback on, so I'm anxious to hear what you think is the best behaviour once you've gone through all the code.

> Sounds like you hit a wall

That is 100% how I would describe it, yes 😅


try {
this.sourceBuffer_.appendBuffer(bytes);
} catch (error) {
this.sourceBuffer_.trigger({


I guess we should double check that the error type is correct here, right? Just in case it isn't actually a QuotaExceededError. Alternatively, I'm not sure we need to catch the error here, since it's already handled in media-sources

@squarebracket (Contributor Author)

This must exist to handle errors thrown by fMP4 streams (where media-sources is not used).

Added the error checking.


handleBufferMaxed_(event) {
videojs.log.warn('SourceBuffer exceeded quota; attempting to recover');
this.deferredCallback_ = this.pendingCallback_;


Naive alternative suggestion (I still need to think this through again once I get to the end of the PR):

Couldn't you pass in the pendingCallback alone (rather than done)? Then call pendingCallback after resolving the failed append? onUpdateendCallback_ would then resolve anything else in the queue?


Or another alternative would be to add a flag to onUpdateendCallback_ that would fail fast (not actually run the rest of the callbacks) while we are resolving the failed append. Then once we successfully resolve that append, we can turn the flag off and allow onUpdateendCallback_ to continue to run normally. This might be a bit more complicated in practice though, since we're not 100% sure our next attempt to append after trimming the back buffer will work.

@squarebracket (Contributor Author)

IIRC simply removing || this.pendingCallback_ from here caused problems, but I'm open to trying it again.

@squarebracket (Contributor Author)

(I said this because your solutions seem to imply that requirement)


Copying over gesinger's suggestion from Slack:

> Since pendingCallback_ should only be referenced internal to this class, it may make sense to have an additional property like waitingToAppend, and the code paths which normally rely on pendingCallback_ for determining whether something is updating/etc. also consider if we’re waitingToAppend or not before basing the status on pendingCallback_ and/or the updateend.

Which, if we follow, would involve adding waitingToAppend to that method.

@@ -3,6 +3,7 @@ import videojs from 'video.js';
import xhrFactory from '../src/xhr';
import Config from '../src/config';
import {
MockTextTrack,


Going to return to reviewing the tests once the code comments are resolved 😄


@@ -260,6 +260,9 @@ export class MasterPlaylistController extends videojs.EventTarget {
this.useCueTags_ = useCueTags;
this.blacklistDuration = blacklistDuration;
this.enableLowInitialPlaylist = enableLowInitialPlaylist;
this.goalBufferLength_ = Config.GOAL_BUFFER_LENGTH;
@squarebracket (Contributor Author)

Refactor to use the properties on Hls (see this)

@ldayananda

So I noticed a few things while testing this:

1️⃣ safeBackBufferTrimTime can return negative numbers 😭

  • the else case can be negative when currentTime is less than backBufferLength.

2️⃣ Should timeupdateHandler be turned off once we call remove? That's the earliest point at which we can do it, and we don't want to run remove multiple times for the same values. Currently it's possible that we'll call remove while a remove is in process.

3️⃣ It would be nice to have helpers in MPC for forwardBufferLength (separate from goalBL) and backBufferLength that just tell you exactly how many seconds are in the buffer ahead of and behind the playhead (each returns a static value and is a general convenience method)

forwardBufferLength = () => buffered.end(last) - currentTime
backBufferLength = () => currentTime - buffered.start(first)

4️⃣ A couple different observations on adjustBuffers():

  • For the usage of adjustBuffers in checkBuffer_ I think that this is actually not useful as trimBackBuffer should be ensuring that the forwardBuffer is prioritized. If it is not doing that, then we have an issue with how we are calculating safeBackBufferTrimTime. It's likely that we are "seeing more back buffer than forward buffer" because we are using the old backBuffered time and we should wait for trimming to be complete via updateend before checking the buffered times again anyway.
  • When we adjustBuffers we don't actually look at how much is actually in the backBuffer. As far as I understand, backBufferLength is an actual value in the buffer rather than a goal so this doesn't seem correct. backBufferLength should reflect the time in the buffer behind the playhead.
  • Having the adjustBuffers event seems undesirable since I believe we want to adjust buffers after removing the appropriate amount of back buffer? Either way, having adjustBuffers() be called more clearly either before or after handleBufferMaxed would be good.

5️⃣ For minBufferedBytes at least, it seems to be used in 2 ways:
i. to calculate the min possible bytes in the buffer
ii. to figure out how many bytes might be in a range of buffered time: https://github.com/squarebracket/videojs-contrib-hls/blob/79f53fed214a241df7e02801fb52deb1ca662ce1/src/segment-loader.js#L1369.

I think it would be much cleaner if these two purposes were split out into two separate methods. minBufferedBytes would be for purpose i., and another method would calculate the bytes for a time range. Maybe bytesInTimeRange(start, end).

6️⃣ This series of things happens twice in handleBufferMaxed: https://github.com/squarebracket/videojs-contrib-hls/blob/79f53fed214a241df7e02801fb52deb1ca662ce1/src/segment-loader.js#L1387-L1394

safeBackBufferTrimTime
minBufferedBytes
if you can trim off more than the segment that caused an error then
  removeWholeSegments

It would be great to pull that out into another method to make it easier to read and debug.


After looking at the code for a bit, I think that the code has portions of a couple different solutions in it (undefined browser behavior is fun!) and they just need to be unified. Here's a bit of pseudocode with a slightly different alternative to how you have things set up:

When a QuotaExceededError happens:

  • Make note of the full size of the buffer. This is the whole buffer length, not the forward buffer length or back buffer length.
  • Notify the master playlist controller that the error happened, with the full number of bytes in the buffer (maxNumberOfBytes) and seconds in the buffer (maxNumberOfSeconds) so we can use this to determine the new caps for the buffer.
  • Trim off as much as possible from the buffer to append the first offending segment. If that's not possible, error.
  • if we were able to append the segment, continue with notifying the MPC of the maxNumberOfBytes and maxNumberOfSeconds.
  • use the maxNumberOfSeconds to determine whether we need to set a new goalBufferLength value and maxGoalBufferLength value (prioritizing having more seconds for the forward buffer than the back buffer). We only need to set a new one if the maxNumberOfSeconds is very close to or less than the current goalBufferLength value. However, we could make the goalBufferLength method return a static value of the config goalBufferLength value to ensure we don't grow over that value.

When checking the buffer to see whether to download the next segment:

  • if we are approaching the maxNumberOfBytes, then we need to trim more from the buffer and wait to download a new segment.
  • don't change the goalBufferLength after this if the maxNumberOfSeconds does not change.
  • if we are unable to remove enough from the buffer to download a new segment, we must error, we cannot recover as the segment sizes are too large.
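The error-handling flow above might be sketched roughly as follows. Every name here (onQuotaExceeded, minBufferedBytes, trimBackBuffer, appendSegment, the 'buffermaxed' event) is hypothetical, not the PR's actual API:

```javascript
// Rough sketch of the suggested recovery flow; all names are illustrative.
function onQuotaExceeded(loader, segmentBytes) {
  const buffered = loader.buffered();
  // Note the full buffer extent at the moment of the error, not just the
  // forward or back portion.
  const maxNumberOfSeconds =
    buffered.end(buffered.length - 1) - buffered.start(0);
  const maxNumberOfBytes = loader.minBufferedBytes();

  // Trim as much back buffer as safely possible, then retry the append.
  const freedBytes = loader.trimBackBuffer();

  if (freedBytes < segmentBytes) {
    // Not enough room even after trimming: unrecoverable.
    loader.error('cannot free enough space to append the segment');
    return;
  }
  loader.appendSegment();

  // Tell the MPC the new hard caps so it can shrink its buffer goals.
  loader.trigger('buffermaxed', { maxNumberOfBytes, maxNumberOfSeconds });
}
```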

@squarebracket
Contributor Author

squarebracket commented Jul 31, 2018

1️⃣ Fixed.

2️⃣ Good question. It's probably fine to remove the handler when it thinks it can append, provided the calculations are correct 100% of the time and the browser isn't engaged in shenanigans. You're right that it's possible remove will be called twice, in which case an InvalidStateError would be thrown.

3️⃣ Did we conclusively decide on renames? backBufferLength is already a function on the MPC.

4️⃣ I think you're misunderstanding the point of adjusting the buffers. It is unrelated to the data actually buffered; it is simply about setting the desired lengths of the forward/back buffers. Trimming the back buffer simply lops off data. In order to reduce priority on the back buffer and increase priority on the forward buffer, the respective goal lengths must change. Or I suppose I should say, that's the way I understand priority: if you have a fixed amount of time that you can buffer (which is the case after a QuotaExceededError), then increasing priority on the forward buffer means allocating more of that overall fixed time to the forward buffer.

At first glance, it does appear that I could perhaps be using the forward buffer here instead of secondsInBuffer. But I will need to meditate on how that will change things.

It's likely that we are "seeing more back buffer than forward buffer" because we are using the old backBuffered time and we should wait for trimming to be complete via updateend before checking the buffered times again anyway.

I find this extremely unlikely. In my experience the problem comes from not being able to set the back buffer less than a target duration (which is of course done since we aren't slicing on GOPs).
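For context, a hedged sketch of what a safeBackBufferTrimTime that takes the back buffer length as an argument (rather than a hard-coded 30s) might look like; the PR's real implementation may differ. The key constraint is the one mentioned above: never trim within one target duration of the playhead.

```javascript
// Illustrative sketch only; the real safeBackBufferTrimTime may differ.
function safeBackBufferTrimTime(seekable, currentTime, targetDuration, backBufferLength) {
  let removeToTime;

  if (seekable.length && seekable.start(0) > 0 && seekable.start(0) < currentTime) {
    // Live stream: content before the seekable window can always go.
    removeToTime = seekable.start(0);
  } else {
    removeToTime = currentTime - backBufferLength;
  }
  // Keep at least one target duration of buffer behind the playhead.
  return Math.min(removeToTime, currentTime - targetDuration);
}
```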

As far as I understand, backBufferLength is an actual value in the buffer rather than a goal

This is incorrect; it is the equivalent of goalBufferLength for the amount to buffer behind the playhead.

backBufferLength should reflect the time in the buffer behind the playhead.

No, semantically it should be the target amount of back buffer. In the latest code it is called backBufferGoal.

Having the adjustBuffers event seems undesirable since I believe we want to adjust buffers after removing the appropriate amount of back buffer?

No, definitely not this. To simplify the explanation, assume a fixed bitrate stream. When you get the QuotaExceededError, you know that staying within currentlyBufferedSeconds of total (fwd + back) buffer will mean you will never get another QuotaExceededError, because it's the next append that pushed you over the top. This is a good conservative estimate, without being too conservative.

If you trimmed your back buffer before adjusting the buffers, that means you'd have smaller buffer sizes than you can actually handle. In this case, you'd be going too conservative, which means a greater likelihood of getting stuck.

Either way, having adjustBuffers() be called more clearly either before or after handleBufferMaxed would be good.

That is actually exactly what is done, see source-updater.js. It triggers adjustBuffers and then triggers the event (which is bufferMaxed).

5️⃣ No, it is only ever used for those two purposes described. It calculates the lower limit on the number of bytes buffered within the range start, end. This is because the browser may yank bytes from the buffer, so you always have to make sure your stored values haven't changed without you knowing. See this note in the spec.

However, upon closer reading of the spec, it seems to say that coded frame eviction will take place only after appending to the SourceBuffer, so we could potentially run removeNonBufferedSegmentCues on the append handler, and then remove all the buffer checking nonsense from minBufferedBytes and maxBufferedBytes, which would be fucking fantastic.

6️⃣ The section you highlighted doesn't repeat. That first call to safeBackBufferTrimTime is actually inside an else block.

Trixy hobbitses, I know.


My questions/comments about your approach:

  • So you're saying the SourceUpdater would trigger on the SegmentLoader which would trigger on the MPC? Also, you don't need to pass those values since they're accessible by the MPC, but that's just some pedantry.
  • Trimming means you have now invalidated your maxNumberOfBytes and maxNumberOfSeconds. Always adjust your maxes before trimming.

use the maxNumberOfSeconds to determine whether we need to set a new goalBufferLength value and maxGoalBufferLength value

When would this not be the case? We're never going to buffer more than forwardBufferGoal + backBufferGoal. So we'd never hit a QuotaExceededError where we don't need to adjust the buffer sizes.

prioritizing having more seconds for the forward buffer than the back buffer

How would you do this? A sliding scale like GBL? Currently I'm doing newBackBufferGoal = 1/3 * maxNumberOfSeconds and maxForwardBufferGoal = 2/3 * maxNumberOfSeconds.
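The split currently in use can be sketched as a tiny helper (the function name is illustrative, not the PR's API):

```javascript
// After a QuotaExceededError the total safely-bufferable seconds are divided,
// weighting the forward buffer twice as heavily as the back buffer.
function adjustBufferGoals(maxNumberOfSeconds) {
  return {
    newBackBufferGoal: (1 / 3) * maxNumberOfSeconds,
    maxForwardBufferGoal: (2 / 3) * maxNumberOfSeconds
  };
}
```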

We only need to set a new one if the maxNumberOfSeconds is very close to or less than the current goalBufferLength value.

As explained above, it's impossible to have a QuotaExceededError and have more time in the buffer than the GBL, since it determines scheduling of downloads.

don't change the goalBufferLength after this if the maxNumberOfSeconds does not change.

Why would trimming the back buffer cause maxNumberOfSeconds to change? Unless you're saying, just remove the buffer readjustment that I am doing.

if we are unable to remove enough from the buffer to download a new segment, we must error, we cannot recover as the segment sizes are too large.

We only know we're screwed for sure if we've stalled AND the next segment will bust the buffer, since any advancement of the playhead may result in us being able to trim some more back buffer. We do know, though, that if we try to readjust the back buffer, and we're already at a target duration, then we're in danger.

@squarebracket
Contributor Author

Updated some code, haven't updated tests.


if (setByteLimit) {
- loggerMsg += '\n\tbyte limit: ' + this.maxBytes_ + ' => ' + bytesInBuffer;
+ loggerMsg += `\n\tbyte limit: ${this.maxBytes_} => ${bytesInBuffer}`;


I wonder if this log would be a little inaccurate if we end up staying at the original maxBytes_ value

* Depending on what caused the QuotaExceededError, one of two things will happen.
* If it happened when appending the first segment of a stream, it will be split
* into smaller pieces and appended immediately. If it happened in the regular
* course of playing a feed, it will wait until there's sufficient room in the


Was there a reason for using different strategies here? If the error happens after the first segment, couldn't we remove from the back buffer and then append small pieces of the next segment instead of waiting for enough space to append the whole segment?
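The chunked-append strategy for the first segment can be sketched as a slicing helper; the PR description mentions 5MB chunks, and the helper name here is illustrative:

```javascript
// Slice a segment's bytes into fixed-size pieces so each appendBuffer call
// stays under the browser's quota. 5MB matches the chunk size mentioned in
// the PR description.
const CHUNK_SIZE = 5 * 1024 * 1024;

function sliceIntoChunks(bytes, chunkSize = CHUNK_SIZE) {
  const chunks = [];

  for (let offset = 0; offset < bytes.byteLength; offset += chunkSize) {
    // subarray creates a view over the same memory, so nothing is copied.
    chunks.push(bytes.subarray(offset, offset + chunkSize));
  }
  return chunks;
}
```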

let sourceBuffer = event.target;
// Whe start time (in seconds) of the currently-playing GOP at the time the


The?

@ldayananda

2️⃣ Do you know if an InvalidStateError would be bubbled up to the player?

3️⃣ I believe backBufferLength is new in this PR. From talking to the rest of the contributors, we should be able to rename goalBufferLength() as well since it's an internal API. It was suggested that the configuration values GOAL_BUFFER_LENGTH and MAX_GOAL_BUFFER_LENGTH not be changed for backwards compatibility. Maybe we could add deprecation warnings to the getter/setters for those values and create an issue to remind us to change them on the next major (this can be moved to VHS once contrib-hls is merged in).

4️⃣ In the case where we have a fixed buffer length (after a QuotaExceededError) I would expect that we could remove enough from the back buffer until we get larger sized segments and cannot safely remove more from the back buffer. Is that your understanding as well? Perhaps the buffer lengths need to be adjusted immediately after we detect this scenario then, to group the logic in one place.

I find this extremely unlikely. In my experience the problem comes from not being able to set the back buffer less than a target duration (which is of course done since we aren't slicing on GOPs).

Would be cool to get a usage event for this case.

That is actually exactly what is done, see source-updater.js. It triggers adjustBuffers then it triggers event (which is bufferMaxed).

What I actually meant was to just call adjustBuffers() in handleBufferMaxed_. The event seems to not be useful outside the bufferMaxed use case, unless I missed another scenario.

5️⃣ I agree that this portion of the spec seems to imply that the frame eviction will occur during the execution of an append on the source buffer. If we can remove some of the buffer bounds modifications, that would be great!

If not, I think it would be clearer if this was removed and the contract for the function is:

minBufferedBytes(start = 0, end = Infinity) {

which would set the default values, or alternatively replace with:

    if (start === undefined) {
      start = 0;
    }
    if (end === undefined) {
      end = Infinity;
    }

just so it's absolutely clear what case is being handled.

6️⃣ I was suggesting restructuring that bit of code so you could reuse a function:

function trimSegmentsForSeconds(removeToTime, bytesTried) {
  const bytesTrimmable = this.minBufferedBytes(0, removeToTime);

  if (bytesTrimmable >= bytesTried) {
    this.removeWholeSegments(removeToTime);
    return true;
  }
  return false;
}

and then restructure this as:

let removeToTime;

if (safeRemovePoint) {
  removeToTime = safeRemovePoint;
} else {
  removeToTime = safeBackBufferTrimTime(this.seekable_(),
                                        this.currentTime_(),
                                        this.playlist_.targetDuration || 10,
                                        this.backBufferLength_());
}

const removedWholeSegment = trimSegmentsForSeconds(removeToTime, bytesTried);

if (!removedWholeSegment) {
  let **passedSegmentBoundary** = this.currentTime_() + this.playlist_.targetDuration;
  let buffered = this.buffered_();
  let **tooCloseToBufferEnd** = buffered.end(buffered.length - 1) - 2;

  timeupdateHandler = () => {
    let currentTime = this.currentTime_();

    if (currentTime < passedSegmentBoundary) {
      return;
    } else if (currentTime > tooCloseToBufferEnd) {
      this.removeWholeSegments(currentTime - this.playlist_.targetDuration);
      return;
    }
    removeToTime = safeBackBufferTrimTime(this.seekable_(),
                                          currentTime,
                                          this.playlist_.targetDuration || 10,
                                          this.backBufferLength_());
    trimSegmentsForSeconds(removeToTime, bytesTried);
  };
}
which makes the intention of the code a little clearer. (the **whatever** is just because I changed those variable names)


I think we discussed the majority of differences between my suggested algorithm and the PR out-of-band, so I'll just address some of the ones we didn't.

Why would trimming the back buffer cause maxNumberOfSeconds to change? Unless you're saying, just remove the buffer readjustment that I am doing.

I think, as I mentioned earlier in this comment, the specific case that would actually require changing the goal values after making the buffer goal a static value (after a QuotaExceededError) would be if the segment sizes/quality increase to a point where the forward goal buffer value has to decrease. I think I'm not 100% convinced yet that a back buffer goal is needed as a separate concept from "remove x amount from the buffer".

That would require some redesigning of the solution, however, and retesting so it's mostly up to you. The change you suggested in your last comment, continuing to use forwardBufferGoal as the newForwardGoal and then reducing the backBufferGoal based on what the total available buffer is, is similar to what I was trying to go for.

We only know we're screwed for sure if we've stalled AND the next segment will bust the buffer, since any advancement of the playhead may result in us being able to trim some more back buffer. We do know, though, that if we try to readjust the back buffer, and we're already at a target duration, then we're in danger.

If we aren't able to download the next segment and we're close to running out of buffer, haven't we stalled? For example if we can only fit one segment in the buffer because the segment sizes are that large.


Something I didn't express very clearly in my last comment was that there seems to be 3 strategies employed at the moment in this PR:

  • i. Break up a segment into small chunks and append the chunks. Only used as a workaround for Firefox throwing QuotaExceededErrors when the buffer won't actually be busted by the first segment.
  • ii. Listen to QuotaExceededErrors and remove from the back buffer to allow the segment that caused the error to be appended.
  • iii. Try and predict if the buffer will be busted and pre-trim the back buffer to avoid another QuotaExceededError.

I wonder if we could simplify the code by whittling this down into 2 strategies, perhaps i and ii or ii and iii (where iii would be used to prevent the Firefox error).

@tonowoe

tonowoe commented Nov 11, 2018

Hopefully this will be looked into more in the future. I can't play 1080p videos. After 30 minutes of playing a video I get an error:

video.js:128 VIDEOJS: ERROR: (CODE:-3 undefined) Failed to execute 'appendBuffer' on 'SourceBuffer': The SourceBuffer is full, and cannot free space to append additional buffers. MediaError
logByType @ video.js:128

.ts files about 20-30 MB each.

@forbesjo
Contributor

Thank you for your PR, but we will not be accepting new changes for this repo, which will be archived very soon. I would advise you to open your PR against the next iteration of this project at https://github.com/videojs/http-streaming.
