
Separate Job Creation Step #429

Closed
j0sh opened this issue May 14, 2018 · 7 comments

j0sh (Collaborator) commented May 14, 2018

Right now we create a job automatically when a RTMP stream comes in. Should we separate out the step of creating a job, and make it an explicit operation that the user has to do?

The current behavior is nice in that it's one fewer step for the user to engage in before starting a broadcast. However it has a few shortcomings:

  • We are spending Ether without confirmation. We probably want to be more explicit about this. We don't want users to be afraid of pointing OBS at Livepeer because it'll accidentally cost them money.
  • Jobs won't be immediately available until confirmed on-chain, and the current flow doesn't really lend itself to a nice UX to indicate that pending state.
  • There is no way to create multiple jobs (e.g., at different price levels for different profiles) and choose which job a stream should use.

Separating out the job from the incoming stream would also allow broadcasters to configure and confirm a job in advance, which might help for large or planned events, or for features such as delayed broadcasting #316. There would be less chance of, e.g., a tx failure just when an event is supposed to start.

A manual job creation step is less of a stretch when considering the steps the user already has to take before creating the job: depositing ETH, setting a segment price, choosing ABR preferences, confirming the gas price, etc.

A livepeer_cli wizard could guide users through this process, but the user doesn't need direct access to a running Livepeer node at all, just the Eth keys used for the job. So this could also be done with a really nice web3 UI. The possibilities are even richer when considering features such as livepeer/protocol#203 .

yondonfu (Member) commented

I am on board with this, particularly considering that many of the recent issues with broadcasting from nodes on Rinkeby have involved the job transaction being confirmed while the stream remained inaccessible due to networking issues (often inaccessible bootnodes). Ideally, you would want to know that your stream is accessible to the rest of the network before you spend money creating an on-chain job to have it transcoded.

I also like that separating on-chain job creation from stream creation distinguishes the desire to have your stream transcoded from the desire to create a stream at all (right now those intents are combined, and as a user you declare both with a single action).

ericxtang (Member) commented

I like it. What about making this configurable (and default to separate job creation)?

  • Create a flag `-autoJobCreation`, or
  • Accept a `?createjob=true` flag in the RTMP request

jozanza (Contributor) commented May 15, 2018

My only question here is what parameters we allow users to set when creating jobs. Allowing users to set `streamId` and/or `endBlock` might be desirable at some point, but it seems like it could cause issues in our current networking protocol.

dob (Member) commented May 16, 2018

I have a couple reactions to this:

  1. It already is separate: it's a txn that needs to be submitted, and our node implementation just submits it automatically. So I'm on board with a flag to enable or disable this, but as far as the protocol is concerned the steps are already separate, and the client implementation could be flexible: either help the user by doing this automatically, or prevent it to save them accidental loss of ETH.

  2. I think this feature should have a dependency on Broadcaster-Transcoder Networking v2 #430, and essentially being able to test that your stream is accessible before creating the job (as Yondon mentioned).

  3. Perhaps the right way to think about it is to set up jobs for the particular profiles/prices that you want, and then the stream interface allows you to choose which job you would like performed on the incoming RTMP stream. The one thing I wouldn't really want to sacrifice is the ease of "run a livepeer node -> stream to it -> access the stream." If we introduce a lot of overhead in terms of having to configure/submit/confirm jobs first, and then manually choose/identify one for each stream you send, we lose a lot of usability for someone just trying to test the network by streaming to the URL of their node.

j0sh (Collaborator, Author) commented May 16, 2018

Create a flag -autoJobCreation or

Agreed, this would also help for testing/devenv. We could also trigger jobs via a special RTMP connection string, but I'd prefer to also do that in conjunction with a command-line flag enabling the feature, to prevent CSRF-type attacks.

My only question here is what parameters do we allow users to set when creating jobs?

Currently, we set the segment price, the transcoding profiles, and the deposit amount. We could certainly add other params, including `endBlock`. Setting `StreamId` is interesting -- we'll have to think more about the implications of that (more below).

It already is separate.

Yes, there's no real coupling as far as the smart-contract protocol goes, but it is a workflow change on the node with significant implications.

I think this feature should have a dependency on #430,

Correct. This proposal has a hard dependency on delayed broadcasting, which in turn now depends on #430.

Perhaps the right way to think about it is to set up jobs for the particular profiles/prices that you want, and then the stream interface allows you to choose which job you would like performed on the incoming RTMP stream.

I was going to open a separate ticket with the implementation details, but this is how I was thinking job selection would work: the ingest would take a stream along the lines of `rtmp://localhost/stream/<streamID/jobID>` and pick up that job. For auto-generated jobs, we could have separate endpoints (e.g., `rtmp://localhost/stream/new`), and so forth.

@yondonfu 's point about trying to view/run a stream before submitting a job is a good one, and segues into @jozanza 's suggestion of a configurable StreamID. While we can already view off-chain streams via the p2p relay network, I think we have to be a little careful about how to generate and identify the stream without inadvertent side effects (e.g., aliasing an existing stream). Now that it seems we're all on board with the high-level workflow changes, we can figure out these details.

f1l1b0x commented May 16, 2018

As a first step, it might be enough to just keep the RTMP `<streamID/jobID>` alive for a time that allows recovery of the RTMP source.

If you started a stream and only lost the connection for a moment, there is no need to create a new on-chain job description; a couple of moments later the source might come back on and continue.

My call would be to make the broadcaster wait some configurable amount of time before it times out a stream.

j0sh (Collaborator, Author) commented Jan 12, 2019

We don't have jobs anymore with Streamflow.

8 participants