**Is your feature request related to a problem? Please describe.**
Currently, after transcoding and adding to the manifest, nothing further is done with transcoded segments. There is no way to retrieve source segments by job or organize transcoded segments that have been paid for.
**Describe the solution you'd like**
This is clearly open to design and discussion, but one approach is to group all of a job's segments by folder. For example, we could store segments using the following folder structure:

```
~/.lpData/media/jobID/{source,profileName}/N.ts
```
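For illustration only, here is a minimal Go sketch of writing a segment under that layout. The `SaveSegment` helper and its `dataDir`, `rendition`, and `seqNo` parameters are hypothetical names, not existing go-livepeer APIs:

```go
package media

import (
	"fmt"
	"os"
	"path/filepath"
)

// SaveSegment writes segment data to
// <dataDir>/media/<jobID>/<rendition>/<seqNo>.ts, creating directories as
// needed. rendition is either "source" or a transcoding profile name.
func SaveSegment(dataDir, jobID, rendition string, seqNo uint64, data []byte) (string, error) {
	dir := filepath.Join(dataDir, "media", jobID, rendition)
	if err := os.MkdirAll(dir, 0755); err != nil {
		return "", err
	}
	name := filepath.Join(dir, fmt.Sprintf("%d.ts", seqNo))
	if err := os.WriteFile(name, data, 0644); err != nil {
		return "", err
	}
	return name, nil
}
```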
This might also make it a bit easier to run other types of local analysis, such as determining which transcodes succeeded, and to review the content.
Also, it might be useful to have some knobs to manage this content, e.g. to remove old jobs to free up disk space. But perhaps that would be better done with a separate tool, e.g. one that builds off the CLI API and/or the sqlite data. Also see #553 and #352.
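As a rough sketch of what such a cleanup knob could look like, assuming the folder layout above: the `CleanupJobs` helper and `maxAge` parameter below are hypothetical, and a real tool would more likely key off the sqlite job records or the CLI API rather than directory mtimes:

```go
package media

import (
	"os"
	"path/filepath"
	"time"
)

// CleanupJobs removes <dataDir>/media/<jobID> trees that have not been
// modified within maxAge, using the directory mtime as a rough proxy for
// when the job last received a segment.
func CleanupJobs(dataDir string, maxAge time.Duration) error {
	mediaDir := filepath.Join(dataDir, "media")
	entries, err := os.ReadDir(mediaDir)
	if err != nil {
		return err
	}
	cutoff := time.Now().Add(-maxAge)
	for _, e := range entries {
		if !e.IsDir() {
			continue
		}
		info, err := e.Info()
		if err != nil {
			return err
		}
		if info.ModTime().Before(cutoff) {
			if err := os.RemoveAll(filepath.Join(mediaDir, e.Name())); err != nil {
				return err
			}
		}
	}
	return nil
}
```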