Streamlink, eplus.jp and Object Storage

DMCA is coming...?

What does this Docker image do

  • Download live streams or VOD from eplus and other websites via Streamlink or yt-dlp.
  • Upload the video file to S3-compatible object storage via S3cmd, or to an Azure Storage container via the Azure CLI.

Details

Storage requirement

For a 4-hour live event, an MPEG-TS recording at the best quality is about 9.88 GB (9.2 GiB).
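That works out to roughly 9.88 GB ÷ (4 × 3600 s) ≈ 0.69 MB/s, i.e. an average bitrate of about 5.5 Mbit/s, so size your volume or bucket accordingly.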

Downloader support

The yt-dlp support is experimental.

Output

The output file is in ".ts" format. I believe your media player is smart enough to detect the actual codecs.

The file is located in the "/SL-downloads" directory inside the container. You can access those files by mounting a volume at that directory before creating the container (otherwise you may have to play with docker cp or anonymous volumes).
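If you did not mount a volume, the recordings can still be copied out of the container afterwards, for example (where ${container} is a placeholder for your container's name or ID):

$ docker cp "${container}:/SL-downloads/." ./SL-downloads/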

You will see some intermediate files first. They are eventually renamed to their final names:

# template:
${datetime}.${OUTPUT_FILENAME_BASE}.ts # full
${OUTPUT_FILENAME_BASE}.ts             # NO_AUTO_PREFIX_DATETIME

# example:
'20220605T040302Z.name-1st-Otoyk-day0.ts' # full
'name-1st-Otoyk-day0.ts'                  # NO_AUTO_PREFIX_DATETIME

Final files:

# template:
${datetime}.${OUTPUT_FILENAME_BASE}.${size}.${md5}.ts # full
${OUTPUT_FILENAME_BASE}.${size}.${md5}.ts             # NO_AUTO_PREFIX_DATETIME
${datetime}.${OUTPUT_FILENAME_BASE}.${md5}.ts         # NO_AUTO_FILESIZE
${datetime}.${OUTPUT_FILENAME_BASE}.${size}.ts        # NO_AUTO_MD5

# example:
'20220605T040302Z.name-1st-Otoyk-day0.123456789.0123456789abcdef0123456789abcdef.ts' # full
'name-1st-Otoyk-day0.123456789.0123456789abcdef0123456789abcdef.ts'                  # NO_AUTO_PREFIX_DATETIME
'20220605T040302Z.name-1st-Otoyk-day0.0123456789abcdef0123456789abcdef.ts'           # NO_AUTO_FILESIZE
'20220605T040302Z.name-1st-Otoyk-day0.123456789.ts'                                  # NO_AUTO_MD5
Variable              Description
OUTPUT_FILENAME_BASE  Base file name (environment variable).
datetime              Datetime in UTC, ISO 8601 basic format:
                      strftime(${datetime}, 17, "%Y%m%dT%H%M%SZ", gmtime(&(time(0))))
size                  File size in bytes:
                      du -b "$filepath"
md5                   File hash (MD5):
                      md5sum "$filepath"
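For illustration only, here is a minimal shell sketch (not the image's actual script) of how a "full" final name could be assembled with the same commands the table references:

datetime="$(date -u +%Y%m%dT%H%M%SZ)"         # UTC, ISO 8601 basic format
filepath="/SL-downloads/${datetime}.${OUTPUT_FILENAME_BASE}.ts"   # intermediate file
size="$(du -b "$filepath" | cut -f1)"         # file size in bytes
md5="$(md5sum "$filepath" | cut -d' ' -f1)"   # 32-character hex digest
mv "$filepath" "/SL-downloads/${datetime}.${OUTPUT_FILENAME_BASE}.${size}.${md5}.ts"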

Prepare your object storage

AWS S3-compatible preparation (simpler)

Create a bucket at your S3-compatible storage provider.

Environment variables:

Name                   Description
S3_BUCKET              URL in s3://bucket-name/dir-name/ style.
S3_HOSTNAME            For example:
                       s3-eu-west-1.amazonaws.com
                       nyc3.digitaloceanspaces.com
                       us-east-1.linodeobjects.com
                       ewr1.vultrobjects.com
                       objects-us-east-1.dream.io
AWS_ACCESS_KEY_ID      The access key.
AWS_SECRET_ACCESS_KEY  The secret key.
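To sanity-check the credentials from your own machine, an upload along these lines should work (a sketch using standard s3cmd options; the exact invocation inside the image may differ):

$ s3cmd put ./test.ts "${S3_BUCKET}" \
    --host="${S3_HOSTNAME}" \
    --host-bucket="%(bucket)s.${S3_HOSTNAME}" \
    --access_key="${AWS_ACCESS_KEY_ID}" \
    --secret_key="${AWS_SECRET_ACCESS_KEY}"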

Azure preparation

Create a service principal on Azure.

With azure-cli:

$ az login --use-device-code
$ az ad sp create-for-rbac --role 'Contributor' --name "${name}" --scopes "/subscriptions/${subscription}/resourceGroups/${resourceGroup}/providers/Microsoft.Storage/storageAccounts/${AZURE_STORAGE_ACCOUNT}"

Environment variables:

Name                       Description
AZ_SP_APPID                Application (client) ID
AZ_SP_PASSWORD             Client secret
AZ_SP_TENANT               Directory (tenant) ID
AZURE_STORAGE_ACCOUNT      Azure storage account name
AZ_STORAGE_CONTAINER_NAME  Storage container name
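For a quick manual test, the service principal can log in and upload a blob roughly like this (a sketch with standard azure-cli commands; the exact invocation inside the image may differ):

$ az login --service-principal \
    --username "${AZ_SP_APPID}" \
    --password "${AZ_SP_PASSWORD}" \
    --tenant "${AZ_SP_TENANT}"
$ az storage blob upload \
    --account-name "${AZURE_STORAGE_ACCOUNT}" \
    --container-name "${AZ_STORAGE_CONTAINER_NAME}" \
    --name test.ts \
    --file ./test.ts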

Launch the container

Docker

Install Docker Engine and use the docker compose command (Docker Compose V2).

Create a service:

# docker-compose.yml

services:
  sl:
    image: docker.io/pzhlkj6612/streamlink-eplus_jp-object_storage
    volumes:
      - ./SL-downloads:/SL-downloads:rw
      - ./YTDLP:/YTDLP:rw  # edit "cookies.txt" in it
    environment:
      # base file name; a random one is used if left empty.
      - OUTPUT_FILENAME_BASE=

      # output file name configuration
      - NO_AUTO_PREFIX_DATETIME=
      - NO_AUTO_FILESIZE=
      - NO_AUTO_MD5=

      # Input control
      # only one input is allowed; an existing file has the highest priority.

      # file
      # does NOT imply "NO_AUTO_PREFIX_DATETIME", "NO_AUTO_FILESIZE" and "NO_AUTO_MD5".
      # does imply "NO_DOWNLOAD_TS".
      - USE_EXISTING_MPEG_TS_VIDEO_FILE=

      # proxy for streamlink and yt-dlp
      - HTTPS_PROXY=http://127.0.0.1:1926  # empty by default.

      # streamlink
      - STREAMLINK_STREAM_URL=           # enable streamlink.
      - STREAMLINK_STREAM_QUALITY=       # "best" by default.
      - STREAMLINK_OPTIONS=              # options passed into streamlink after default ones; see https://streamlink.github.io/cli.html

      # yt-dlp
      - YTDLP_STREAM_URL=      # enable yt-dlp.
      - YTDLP_OPTIONS=         # options passed into yt-dlp after default ones; see https://github.com/yt-dlp/yt-dlp

      # direct download
      - VIDEO_FILE_URL=  # download a video file.

      # ffmpeg
      - GENERATE_STILL_IMAGE_MPEG_TS=  # generate a still image MPEG-TS video.

      # Output control
      # multiple outputs supported.

      # file
      - NO_DOWNLOAD_TS=  # do not save the video file. it may not be an MPEG-TS file, but an MKV one.

      # rtmp
      - RTMP_TARGET_URL=     # enable RTMP streaming.
      - RTMP_FFMPEG_USE_AAC_ENCODING=      # enable audio re-encoding, otherwise just copy the stream.
      - RTMP_FFMPEG_USE_LIBX264_ENCODING=  # enable video re-encoding, otherwise just copy the stream.
      - RTMP_FFMPEG_CRF=     # CRF value for video re-encoding, 23 by default, see https://trac.ffmpeg.org/wiki/Encode/H.264#a1.ChooseaCRFvalue .

      # uploading control

      - ENABLE_S3=           # enable s3cmd.
      - ENABLE_AZURE=        # enable azure-cli.

      # s3cmd
      - AWS_ACCESS_KEY_ID=
      - AWS_SECRET_ACCESS_KEY=
      - S3_BUCKET=
      - S3_HOSTNAME=
      - S3CMD_MULTIPART_CHUNK_SIZE_MB=  # "--multipart-chunk-size-mb", 15 by default.

      # azure-cli
      - AZURE_STORAGE_ACCOUNT=
      - AZ_SP_APPID=
      - AZ_SP_PASSWORD=
      - AZ_SP_TENANT=
      - AZ_STORAGE_CONTAINER_NAME=

Run it:

$ docker compose up sl
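To run it detached and follow the log instead:

$ docker compose up -d sl
$ docker compose logs -f sl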

For developers who want to build the image themselves:

$ docker build --tag ${tag} .

Podman

Install Podman. Create a pod and a "hostPath" volume:

# pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: sl
spec:
  volumes:
    - name: SL-downloads
      hostPath:
        path: ./SL-downloads
        type: Directory
  restartPolicy: Never
  containers:
    - name: sl
      image: docker.io/pzhlkj6612/streamlink-eplus_jp-object_storage
      resources: {}
      volumeMounts:
        - mountPath: /SL-downloads
          name: SL-downloads
      env:
        # Please refer to the "Docker" section.
        - name: # ...
          value: # "..."

Finally, play it:

$ podman play kube ./pod.yaml  # 開演 ("curtain up")!
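After the show, the pod (named sl after metadata.name) can be stopped and removed:

$ podman pod stop sl
$ podman pod rm sl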

For developers who want to build the image themselves:

$ podman build --tag ${tag} .

Credits