hsiangjenli/code2video

Code2Video

Code2Video is a simple project that converts code snippets into a video (.mp4). It uses the carbon-now-cli tool to generate images from code, and ffmpeg to assemble those images into a video.

Overview

This project is divided into four main components, each responsible for a specific task in the conversion process:

  1. code2partial.py:

    • Purpose: Splits the input code into smaller parts or segments, simulating the effect of typing out the code step-by-step. This is particularly useful for creating videos that visually mimic the process of code being written or revealed gradually.
    • Input: A code file (e.g., main.py).
    • Output: A set of smaller code files, each containing a portion of the original code, which can then be sequentially converted into images.
  2. partial2image.py:

    • Purpose: Converts each code segment into an image using the carbon-now-cli tool.
    • Input: The partial code files generated by code2partial.py.
    • Output: Image files (e.g., .png or .jpg) representing each code segment.
    // Example configuration for carbon-now-cli
    {
      "latest-preset": {
        "theme": "monokai",
        "backgroundColor": "#272822",
        "windowTheme": "none",
        "windowControls": false,
        "fontFamily": "Fira Code",
        "fontSize": "30px",
        "lineNumbers": true,
        "firstLineNumber": "1",
        "dropShadow": false,
        "dropShadowOffsetY": "20px",
        "dropShadowBlurRadius": "68px",
        "selectedLines": "*",
        "widthAdjustment": false,
        "lineHeight": "133%",
        "paddingVertical": "60px",
        "paddingHorizontal": "40px",
        "squaredImage": false,
        "watermark": false,
        "exportSize": "4x",
        "type": "png"
      }
    }
    
  3. image2frame.py:

    • Purpose: Ensures that all the generated images are resized or adjusted to the same dimensions, preparing them for seamless video creation.
    • Input: The image files generated by partial2image.py.
    • Output: Resized image files, ready for video conversion.
  4. cover.py:

    • Purpose: Generates a cover image for the video. This image can serve as a title slide or any other introductory visual that you wish to include at the beginning of the video.
    • Input:
      • image_path: The path to a background image that will be used for the cover.
      • title: The text that will be displayed on the cover image, typically the title of the video or any relevant introductory text.
    • Output: A cover image file.
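The splitting step described in code2partial.py can be sketched as follows. This is a minimal illustration of the idea, not the script's actual interface; the function name, output naming scheme, and `lines_per_step` parameter are assumptions for the example:

```python
from pathlib import Path

def split_code(source: Path, out_dir: Path, lines_per_step: int = 1) -> list[Path]:
    """Write cumulative partial copies of `source` (1.py, 2.py, ...),
    simulating the code being typed out a few lines at a time."""
    lines = source.read_text().splitlines(keepends=True)
    out_dir.mkdir(parents=True, exist_ok=True)
    partials = []
    step = 0
    # Each partial contains everything up to the current cutoff line.
    for end in range(lines_per_step, len(lines) + lines_per_step, lines_per_step):
        step += 1
        part = out_dir / f"{step}{source.suffix}"
        part.write_text("".join(lines[: min(end, len(lines))]))
        partials.append(part)
    return partials
```

Each output file is then handed to carbon-now-cli, so rendering the files in order reproduces the typing effect frame by frame.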

Final Step: Video Conversion

After the images are generated and formatted, use ffmpeg to compile these images into a video. This tool is responsible for converting the sequence of images into a final .mp4 video file.

Example FFmpeg Command:

ffmpeg -loop 1 -t 1 -i cover.png -framerate 10 -i $(GENERATED_FRAMES_FOLDER)/%d.png -filter_complex "[0:v]scale=trunc(iw/2)*2:trunc(ih/2)*2[v0];[1:v]scale=trunc(iw/2)*2:trunc(ih/2)*2[v1];[v0][v1]concat=n=2:v=1:a=0,format=yuv420p[v]" -map "[v]" -c:v libx264 -r 10 output.mp4

Explanation of Each Parameter (Automatically Generated by ChatGPT):

  1. -loop 1:

    • Loops the input image indefinitely. The value 1 means to loop the image, while 0 would mean no looping.
  2. -t 1:

    • Sets the duration of the first input (cover image) to 1 second. This means the cover image will be shown for 1 second in the video.
  3. -i cover.png:

    • Specifies the first input file, which is the cover.png image. This image will be used as the cover slide at the beginning of the video.
  4. -framerate 10:

    • Sets the frame rate for the following input (the generated frames) to 10 frames per second (fps). This means the video will display 10 frames per second.
  5. -i $(GENERATED_FRAMES_FOLDER)/%d.png:

    • Specifies the second input: a numbered sequence of images in the folder named by GENERATED_FRAMES_FOLDER (note the $(...) syntax is Makefile-style variable expansion; substitute the actual path when running the command in a plain shell). The %d is a placeholder for the frame numbers (e.g., 1.png, 2.png, etc.).
  6. -filter_complex "[0:v]scale=trunc(iw/2)*2:trunc(ih/2)*2[v0];[1:v]scale=trunc(iw/2)*2:trunc(ih/2)*2[v1];[v0][v1]concat=n=2:v=1:a=0,format=yuv420p[v]":

    • -filter_complex: Specifies a complex filter chain. Here's a breakdown of the filter steps:
      • [0:v]scale=trunc(iw/2)*2:trunc(ih/2)*2[v0]: Scales the first input (cover image) to an even width and height by truncating the input width (iw) and height (ih) to the nearest even number. This is necessary because some video codecs require even dimensions. The result is stored in the alias [v0].
      • [1:v]scale=trunc(iw/2)*2:trunc(ih/2)*2[v1]: Similarly, scales the second input (the sequence of images) to even dimensions. The result is stored in the alias [v1].
      • [v0][v1]concat=n=2:v=1:a=0,format=yuv420p[v]: Concatenates the two video streams [v0] (cover image) and [v1] (image sequence) into a single video. The n=2 specifies the number of video segments to concatenate, v=1 specifies one video stream, and a=0 specifies no audio streams. The format=yuv420p ensures the output video is in the YUV 4:2:0 pixel format, which is widely compatible with most video players. The final output is stored in the alias [v].
  7. -map "[v]":

    • Maps the filtered video output [v] to the final output file. This tells ffmpeg to use the result of the previous filter chain as the video stream in the output file.
  8. -c:v libx264:

    • Specifies the video codec to use for encoding the video. libx264 is a widely used codec for H.264 video compression.
  9. -r 10:

    • Sets the output video frame rate to 10 frames per second. This ensures that the final video will run at 10 fps.
  10. output.mp4:

    • The name of the final output video file.
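For scripting, the same command can be assembled programmatically. A sketch in Python that builds the exact argument list described above (the `build_ffmpeg_cmd` helper is illustrative, not part of the repository):

```python
def build_ffmpeg_cmd(cover: str, frames_dir: str,
                     output: str = "output.mp4", fps: int = 10) -> list[str]:
    """Build the argument list for the ffmpeg invocation described above."""
    scale = "scale=trunc(iw/2)*2:trunc(ih/2)*2"  # force even dimensions
    graph = (f"[0:v]{scale}[v0];"
             f"[1:v]{scale}[v1];"
             "[v0][v1]concat=n=2:v=1:a=0,format=yuv420p[v]")
    return [
        "ffmpeg",
        "-loop", "1", "-t", "1", "-i", cover,        # 1-second cover slide
        "-framerate", str(fps), "-i", f"{frames_dir}/%d.png",
        "-filter_complex", graph,
        "-map", "[v]", "-c:v", "libx264", "-r", str(fps),
        output,
    ]
```

The list can then be passed to `subprocess.run(build_ffmpeg_cmd("cover.png", "frames"), check=True)`, which avoids shell-quoting issues with the filter graph.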

Dependencies

  • carbon-now-cli: This tool is used to convert code snippets into beautiful images. Install it via npm:

    npm install -g carbon-now-cli
  • ffmpeg: A versatile tool for handling multimedia data, used here to assemble the video. Installation instructions are available on the official FFmpeg website.

Usage

  1. Split the code: Run code2partial.py to divide your code into smaller parts.
  2. Convert to images: Use partial2image.py to generate images from these code segments.
  3. Adjust image size: Run image2frame.py to ensure all images are the same size.
  4. Create a cover image (optional): Use cover.py if you want a custom cover for your video.
  5. Generate the video: Compile everything into a video using ffmpeg.
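The steps above can be driven end to end with a small wrapper. This is a hypothetical driver that assumes each script is invoked without extra flags; check each script for its actual command-line arguments before relying on it:

```python
import subprocess
import sys

# Hypothetical stage order; the real scripts may take extra
# arguments (input file, output folder, etc.).
STEPS = [
    [sys.executable, "code2partial.py", "main.py"],
    [sys.executable, "partial2image.py"],
    [sys.executable, "image2frame.py"],
    [sys.executable, "cover.py"],
]

def run_pipeline(steps=STEPS, dry_run=False):
    """Run each stage in order, stopping on the first failure."""
    executed = []
    for cmd in steps:
        executed.append(cmd)
        if not dry_run:
            subprocess.run(cmd, check=True)  # raises on a non-zero exit code
    return executed
```

After the four stages complete, the ffmpeg command from the previous section produces the final output.mp4.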

Results
