Home
The purpose of wf-recorder is to create a video file by recording your desktop. To use it, install and run it with
wf-recorder
It will output some text and begin recording. Use Ctrl+C to stop recording. The video file will be stored as recording.mp4 in the current working directory.
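By default the output goes to recording.mp4; to record to a different file, pass -f (as in the other examples on this page). The path below is only an illustration:
wf-recorder -f ~/Videos/screen-capture.mp4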
To tell wf-recorder to record audio, use -a
wf-recorder -a
Check the pavucontrol Recording tab for an entry while it is recording and choose the input there. Optionally pass --audio=<device>, where <device> is one of the names output by pactl list sources | grep Name.
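For example, to capture the monitor of an analog output directly (the device name below is only illustrative; substitute one from your own pactl output):
wf-recorder --audio=alsa_output.pci-0000_00_1f.3.analog-stereo.monitor -f out.mp4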
To offload encoding to the GPU via VAAPI, pick a hardware codec and point -d at the DRM render node:
wf-recorder -c h264_vaapi -d /dev/dri/renderD128 -f out.mp4
Optionally change the encoder codec to one found with ffmpeg -encoders | grep vaapi, provided vainfo shows the driver supports it.
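For instance, if vainfo reports HEVC encode support, ffmpeg's hevc_vaapi encoder can be substituted (a sketch; whether it works depends on your driver):
wf-recorder -c hevc_vaapi -d /dev/dri/renderD128 -f out.mp4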
Note that by default, either the mic input or the monitor of the speakers can be used as input, but not both. You can work around this by running pactl load-module module-loopback to load the loopback module, which routes the mic to the default speaker output; however, it might not work very well depending on your exact setup.
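A minimal sketch of that workaround (assuming the default source and sink are the ones you want to capture):
pactl load-module module-loopback
wf-recorder -a -f out.mp4
Afterwards, unload the loopback again with pactl unload-module and the module index printed by the load-module command.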
A better, but more complicated way: run pactl list sources | grep Name and note the names of all the sources that you want to capture. Now create a virtual sink, then use the loopback module to make each of the sources output to this sink, and finally capture the sink's monitor with wf-recorder (replace the alsa_* names below with the names of the sources you want to capture):
pactl load-module module-null-sink sink_name=Combined
pactl load-module module-loopback sink=Combined source=alsa_output.pci-0000_00_1f.3.analog-stereo.monitor
pactl load-module module-loopback sink=Combined source=alsa_input.pci-0000_00_1f.3.analog-stereo
Now use Combined.monitor as the source for wf-recorder.
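For example, combining this with the --audio option described above:
wf-recorder --audio=Combined.monitor -f out.mp4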
You can use wf-recorder and v4l2loopback to display your monitor as a camera in Zoom and other applications.
Basic setup:
- Install the v4l2loopback kernel module as described in its repository, then load it with
sudo modprobe v4l2loopback exclusive_caps=1 card_label=WfRecorder
- Launch wf-recorder with the following parameters:
wf-recorder --muxer=v4l2 --codec=rawvideo --file=/dev/video2 -x yuv420p
You may need to adjust the file name, e.g. set it to /dev/video0, depending on how many cameras already exist on your computer (see the device-listing sketch after this list).
NOTE: The Intel driver does the rgb0 to yuv420 conversion implicitly, while Gallium drivers such as radeon and nouveau do not. Depending on your setup, it might be useful to do the conversion on the GPU by using VAAPI with the following command:
wf-recorder -f /dev/video2 --muxer=v4l2 -d /dev/dri/renderD128 --codec=h264_vaapi --pixel-format=yuv420p
- Test the camera with ffplay, cheese, or any webcam test in your browser:
ffplay /dev/video2
(Instructions based on https://github.com/ammen99/wf-recorder/pull/43)
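To find out which /dev/videoN node the loopback device was assigned, v4l2-ctl from the v4l-utils package can list devices together with their card labels (WfRecorder in the modprobe command above), assuming v4l-utils is installed:
v4l2-ctl --list-devices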
Use the following command to show /dev/video0 in a native window with gst-launch-1.0 (gstreamer). Note that glimagesink must be available.
gst-launch-1.0 -v v4l2src device=/dev/video0 ! glimagesink
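If glimagesink is not available, a pipeline using GStreamer's automatic sink selection with an explicit conversion step should also work (a sketch, untested on your setup):
gst-launch-1.0 -v v4l2src device=/dev/video0 ! videoconvert ! autovideosink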
- Install nginx rtmp module and wf-recorder
- Configure nginx rtmp and start nginx.
- Run wf-recorder with the command:
wf-recorder -f rtmp://127.0.0.1/feed/wayfire
- Test:
ffplay rtmp://127.0.0.1/feed/wayfire
TIP: Make sure the application directive name matches the URL. For example, if you have 'application live { ... }', then you must also use rtmp://127.0.0.1/live/your_stream_path. Optionally add -a for audio and -c h264_vaapi to shift resource usage from the CPU to the GPU.
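Putting those options together (a sketch; the VAAPI render node follows the earlier example and may differ on your system):
wf-recorder -a -c h264_vaapi -d /dev/dri/renderD128 -f rtmp://127.0.0.1/feed/wayfire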
Example nginx rtmp config:
rtmp {
    server {
        listen 1935;
        chunk_size 4096;

        application feed {
            live on;
            record off;
        }
    }
}
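The rtmp block belongs at the top level of nginx.conf, alongside (not inside) the http block. After editing, test and reload nginx; the commands below are typical but distribution-dependent:
sudo nginx -t
sudo systemctl reload nginx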
Like RTMP, RTSP requires a server to handle multiple clients; libav does not provide this on its own. However, gstreamer provides a simple program that acts as an RTSP server and accepts gstreamer syntax for input. Here is a simple command to start an RTSP server and stream the output to it:
wf-recorder -a -R 48000 -o DP-2 -y -c h264_vaapi -m rawvideo -D -r 15 -f /dev/stdout | ./test-launch '( fdsrc fd=0 ! decodebin ! videoconvert ! video/x-raw,format=I420,width=1920,height=1080 ! x264enc tune="zerolatency" byte-stream=true ! rtph264pay name=pay0 pt=96 autoaudiosrc ! audioconvert ! avenc_aac ! rtpmp4gpay name=pay1 pt=97 )'
The notable things here are the -r 15 option, which sets the video framerate, -R 48000, which is the audio sample rate, and width=1920,height=1080, which is the resolution. The ./test-launch program is included in the gst-rtsp-server repository. Then play the stream (the default port can be found in the source):
mpv rtsp://127.0.0.1:8554/test
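The test-launch binary has to be built from the examples directory of gst-rtsp-server; a minimal sketch, assuming the gstreamer-rtsp-server-1.0 development files are installed for pkg-config and that the example sources are available (in recent releases they live in the GStreamer monorepo under subprojects/gst-rtsp-server/examples):
gcc test-launch.c -o test-launch $(pkg-config --cflags --libs gstreamer-rtsp-server-1.0)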