My Logitech C920 webcam provides a video stream encoded in H.264, and I'm using this "capture" tool to access the data. This lets me view live video:
/usr/local/bin/capture -d /dev/video0 -c 100000 -o | \
gst-launch-1.0 -e filesrc location=/dev/fd/0 \
! h264parse \
! decodebin \
! xvimagesink sync=false
...or record the stream to an MP4 file:
/usr/local/bin/capture -d /dev/video0 -c 100000 -o | \
gst-launch-1.0 -e filesrc location=/dev/fd/0 \
! h264parse \
! mp4mux \
! filesink location=/tmp/video.mp4
...but I can't for the life of me figure out how to do both at the same time. Having a live feed on screen while recording can be useful sometimes, so I'd like to make this work.
I've spent hours and hours looking for a way to display and record simultaneously, but no luck. No amount of messing around with tee and queue elements is helping.
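For reference, the general shape I've been attempting, based on the tee documentation, is to parse once, split with tee, and give each branch its own queue. Roughly this sketch (the avdec_h264 decoder element is my assumption; decodebin should work too):

```shell
# Sketch of the tee/queue pattern I've been trying (not working for me yet):
# parse the H.264 bytestream once before the tee, then give each branch
# its own queue so neither branch can stall the other.
PIPELINE='filesrc location=/dev/fd/0 ! h264parse ! tee name=t
  t. ! queue ! avdec_h264 ! xvimagesink sync=false
  t. ! queue ! mp4mux ! filesink location=/tmp/out.mp4'
echo "$PIPELINE"
# Intended invocation (requires GStreamer 1.0 and the capture tool):
# /usr/local/bin/capture -d /dev/video0 -c 100000 -o | \
#   gst-launch-1.0 -e $PIPELINE
```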
Guess it would be a bonus to get ALSA audio (hw:2,0) into this as well, but I can work around that in an ugly, hacky way. For now, I get the following even though hw:2,0 is a valid input in Audacity or arecord, for example:
Recording open error on device 'hw:2,0': No such file or directory
Recording open error on device 'plughw:2,0': No such file or directory
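For diagnosis, here's roughly how I've been double-checking the device string (the 2,0 numbers come from arecord -l on my machine and will differ per setup; the actual arecord probe is commented out since it needs ALSA hardware):

```shell
# ALSA device string passed to alsasrc; the card/device numbers come from
# the output of `arecord -l` and are specific to my machine.
ALSA_DEV="plughw:2,0"
echo "capture device: $ALSA_DEV"
# Probe the device directly; if this records fine but gst-launch errors out,
# the problem is on the GStreamer side rather than in ALSA:
# arecord -D "$ALSA_DEV" -f cd -d 1 /tmp/alsa-test.wav
```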
So to recap: would love to put those two video bits together, bonus if audio would work too. I feel like such a newbie.
Thanks in advance for any help you can provide.
edit: non-working code:
/usr/local/bin/capture -d /dev/video1 -c 100000 -o | \
gst-launch-1.0 -e filesrc location=/dev/fd/0 ! tee name=myvid ! h264parse ! decodebin \
! xvimagesink sync=false myvid. ! queue ! mux. alsasrc device=plughw:2,0 ! \
audio/x-raw,rate=44100,channels=1,depth=24 ! audioconvert ! queue ! mux. mp4mux \
name=mux ! filesink location=/tmp/out.mp4
...leads to this:
WARNING: erroneous pipeline: could not link queue1 to mux
Edit: Tried umlaeute's suggestion, got a nearly empty video file and one frozen frame of live video. With or without audio made no difference after fixing two small errors in the audio-enabled code (a double-quotation-mark typo, and the audio not being encoded to anything MP4-compatible; adding avenc_aac after audioconvert did the trick). Error output:
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstAudioSrcClock
Redistribute latency...
ERROR: from element /GstPipeline:pipeline0/GstMP4Mux:mux: Could not multiplex stream.
Additional debug info:
gstqtmux.c(2530): gst_qt_mux_add_buffer (): /GstPipeline:pipeline0/GstMP4Mux:mux:
DTS method failed to re-order timestamps.
EOS on shutdown enabled -- waiting for EOS after Error
Waiting for EOS...
ERROR: from element /GstPipeline:pipeline0/GstFileSrc:filesrc0: Internal data flow error.
Additional debug info:
gstbasesrc.c(2809): gst_base_src_loop (): /GstPipeline:pipeline0/GstFileSrc:filesrc0:
streaming task paused, reason error (-5)
EDIT: Okay, umlaeute's corrected code works perfectly, but only if I use v4l2src instead of the capture tool. And for now, that means grabbing the MJPEG stream rather than the H.264 one. No skin off my nose, though I guess I'd prefer a more modern codec. So anyway, here's what actually works, outputting an MJPEG video file and a real-time "viewfinder". Not perfectly elegant but very workable. Thanks for all your help!
gst-launch-1.0 -e v4l2src device=/dev/video1 ! videorate ! 'image/jpeg, width=1280, height=720, framerate=24/1' ! tee name=myvid \
! queue ! decodebin ! xvimagesink sync=false \
myvid. ! queue ! mux.video_0 \
alsasrc device="plughw:2,0" ! "audio/x-raw,rate=44100,channels=1,depth=24" ! audioconvert ! lamemp3enc ! queue ! mux.audio_0 \
avimux name=mux ! filesink location=/tmp/out.avi
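For anyone who wants the H.264 stream rather than MJPEG, I'd guess the v4l2src equivalent looks something like this sketch (untested on my end; the video/x-h264 caps and the choice of matroskamux are assumptions on my part):

```shell
# Hypothetical H.264 variant of the working pipeline above: request H.264
# caps from v4l2src, parse, then tee to a decoded viewfinder branch and a
# muxed file branch. Matroska is used since mp4mux gave DTS errors earlier.
PIPELINE='v4l2src device=/dev/video1 ! video/x-h264,width=1280,height=720,framerate=24/1 ! h264parse ! tee name=myvid
  myvid. ! queue ! avdec_h264 ! xvimagesink sync=false
  myvid. ! queue ! matroskamux ! filesink location=/tmp/out.mkv'
echo "$PIPELINE"
# gst-launch-1.0 -e $PIPELINE   # requires GStreamer 1.0 and an H.264-capable camera
```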