
Is it possible to both capture (record) an RTSP stream and capture scene change events using a single ffmpeg command? I can almost do what I want with:

ffmpeg -i 'rtsp://mystream' \
-map 0:v -map 0:a -c:v copy -c:a copy -f segment \
-segment_time 300 -segment_format matroska -strftime 1 "%Y%m%d%H%M%S_video.mkv" \
-map 0:v -an -filter:v "select='gt(scene,0.1)'" -frames:v 1 "%Y%m%d%H%M%S_scenechange.png"

This gives me nice 300-second stream segments saved to disk, and a scene-change PNG when a scene change is detected. However, the PNG only appears when I terminate the process, and when I do, I only get the last scene event. Ideally I'd like to get a new PNG (or, even better, a short video clip) any time a scene change is detected, without interrupting the recording of the .mkv segments. I'm sure it can be done with pipes and multiple ffmpeg commands, but for simplicity's sake (and mostly my own curiosity at this point), I'd like to see what can be done with a single process.

AnodeCathode

2 Answers


With -frames:v 1, you'll only get one image output. Without -strftime for the image output, the name string is used literally. You'll also need to stop ffmpeg from generating a constant-frame-rate stream for the image output by adding -vsync 0 (you didn't notice this because you had limited the total output to 1 frame).

Use

ffmpeg -i 'rtsp://mystream' \
-map 0:v -map 0:a -c:v copy -c:a copy -f segment \
-segment_time 300 -segment_format matroska -strftime 1 "%Y%m%d%H%M%S_video.mkv" \
-map 0:v -an -filter:v "select='gt(scene,0.1)'" -vsync 0 -strftime 1 "%Y%m%d%H%M%S_scenechange.png"
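Note: on newer ffmpeg releases (5.1 and later), -vsync is deprecated in favor of the per-stream -fps_mode option; if I have that right, the equivalent command would be:

ffmpeg -i 'rtsp://mystream' \
-map 0:v -map 0:a -c:v copy -c:a copy -f segment \
-segment_time 300 -segment_format matroska -strftime 1 "%Y%m%d%H%M%S_video.mkv" \
-map 0:v -an -filter:v "select='gt(scene,0.1)'" -fps_mode passthrough -strftime 1 "%Y%m%d%H%M%S_scenechange.png"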
Gyan
  • Thanks, that did the trick! Now the followup is, is it possible to get multiple frames (e.g. maybe 5-6) or a short video of the scene change event each time, or is this approach limited to a single frame for each event? – AnodeCathode Jun 13 '19 at 15:02
  • @Gyan any idea how to do this so that it outputs an mp4 file of every scene it detects instead of just a png with a timestamp? – Brodan Dec 08 '21 at 20:18

Firstly, I would like to thank the user "Gyan" for his code example. I had the same issue as the OP, and his example helped me.

I'm able to record a live webcam stream by encoding from raw video, and also take snapshots of the stream whenever motion (a change in frame pixels) is detected, using the following command:

ffmpeg -vaapi_device /dev/dri/renderD128 -f v4l2 -framerate 30 -video_size 1920x1080 -i /dev/video0 \
-vf 'format=nv12,hwupload' -c:v hevc_vaapi -f segment -segment_time 600 -reset_timestamps 1 \
-strftime 1 Living_Room/"(%b-%-d-%Y-%a)_%-I::%M::%S::%p.mp4" \
-filter:v "select='gt(scene,0.03)'" -vsync 0 -strftime 1 "(%b-%-d-%Y-%a)_%-I::%M::%S::%p_Snap.png"

I'm using this as a DVR/NVR method and it works well. The snapshots serve as an event list: I can view the pictures to see what ffmpeg captured as motion, then cross-reference their timestamps against the video list to watch the full recording, which is segmented into 10-minute files.
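For example (the file names and offsets below are made up for illustration), once a snapshot flags an event you can cut a short clip around it in a second pass from the segment that contains that timestamp, using stream copy so nothing is re-encoded. Here a snapshot taken at 3:02:41 PM falls about 2 min 31 s into the segment that started at 3:00:00 PM:

ffmpeg -ss 00:02:31 -i 'Living_Room/(Jun-13-2019-Thu)_3::00::00::PM.mp4' -t 20 -c copy event_clip.mp4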

You can change the sensitivity of motion detection by raising or lowering the threshold in:

(scene,0.03)

A lower value triggers on smaller frame changes, i.e. it is more sensitive. Example:

(scene,0.0003)
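If you're not sure what threshold suits your camera, one way to calibrate it (a sketch; the segment name below is just an example) is to replay a recorded segment and print the per-frame scene scores that the select filter computes, then pick a cut-off just above the scores you see during normal, motion-free footage:

ffmpeg -i 'Living_Room/(Jun-13-2019-Thu)_3::00::00::PM.mp4' \
-vf "select='gt(scene,0)',metadata=print" -an -f null - 2>&1 | grep -E 'pts_time|scene_score'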

Note: I am using my GPU (VAAPI) for encoding. Getting transcoding and other features properly tuned requires many more options and gets confusing quickly, so I kept only the parts needed for motion detection and for cross-referencing the timestamps on the videos and snapshots.
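If you don't have a VAAPI-capable GPU, a roughly equivalent software-encoding sketch (assuming your ffmpeg build includes libx264; I've also simplified the file-name pattern) would be something like:

ffmpeg -f v4l2 -framerate 30 -video_size 1920x1080 -i /dev/video0 \
-c:v libx264 -preset veryfast -f segment -segment_time 600 -reset_timestamps 1 \
-strftime 1 Living_Room/"%Y%m%d_%H%M%S.mp4" \
-filter:v "select='gt(scene,0.03)'" -vsync 0 -strftime 1 "%Y%m%d_%H%M%S_Snap.png"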

Max Dax