
There is a similar question (not as detailed, and with no exact solution). I want to create a single panorama image from video frames, and for that I first need to extract a minimal set of non-sequential video frames. A demo video file is uploaded here.

What I Need

A mechanism that can produce not only non-sequential video frames, but frames selected in such a way that they can be used to create a panorama image. A sample is given below. As we can see, to create a panorama image all of the input samples must share a minimum overlapping region with their neighbors; otherwise it cannot be done.

[image: sample panorama built from overlapping inputs]

So, if I have the following order of video frames

A, A, A, B, B, B, B, C, C, A, A, C, C, C, B, B, B ...

To create a panorama image, I need something like the following: reduced sequential (adjacent) frames that still keep a minimum overlap with each other.

     [overlap]  [overlap]  [overlap] [overlap]  [overlap]
 A,    A,B,       B,C,       C,A,       A,C,      C,B,  ...
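In list terms, the reduction above is just collapsing runs of adjacent duplicate frames, keeping one representative per run; a toy sketch on the letter sequence above (the letters are stand-ins for frames; real frames are never exactly equal, which is what makes the actual problem hard):

```python
from itertools import groupby

# Toy frame labels from the example order above
frames = ["A", "A", "A", "B", "B", "B", "B", "C", "C",
          "A", "A", "C", "C", "C", "B", "B", "B"]

# Collapse runs of adjacent duplicates, keeping one representative per run
reduced = [label for label, _ in groupby(frames)]
print(reduced)  # ['A', 'B', 'C', 'A', 'C', 'B']
```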

What I've Tried and Where I'm Stuck

A demo video clip is given above. To get non-sequential video frames, I primarily rely on the ffmpeg tool.

Trial 1 Ref.

ffmpeg -i check.mp4 -vf mpdecimate,setpts=N/FRAME_RATE/TB -map 0:v out.mp4

After that, I sliced out.mp4 into frames using OpenCV:

import cv2, os 
from pathlib import Path

vframe_dir = Path("vid_frames/")
vframe_dir.mkdir(parents=True, exist_ok=True)

vidcap = cv2.VideoCapture('out.mp4')
success,image = vidcap.read()
count = 0

while success:
    cv2.imwrite(f"{vframe_dir}/frame{count}.jpg", image)
    success,image = vidcap.read()
    count += 1

Next, I rotated these saved images (as my video is in a vertical orientation):

from tqdm import tqdm

vframe_dir = Path("vid_frames/")  # frames saved in the previous step

vframe_dir_rot = Path("vframe_dir_rot/")
vframe_dir_rot.mkdir(parents=True, exist_ok=True)

for i, each_img in tqdm(enumerate(os.listdir(vframe_dir))):
    image = cv2.imread(f"{vframe_dir}/{each_img}")[:, :, ::-1] # Read (with BGRtoRGB)
    
    image = cv2.rotate(image, cv2.ROTATE_180)
    image = cv2.rotate(image, cv2.ROTATE_90_CLOCKWISE)

    cv2.imwrite(f"{vframe_dir_rot}/{each_img}", image[:, :, ::-1]) # Save (with RGBtoBGR)

The output of this method (with ffmpeg) is OK, but it is inappropriate for creating the panorama image: it does not yield sequential frames with enough overlap in the results, so the panorama can't be generated.

Trial 2 - Ref

ffmpeg -i check.mp4 -vf decimate=cycle=2,setpts=N/FRAME_RATE/TB -map 0:v out.mp4

This didn't work at all.

Trial 3

ffmpeg -i check.mp4 -ss 0 -qscale 0 -f image2 -r 1 out/images%5d.png

No luck either. However, this last ffmpeg command came closest by far, though it still wasn't enough. Compared to the others, it gave me a small number of non-duplicate frames (good), but it still included frames I don't need, so I ended up manually picking some desired frames, after which the OpenCV stitching algorithm works. So, after picking some frames and rotating them (as mentioned before):

stitcher = cv2.Stitcher.create()
status, pano = stitcher.stitch(images) # images: manually picked video frames -_- 

Update

After some trials, I have mostly adopted a non-programming solution, but I would love to see an efficient programmatic approach.

On the given demo video, I used Adobe products (Premiere Pro and Photoshop) to do this task, following this video instruction. The issue was that I exported essentially all of the video frames from Premiere first (without dropping any, which adds computational cost later) and then used Photoshop to stitch them (per the YouTube instruction). It was too heavy for these editor tools and didn't feel like the right way, but the output was better than anything so far, even though I ended up using only some (400+) of the 1200+ frames.

[image: panorama produced via Premiere Pro + Photoshop]


There are some bigger challenges, though. The original video clips have some serious conditions that, unlike the given demo clip, include:

  • It's not a straight pan, i.e. there is camera shake
  • Changing lighting conditions, i.e. the same spot can look visually different
  • Camera flickering or banding

These conditions are not present in the given demo video, and they add heavy additional challenges to creating panorama images from such footage. Even with the non-programming way (using the Adobe tools) I couldn't get a good result.


However, for now, all I'm interested in is getting a panorama image from the given demo video, which is free of the above conditions. But I would love any comments or suggestions on those as well.

Innat
  • Do you mind if we use a different video from the demo? I don't like trying to stitch together video of a website since so much of a website happens to be very similar. It might even be messing with your decimate since a lot of the frames look alike even if they are from two completely different sections of video. Text in particular is gross because a letter looks the same regardless of where it is and it messes up feature matching attempts. – Ian Chu Feb 25 '21 at 15:41
  • Do you want to share your solution with a video different from the demo? If so, yes, please. No problem at all. :-) – Innat Feb 25 '21 at 16:07
  • I'll collect a video and work on a solution later today, but if you could post your video then we would be confident that our stitching solution works for your given conditions. – Ian Chu Feb 25 '21 at 17:40

2 Answers


My approach to decimating the video is essentially to do what a stitching program would do to try to stitch two frames together: I look for matching feature points, and I only save a frame once the number of matched points dips below what I consider an acceptable level.

To stitch, I just used OpenCV's built-in stitcher. If you want to avoid OpenCV's solution, I can redo the code without it (though I won't be able to replicate all of the nice cleanup steps that OpenCV does). The decimate program is honestly already most of the way towards doing a generic stitch.

I got the video from here: https://www.videezy.com/nature/48905-rain-forest-pan-shot

And this is the panorama (decimated to 7 frames at cutoff = 50)

[image: resulting panorama]

This is a pretty ideal case though, so this strategy might fail for a more difficult video like the one you described. If you can post that video then we can test out this solution on the actual use case and modify it if need be.

I like this program. And these panning shots are cool. Here's another one from this video: https://www.videezy.com/abstract/41671-pan-of-bryce-canyon-in-utah-4k

(decimated to 4 frames at cutoff = 50)

[image: resulting panorama]

https://www.videezy.com/nature/11664-panning-shot-of-red-peaks-and-green-valleys-in-4k

(decimated to 4 frames at cutoff = 150)

[image: resulting panorama]

Decimate

import cv2
import numpy as np
import os
import shutil

# rescale the images
def rescale(img):
    scale = 0.5
    h, w = img.shape[:2]
    h = int(h * scale)
    w = int(w * scale)
    return cv2.resize(img, (w, h))

# delete and create directory
folder = "frames/"
if os.path.isdir(folder):
    shutil.rmtree(folder)
os.mkdir(folder)

# open vidcap
cap = cv2.VideoCapture("PNG_7501.mp4")  # your video here
counter = 0

# make an orb feature detector and a brute force matcher
orb = cv2.ORB_create()
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=False)

# store the first frame
_, last = cap.read()
last = rescale(last)
cv2.imwrite(folder + str(counter).zfill(5) + ".png", last)

# get the first frame's keypoints and descriptors
kp1, des1 = orb.detectAndCompute(last, None)

# cutoff: when the number of good matches drops below this, save a new frame
cutoff = 50
# Note: this should be tailored to your video; it is high here since a lot of this video looks alike

# loop through the frames
prev = None
while True:
    # get frame
    ret, frame = cap.read()
    if not ret:
        break

    # resize
    frame = rescale(frame)

    # get keypoints and descriptors
    kp2, des2 = orb.detectAndCompute(frame, None)

    # match
    matches = bf.knnMatch(des1, des2, k=2)

    # lowe's ratio test
    good = []
    for m, n in matches:
        if m.distance < 0.5 * n.distance:
            good.append(m)

    # check against cutoff
    print(len(good))
    if len(good) < cutoff:
        # swap and save
        counter += 1
        last = frame
        kp1 = kp2
        des1 = des2
        cv2.imwrite(folder + str(counter).zfill(5) + ".png", last)
        print("New Frame: " + str(counter))

    # show
    cv2.imshow("Frame", frame)
    cv2.waitKey(1)
    prev = frame

# also save the last frame
counter += 1
cv2.imwrite(folder + str(counter).zfill(5) + ".png", prev)

# check number of saved frames
print("Counter: " + str(counter))

Stitcher

import cv2
import numpy as np
import os

# target folder
folder = "frames/"

# load images in order
filenames = sorted(os.listdir(folder))
images = []
for file in filenames:
    # get image
    img = cv2.imread(folder + file)

    # save
    images.append(img)

# use the built-in stitcher (cv2.createStitcher() in OpenCV 3.x)
stitcher = cv2.Stitcher.create()
(status, stitched) = stitcher.stitch(images)
if status == cv2.Stitcher_OK:
    cv2.imshow("Stitched", stitched)
    cv2.waitKey(0)
else:
    print("Stitching failed with status " + str(status))
Ian Chu
  • Upvoted. The approach you're using here is highly dependent on the consecutive video frames. On the demo video, it gave promising results too but sometimes mess around too heavily and easily breaks the stitching algorithm. – Innat Feb 26 '21 at 13:27
  • Is it possible for you to upload your video? I can't try to fix my answer without knowing what the specific problem is. – Ian Chu Feb 26 '21 at 15:48
  • Due to an NDA, I can't. However, I can create a video with such conditions (within next day). – Innat Feb 26 '21 at 15:57
  • [Here](https://drive.google.com/file/d/13UECjMmo3hZCQVtmz6Xz486cDWraR2mF/view?usp=sharing) is a link to a video that is close to the original one. – Innat Feb 28 '21 at 05:33
  • I strongly suspect that that video is gonna give weird results. It's not really a panorama since we're changing our position + going in and out which messes with the scaling. I'll see what happens if I run it through, but it'll probably take a lot more work to get something useful out of that. – Ian Chu Feb 28 '21 at 19:29
  • the stitcher failed with an error code: ERR_CAMERA_PARAMS_ADJUST_FAIL. This isn't a huge surprise given how the camera was moving. We could ditch the opencv stitcher and calculate out the homographies, but I'm not confident that we'd get great results from that video. – Ian Chu Feb 28 '21 at 20:34
  • Agree with you. It's not only difficult but also unrealistic to get panorama from such video. Anyway, thanks for you effort. -) – Innat Feb 28 '21 at 22:01

Try adjusting the output frame rate with the command below.

ffmpeg -i check.mp4 -vf fps=0.2 images%03d.bmp
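For reference, `fps=0.2` keeps one frame every 1/0.2 = 5 seconds; a quick back-of-the-envelope sketch with an assumed clip length:

```python
# fps=0.2 -> one sampled frame every 1/0.2 = 5 seconds
sample_fps = 0.2
duration_s = 60  # hypothetical clip length in seconds
frames_kept = int(duration_s * sample_fps)
print(frames_kept)  # 12
```

Whether that spacing leaves enough overlap between consecutive frames for stitching depends entirely on how fast the camera pans.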