
Edited version at the bottom.

I have movie footage from a 360 camera that I would like to map onto a sphere. The scene is saved as an equirectangular projection in an MP4 file.

I found this example explaining how to map an image onto a sphere. I was able to wrap the code from that example with the mayavi.animate decorator, read consecutive frames from the MP4 file, and map them as textures onto the sphere as intended (using a code snippet from the mayavi example section that converts numpy arrays to tvtk.ImageData objects).

However, I encountered one problem that makes me suspect I may be doing this completely wrong: weird circular artifacts. Movie frame on sphere

When I use the linked example with a JPG snapshot from the movie, the projection looks as it should (the upper part just looks like it's wrong, but that is part of the video footage): Image on sphere

Here is the code:

'''
Altered version of Andras Deak's answer
at https://stackoverflow.com/questions/53074908/map-an-image-onto-a-sphere-and-plot-3d-trajectories
'''

import imageio
from mayavi import mlab
from tvtk.api import tvtk # python wrappers for the C++ vtk ecosystem
from tvtk.tools import visual
from tvtk.common import configure_input_data, is_old_pipeline

def image_from_array(ary):
    """ Create a VTK image object that references the data in ary.
        The array is either 2D or 3D with.  The last dimension
        is always the number of channels.  It is only tested
        with 3 (RGB) or 4 (RGBA) channel images.
        Note: This works no matter what the ary type is (accept
        probably complex...).  uint8 gives results that make since
        to me.  Int32 and Float types give colors that I am not
        so sure about.  Need to look into this...
    """

    sz = ary.shape
    dims = len(sz)
    # create the vtk image data
    img = tvtk.ImageData()

    if dims == 2:
        # 1D array of pixels.
        img.whole_extent = (0, sz[0]-1, 0, 0, 0, 0)
        img.dimensions = sz[0], 1, 1
        img.point_data.scalars = ary

    elif dims == 3:
        # 2D array of pixels.
        if is_old_pipeline():
            img.whole_extent = (0, sz[0]-1, 0, sz[1]-1, 0, 0)
        else:
            img.extent = (0, sz[0]-1, 0, sz[1]-1, 0, 0)
        img.dimensions = sz[0], sz[1], 1

        # create a 2d view of the array
        ary_2d = ary[:]
        ary_2d.shape = sz[0]*sz[1],sz[2]
        img.point_data.scalars = ary_2d

    else:
        raise ValueError("ary must be 2 or 3 dimensional.")

    return img



# create a figure window (and scene)
fig = mlab.figure(size=(600, 600))
visual.set_viewer(fig)

# load video
vid = imageio.get_reader('movie.mp4', 'ffmpeg')

# use a TexturedSphereSource, a.k.a. getting our hands dirty
R = 1
Nrad = 180

# create the sphere source with a given radius and angular resolution
sphere = tvtk.TexturedSphereSource(radius=R, theta_resolution=Nrad, phi_resolution=Nrad)

# assemble rest of the pipeline, assign texture
sphere_mapper = tvtk.PolyDataMapper(input_connection=sphere.output_port)
sphere_actor = tvtk.Actor(mapper=sphere_mapper)

@mlab.show
@mlab.animate(delay=50)
def auto_sphere():

    for i in range(1,600):

        image = vid.get_data(i)
        img = image_from_array(image)
        texture = tvtk.Texture(interpolate=1)
        configure_input_data(texture, img)

        sphere_actor.texture = texture
        fig.scene.add_actor(sphere_actor)

        yield


auto_sphere()

I am completely new to this topic. How can this be done correctly?
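For what it's worth, here is the reshaping step from image_from_array in isolation, with one change I experimented with: forcing a contiguous copy via np.ascontiguousarray (my addition, not part of the original example), since point_data.scalars references the flat pixel buffer directly and the frame coming out of imageio may not be laid out the way VTK expects:

```python
import numpy as np

# stand-in for a video frame: height x width x RGB channels, uint8
frame = np.arange(6 * 8 * 3, dtype=np.uint8).reshape(6, 8, 3)

# point_data.scalars references the raw buffer, so the array should be
# C-contiguous before flattening to (H*W, channels)
flat = np.ascontiguousarray(frame).reshape(-1, 3)

print(flat.shape)                   # (48, 3)
print(flat.flags['C_CONTIGUOUS'])   # True
```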


EDIT:

So I think I managed to identify the actual problem, but I don't yet know how to solve it. The issue seems to be the way the MP4 file is read in. In this modified version, which uses JPEG files of the individual frames from the MP4, the VTK pipeline and the resulting rendered movie look right:

from mayavi import mlab
from tvtk.api import tvtk # python wrappers for the C++ vtk ecosystem

# create a figure
fig = mlab.figure(size=(600, 600))

# sphere source
R = 1
Nrad = 180
sphere = tvtk.TexturedSphereSource(radius=R, theta_resolution=Nrad, phi_resolution=Nrad)
# sphere mapper
sphere_mapper = tvtk.PolyDataMapper(input_connection=sphere.output_port)
# actor
sphere_actor = tvtk.Actor(mapper=sphere_mapper)
# image reader
image = tvtk.JPEGReader()
image.file_name = 'testdata/frame001.jpg'
# texture
texture = tvtk.Texture(input_connection=image.output_port, interpolate=1)
sphere_actor.texture = texture

# add actor to scene
fig.scene.add_actor(sphere_actor)

@mlab.show
@mlab.animate(delay=50)
def auto_sphere():

    for i in range(2,101):
        num = str(i)

        filepath = 'testdata/frame%s.jpg' % num.zfill(3)

        image.file_name = filepath

        # render current texture
        fig.scene.render()

        yield

auto_sphere()

I guess my new question now is: can I implement a custom vtkImageReader2 subclass (or something similar) in Python that lets me read consecutive frames of the MP4? And if so, how is this done properly? Unfortunately I was unable to find any kind of tutorial on this.
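In the meantime, a workaround would be to dump the frames of the MP4 to numbered JPEGs up front and keep the JPEGReader pipeline above unchanged. A minimal sketch, assuming imageio can read the file (the naming scheme matches the testdata/frameNNN.jpg paths above):

```python
def frame_path(i):
    """File name for frame i, matching the JPEGReader loop above."""
    return 'testdata/frame%s.jpg' % str(i).zfill(3)

def dump_frames(mp4_path):
    """Write every frame of the mp4 as a numbered JPEG."""
    import imageio  # third-party; only needed for the actual dump
    vid = imageio.get_reader(mp4_path, 'ffmpeg')
    for i, frame in enumerate(vid, start=1):
        imageio.imwrite(frame_path(i), frame)

# dump_frames('movie.mp4')   # run once before starting the animation
print(frame_path(1))         # testdata/frame001.jpg
```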

  • you can see [this discussion on extracting frames of a mp4 video](https://stackoverflow.com/questions/33311153/python-extracting-and-saving-video-frames) – Felipe Lema May 22 '19 at 14:07
