moviepy

Video editing with Python User Guide — MoviePy 0.2 documentation

Clipping a webm file using moviepy and ffmpeg parameters

Using moviepy, I am trying to trim a section of a webm file like this:

my_file.write_videofile(name, codec = 'libvpx')

Of course, I have already defined the beginning and end of the clip, etc. The code returns the segment I want; however, I have noticed a decrease in the quality of the file. I am not resizing or constraining the file size anywhere, so I don't understand why the clip has an inferior quality compared to the original.

There are some parameters that I could play with, which I suspect are set as defaults in moviepy to speed up video manipulation, but the documentation of moviepy does not say anything about them:

ffmpeg_params :

Any additional ffmpeg parameters you would like to pass, as a list of terms, like [‘-option1’, ‘value1’, ‘-option2’, ‘value2’]

Is anybody out there familiar with the right parameters to keep the quality of the original file? As an alternative, is anybody familiar with any other library to trim webm files?
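For illustration only, here is a minimal sketch of how the bitrate argument or ffmpeg_params could be used to push libvpx toward higher quality; the specific values are assumptions and would need tuning against the source file:

my_file.write_videofile(name, codec='libvpx', bitrate='3000k')  # raise the target bitrate

# or pass raw ffmpeg options straight through (illustrative values, not tested here):
my_file.write_videofile(name, codec='libvpx',
                        ffmpeg_params=['-crf', '10', '-b:v', '1M'])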

Below are two pics showing the difference in quality. The first one is a frame of the trimmed file, the second one is approximately the same frame, for the original file.

[frame from the trimmed file]

[approximately the same frame from the original file]

Thank you


Source: (StackOverflow)

Difficulty animating a matplotlib graph with moviepy

I have to make an animation of a large number (~90,000) of figures. For context, it's a plot of a map for every day from 1700 - 1950, with events of interest marked on relevant days. I can do this using matplotlib.animation.FuncAnimation, and I have code that does this successfully for a small test period. However, with the complete set of figures this is taking an impractical amount of time to render and will result in a very large movie file. I have read that apparently moviepy offers both speed and file size advantages. However, I am having trouble getting this to work; I believe my problem is that I have not understood how to correctly set the duration and fps arguments.

A simplified version of my code is:

import numpy as np
import matplotlib.pyplot as plt
from moviepy.video.io.bindings import mplfig_to_npimage
import moviepy.editor as mpy

fig = plt.figure()
ax = plt.axes()
x = np.random.randn(10,1)
y = np.random.randn(10,1)
p = plt.plot(x,y,'ko')

time = np.arange(2341973,2342373)

def animate(i):
   xn = x+np.sin(2*np.pi*time[i]/10.0)
   yn = y+np.cos(2*np.pi*time[i]/8.0)
   p[0].set_data(xn,yn)
   return mplfig_to_npimage(fig)

fps = 1 
duration = len(time)
animation = mpy.VideoClip(animate, duration=duration)
animation.write_videofile("test.mp4", fps=fps)

However, this does not produce the intended result: a movie with one frame for each element of time, saved as an .mp4. I can't see where I have gone wrong; any help or pointers would be appreciated.
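For context, VideoClip calls make_frame with a time t in seconds rather than a frame index, so one hedged way to get exactly one frame per element of time is to map t back to an index and derive duration from fps. The sketch below reuses the names from the code above; it is an assumption about the intent, not a verified answer:

fps = 1
duration = len(time) / float(fps)  # total length in seconds, one frame per element

def animate(t):
    i = min(int(t * fps), len(time) - 1)  # convert seconds back to a frame index
    xn = x + np.sin(2 * np.pi * time[i] / 10.0)
    yn = y + np.cos(2 * np.pi * time[i] / 8.0)
    p[0].set_data(xn, yn)
    return mplfig_to_npimage(fig)

animation = mpy.VideoClip(animate, duration=duration)
animation.write_videofile("test.mp4", fps=fps)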

Best wishes, Luke


Source: (StackOverflow)

Why do the frames of a VideoClip change when it is written to a video file?

I wrote the following code:

from moviepy.editor import *
from PIL import Image
clip= VideoFileClip("video.mp4")
video= CompositeVideoClip([clip])
video.write_videofile("video_new.mp4",fps=clip.fps)

Then, to check whether the frames have changed and, if so, which function changed them, I retrieved the first frame of 'clip', 'video' and 'video_new.mp4' and compared them:

clip1= VideoFileClip("video_new.mp4")
img1= clip.get_frame(0)
img2= video.get_frame(0)
img3= clip1.get_frame(0)
a=img1[0,0,0]
b=img2[0,0,0]
c=img3[0,0,0]

I found that a=24, b=24, but c=26. In fact, on running an array compare loop I found that 'img1' and 'img2' were identical but 'img3' was different. I suspect that the function video.write_videofile is responsible for the change in the array, but I don't know why. Can anybody explain this to me and also suggest a way to write clips without changing their frames?

PS: I read the docs of 'VideoFileClip', 'FFMPEG_VideoWriter' and 'FFMPEG_VideoReader' but could not find anything useful. I need to read the exact frame as it was before writing in a code I'm working on. Please suggest a way.
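One hedged sketch of a workaround, assuming the pixel drift comes from lossy mp4 encoding rather than from CompositeVideoClip itself: write to a codec that moviepy documents as lossless for .avi ('png' or 'rawvideo') and compare again. Larger files, but the round trip should then preserve frame values:

# lossless round trip (large output file); values read back should match exactly
video.write_videofile("video_new.avi", codec="png", fps=clip.fps)

clip2 = VideoFileClip("video_new.avi")
print(clip.get_frame(0)[0, 0, 0], clip2.get_frame(0)[0, 0, 0])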


Source: (StackOverflow)

FileNotFoundError from moviepy

I'm trying to replicate a gif converter from this tutorial but it still gives me errors.

I installed all dependencies for moviepy and followed all the instructions.

I'm using windows 8.1 64 bit

from moviepy.editor import *
clip = (VideoFileClip("C:\abi\youtubetogif_project\a.mp4")
    .resize(0.5))
clip.write_gif("a.gif")

Changing the path or changing videos still won't work.

EDIT: using double backslashes like this "C:\\abi\\youtubetogif_project\\a.mp4" still gives me the error

The exception:

Traceback (most recent call last):
  File "C:\abi\youtubetogif_project\test.py", line 3, in <module>
    clip = VideoFileClip("C:\abi\youtubetogif_project\a.mp4")
  File "C:\Python34\lib\site-packages\moviepy-0.2.1.8.12-py3.4.egg\moviepy\video\io\VideoFileClip.py", line 55, in __init__
    self.reader = FFMPEG_VideoReader(filename, pix_fmt=pix_fmt)
  File "C:\Python34\lib\site-packages\moviepy-0.2.1.8.12-py3.4.egg\moviepy\video\io\ffmpeg_reader.py", line 22, in __init__
    infos = ffmpeg_parse_infos(filename, print_infos, check_duration)
  File "C:\Python34\lib\site-packages\moviepy-0.2.1.8.12-py3.4.egg\moviepy\video\io\ffmpeg_reader.py", line 209, in ffmpeg_parse_infos
    stderr=sp.PIPE)
  File "C:\Python34\lib\subprocess.py", line 848, in __init__
    restore_signals, start_new_session)
  File "C:\Python34\lib\subprocess.py", line 1104, in _execute_child
    startupinfo)
FileNotFoundError: [WinError 2] The system cannot find the file specified
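For what it's worth, this FileNotFoundError is raised inside subprocess while launching an external program, so a hedged first check (an assumption, not a confirmed diagnosis) is whether the ffmpeg executable itself can be started from Python at all:

import subprocess

# If this also raises FileNotFoundError, ffmpeg is not on PATH and moviepy
# cannot spawn it; if it prints a version banner, the problem lies elsewhere.
print(subprocess.check_output(["ffmpeg", "-version"]))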

Source: (StackOverflow)

Unable to create a textclip in moviepy (imagemagick successfully installed?) - got Utf8 Error

I got this error when using moviepy with TextClip: 'utf8' codec can't decode byte 0x84 in position 5: invalid start byte

ImageMagick and Wand are (properly?) installed. Does anybody know a possible solution?


Source: (StackOverflow)

animating mayavi with moviepy

I am trying to figure out how to export 3D plots created with Mayavi to a movie that I can use for presentations in Powerpoint etc. I found a discussion of doing this using moviepy at

http://zulko.github.io/blog/2014/11/29/data-animations-with-python-and-moviepy/

I used this code, with slight modifications, as follows:

duration = 6
def make_frame(t):
    u = np.linspace(0,2*np.pi,360)                                              
    y = np.sin(3*u)*(0.2+0.5*np.cos(2*np.pi*t/duration))
    pore_surface.mlab_source.set(y = y)                                         
    mlab.view(azimuth= 360*t/duration, distance=200)  
.
.
.
verts, faces = marching_cubes(large_region, 0.5, (1., 1., 1.))
surface_area = mesh_surface_area(verts, faces)
pore_surface = mlab.triangular_mesh([vert[0] for vert in verts],[vert[1] for vert in verts],[vert[2] for vert in verts],faces) 
mlab.show(pore_surface)

animation = mpy.VideoClip(make_frame, duration=duration).resize(0.5)
animation.write_videofile("pore_surface.mp4", fps=20)
animation.write_gif("pore_surface.gif", fps=20)

where marching_cubes is from scikit-image.

However, I get a broadcast error, as follows (there are 360 values in each of the new arrays):

Exception occurred in traits notification handler for object: , trait: y, old value: [ 0. 0. 1. ..., 62.5 63. 63. ], new value: [ 0.00000000e+00 3.67371235e-02 7.33729915e-02 1.09806628e-01 1.45937613e-01 1.81666362e-01 2.16894399e-01 2.51524628e-01 ...

Traceback (most recent call last):
  File "/Users/iz9/Library/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/traits/trait_notifiers.py", line 340, in call
    self.handler( *args )
  File "/Users/iz9/Library/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/mayavi/tools/sources.py", line 835, in _y_changed
    self.points[:, 1] = y.ravel()
ValueError: could not broadcast input array from shape (360) into shape (43505)

ERROR:traits:Exception occurred in traits notification handler for object: , trait: y, old value: [ 0. 0. 1. ..., 62.5 63. 63. ], new value: [ 0.00000000e+00 3.67371235e-02 7.33729915e-02 ...

This repeats many times. Meanwhile, the Mayavi scene does show, and it does show the image spinning while the error continues.

I am running Python under Enthought Canopy on a Mac.
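One hedged reading of the error: the triangular mesh built from marching_cubes has len(verts) points (43505 in the traceback), so whatever is passed to mlab_source.set(y=...) must supply one value per vertex, not 360. A sketch of a make_frame written that way, reusing the names from the code above and assuming numpy is imported as np (the scaling formula is carried over from the original and is itself an assumption):

y0 = np.array([vert[1] for vert in verts])  # original y coordinate of every mesh vertex

def make_frame(t):
    # displace all vertices by the same time-dependent factor, keeping the array length
    scale = 0.2 + 0.5 * np.cos(2 * np.pi * t / duration)
    pore_surface.mlab_source.set(y=y0 * scale)
    mlab.view(azimuth=360 * t / duration, distance=200)
    return mlab.screenshot(antialiased=True)  # moviepy needs a numpy frame back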


Source: (StackOverflow)

How can I address large memory usage by MoviePy?

I am trying to make a video from a large number of images using MoviePy. The approach works fine for small numbers of images, but the process is killed for large numbers of images. After about 500 images have been added, the Python process is using about half of the available memory. There are many more images than that.

How should I address this? I want the processing to complete and I don't mind if the processing takes a bit longer, but it would be good if I could limit the memory and CPU usage in some way. With the current approach, the machine becomes almost unusable while processing.

The code is as follows:

import os
import time
from   moviepy.editor import *

def ls_files(
    path = "."
    ):
    return([fileName for fileName in os.listdir(path) if os.path.isfile(
        os.path.join(path, fileName)
    )])

def main():

    listOfFiles = ls_files()
    listOfTileImageFiles = [fileName for fileName in listOfFiles \
        if "_tile.png" in fileName
    ]
    numberOfTiledImages = len(listOfTileImageFiles)

    # Create a video clip for each image.
    print("create video")
    videoClips = []
    imageDurations = []
    for imageNumber in range(0, numberOfTiledImages):
        imageFileName = str(imageNumber) + "_tile.png"
        print("add image {fileName}".format(
            fileName = imageFileName
        ))
        imageClip = ImageClip(imageFileName)
        duration  = 0.1
        videoClip = imageClip.set_duration(duration)
        # Determine the image start time by calculating the sum of the durations
        # of all previous images.
        if imageNumber != 0:
            videoStartTime = sum(imageDurations[0:imageNumber])
        else:
            videoStartTime = 0
        videoClip = videoClip.set_start(videoStartTime)
        videoClips.append(videoClip)
        imageDurations.append(duration)
    fullDuration = sum(imageDurations)
    video = concatenate(videoClips)
    video.write_videofile(
        "video.mp4",
        fps         = 30,
        codec       = "mpeg4",
        audio_codec = "libvorbis"
    )

if __name__ == "__main__":
    main()
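A hedged alternative worth trying (a sketch under the assumption that the numbered <n>_tile.png files are the only input): ImageSequenceClip builds the clip from a list of file names and reads each frame from disk when it is needed, instead of keeping one ImageClip per image alive in memory.

from moviepy.editor import ImageSequenceClip

fileNames = [str(n) + "_tile.png" for n in range(numberOfTiledImages)]
# fps=10 reproduces the 0.1 s per image used in the loop above
video = ImageSequenceClip(fileNames, fps=10)
video.write_videofile("video.mp4", fps=30, codec="mpeg4")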

Source: (StackOverflow)

Why does moviepy complain about bitrate while generating an audio file?

I have just tried to use the moviepy library for the first time. Generation of movies from numpy arrays was really simple, intuitive and worked out of the box. This is what I tried:

from moviepy.editor import VideoClip
import numpy as np

def make_frame(t):

    val = int(255.0*(t/3.0))

    ls = []
    for height in range(100):
        row = []
        for width in range(300):
            row.append([val,0,0])
        ls.append(row)
    frame = np.array(ls)
    return frame

animation = VideoClip(make_frame, duration = 3)

animation.write_gif('first_try.gif', fps=24)
animation.write_videofile('first_try.mp4', fps=24)

Then I wanted to use moviepy to generate sound. In theory it should work in a very similar way. Here is what I tried:

from moviepy.editor import AudioClip
import numpy as np

make_frame = lambda t : 2*[ np.sin(404 * 2 * np.pi * t) ]
clip = AudioClip(make_frame, duration=5)

clip.write_audiofile('sound.mp4')

However, I got an error message:

[MoviePy] Writing audio in sound.mp4
|----------| 0/111   0% [elapsed: 00:00 left: ?, ? iters/sec]Traceback (most recent call last):
  File "sound.py", line 9, in <module>
    clip.write_audiofile('sound.mp4')
  File "<string>", line 2, in write_audiofile
  File "/usr/local/lib/python2.7/dist-packages/moviepy/decorators.py", line 54, in requires_duration
    return f(clip, *a, **k)
  File "/usr/local/lib/python2.7/dist-packages/moviepy/audio/AudioClip.py", line 204, in write_audiofile
    verbose=verbose, ffmpeg_params=ffmpeg_params)
  File "<string>", line 2, in ffmpeg_audiowrite
  File "/usr/local/lib/python2.7/dist-packages/moviepy/decorators.py", line 54, in requires_duration
    return f(clip, *a, **k)
  File "/usr/local/lib/python2.7/dist-packages/moviepy/audio/io/ffmpeg_audiowriter.py", line 162, in ffmpeg_audiowrite
    writer.write_frames(chunk)
  File "/usr/local/lib/python2.7/dist-packages/moviepy/audio/io/ffmpeg_audiowriter.py", line 122, in write_frames
    raise IOError(error)
IOError: [Errno 32] Broken pipe

MoviePy error: FFMPEG encountered the following error while writing file sound.mp4:

Invalid encoder type 'libx264'


The audio export failed, possily because the bitrate you specified was two high or too low for the video codec.

Does anybody know what this error means and how this problem can be resolved?
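A hedged sketch of two things commonly tried in this situation (assumptions, not a confirmed fix): write to a plain audio container such as .wav, which does not make ffmpeg guess a video codec from the extension, or keep the .mp4 name but specify an audio codec explicitly:

clip.write_audiofile('sound.wav', fps=44100)               # plain audio container
clip.write_audiofile('sound.mp4', fps=44100, codec='aac')  # or force an audio codec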


Source: (StackOverflow)

How can I get the volume of sound of a video in Python using moviepy?

I want to get the volume of sound of a video so I use the following:

import numpy as np # for numerical operations
from moviepy.editor import VideoFileClip, concatenate

clip = VideoFileClip("soccer_game.mp4")
cut = lambda i: clip.audio.subclip(i,i+1).to_soundarray(fps=22000)
volume = lambda array: np.sqrt(((1.0*array)**2).mean())
volumes = [volume(cut(i)) for i in range(0,int(clip.audio.duration-2))] 

But I get these errors:

Exception AttributeError: "VideoFileClip instance has no attribute 'reader'" in <bound method VideoFileClip.__del__ of <moviepy.video.io.VideoFileClip.VideoFileClip instance at 0x084C3198>> ignored

WindowsError: [Error 5] Access is denied

I am using IPython Notebook and Python 2.7. I assume something doesn't have the appropriate permissions. I have set "Run this program as an administrator" for ffmpeg.exe, ffplay.exe and ffprobe.exe.


Source: (StackOverflow)

Create a series of text clips and concatenate them into a video using moviepy

  1. In MoviePy there is an API to create a clip from text as well as to concatenate a list of clips.
  2. I am trying to create a list of clips in a loop and then trying to concatenate them.
  3. The problem is that every time it creates a video file of 25 seconds containing only the last text in the loop.

Here is the code:

from moviepy.editor import TextClip, concatenate_videoclips

clip_list = []
for text in text_list:  # text_list holds the strings to render
    try:
        txt_clip = TextClip(text,fontsize=70,color='white')
        txt_clip = txt_clip.set_duration(2)
        clip_list.append(txt_clip)
    except UnicodeEncodeError:
        txt_clip = TextClip("Issue with text",fontsize=70,color='white')
        txt_clip = txt_clip.set_duration(2)
        clip_list.append(txt_clip)
final_clip = concatenate_videoclips(clip_list)
final_clip.write_videofile("my_concatenation.mp4",fps=24, codec='mpeg4')

Source: (StackOverflow)

Why does concatenation of images in moviepy sometimes fail?

I use the following code to read two images, set their duration and concatenate them into one animation.

from moviepy.editor import *

ic_1 = ImageClip('pg_0.png')
ic_1 = ic_1.set_duration(2.0)

ic_2 = ImageClip('pg_1.png')
ic_2 = ic_2.set_duration(2.0)

video = concatenate([ic_1, ic_2], method="compose")
video.write_videofile('test.avi', fps=24, codec='mpeg4')

It works as expected for pg_0.png and pg_1.png. But if I replace these two images with another two images, I get an error message:

ValueError: operands could not be broadcast together with shapes (272,363,3) (272,363) 

If more details are needed, here is the complete message:

[MoviePy] >>>> Building video test.avi
[MoviePy] Writing video test.avi
|----------| 0/97   0% [elapsed: 00:00 left: ?, ? iters/sec]Traceback (most recent call last):
  File "test2.py", line 12, in <module>
    video.write_videofile('test.avi', fps=24, codec='mpeg4')
  File "<string>", line 2, in write_videofile
  File "/usr/local/lib/python2.7/dist-packages/moviepy/decorators.py", line 54, in requires_duration
    return f(clip, *a, **k)
  File "<string>", line 2, in write_videofile
  File "/usr/local/lib/python2.7/dist-packages/moviepy/decorators.py", line 137, in use_clip_fps_by_default
    return f(clip, *new_a, **new_kw)
  File "<string>", line 2, in write_videofile
  File "/usr/local/lib/python2.7/dist-packages/moviepy/decorators.py", line 22, in convert_masks_to_RGB
    return f(clip, *a, **k)
  File "/usr/local/lib/python2.7/dist-packages/moviepy/video/VideoClip.py", line 339, in write_videofile
    ffmpeg_params=ffmpeg_params)
  File "/usr/local/lib/python2.7/dist-packages/moviepy/video/io/ffmpeg_writer.py", line 204, in ffmpeg_write_video
    fps=fps, dtype="uint8"):
  File "/usr/local/lib/python2.7/dist-packages/tqdm.py", line 78, in tqdm
    for obj in iterable:
  File "/usr/local/lib/python2.7/dist-packages/moviepy/Clip.py", line 473, in generator
    frame = self.get_frame(t)
  File "<string>", line 2, in get_frame
  File "/usr/local/lib/python2.7/dist-packages/moviepy/decorators.py", line 89, in wrapper
    return f(*new_a, **new_kw)
  File "/usr/local/lib/python2.7/dist-packages/moviepy/Clip.py", line 95, in get_frame
    return self.make_frame(t)
  File "/usr/local/lib/python2.7/dist-packages/moviepy/video/compositing/CompositeVideoClip.py", line 110, in make_frame
    f = c.blit_on(f, t)
  File "/usr/local/lib/python2.7/dist-packages/moviepy/video/VideoClip.py", line 571, in blit_on
    return blit(img, picture, pos, mask=mask, ismask=self.ismask)
  File "/usr/local/lib/python2.7/dist-packages/moviepy/video/tools/drawing.py", line 45, in blit
    new_im2[yp1:yp2, xp1:xp2] = blitted
ValueError: operands could not be broadcast together with shapes (272,363,3) (272,363) 

Why are the shapes different? All the images that I use look like normal PNG images to me. How can I resolve this problem?
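A hedged guess at what the shapes mean (an assumption based on the error, not a verified diagnosis): a grayscale or palette PNG is read as a (height, width) array while an RGB PNG is read as (height, width, 3), and mixing the two produces exactly this kind of broadcast error. A minimal sketch that normalizes the inputs before building the clips:

from PIL import Image

# convert every input image to plain RGB so all frames share the (H, W, 3) shape
for name in ('pg_0.png', 'pg_1.png'):
    Image.open(name).convert('RGB').save(name)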


Source: (StackOverflow)

Unexpected Output In Python While Using MoviePy

Hi, I added a simple GUI to this script:

A python script to automatically summarize soccer videos based on the crowd's reactions

The GUI script is the following:

from Tkinter import *

from tkFileDialog import askopenfilename
from soccer_reacts import video_edit

class MyFrame(Frame):
    def __init__(self):

        master = Tk()
        Label(master, text="Please Insert Video Path With Browse", width=30).grid(row=0)
        Frame.__init__(self)
        self.master.title("Video Editor")

        self.master.geometry('{}x{}'.format(300, 200))
        self.master.rowconfigure(5, weight=1)
        self.master.columnconfigure(5, weight=1)
        self.grid(sticky=W+E+N+S)

        self.button = Button(self, text="Browse", command=self.load_file, width=15)
        self.button.grid(row=1, column=0, sticky=W)
        self.button2 = Button(self, text="Start", command=self.vid_reactions, width=15)
        self.button2.grid(row=2, column=0, sticky=W)

    def load_file(self):
        fname = askopenfilename(filetypes=(("MP4 files", "*.mp4"), ("All files", "*.*")))
        if fname:
            self.fname = fname

    def vid_reactions(self):
        print("[*]Starting operation")
        print("[*]File : " + self.fname)
        video_edit(self.fname)
        print("[*]Operation Finished")


if __name__ == "__main__":
    MyFrame().mainloop()

And this is the new code of soccer cuts:

import numpy as np # for numerical operations
from moviepy.editor import VideoFileClip, concatenate


def video_edit(file_name):
    clip = VideoFileClip(file_name)
    cut = lambda i: clip.audio.subclip(i, i+1).to_soundarray(fps=22000)
    volume = lambda array: np.sqrt(((1.0*array)**2).mean())
    volumes = [volume(cut(i)) for i in range(0, int(clip.audio.duration-2))]
    averaged_volumes = np.array([sum(volumes[i:i+10])/10
                                 for i in range(len(volumes)-10)])

    increases = np.diff(averaged_volumes)[:-1] >= 0
    decreases = np.diff(averaged_volumes)[1:] <= 0
    peaks_times = (increases * decreases).nonzero()[0]
    peaks_vols = averaged_volumes[peaks_times]
    peaks_times = peaks_times[peaks_vols > np.percentile(peaks_vols, 90)]

    final_times = [peaks_times[0]]
    for t in peaks_times:
        if (t - final_times[-1]) < 60:
            if averaged_volumes[t] > averaged_volumes[final_times[-1]]:
                final_times[-1] = t
        else:
            final_times.append(t)

    final = concatenate([clip.subclip(max(t-5, 0), min(t+5, clip.duration))
                         for t in final_times])
    final.to_videofile(file_name) # low quality is the default

When I run the new code, the output is an mp4 file with the sound of the match but with no video. I've checked all the changes I made and I cannot find anything wrong. Can anyone help?
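One detail worth flagging, purely as a hedged observation from the code above and not a confirmed diagnosis: final.to_videofile(file_name) writes the result over the same path the clip is still being read from, so writing to a separate, hypothetical output path is a cheap thing to test:

import os

out_name = os.path.join(os.path.dirname(file_name), "highlights.mp4")  # hypothetical output path
final.to_videofile(out_name, fps=clip.fps)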


Source: (StackOverflow)

How to concatenate videos in moviepy?

I am trying to use moviepy to generate video with texts. First, I want to show one message and then another one. In my case I want to show "Dog" for one second and then "Cat Cat". For that I use the following code:

from moviepy.editor import *

def my_func(messeges):

    clips = {}
    count = 0
    for messege in messeges:
        count += 1
        clips[count] = TextClip(messege, fontsize=270, color='green')
        clips[count] = clips[count].set_pos('center').set_duration(1)
        clips[count].write_videofile(str(count) + '.avi', fps=24, codec='mpeg4')

    videos = [clips[i+1] for i in range(count)]
    video = concatenate(videos)
    video.write_videofile('test.avi', fps=24, codec='mpeg4')

    video = VideoFileClip('test.avi')
    video.write_gif('test.gif', fps=24)

if __name__ == '__main__':

    ms  = []    
    ms += ['Dog']
    ms += ['Cat Cat']
    my_func(ms)

This is the result that I get:

[screenshot of the resulting GIF]

Does anybody know why I have problems with cats?
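A hedged sketch of one likely culprit (an assumption, not a confirmed answer): each TextClip gets its own size, so "Dog" and "Cat Cat" have different dimensions, and a plain concatenation crops everything to the first clip's size. Asking concatenate to compose the clips onto a common canvas avoids that:

# method="compose" places clips of different sizes onto a canvas large enough for all of them
video = concatenate(videos, method="compose")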


Source: (StackOverflow)

Python Moviepy module: output video?

So basically, my code is supposed to edit each video in a given directory down to its first 15 seconds, its middle 15 seconds and its last 15 seconds. I'm on Python 2.7 and I'm using the moviepy module.

import moviepy.editor as mp
from moviepy.editor import *
import os

for item in os.listdir(wildcard):
    clip = VideoFileClip(vid + item)
    dur = clip.duration
    firstHalf = (dur/2.0) - 7.5
    secHalf = (dur/2.0) + 7.5
    end = dur - 15.0
    clip1 = clip.subclip(0, 15.0)
    clip2 = clip.subclip(firstHalf, secHalf)
    clip3 = clip.subclip(end, int(dur))
    video = mp.concatenate([clip1,clip2,clip3])
    video.to_videofile(wildcard, fps=24, codec='mpeg4')

But I keep getting an error at the video = mp.concatenate() line. I'm not sure why, but it outputs the message "Errno 22: Invalid Argument."
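One hedged observation from the snippet, not a confirmed diagnosis: `wildcard` is treated as a directory in os.listdir(wildcard) but is later passed to to_videofile as the output file name, so a per-item output path is a reasonable thing to try (the name below is hypothetical):

import os

# hypothetical output name, one file per input item
out_path = os.path.join(wildcard, "edited_" + item)
video.to_videofile(out_path, fps=24, codec='mpeg4')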


Source: (StackOverflow)

Set the moviepy progress bar on a tkinter GUI

I've written a program with a GUI to cut some videos in a specific way. How can I retrieve the progress bar that the tqdm library prints to the console and display it in a Tkinter GUI?

thx
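A hedged sketch of one approach, under the assumption that moviepy's tqdm progress goes to the standard output/error streams: swap sys.stdout and sys.stderr for a small object that pushes each written line into a Tkinter label. This is an illustration of the idea, not a polished solution; ConsoleToLabel and progress_label are made-up names.

import sys

class ConsoleToLabel(object):
    """Route console writes into an existing Tkinter Label (hypothetical helper)."""
    def __init__(self, label):
        self.label = label

    def write(self, text):
        text = text.strip()
        if text:
            self.label.config(text=text)
            self.label.update_idletasks()

    def flush(self):
        pass

# usage sketch, inside the GUI code:
# sys.stdout = sys.stderr = ConsoleToLabel(progress_label)
# ... run the moviepy cutting code ...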


Source: (StackOverflow)