There's a Python sounddevice module that can synthesize a realtime audio signal, producing frames and blocks on demand as a function of the current frame time. Example usage is on Read the Docs: https://python-sounddevice.readthedocs.io/en/0.3.14/examples.html#play-a-sine-signal

But I wanted to port this functionality using purely pygame (probably using pygame.mixer.Sound). However, looking at the documentation, it seems that Sound wants to use Python buffers. I have very little experience with those and zero idea how to start. The best I could come up with is this:
    #!/usr/bin/env python3
    import pygame as pg
    import numpy as np

    s_time = 0

    def synth(frames=1024):
        print("Buffer read")
        global s_time

        def frame(i):
            # magic numbers: 0.2 = amplitude, 32767 = int16 full scale,
            # 440 Hz tone, 48000 Hz sample rate
            return 0.2 * 32767 * np.sin(2.0 * np.pi * 440 * i / 48000)

        arr = np.array([frame(x) for x in range(s_time, s_time + frames)]).astype(np.int16)
        print(arr)
        print(len(arr))
        s_time += frames
        return arr

    running = True
    # pre_init must run before pg.init(); otherwise the mixer is already
    # initialized with default settings and these arguments are ignored
    pg.mixer.pre_init(48000, -16, 1, 1024)
    pg.init()
    pg.display.set_mode([320, 200])

    snd = pg.mixer.Sound(buffer=synth())
    snd.play(-1)

    while running:
        for event in pg.event.get():
            if event.type == pg.KEYDOWN:
                if event.key == pg.K_q:
                    print("Key pressed")
                if event.key == pg.K_ESCAPE:
                    print("Quitting")
                    running = False
            if event.type == pg.KEYUP:
                if event.key == pg.K_q:
                    print("Key depressed")
But that just loops the first 1024 frames instead of generating the sine wave in real time.

Is there a pure-pygame way to do realtime sound synthesis, or do I have to resign myself to using the sounddevice library?
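For reference, the buffer itself can be generated and sanity-checked with numpy alone, independent of any audio backend. This is just a sketch of the block generation from the code above, assuming the same parameters (48 kHz mono, signed 16-bit, 440 Hz); the name sine_block is hypothetical:

```python
import numpy as np

SAMPLE_RATE = 48000  # must match the rate passed to the mixer
FREQ = 440.0         # tone frequency in Hz
AMPLITUDE = 0.2      # fraction of int16 full scale (32767)

def sine_block(start_frame, frames=1024):
    """Return one block of int16 sine samples beginning at start_frame.

    Passing the running frame counter as start_frame keeps the phase
    continuous across consecutive blocks, avoiding clicks at block
    boundaries.
    """
    t = np.arange(start_frame, start_frame + frames)
    samples = AMPLITUDE * 32767 * np.sin(2.0 * np.pi * FREQ * t / SAMPLE_RATE)
    return samples.astype(np.int16)

block = sine_block(0)
print(block.dtype, len(block))  # int16 1024
```

The resulting array's `.tobytes()` (or the array itself, via the buffer protocol) is the raw little-endian int16 stream that a mono, 16-bit mixer consumes; the phase-continuity property is what a realtime callback would rely on when emitting one block after another.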