Sound and video¶
pyglet can play many audio and video formats. Audio is played back with either OpenAL, DirectSound or Pulseaudio, permitting hardware-accelerated mixing and surround-sound 3D positioning. Video is played into OpenGL textures, and so can be easily manipulated in real-time by applications and incorporated into 3D environments.
Decoding of compressed audio and video is provided by FFmpeg v4.X, an optional component available for Linux, Windows and Mac OS X. FFmpeg needs to be installed separately.
If FFmpeg is not present, pyglet will fall back to reading uncompressed WAV files only. This may be sufficient for many applications that require only a small number of short sounds, in which case those applications need not distribute FFmpeg.
pyglet can use OpenAL, DirectSound or Pulseaudio to play back audio. Only one
of these drivers can be used in an application. In most cases you won’t need
to concern yourself with choosing a driver, but you can manually select one if
desired. This must be done before the
pyglet.media module is loaded.
The available drivers depend on your operating system: Windows provides DirectSound (and optionally OpenAL), Linux provides Pulseaudio (and optionally OpenAL), and Mac OS X provides OpenAL.
The audio driver can be set through the
audio key of the
pyglet.options dictionary. For example:
pyglet.options['audio'] = ('openal', 'pulse', 'directsound', 'silent')
This tells pyglet to try using the OpenAL driver first, and if not available
to try Pulseaudio and DirectSound in that order. If all else fails, no driver
will be instantiated. The
audio option can be a list of any of these
strings, giving the preference order for each driver:
String         Audio driver
'openal'       OpenAL
'directsound'  DirectSound
'pulse'        Pulseaudio
'silent'       No audio output
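This first-match fallback can be pictured as a simple search over the preference tuple. The following is an illustrative pure-Python sketch of that logic only, not pyglet's implementation; pick_driver and the available set are hypothetical names:

```python
def pick_driver(preferences, available):
    """Return the first preferred driver that can be used.
    'silent' always succeeds, since it produces no audio output."""
    for name in preferences:
        if name == 'silent' or name in available:
            return name
    return None  # no driver could be instantiated

# On a Linux machine where only Pulseaudio is present:
print(pick_driver(('openal', 'pulse', 'directsound', 'silent'), {'pulse'}))
# → pulse
```

Because 'silent' always succeeds, putting it last guarantees the application starts even on a machine with no usable audio device.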
The following sections describe the requirements and limitations of each audio driver.
DirectSound¶
DirectSound is available only on Windows, and is installed by default. pyglet uses only DirectX 7 features. On Windows Vista, DirectSound does not support hardware audio mixing or surround sound.
OpenAL¶
OpenAL is included with Mac OS X. Windows users can download a generic driver from openal.org, or from their sound device’s manufacturer. Most Linux distributions have OpenAL available in their repositories; for example, Ubuntu users can apt install libopenal1.
Pulseaudio¶
Pulseaudio has become the standard Linux audio implementation over the past few years, and is installed by default with most modern Linux distributions. Pulseaudio does not support positional audio, and is limited to stereo. It is recommended to use OpenAL if positional audio is required.
Supported media types¶
If FFmpeg is not installed, only uncompressed RIFF/WAV files encoded with linear PCM can be read.
With FFmpeg, many common and less-common formats are supported. Due to the large number of combinations of audio and video codecs, options, and container formats, it is difficult to provide a complete yet useful list. Some of the supported audio formats are AU, MP2, MP3, OGG/Vorbis, WAV and WMA. Some of the supported video formats are AVI, DivX, H.263, H.264, MPEG, MPEG-2, OGG/Theora, Xvid and WMV.
For a complete list, see the FFmpeg sources. Otherwise, it is probably simpler to try playing back your target file with the media_player.py example.
New versions of FFmpeg, as they are released, may support additional formats or fix errors in the current implementation. The current bindings were written with ctypes against FFmpeg v4.X, which means this version of pyglet supports all FFmpeg binaries with the major version set to 4.
You can install FFmpeg for your platform by following the instructions found on the FFmpeg download page. You must choose the shared build for the targeted OS, with the same architecture as the Python interpreter. The major version must be 4.X; all minor versions are supported. If you’re shipping your project with a 32-bit interpreter, you must download the 32-bit shared binaries.
On Windows, the usual error message when the wrong architecture was downloaded is:
WindowsError: [Error 193] %1 is not a valid Win32 application
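If you are unsure which architecture your interpreter is, a quick standard-library check is:

```python
import struct

# A pointer is 8 bytes on a 64-bit interpreter, 4 bytes on a 32-bit one.
bits = struct.calcsize('P') * 8
print(f'This Python interpreter is {bits}-bit; '
      f'download the {bits}-bit FFmpeg shared build.')
```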
Finally make sure you download the shared builds, not the static or the dev builds.
For Mac OS and Linux, the library is usually already installed system-wide. For Windows users, it’s not recommended to install the library in one of the Windows sub-folders. Instead, we recommend using the search_local_libs option:
import pyglet
pyglet.options['search_local_libs'] = True
This will allow pyglet to find the FFmpeg binaries in a lib sub-folder located in your running script folder.
Another solution is to manipulate the environment variables. On Windows, you can add the DLL location to the PATH (note the separator, which the original loader path requires):
import os
os.environ["PATH"] += os.pathsep + "path/to/ffmpeg"
For Linux and Mac OS:
import os
os.environ["LD_LIBRARY_PATH"] += ":" + "path/to/ffmpeg"
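The two cases can be combined in a small helper. This is an illustrative sketch (add_ffmpeg_dir is a name of our own, not pyglet API), and the directory must be added before pyglet.media is imported:

```python
import os
import sys

def add_ffmpeg_dir(path):
    """Prepend an FFmpeg directory to the dynamic-library search path.
    The relevant environment variable depends on the platform."""
    var = 'PATH' if sys.platform == 'win32' else 'LD_LIBRARY_PATH'
    current = os.environ.get(var)
    os.environ[var] = path if not current else path + os.pathsep + current

add_ffmpeg_dir('path/to/ffmpeg')
```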
Loading media¶
Audio and video files are loaded in the same way, using the
pyglet.media.load() function, providing a filename:
source = pyglet.media.load('explosion.wav')
The result of loading a media file is a
Source object. This object provides useful
information about the type of media encoded in the file, and serves as an
opaque object used for playing back the file (described in the next section).
The load() function will raise a
MediaException if the format is unknown.
An IOError may also be raised if the file could not be read from disk.
Future versions of pyglet will also support reading from arbitrary file-like
objects, however a valid filename must currently be given.
The length of the media file is given by the
duration property, which returns the media’s
length in seconds.
Audio metadata is provided in the source’s
audio_format attribute, which is None for
silent videos. This metadata is not generally useful to applications. See the
AudioFormat class documentation for details.
Video metadata is provided in the source’s
video_format attribute, which is None for
audio files. It is recommended that this attribute is checked before
attempting to play back a video file – if a movie file has a readable audio
track but unknown video format it will appear as an audio file.
You can use the video metadata, described in a
VideoFormat object, to set up display of the video
before beginning playback. The attributes are as follows:
width, height
Width and height of the video image, in pixels.
sample_aspect
The aspect ratio of each video pixel.
You must take care to apply the sample aspect ratio to the video image size for display purposes. The following code determines the display size for a given video format:
def get_video_size(width, height, sample_aspect):
    if sample_aspect > 1.:
        return width * sample_aspect, height
    elif sample_aspect < 1.:
        return width, height / sample_aspect
    else:
        return width, height
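As a worked example (repeating the function so the snippet is self-contained): an anamorphic NTSC DVD frame is stored as 720×480 pixels with a pixel aspect of 32/27, so it should be displayed roughly 853 pixels wide:

```python
def get_video_size(width, height, sample_aspect):
    # Wide pixels stretch the width; tall pixels stretch the height.
    if sample_aspect > 1.:
        return width * sample_aspect, height
    elif sample_aspect < 1.:
        return width, height / sample_aspect
    return width, height

display_w, display_h = get_video_size(720, 480, 32 / 27)
print(round(display_w), display_h)  # → 853 480
```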
Media files are not normally read entirely from disk; instead, they are streamed into the decoder, and then into the audio buffers and video memory only when needed. This reduces the startup time of loading a file and reduces the memory requirements of the application.
However, there are times when it is desirable to completely decode an audio file in memory first. For example, a sound that will be played many times (such as a bullet or explosion) should only be decoded once. You can instruct pyglet to completely decode an audio file into memory at load time:
explosion = pyglet.media.load('explosion.wav', streaming=False)
Alternatively, you can wrap an existing source in a StaticSource:
explosion = pyglet.media.StaticSource(pyglet.media.load('explosion.wav'))
Audio synthesis¶
In addition to loading audio files, the pyglet.media.synthesis
module is available for simple audio synthesis. There are several basic
waveforms available, including Sine, Square, Sawtooth, Triangle and FM.
The module documentation for each will provide more information on constructing them, but at a minimum you will need to specify the duration. You will also want to set the audio frequency (most waveforms will default to 440Hz). Some waveforms, such as the FM, have additional parameters.
For shaping the waveforms, several simple envelopes are available. These envelopes affect the amplitude (volume), and can make for more natural sounding tones. You first create an envelope instance, and then pass it into the constructor of any of the above waveforms. The same envelope instance can be passed to any number of waveforms, reducing duplicate code when creating multiple sounds. If no envelope is used, all waveforms will default to the FlatEnvelope of maximum volume, which essentially has no effect on the sound. Check the module documentation of each Envelope to see which parameters are available.
An example of creating an envelope and waveforms:
adsr = pyglet.media.synthesis.ADSREnvelope(0.05, 0.2, 0.1)
saw = pyglet.media.synthesis.Sawtooth(duration=1.0, frequency=220, envelope=adsr)
fm = pyglet.media.synthesis.FM(3, carrier=440, modulator=2, mod_index=22, envelope=adsr)
The waveforms you create with the synthesis module can be played like any other loaded sound. See the next sections for more detail on playback.
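To make the effect of an amplitude envelope concrete, here is a pure-Python sketch of how an attack-decay-sustain-release shape modulates volume over time. This illustrates the concept only; the function and parameter names are ours, not pyglet's implementation:

```python
def adsr_amplitude(t, attack, decay, sustain, release_start, release):
    """Amplitude multiplier (0.0 to 1.0) at time t seconds."""
    if t < attack:                        # ramp up from silence
        return t / attack
    if t < attack + decay:                # decay down to the sustain level
        return 1.0 - (t - attack) / decay * (1.0 - sustain)
    if t < release_start:                 # hold steady
        return sustain
    if t < release_start + release:       # fade out to silence
        return sustain * (1.0 - (t - release_start) / release)
    return 0.0
```

Multiplying each audio sample by this value produces the characteristic "pluck" or "swell" of the shaped tone.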
Simple audio playback¶
Many applications, especially games, need to play sounds in their entirety without needing to keep track of them. For example, a sound needs to be played when the player’s space ship explodes, but this sound never needs to have its volume adjusted, or be rewound, or interrupted.
In such cases, simply call play() on the loaded source:
explosion = pyglet.media.load('explosion.wav', streaming=False)
explosion.play()
Controlling playback¶
You can implement many functions common to a media player using the
Player class. Use of this class is also necessary for video playback. There are no
parameters to its construction:
player = pyglet.media.Player()
A player will play any source that is queued on it. Any number of sources can be queued on a single player, but once queued, a source can never be dequeued (until it is removed automatically once complete). The main use of this queueing mechanism is to facilitate “gapless” transitions between playback of media files.
The queue() method is used to queue
media on the player: either a
StreamingSource or a
StaticSource. You can pass a single instance, or
an iterable of sources. This provides great flexibility. For
instance, you could create a generator which takes care of the logic about
what music to play:
def my_playlist():
    yield intro
    while game_is_running():
        yield main_theme
    yield ending

player.queue(my_playlist())
When the game ends, you will still need to call next_source() on the player;
the generator will then pass the
ending media to the player.
A StreamingSource can only ever be queued on one
player, and only once on that player. StaticSource
objects can be queued any number of times on any number of players. Recall
that a StaticSource can be created by passing
streaming=False to the load() function.
In the following example, two sounds are queued onto a player:
player.queue(source1)
player.queue(source2)
Playback begins with the player’s play() method:
player.play()
Standard controls for controlling playback are provided by these methods:
play()
Begin or resume playback of the current source.
pause()
Pause playback of the current source.
next_source()
Dequeue the current source and move to the next one immediately.
seek()
Seek to a specific time within the current source.
Note that there is no stop method. If you do not need to resume playback,
simply pause playback and discard the player and source objects. Using the
next_source() method does not guarantee gapless playback.
There are several properties that describe the player’s current state:
time
The current playback position within the current source, in seconds. This is read-only (but see the seek() method).
playing
True if the player is currently playing, False if there are no sources queued or the player is paused. This is read-only (but see the play() and pause() methods).
source
A reference to the current source being played. This is read-only (but see the queue() method).
volume
The audio level, expressed as a float from 0 (mute) to 1 (normal volume). This can be set at any time.
loop
True if the current source should be repeated when reaching the end. If set to False, playback will continue to the next queued source.
When a player reaches the end of the current source, by default it will move
immediately to the next queued source. If there are no more sources, playback
stops until another source is queued. The Player has a
loop attribute which determines
its behaviour when the current source reaches the end. If
loop is False (the default), the
Player starts to play the next queued source. If
loop is True, the
Player re-plays the current source until
loop is set to False or
next_source() is called.
You can change the
loop attribute at
any time, but be aware that unless sufficient time is given for the future
data to be decoded and buffered there may be a stutter or gap in playback.
If set well in advance of the end of the source (say, several seconds), there
will be no disruption.
To play back multiple similar sources without any audible gaps, a
SourceGroup is provided. A
SourceGroup can only contain media sources
with identical audio or video format. First create an instance of
SourceGroup, and then add all desired additional
sources with the add() method.
Afterwards, you can queue the SourceGroup
on a Player as if it was a single source.
Incorporating video¶
While the Player is playing back a source with
video, use the
texture property to obtain the current
video frame image. This can be used to display the video image
synchronised with the audio track, for example:
@window.event
def on_draw():
    player.texture.blit(0, 0)
The texture is an instance of
pyglet.image.Texture, with an internal
format of either GL_TEXTURE_2D or
GL_TEXTURE_RECTANGLE_ARB. While the
texture will typically be created only once and subsequently updated each
frame, you should make no such assumption in your application – future
versions of pyglet may use multiple texture objects.
Positional audio¶
pyglet includes features for positioning sound within a 3D space. This is particularly effective with a surround-sound setup, but is also applicable to stereo systems.
Every Player in pyglet has an associated position
in 3D space – that is, it is equivalent to an OpenAL “source”. The properties
for setting these parameters are described in more detail in the API
documentation; see for example the position property.
A “listener” object is provided by the audio driver. To obtain the listener for the current audio driver:
pyglet.media.get_audio_driver().get_listener()
This provides similar properties, such as volume, position, forward_orientation and up_orientation, which
describe the position of the user in 3D space.
Note that only mono sounds can be positioned. Stereo sounds will play back as normal, and only their volume and pitch properties will affect the sound.
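To give a feel for what positioning does to a mono source, the following sketches the clamped inverse-distance attenuation model that OpenAL uses by default. This is an illustration only; the function and parameter names are ours, and the driver performs this computation internally:

```python
def inverse_distance_gain(distance, reference=1.0, rolloff=1.0):
    """Gain (0.0 to 1.0) for a source at `distance` from the listener,
    following OpenAL's default clamped inverse-distance model."""
    distance = max(distance, reference)  # no boost inside the reference radius
    return reference / (reference + rolloff * (distance - reference))

# With reference=1 and rolloff=1, gain roughly halves as distance doubles:
for d in (1.0, 2.0, 4.0):
    print(d, inverse_distance_gain(d))
```

A larger rolloff factor makes sounds fade faster with distance; a larger reference distance keeps nearby sounds at full volume over a wider area.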