Playing audio

You can use the Cascades MediaPlayer, the mm-renderer service, or the QNX Sound Architecture to play audio files.

Prerequisites

Before your app can play audio with Cascades, the multimedia renderer service, or the QNX Sound Architecture (QSA), you must complete the prerequisites for the platform that you choose.

Prerequisites to play audio with Cascades

When you're using QML, make the multimedia library available by adding the import bb.multimedia 1.4 statement.

When you're using C++, add the #include <bb/multimedia/MediaPlayer> statement for custom sounds, or the #include <bb/multimedia/SystemSound> statement for system sounds.

For C++, you must also add the following line to your app's .pro file: LIBS += -lbbmultimedia

Prerequisites to play audio with the mm-renderer service

When you're using the mm-renderer service to play audio files, include the following header files:

#include <bps/bps.h>
#include <unistd.h>
#include <mm/renderer.h>

Also, you must use the BlackBerry Platform Services (BPS) library to play audio files with the multimedia renderer service. Before you can call any of the functions in the BPS library, you must call bps_initialize() to prepare the platform for use. When you're finished, call bps_shutdown() to close the platform down.

Playing system sounds does not require the use of the mm-renderer service. Instead, use the Sound Player service that's available in the BPS library to play system sounds. Include these header files in your source code:

#include <bps/bps.h>
#include <bps/soundplayer.h>

Prerequisites to play audio with the QNX Sound Architecture

When you're using the QSA to play audio, you must work with PCM devices. There's a series of steps that you must follow when working with PCM devices. For more information, see Working with PCM devices.

Playing audio with Cascades

You can use the MediaPlayer to play audio files and streams. The MediaPlayer allows your app to play audio files, and also provides the ability to seek, rewind, and fast-forward within audio tracks.

You can create one MediaPlayer for every sound you want to play, but that's not necessary unless you intend to play multiple sounds at the same time. You can use one MediaPlayer and programmatically set its sourceUrl property for each sound that you want to play, at the moment that you're ready to play it.

Depending on the resources that are available to the device that runs your app, you could create a MediaPlayer for each sound you want to play. This approach allows you to call the prepare() function for each MediaPlayer in advance and acquire the resources required to play each sound. You can use this technique if you want to play a number of sounds in rapid succession, or if you want to play many sounds at the same time.

The trade-off is that your app acquires a number of resources as it prepares to play all of the sounds in all of the MediaPlayer objects that have been defined. Acquiring all of these resources may cause audio playback to fail when the device doesn't have enough resources to support the app's demands. Whether you create one or many MediaPlayer objects for your app should be judged on a case-by-case basis.

Playing custom or system sounds with Cascades

With Cascades, you can use a MediaPlayer object to play custom sounds or a SystemSound object to play system sounds in your app.

Follow these steps to play sounds:

  • Create a MediaPlayer or SystemSound object in your code.

  • Set the path of the source URL to the audio file that you want to play. For custom sounds, this source path can be any URI and can point to either a local or remote file. For system sounds, you must set which system sound you want to play, then play it one or more times.

  • Call the play() function. The MediaPlayer and SystemSound classes each provide a play() function that you can use to play sounds.

A flow diagram for playing sounds

When you use BlackBerry 10 OS version 10.1 to play multiple media items concurrently, you're guaranteed a maximum of only 8 items playing at the same time.

Playing audio with mm-renderer

In some apps, you may want to play audio in the background (for example, in a game that includes a soundtrack, or in a music player app). You can use the multimedia renderer service to play an audio file in the background while the user performs other tasks.

To play audio with the mm-renderer service, you can declare a few variables to store playback information. This information includes the name of the multimedia renderer context (to identify the multimedia renderer to your app), the audio output device (to use to play the audio file), the type of input (for example, a song, a playlist, a DVD, and so on), and the location of the audio file.

Next, you can define the mode of the multimedia renderer context, which indicates the types of actions that you can perform on the audio file. For audio files, the app must be able to read and play them; you can use the S_IRUSR and S_IXUSR constants to specify these actions.

Then you can create a string that specifies the location of the audio file. If you know the location of the audio file relative to the current working directory on the device, you can call getcwd() (which is included in unistd.h) to retrieve the current working directory and then call snprintf() to create a string that represents the full path of the file. You must make sure that the current working directory is never set elsewhere in your app. The PATH_MAX constant specifies the maximum number of characters for a file name on the device.

The multimedia renderer contains two important structures. The mmr_connection_t structure represents a connection to the renderer. Your app must connect to the renderer by calling mmr_connect() before any files can be played. You can specify a name for the renderer by passing it as an argument to mmr_connect(), or you can pass NULL to use the default name.

The renderer also contains the mmr_context_t structure. This structure represents a renderer context, which you can use to attach an input (the audio file) and an output (the audio output device). You must create this context by calling mmr_context_create(), specifying the renderer connection, a name for the context, and the mode.

After you connect to the renderer and create the context, you can attach the output to the context by calling mmr_output_attach(), making sure to specify the type of output ("audio"). You can also set output parameters, such as initial volume and output stream selection, by calling mmr_output_parameters(). An argument of NULL indicates that no output parameters are set.

You can attach the input to the context by calling mmr_input_attach(), specifying the location of the audio file and the type of input. You can play the file by calling mmr_play().

After you start playing the audio file, you can perform any other tasks that your app requires. When you're ready to stop playing the file and start cleaning up the renderer's resources, you can call a sequence of functions to stop the playback, detach the input from the context, destroy the context, and disconnect from the renderer.

You may not need to call each function in this sequence, because calls that occur later in the sequence supersede earlier calls. For example, calling mmr_input_detach() also takes care of all of the operations that mmr_stop() performs. Similarly, calling mmr_context_destroy() also takes care of the operations that mmr_input_detach() and mmr_stop() perform, and so on.
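
For example, if your app tears everything down at once, a minimal sketch (assuming the ctxt and connection handles described above) might rely on this behavior:

/* mmr_context_destroy() implicitly stops playback and detaches
 * the input, so mmr_stop() and mmr_input_detach() can be omitted */
mmr_context_destroy(ctxt);
mmr_disconnect(connection);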

After you start playing an audio file, you can use the audio mixer service in the BlackBerry Platform Services (BPS) library to control the playback properties. For example, you can call audiomixer_set_output_level() to set the playback volume, or you can call audiomixer_toggle_output_mute() to mute or unmute the playback.

Before you can use any of these functions, you must initialize the BlackBerry Platform Services library by calling bps_initialize().

For more information, see audiomixer.h.
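
Here's a brief sketch of what those calls might look like; the choice of AUDIOMIXER_OUTPUT_SPEAKER as the output and 80.0 as the level are illustrative assumptions:

#include <bps/bps.h>
#include <bps/audiomixer.h>

bps_initialize();

/* Set the speaker output level (assumed scale of 0.0 to 100.0) */
audiomixer_set_output_level(AUDIOMIXER_OUTPUT_SPEAKER, 80.0);

/* Toggle the mute setting on the speaker output */
audiomixer_toggle_output_mute(AUDIOMIXER_OUTPUT_SPEAKER);

bps_shutdown();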

Using the mm-renderer

The multimedia renderer is a connection-based service that controls the playback of audio and video read from an input and directed to one or many outputs. Before calling any API functions to control playback, your app must connect to the mm-renderer service by calling mmr_connect().

After you have a valid connection handle for mm-renderer, your app can create contexts for managing content flows, attach inputs and outputs to those contexts, and issue requests to play audio or video.

You can disconnect from mm-renderer by calling mmr_disconnect().

Defining an input and outputs

Defining an input and outputs centers around the creation and configuration of contexts. Contexts define the flow of audio and video from an input to one or many outputs. You must create and configure a context before you can start playing audio or video.

To create a new context, call the mmr_context_create() function, passing in the mm-renderer connection handle. This function returns a context handle (the primary handle), which you can use to manipulate the context. You can use the context handle to set parameters, attach an input and one or more outputs, and issue playback commands.

You can create multiple contexts, as long as your app manages potentially conflicting playback situations (for example, recording to an audio file while trying to play the same file).

The multimedia renderer doesn't allow contexts to exist after their primary handle is closed, but it does allow you to open existing contexts by calling mmr_context_open(). That is, additional (or secondary) context handles can be opened after the context is created.

Calling mmr_context_close() using the primary context handle not only closes that handle, but also stops playback, detaches any input and outputs, and destroys the context.

When your app disconnects from mm-renderer or exits without disconnecting, any contexts created by your app are destroyed, regardless of whether or not you closed their handles. If your app is still running, it's important to close any remaining secondary handles of destroyed contexts to avoid memory leaks.

When a context is no longer needed, you can explicitly destroy it by passing the context handle to the mmr_context_destroy() function.
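
As a sketch, opening and closing a secondary handle might look like this; the context name "mycontext", and the assumption that the context was already created on this connection, are illustrative:

/* Open a secondary handle to an existing context by name */
mmr_context_t *ctxt2 = mmr_context_open(connection, "mycontext");
if (ctxt2 != NULL) {
    /* ... use the secondary handle ... */

    /* Closing a secondary handle doesn't destroy the context */
    mmr_context_close(ctxt2);
}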

Parameters for mm-renderer

Parameters allow you to set properties that influence how media files are accessed and rendered during playback. Properties such as the audio volume or video display size can be controlled by defining parameters for a context or its inputs or outputs.

Parameters are represented as dictionary objects, which are collections of key-value pairs where both the key and the value are strings. The parameters that apply to the context and its input and outputs depend on the media content that's being played or recorded.

There are four kinds of parameters that you can set:

Context parameters: These parameters are used for media-independent settings. Context parameters include updateinterval and parameters that map to libcurl options. There is one dictionary for each context. Call mmr_context_parameters() to set context parameters.

Input parameters: These parameters are used for items that affect the input file. Input parameters include volume, repeat, audio_type, audioman_handle, and parameters that map to libcurl options. There is one dictionary for each context. Call mmr_input_parameters() to set input parameters.

Output parameters: These parameters are used for items that affect the output file. Output parameters include volume, audio_type, and audioman_handle. There is one dictionary for each attached output. Call mmr_output_parameters() to set output parameters.

Track parameters: These parameters are used for media files or streams. Track parameters include audio_type, audioman_handle, audio_index, subpicture_index, and parameters that map to libcurl options. There is one dictionary for each media file or stream being played. For playlists, the distinction between the input file (the playlist) and the media files listed by the playlist is significant. This distinction doesn't exist when you play a media file without a playlist. Call mmr_track_parameters() to set track parameters.

The input and track parameters can be adjusted by mm-renderer by removing invalid or unsupported entries, changing a parameter value to the nearest supported value, and so on. Context and output parameters can't be changed by mm-renderer.

To create a dictionary object if one doesn't exist, call strm_dict_new(). Use strm_dict_set() to set the key-value pairs for the parameters you want to define.

To set parameters for a context, call mmr_context_parameters(), passing the handle to the dictionary object that holds the context parameters. Similarly, call mmr_input_parameters() to set the input parameters, and call mmr_output_parameters() to set the output parameters. In each case, you must pass a handle to a separate dictionary object that's populated with the appropriate parameter key-value pairs.

You can set input parameters when you don't have any input attached; these parameters apply to the next input you attach. The main difference from context parameters is that the input parameters are reset to an empty dictionary when you detach the input, while the dictionary containing the context parameters is not affected.

To change parameters, call the required function again, passing a handle to a dictionary object populated with the new parameters. The mmr_*_parameters() functions consume the dictionary object handle. If you want to keep the object, call the function strm_dict_clone() to duplicate the handle before calling one of the parameter functions.
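
Here's a sketch of building and applying an input-parameter dictionary, using the volume and repeat parameters mentioned above (the values shown are illustrative, and ctxt is assumed to be a valid context handle):

#include <mm/renderer.h>
#include <sys/strm.h>

strm_dict_t *dict = strm_dict_new();

/* strm_dict_set() returns a handle to the updated dictionary */
dict = strm_dict_set(dict, "volume", "60");
dict = strm_dict_set(dict, "repeat", "all");

/* mmr_input_parameters() consumes the dictionary handle; call
 * strm_dict_clone() first if you need to keep a copy */
if (mmr_input_parameters(ctxt, dict) != 0) {
    /* Your error handling code here */
}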

Parameter precedence

Some parameter names are recognized as context parameters and also as track or input parameters. An input or track parameter takes precedence over any context parameter with the same name. This precedence allows an app to set default settings in context parameters and use input or track parameters to override the default values for specific items. The updateinterval and the HTTP OPT_... parameters belong to this category and are often used in this manner.
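
For example, a sketch of setting a default updateinterval for the context and overriding it for the current input might look like this (ctxt is assumed to be a valid context handle, and the interval values are illustrative):

/* Default position-update interval for the whole context */
strm_dict_t *cparms = strm_dict_new();
cparms = strm_dict_set(cparms, "updateinterval", "1000");
mmr_context_parameters(ctxt, cparms);

/* Override the interval for the current input only */
strm_dict_t *iparms = strm_dict_new();
iparms = strm_dict_set(iparms, "updateinterval", "100");
mmr_input_parameters(ctxt, iparms);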

Configuring track and playlist inputs

The behavior of track parameters depends on the input type that uses them.

For track inputs, the input parameters and track parameters are the same dictionary. You can change the parameters by calling mmr_input_parameters(), not mmr_track_parameters().

For playlist inputs, input parameters affect the reading of the playlist file, and track parameters affect the reading and playback of tracks. Set the input and track parameters by calling mmr_input_parameters() and mmr_track_parameters(). Track parameters can be set only for tracks in the playlist window. Set the default parameters for tracks that enter the window by calling mmr_track_parameters() with an index of 0, as shown in the sketch below.
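
As a sketch, setting a default audio_type for every track that enters the playlist window might look like this (ctxt is assumed to be a valid context handle, and the "multimedia" value is an illustrative assumption):

/* An index of 0 sets the defaults for tracks entering the window */
strm_dict_t *tparms = strm_dict_new();
tparms = strm_dict_set(tparms, "audio_type", "multimedia");
mmr_track_parameters(ctxt, 0, tparms);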

For autolist inputs, there's a simulated playlist file, but the input URL specifies a track. The initial track parameters for the track are the same as the input parameters (for more consistency with track inputs). After the input is attached, the behavior is more consistent with the playlist input type, where mmr_track_parameters() controls the track parameters and mmr_input_parameters() controls the input parameters. Because there’s no real playlist file, all input parameters, except for repeat, are ignored.

Play audio with Cascades or mm-renderer

Each platform has its own way of playing audio. For example, in Cascades you can use the MediaPlayer to play audio files and streams. In the mm-renderer service, you can use function calls to its C API to play audio files. In the QNX Sound Architecture, you would have to work with PCM devices, hardware, and hardware drivers to play audio files.

A code sample that shows you how to play audio with the QSA isn't included here because playing audio with that platform offers many options, and one brief code sample couldn't adequately represent them all. For more information about using the QSA, see QNX Sound Architecture APIs.

Here are some code samples that demonstrate how to play audio from various sources. The first two (in QML and C++) show you how to play audio in Cascades. The last code sample (in C) shows you how to create a context to play audio files with the mm-renderer service.

This code sample shows you how to play and stop a custom audio file:

import bb.cascades 1.4
import bb.multimedia 1.4

Page {
    Container {
        layout: AbsoluteLayout {
        }
        
        attachedObjects:[
            MediaPlayer {
                id: audioPlayer
                sourceUrl: "/path/to/audio.wav"
            }
        ]
        
        Button {
            text: "Play Audio"
            onClicked: {
                if(audioPlayer.play() != MediaError.None) {
                    // Your error handling code here
                }
            }
        }
        
        Button {
            text: "Stop Audio"
            onClicked: {
                if(audioPlayer.stop() != MediaError.None) {
                    // Your error handling code here
                }
            }
        }
    }
}

Here's the equivalent C++ code:

#include <bb/multimedia/MediaPlayer>
#include <bb/multimedia/MediaError>

//...

audioPlayer = new bb::multimedia::MediaPlayer();
audioPlayer->setSourceUrl(QUrl("/path/to/audio.wav"));

playAudio = Button::create()
    .text("Play Audio")
    .onClicked(this, SLOT(playAudioClicked()));

stopAudio = Button::create()
    .text("Stop Audio")
    .onClicked(this, SLOT(stopAudioClicked()));

//...

Here's the code for the slots that handle the button click signals.

void ApplicationUI::playAudioClicked()
{
    if(audioPlayer->play() != bb::multimedia::MediaError::None)
    {
        // Your error handling code here
    }
}

void ApplicationUI::stopAudioClicked()
{
    if(audioPlayer->stop() != bb::multimedia::MediaError::None)
    {
        // Your error handling code here
    }
}

Here's the C code sample that plays audio with the mm-renderer service. The audio file that's being played is named test.mp3 and is located in the /app/native/ folder on the device.

#include <mm/renderer.h>
#include <sys/stat.h>
#include <limits.h>
#include <stdio.h>
#include <unistd.h>

const char *ctxt_name = "testplayer";
const char *audio_out = "audio:default";
const char *input_type = "track";
char cwd[PATH_MAX];
char input_url[PATH_MAX];
int audio_oid;

/* S_IRUSR indicates read permission
 * and S_IXUSR indicates
 * execute/search permission
 */
mode_t mode = S_IRUSR | S_IXUSR;

getcwd(cwd, PATH_MAX);
snprintf(input_url, PATH_MAX, "file://%s%s", cwd,
         "/app/native/test.mp3");

mmr_connection_t *connection;
mmr_context_t *ctxt;

connection = mmr_connect(NULL);
ctxt = mmr_context_create(connection, ctxt_name, 0, mode);

audio_oid = mmr_output_attach(ctxt, audio_out, "audio");
mmr_output_parameters(ctxt, audio_oid, NULL);

mmr_input_attach(ctxt, input_url, input_type);

mmr_play(ctxt);

// Perform other tasks here

mmr_stop(ctxt);
mmr_input_detach(ctxt);
mmr_context_destroy(ctxt);
mmr_disconnect(connection);

The following code samples show you how to play system sounds in your apps.

Notifications must be enabled on your device in order to hear system sounds. When Notifications are set to be silent on the device, no system sounds are heard.

Here's a code sample that shows you how to play a camera shutter system sound:

import bb.cascades 1.4
import bb.multimedia 1.4

Page {
    Container {
        // ...
        
        attachedObjects: [
            SystemSound {
                id: sysSound
                sound: SystemSound.CameraShutterEvent
            }
        ]
        
        Button {
            text: "Play System Sound"
                    
            onClicked: {
                sysSound.play();
            }
        }
    }
}

Here's the equivalent C++ code:

#include <bb/multimedia/SystemSound>

//...

// Code sample showing the single line of code
// needed to asynchronously play a system sound
SystemSound::play(SystemSound::CameraShutterEvent);

Playing system sounds does not require the use of the multimedia renderer service. Instead, you can use the soundplayer_play_sound_blocking() function in the Sound Player service, which is part of the BlackBerry Platform Services (BPS) library, to play system sounds:

#include <bps/bps.h>
#include <bps/soundplayer.h>

int main(int argc, char **argv)
{
    bps_initialize();

    /* Play the camera shutter sound */
    soundplayer_play_sound_blocking("event_camera_shutter");

    bps_shutdown();

    return 0;
}

In Cascades, you can use the play() function and a loop to play a sound many times in rapid succession. In the Sound Player service, you can use the soundplayer_play_sound_blocking() function and a loop to do the same thing. Here are code samples that play the camera shutter system sound multiple times.

import bb.cascades 1.4
import bb.multimedia 1.4

Page {
    Container {
        attachedObjects: [
            SystemSound {
                id: sysSound
                sound: SystemSound.CameraShutterEvent
            }
        ]
        onCreationCompleted: {
            for(var i = 0; i < 2; i++)
            {
                sysSound.play();
            }
        }
    }
}

Here's the equivalent C++ code:

#include <bb/multimedia/SystemSound>

//...

// Define the system sound that you want to play
SystemSound sysSound(SystemSound::CameraShutterEvent);

// Play the camera shutter system sound 2 times
for(int i = 0; i < 2; i++)
{
    sysSound.play();
}

Here's the equivalent code that uses the BPS Sound Player service:

#include <bps/bps.h>
#include <bps/soundplayer.h>

int main(int argc, char **argv)
{
    int i = 0;
    bps_initialize();

    for(i = 0; i < 2; i++)
    {
        /* Play the camera shutter system sound 2 times */
        soundplayer_play_sound_blocking("event_camera_shutter");
    }

    bps_shutdown();

    return 0;
}

Playing audio with QSA

Before you can play audio, you must open and configure a PCM playback device, and prepare the PCM subchannel. For more information, see Working with PCM devices.

If your app gives users the option to play audio in multiple formats, choosing a format that's directly supported by their device's hardware reduces the work done by their CPU.

Audio mixers

Working with audio mixers allows you to control many of the properties of the files that you play. For more information, see Working with audio mixers.

Playback states

This diagram shows the state transitions that occur in a PCM device during playback.

State diagram showing state transitions in PCM devices during playback.

Transitions between the SND_PCM_STATUS_* states can occur as the result of a function call, or because of conditions that exist in the hardware. The following list shows each transition along with a possible cause:

  • NOTREADY to READY: Calling the snd_pcm_channel_params() or snd_pcm_plugin_params() functions

  • READY to PREPARED: Calling the snd_pcm_channel_prepare(), snd_pcm_playback_prepare(), or snd_pcm_plugin_prepare() functions

  • PREPARED to RUNNING: Calling the snd_pcm_write(), snd_pcm_plugin_write(), snd_pcm_channel_go(), or snd_pcm_playback_go() functions

  • RUNNING to PAUSED: Calling the snd_pcm_channel_pause() or snd_pcm_playback_pause() functions

  • PAUSED to RUNNING: Calling the snd_pcm_channel_resume() or snd_pcm_playback_resume() functions

  • PAUSED to PREPARED: Calling the snd_pcm_playback_resume() function

  • PAUSED to CHANGED: The stream changed or an event has occurred

  • PREPARED to CHANGED: The stream changed or an event has occurred

  • RUNNING to UNDERRUN: The hardware buffer was emptied during playback

  • RUNNING to UNSECURE: The app has marked the stream as protected, the hardware level supports a secure transport (such as HDCP for HDMI), and authentication was lost

  • RUNNING to CHANGED: The stream has changed

  • RUNNING to ERROR: A hardware error has occurred

  • UNDERRUN, UNSECURE, CHANGED, or ERROR to PREPARED: Calling the snd_pcm_channel_prepare(), snd_pcm_playback_prepare(), or snd_pcm_plugin_prepare() functions

  • RUNNING to PREEMPTED: Audio is blocked because a new libasound session has begun playback and the audio driver has determined that the new session has a higher priority

For more information about transitions, see Audio Library.

Send data to the PCM subchannel

The function that you use to send data to a PCM subchannel depends on whether or not you're using plug-in converters.

snd_pcm_write(): The number of bytes written must be a multiple of the fragment size, or the write task does not succeed.

snd_pcm_plugin_write(): The plug-in accumulates partial write data until a complete fragment can be sent to the driver.

A full nonblocking write mode is supported when your app can't afford to be blocked on the PCM subchannel. You can enable nonblocking mode when you open the handle or by calling the snd_pcm_nonblock_mode() function.

This approach results in a polled operation mode, which is not recommended.

Another approach that your app can use to avoid blocking on writes is to call the select() function and wait until the PCM subchannel can accept more data. This approach also allows the app to wait on user input while sending playback data to the PCM subchannel.

To get the file descriptor to use with select(), call the snd_pcm_file_descriptor() function.

The select() function returns when there's enough space in the PCM subchannel for a number of bytes equal to frag_size. When your app tries to write more data than this amount, the select() function may block when called.
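
Here's a sketch of that pattern; it assumes that pcm_handle is an open playback handle and that frag points to a buffer of frag_size bytes, matching the fragment size that the subchannel was configured with:

#include <sys/select.h>
#include <sys/asoundlib.h>

int fd = snd_pcm_file_descriptor(pcm_handle, SND_PCM_CHANNEL_PLAYBACK);

fd_set wfds;
FD_ZERO(&wfds);
FD_SET(fd, &wfds);

/* Block until the subchannel can accept another fragment */
if (select(fd + 1, NULL, &wfds, NULL, NULL) > 0) {
    if (snd_pcm_plugin_write(pcm_handle, frag, frag_size) != frag_size) {
        /* Your error handling code here */
    }
}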

PCM subchannel stops during playback

The PCM subchannel stops during playback when the hardware consumes all of the data in its buffer. This situation can happen when the app can't produce data at the rate that it's consumed by the hardware. A real-world example is when a higher-priority process preempts the app for a long time. When this preemption continues long enough, all data in the buffer may be played before the app can add more data.

When this condition occurs, the subchannel changes state to SND_PCM_STATUS_UNDERRUN. In this state, the subchannel doesn't accept any more data (in other words, the snd_pcm_write() and snd_pcm_plugin_write() functions do not succeed), and the PCM subchannel doesn't restart playing.

The only way to move out of this state is to close the PCM subchannel or to reprepare the PCM subchannel as you did before. For more information about preparing the PCM subchannel, see Prepare the PCM subchannel.

Preparing the PCM subchannel forces the app to recognize the underrun state and try to get out of it. This approach exists for apps that want to synchronize audio with something else: consider the difficulties involved with synchronization if the PCM subchannel silently moved back to the SND_PCM_STATUS_RUNNING state from the underrun state as soon as more data became available.
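
Here's a sketch of detecting and recovering from an underrun, using the same assumed pcm_handle, frag, and frag_size as the earlier example:

#include <string.h>
#include <sys/asoundlib.h>

if (snd_pcm_plugin_write(pcm_handle, frag, frag_size) != frag_size) {
    snd_pcm_channel_status_t status;
    memset(&status, 0, sizeof(status));
    status.channel = SND_PCM_CHANNEL_PLAYBACK;

    /* A short write may mean the subchannel underran; check its state */
    if (snd_pcm_plugin_status(pcm_handle, &status) == 0 &&
        status.status == SND_PCM_STATUS_UNDERRUN) {
        /* Reprepare the subchannel before writing more data */
        snd_pcm_plugin_prepare(pcm_handle, SND_PCM_CHANNEL_PLAYBACK);
    }
}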

Stop playback

When your app wants to stop playback, it can stop sending data and let the subchannel underrun. However, there are better ways to stop playback.

When you want your app to stop playback immediately, call a flush function, such as snd_pcm_playback_flush() or snd_pcm_plugin_flush(), to remove any unplayed data from the hardware buffer.

When you want to play out all data in the buffers before stopping, call a drain function, such as snd_pcm_playback_drain().
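
As a sketch, using the same assumed pcm_handle:

/* Stop immediately: discard any unplayed data */
snd_pcm_plugin_flush(pcm_handle, SND_PCM_CHANNEL_PLAYBACK);

/* Or stop gracefully: let all buffered audio play out first */
snd_pcm_playback_drain(pcm_handle);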

Last modified: 2015-07-24


