Playing media

Playing media in mm-renderer requires configuring a context, attaching outputs and an input, and then issuing playback commands. These actions are all done with function calls to the Multimedia Renderer Client API.

To play media in mm-renderer:
  1. Connect to mm-renderer using the function mmr_connect().
  2. Create a context and set the appropriate context parameters. Use the functions mmr_context_create() and mmr_context_parameters().
  3. Attach an output and set its output parameters. Use the functions mmr_output_attach() and mmr_output_parameters(). You can attach multiple outputs.
  4. Attach the input and set the input parameters. Use the functions mmr_input_attach() and mmr_input_parameters().
  5. Start playback for the context by calling mmr_play().
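A minimal sketch of these five steps in C, assuming a single audio file input and the default audio output (the context name, file path, and permission mode are placeholders; error reporting is abbreviated):

```c
#include <sys/stat.h>
#include <mm/renderer.h>

int play_file(void)
{
    // 1. Connect to mm-renderer (NULL selects the default service).
    mmr_connection_t *connection = mmr_connect(NULL);
    if (connection == NULL)
        return -1;

    // 2. Create a context; "mycontext" is an arbitrary name.
    mmr_context_t *ctxt = mmr_context_create(connection, "mycontext",
                                             0, S_IRWXU);
    if (ctxt == NULL)
        return -1;

    // 3. Attach an audio output.
    if (mmr_output_attach(ctxt, "audio:default", "audio") < 0)
        return -1;

    // 4. Attach the input; the "track" type means a single media file.
    if (mmr_input_attach(ctxt, "file:///path/to/media.mp3", "track") < 0)
        return -1;

    // 5. Start playback.
    return mmr_play(ctxt);
}
```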

Play states

The possible play states of the context are:

Idle
No input is attached.
Stopped
Input is attached but isn't playing.
Playing
Input is attached and is playing.

There is no paused play state. A play speed of 0 represents paused playback.

Play speed

In mm-renderer, the play speed is represented by an integer. Normal speed is represented by a value of 1000, and 0 means paused. Trick play refers to playing media at other speeds, such as negative (reverse), slower than normal, or faster than normal. The context's input media determines whether trick play is supported.

Use the mmr_speed_set() function to change the current play speed. You can change the speed when the state is stopped; mm-renderer saves the setting and applies it when playback restarts.
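For example, pause and resume reduce to speed changes (a sketch; ctxt is an already-attached context):

```c
#include <mm/renderer.h>

// A speed of 0 pauses playback; 1000 restores normal speed.
static int pause_playback(mmr_context_t *ctxt)
{
    return mmr_speed_set(ctxt, 0);
}

static int resume_playback(mmr_context_t *ctxt)
{
    return mmr_speed_set(ctxt, 1000);
}
```

Values such as 500, 2000, or -1000 request half-speed, double-speed, or reverse playback, subject to the input's trick play support.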

Seeking to positions

Use the mmr_seek() function to seek to a known position in a single track or in a track within a playlist. If the current context input is a track, specify the position in milliseconds (for example, "2500"). If the context input is a playlist, the position must be a string in the format "track:milliseconds" (for example, "2:1200"), where the first number is the track's position in the current playlist and the second is the offset in milliseconds from the beginning of that track.

Metadata

A metadata event occurs when mm-renderer publishes metadata. For example, a radio station may publish metadata about the song that's playing and update it whenever a new song starts.

You can retrieve mm-renderer events by calling mmr_event_get(). If the type of the returned mmr_event_t is MMR_EVENT_METADATA, then a metadata event occurred and the published metadata can be found in the data dictionary of the mmr_event_t structure.

If you're playing a playlist, it's important to distinguish input metadata from track metadata, similar to distinguishing input parameters from track parameters. A track index of 0 in the mmr_event_t structure indicates that the metadata describes the playlist. A nonzero track index indicates that the metadata describes a track.

Track metadata is published as soon as it's known, which can be long before the track is played. There are typically two events that occur for a track that includes metadata. The first event occurs when the track enters the playlist window, and the metadata consists of the track's URL. The second event occurs when mm-renderer opens the file and examines its content, and the event structure is filled with more detailed information. When the track leaves the playlist window, an event occurs where the event structure contains that track's index, but has a null pointer for the metadata dictionary.

A static file usually doesn't generate more than two metadata events. Currently, dynamic metadata with changing titles is generated only for radio-style streams.
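A sketch of how such events might be examined, assuming the mmr_event_t fields type and data (in a real application you'd wait for event notification rather than poll):

```c
#include <mm/renderer.h>
#include <mm/renderer/events.h>

// Retrieve the next event and pick out metadata updates.
static void check_metadata(mmr_context_t *ctxt)
{
    const mmr_event_t *event = mmr_event_get(ctxt);
    if (event != NULL && event->type == MMR_EVENT_METADATA) {
        // event->data is the metadata dictionary; search it with the
        // strm_dict_find_*() functions. A null dictionary for a track
        // index means the track left the playlist window.
    }
}
```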

You can use the strm_dict_find_*() functions from the Dictionary Object Library to search the data dictionary for the following metadata:
  • Track name (md_title_name)
  • Artist name (md_title_artist)
  • Album name (md_title_album)
  • Genre (md_title_genre)
  • Text comment (md_title_comment)
  • Track length (md_title_duration)
  • Track number (md_title_track)
  • Disc number (md_title_disc)
  • Number of samples per time unit (md_title_samplerate)
  • Track bit rate (md_title_bitrate)
  • Whether the track is DRM-protected (md_title_protected)
  • Whether the track supports seeking (md_title_seekable)
  • Whether the track can be paused (md_title_pausable)
  • Track media type (md_title_mediatype)
  • Video width, in physical units (md_video_width)
  • Video height, in physical units (md_video_height)
  • Track art (md_title_art)
  • Video width, in pixels (md_video_pixel_width)
  • Video height, in pixels (md_video_pixel_height)
  • Compilation title (md_title_compilation)
  • Album artist name (md_title_albumartist)
  • Composer name (md_title_composer)
  • Track year (md_title_year)
The data dictionary may also contain the following metadata for tracks or images at a specific index (%u):

  Index metadata         Parsed metadata
  ---------------------  -----------------------------------------------
  md_audio_track%u       sample_rate, bitrate, fourcc, lang
  md_video_track%u       frame_rate, bitrate, width, height, pixel_width, pixel_height, fourcc, lang
  md_subpicture_track%u  width, height, pixel_width, pixel_height, fourcc, lang
  md_title_image%u       width, height, mime, type, url, file
You can call mmr_metadata_split() to parse the track and image data. To parse the metadata for the audio track whose track index is 12, call:
mmr_metadata_split(md, "audio", 12);
This call searches the dictionary that md points to for an entry whose name is "md_audio_track12". Indexed metadata items range from 0 to N-1 with no gaps in the numbering. Any parsed properties are returned in a dictionary that you can search using the strm_dict_find_*() functions.

Switching tracks

In API level 10.3.1 or higher, mm-renderer allows switching audio tracks in supported file formats, and publishes track-specific metadata for audio, video, and subpicture tracks.

You can use the audio_index parameter to specify which audio track to play and the subpicture_index parameter to control which subtitle to display.

Metadata for each track is encoded in a single attribute in the dictionary object received in a metadata event. Call mmr_metadata_split() to search this dictionary for the attributes describing the requested track. The function returns the track metadata as a new dictionary object.

For example, use a loop similar to the following to search a dictionary for available audio tracks:
for (int ndx = 0; 
     (trackinfo = mmr_metadata_split(metadata, "audio", ndx)) != NULL;
     ++ndx) {
    // Search the trackinfo dictionary for track details, or display
    // the tracks in a list and let the user pick one.
    strm_dict_destroy(trackinfo);
}

To use the audio track at position ndx, convert ndx to a decimal string and add it to your track parameter dictionary as the audio_index parameter. Then switch to that audio track by calling mmr_track_parameters(), or mmr_input_parameters() if "track" is your input type.
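A sketch using the Dictionary Object Library (the helper name is illustrative and the strm.h include path may vary by SDK; note that strm_dict_set() and mmr_track_parameters() consume the handles passed to them):

```c
#include <stdio.h>
#include <mm/renderer.h>
#include <strm.h>

// Switch the audio track for the track at playlist index `track`.
// For a "track" input, call mmr_input_parameters() instead.
static int set_audio_track(mmr_context_t *ctxt, unsigned track, int ndx)
{
    char value[16];
    snprintf(value, sizeof value, "%d", ndx);

    strm_dict_t *parms = strm_dict_new();
    if (parms == NULL)
        return -1;

    // strm_dict_set() consumes the old handle and returns a new one.
    parms = strm_dict_set(parms, "audio_index", value);
    if (parms == NULL)
        return -1;

    // mmr_track_parameters() takes ownership of the dictionary.
    return mmr_track_parameters(ctxt, track, parms);
}
```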

Managing video windows

You can render video to a display using the Screen Graphics Subsystem library.

The following example shows how to give mm-renderer a window group and window ID to use when creating a window on the application's behalf, how to configure mm-renderer for audio and video output, and how to get a handle to the window so you can manipulate the output with Screen API functions.

This example omits many steps for the sake of simplicity. For the full example, see the Video overlay sample.

To begin, define a window name to use as the window ID and retrieve the unique group name created by screen_create_window_group(). You use these two properties to set the output URL, video_device_url.

const char *window_name = "appwindow";
char *window_group_name;

// The group name is read with a PATH_MAX-sized buffer below, so
// allocate PATH_MAX bytes to match.
window_group_name = (char *)malloc(PATH_MAX);
if (window_group_name == NULL) {
    return EXIT_FAILURE;
}

// Create the video URL for mm-renderer
static char video_device_url[PATH_MAX];

// Create a window group. Pass NULL to generate a unique window group name.
if (screen_create_window_group(g_screen_win, NULL) != 0) {
    return EXIT_FAILURE;
}

// Get the window group name.
rc = screen_get_window_property_cv(g_screen_win, SCREEN_PROPERTY_GROUP, PATH_MAX, window_group_name);
if (rc != 0) {
    fprintf(stderr, "screen_get_window_property(SCREEN_PROPERTY_GROUP) failed.\n");
    return EXIT_FAILURE;
}

rc = snprintf(video_device_url, PATH_MAX, 
       "screen:?winid=%s&wingrp=%s", window_name, window_group_name);
if (rc < 0) {
    fprintf(stderr, "Error building video device URL string\n");
    return EXIT_FAILURE;
}
else if (rc >= PATH_MAX) {
    fprintf(stderr, "Video device URL too long\n");
    return EXIT_FAILURE;
}

// Create the video context name for mm-renderer
static const char *video_context_name = "videoContext";
...

Once the window group is created, connect to mm-renderer and create a context. Finally, attach the video output to the context by calling mmr_output_attach(), specifying the URL variable we set up earlier. Use the same function to attach the audio output.

// Configure mm-renderer
mmr_connection = mmr_connect(NULL);
if (mmr_connection == NULL) {
    fprintf(stderr, "Error connecting to renderer service: %s\n", 
            strerror(errno));
    return EXIT_FAILURE;
}

mmr_context = mmr_context_create( mmr_connection, 
                                  video_context_name, 
                                  0, 
                                  S_IRWXU|S_IRWXG|S_IRWXO );
if (mmr_context == NULL) {
    fprintf(stderr, "Error creating renderer context: %s\n", 
            strerror(errno));
    return EXIT_FAILURE;
}

// Configure video and audio output
const mmr_error_info_t* errorInfo;
video_device_output_id = mmr_output_attach( mmr_context, 
                                            video_device_url,
                                            "video" );
if (video_device_output_id == -1) {
    errorInfo = mmr_error_info(mmr_context);
    fprintf(stderr, "Attaching video output produced error code %d\n",
            errorInfo->error_code);
    return EXIT_FAILURE;
}

audio_device_output_id = mmr_output_attach( mmr_context, 
                                            audio_device_url, 
                                            "audio" );
if (audio_device_output_id == -1) {
    // Call mmr_error_info(), display an error message, and exit
    ...
}

Next, retrieve the handle of the video window from the screen event received when the window is created, and check that the ID of the window indicated in the event matches the output video window. For more complicated applications, this is important so that you can distinguish between the video window and another child window belonging to the same window group.

Note that the function screen_event_get_event() is from the BlackBerry Platform Services (BPS) API. All other functions for getting event and window properties are from the Screen API.

screen_event_t screen_event = screen_event_get_event(event);
int event_type;
screen_get_event_property_iv( screen_event, 
                              SCREEN_PROPERTY_TYPE, 
                              &event_type );

// Check if it's a creation event and the video output window 
// has not yet been initialized.
if ((event_type == SCREEN_EVENT_CREATE) && 
        (video_window == (screen_window_t)NULL)) {
    char id[256];

    rc = screen_get_event_property_pv( screen_event, 
                                       SCREEN_PROPERTY_WINDOW,
                                       (void**)&video_window );
    if (rc != 0) {
        fprintf(stderr, "Error reading event window: %s\n", 
                strerror(errno));
        return EXIT_FAILURE;
    }

    rc = screen_get_window_property_cv( video_window, 
                                        SCREEN_PROPERTY_ID_STRING, 
                                        256, 
                                        id );
    if (rc != 0) {
        fprintf(stderr, "Error reading window ID: %s\n", 
            strerror(errno));
        return EXIT_FAILURE;
    }

    if (strncmp(
            id, window_group_name, strlen(window_group_name)) != 0) {
        fprintf(stderr, "Mismatch in window group names\n");
        return EXIT_FAILURE;
    }
}
...

When you have this handle, you can manipulate the video window directly with Screen API calls.

// Set the z-order of the video window to put it above or below 
// the main window. Alternate between +1 and -1 to implement
// double-buffering to avoid flickering of output.
app_window_above = !app_window_above;
if (app_window_above) {
    screen_val = 1;
}
else {
    screen_val = -1;
}

if (screen_set_window_property_iv( video_window, 
                                   SCREEN_PROPERTY_ZORDER, 
                                   &screen_val ) != 0) {
    fprintf(stderr, "Error setting z-order of video window: %s\n", 
            strerror(errno));
    return EXIT_FAILURE;
}

// Set the video window to be visible.
screen_val = 1;
if (screen_set_window_property_iv( video_window, 
                                   SCREEN_PROPERTY_VISIBLE, 
                                   &screen_val) != 0 ) {
    fprintf(stderr, "Error making window visible: %s\n", 
            strerror(errno));
    return EXIT_FAILURE;
}
...

Last modified: 2014-09-30


