Set up your video

The component of the Native SDK that manages media flow is called mm-renderer. It needs an output window to display your video. We use the Screen Graphics Subsystem library (screen.h) to create and initialize a top-level window and window group.

The code in this tutorial omits return-code checks for brevity, so it doesn't do proper error handling. When you are developing your own app, make sure that you check return codes and handle errors appropriately.

In main.c, include the following header files:

#include <fcntl.h>
#include <math.h>
#include <stdlib.h>

#include <bps/bps.h>
#include <bps/navigator.h>
#include <bps/screen.h>

#include <mm/renderer.h>

Next, we declare the following variables, which we use later in the tutorial:

  • video_device_url and audio_device_url: These variables specify the device location for video and audio output. We use these variables to attach the mm-renderer context to a specific video or audio device.
  • video_context_name: This variable uniquely identifies the mm-renderer context.
static const char *audio_device_url = "audio:default";
static char video_device_url[PATH_MAX];
static const char *video_context_name = "samplevideocontextname";

In main(), we set up some variables and initialize the BlackBerry Platform Services (BPS) library. Initializing BPS lets us handle user input later in the tutorial.

int                   rc;
int                   exit_application = 0;

// Screen variables
screen_context_t      screen_context = 0;
screen_window_t       screen_window = 0;
int                   screen_size[2] = {0,0};

// Renderer variables
mmr_connection_t*     mmr_connection = 0;
mmr_context_t*        mmr_context = 0;

// Variable to store unique window group name
static char window_group_name[PATH_MAX];

// I/O variables
int                   video_device_output_id = -1;
int                   audio_device_output_id = -1;


Now we create a screen context and a window that gets assigned to a window group. The window group allows the window to have child windows. The window group name is generated by screen_create_window_group(), and we retrieve it by calling screen_get_window_property_cv(). Note that in the full sample, each function call's return code is checked, and the program exits with an appropriate error code on failure. To make the code easier to read in this tutorial, we aren't showing that checking code.

screen_create_context(&screen_context, SCREEN_APPLICATION_CONTEXT);
screen_create_window(&screen_window, screen_context);
screen_create_window_group(screen_window, window_group_name);
screen_get_window_property_cv(screen_window, SCREEN_PROPERTY_GROUP, PATH_MAX, window_group_name);

We build up the video_device_url using the window group name that we retrieved.

snprintf(video_device_url, PATH_MAX, "screen:?winid=videosamplewindowgroup&wingrp=%s", window_group_name);        

We set the properties of the window to specify the screen pixel format and how the window will be used. For this video, we want to have the highest possible image quality so we specify a 32-bit pixel format (RGBA8888). We also set the intended usage of the window to specify that the buffers associated with the window can be used for native API operations.

int format = SCREEN_FORMAT_RGBA8888;
screen_set_window_property_iv(screen_window, SCREEN_PROPERTY_FORMAT, &format);

int usage = SCREEN_USAGE_NATIVE;
screen_set_window_property_iv(screen_window, SCREEN_PROPERTY_USAGE, &usage);

We set the number of render buffers allocated to the window. A native window needs at least one buffer to become visible, but you should always create two buffers to avoid flickering, artifacts, or tearing in the displayed scene. With only one buffer, changes are made to the same buffer that is being displayed. With two buffers, we can make changes in one buffer while the other is displayed, then swap the buffers to show the changes.

screen_create_window_buffers(screen_window, 2);

Connect and configure the renderer

Now that the window properties are all set up, it's time to connect the app to the multimedia renderer to take advantage of its playback API. After we make the connection, a context is required to uniquely identify the app to the renderer. We create a connection to the mm-renderer, passing in NULL to use the default name provided by the library.

mmr_connection = mmr_connect(NULL);

Next, we create an mm-renderer context for the app, giving it a unique name and setting the file permissions of the folder it uses to allow global read and write operations.

mmr_context = mmr_context_create(mmr_connection, video_context_name,
                                 0, S_IRWXU|S_IRWXG|S_IRWXO);

Configure the input and output

Now that we have an mm-renderer context, we need to tell mm-renderer where to play the media. Because we want to see as well as hear the video file, we must attach two outputs to the context. First, we attach the output window by providing a URL with the name of the window and window group that we created earlier.

video_device_output_id = mmr_output_attach(mmr_context,
                             video_device_url, "video");

Next, we attach the output audio device to the context. In this case, we're using the default audio output device.

audio_device_output_id = mmr_output_attach(mmr_context,
                             audio_device_url, "audio");

Last modified: 2015-07-24
