Start your video

Now we're ready to start playing our video! Assuming the video is located somewhere on your device (it will be if you've bundled it in your application's package), we need to let mm-renderer know where it is and tell it to start playing. First, we retrieve a handle to the window's render buffer so that we can fill it with a color.
screen_buffer_t temp_buffer[1];
screen_get_window_property_pv(screen_window,
    SCREEN_PROPERTY_RENDER_BUFFERS, (void**)temp_buffer);
Then, we fill the temporary render buffer with the color black.
int fill_attributes[3] = {SCREEN_BLIT_COLOR, 0x0, SCREEN_BLIT_END};
screen_fill(screen_context, temp_buffer[0], fill_attributes);
Next, we define a rectangle that bounds the area to change (in this case, the entire width and height of the window). The buffer is then made visible through the window by calling screen_post_window().
int screen_size[2];
screen_get_window_property_iv(screen_window, SCREEN_PROPERTY_SIZE, screen_size);
int temp_rectangle[4] = {0, 0, screen_size[0], screen_size[1]};
screen_post_window(screen_window, temp_buffer[0], 1, temp_rectangle, 0);

If you were to run the application at this point, you would see a black screen:

Device image showing the black screen.

This might be very calming but there are a few more steps to get something really playing. We want to make sure that the screen doesn't dim while the user is enjoying the video. To do this, we set the SCREEN_PROPERTY_IDLE_MODE property to SCREEN_IDLE_MODE_KEEP_AWAKE. Note that you don't need to do this for videos that are actively playing but it's something useful to know for your other applications.
int idle_mode = SCREEN_IDLE_MODE_KEEP_AWAKE;
screen_set_window_property_iv(screen_window,
    SCREEN_PROPERTY_IDLE_MODE, &idle_mode);
We then create the path that represents the location of the media file on your device. In this case, we use the location of the bundled application resource but it could be in the shared folder as well.
char cwd[PATH_MAX];
char media_file[PATH_MAX];
getcwd(cwd, PATH_MAX);

snprintf(media_file, PATH_MAX, "file://%s/app/native/pb_sample.mp4", cwd);
Next, we attach the media file as the input for the mm-renderer context you created. Because we're playing only one file, we use the track input type which plays one track in isolation from the rest of the media.
if (mmr_input_attach(mmr_context, media_file, "track") != 0) {
    /* Handle the error; mmr_input_attach() returns -1 on failure. */
}
Now, we can start playing the video.
mmr_play(mmr_context);

Configure the aspect ratio

When the video starts playing, playback proceeds in the background, which leaves the app free to perform other work. One thing we do at this point is check the aspect ratio, to make sure the video is displayed in the best possible way.

We use the function calculate_rect() to calculate the origin, width, and height of the rectangle that the video is displayed in. This information is based on the source video's resolution. The calculation maximizes the size of the source video so that it fits on the actual screen of the device (because the video may have a different size than the screen).
dict = calculate_rect(screen_size[0], screen_size[1]);

Let's have a look at what's happening in calculate_rect(). The video that we're playing in this tutorial is an .mp4 file with a resolution of 640 x 480, which gives an aspect ratio of 1.33; we allow an aspect ratio tolerance of 0.1. We are using a device with a resolution of 1280 x 768, which has an aspect ratio of 1.66. So, we need to calculate a new video size that fits the screen while staying as large as possible. When you create your own app, your video and device might be different, and you might have to query your video to find its resolution and aspect ratio.

The function calculate_rect() uses the width and height of the screen as parameters. We calculate the aspect ratio of the video by dividing the width by the height.
strm_dict_t* calculate_rect(int width, int height) {
    const int image_width = 640;
    const int image_height = 480;
    const float image_aspect = (float)image_width / (float)image_height;
    const float aspect_tolerance = 0.1;

    char buffer[16];
    strm_dict_t *dict = strm_dict_new();

    if (NULL == dict) {
        return NULL;
    }

We want to adapt the size of the video frame to the size of the device screen that the video plays on. We want to make it the biggest possible size that fits on the screen of the device.

To calculate this, we set video_dest_x and video_dest_y to 0,0 on the screen. We then set the desired size of the video window using the width and height of the screen.
    dict = strm_dict_set(dict, "video_dest_x", "0");
    dict = strm_dict_set(dict, "video_dest_y", "0");
    dict = strm_dict_set(dict, "video_dest_w", itoa(width, buffer, 10));
    dict = strm_dict_set(dict, "video_dest_h", itoa(height, buffer, 10));
Now we are going to figure out the screen aspect ratio on the device. We calculate the screen aspect ratio by dividing the width by the height. If the difference between the screen aspect ratio and the video aspect ratio is less than our tolerance value of 0.1, we consider the video a full-screen video. Note that if the first if statement evaluates to true, no operations are performed, execution falls through, and the desired aspect ratio remains full screen.
    float screen_aspect = (float)width / (float)height;
    if (fabs(screen_aspect - image_aspect) < aspect_tolerance) {
        /* The aspect ratios are close enough; keep the full-screen settings. */
    } else if (screen_aspect < image_aspect) {
If the screen is proportionally taller than the video (its aspect ratio is smaller), we letterbox the video: it keeps the full screen width, and its height is scaled by dividing the screen width by the video aspect ratio, which preserves the aspect ratio. To center the result vertically, we change the video_dest_y value of the origin of the video window rectangle, taking the difference between the screen height and the scaled video height and dividing by two so that we get an equal amount of unused space above and below.
        int scaled_height = width / image_aspect;
        dict = strm_dict_set(dict, "video_dest_y", itoa((height - scaled_height) / 2, buffer, 10));
        dict = strm_dict_set(dict, "video_dest_h", itoa(scaled_height, buffer, 10));
If the screen is proportionally wider than the video (its aspect ratio is larger), we pillarbox the video instead: it keeps the full screen height, and its width is scaled by multiplying the screen height by the video aspect ratio. To center the result horizontally, we change the video_dest_x value, taking the difference between the screen width and the scaled video width and dividing by two so that we get an equal amount of unused space on the left and right.
        int scaled_width = height * image_aspect;
        dict = strm_dict_set(dict, "video_dest_x", itoa((width - scaled_width) / 2, buffer, 10));
        dict = strm_dict_set(dict, "video_dest_w", itoa(scaled_width, buffer, 10));
We update the dict variable with the new width and height values, and pass the dict variable into the function mmr_output_parameters() to set the desired video window size.
mmr_output_parameters(mmr_context, video_device_output_id, dict);
The call to mmr_output_parameters() consumes the dictionary, so when we're finished we set dict to NULL to make sure we don't reuse the now-invalid handle.
dict = NULL;

Stop the video

Now that the video is playing, you probably want to allow the user to stop it. Before your application can do that, it needs the ability to listen for screen events and handle any user input. When the user indicates that they've had enough of the video, the application can stop playback and close the window.

To allow the user to tell the application to stop playing the video, we need to set up an event handling loop and request events from the screen and the navigator:
screen_request_events(screen_context);
navigator_request_events(0);

for (;;) {
    bps_event_t *event = NULL;
    if (bps_get_event(&event, -1) != BPS_SUCCESS) {
        return EXIT_FAILURE;
    }

    if (event) {
        if (bps_event_get_domain(event) == navigator_get_domain() &&
                bps_event_get_code(event) == NAVIGATOR_EXIT) {
            exit_application = 1;
        }
        if (exit_application) {
            break;
        }
    }
}

We use -1 in the call to bps_get_event() to make it wait indefinitely for an event. This value prevents the loop from running continuously and consuming processor cycles when it's not really necessary. When the application receives an event, we look for the event that indicates the user has closed the application from the navigator, for example by swiping up from the bezel (NAVIGATOR_EXIT). It would look something like this:

Device image showing the video playback window sample.

When we know that the user wants to exit, we can stop the video and clean up all the application services we've created.
screen_stop_events(screen_context);
mmr_stop(mmr_context);
mmr_output_detach(mmr_context, audio_device_output_id);
mmr_output_detach(mmr_context, video_device_output_id);
mmr_context_destroy(mmr_context);
mmr_context = 0;
video_device_output_id = -1;
audio_device_output_id = -1;
mmr_disconnect(mmr_connection);
bps_shutdown();
screen_destroy_window(screen_window);
screen_destroy_context(screen_context);
screen_context = 0;
screen_window = 0;
mmr_connection = 0;

That's it, you can now play a video! You can try altering the application to use different videos or perhaps add user interface elements to control playback.

Last modified: 2013-12-21
