Setting up a camera

Before your app can take photos or record a video, you must set up a camera. To set up a camera, you need to find a supported camera unit on the device, open it, and then start a viewfinder.

If you're using the Camera C API, you might also need to handle screen events.

When you are finished using a camera, close the camera so resources are properly cleaned up.

Prerequisites

To use the Camera service, you must add the use_camera permission to your app's bar-descriptor.xml file. You might also need to set other permissions, depending on what you want to do (for example, recording audio or saving files to shared storage).
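For reference, here's a minimal sketch of the permission elements in bar-descriptor.xml. The use_camera permission comes from this discussion; record_audio and access_shared are illustrative extras that apply only if your app records audio or saves files to the shared camera roll:

```xml
<qnx xmlns="http://www.qnx.com/schemas/application/1.0">
    <!-- ... other bar-descriptor elements ... -->

    <!-- Required to use the Camera service -->
    <permission>use_camera</permission>

    <!-- Illustrative extras: needed only for audio recording
         and for saving files to shared storage -->
    <permission>record_audio</permission>
    <permission>access_shared</permission>
</qnx>
```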

If you're using the Camera C API, link the Camera library (camapi) to your project.

If you're using the Camera C++ API, link the Camera libraries by adding the following line to your project's .pro file.

LIBS += -lbbcascadesmultimedia -lbbmultimedia -lbbsystem

You can add using statements to your C++ source code to reduce the amount of code that you have to write when you declare variables. Here are the using statements found in the C++ version of the camera sample app used in this discussion.

using namespace bb::cascades;
using namespace bb::multimedia;
using namespace bb::cascades::multimedia;

Find supported cameras

Some devices have both a front and a rear camera. After finding out which cameras are supported on your device, call Camera::open() in C++ or camera_open() in C to open the camera you have decided to use.

You can use the supportedCameras property to determine which cameras are supported on the device. When the creationCompleted() signal is emitted, use the supportedCameras property to get a QVariantList of supported cameras.

Here's a code sample that shows how a custom function, getCameraUnit(), can be used with the supportedCameras property to find an accessible CameraUnit on the device. The function takes a QVariantList of camera units as its only parameter.

// This function returns an available camera unit.
// It looks first for the rear camera, then for the front 
// camera. If no camera is found, it returns null and shows
// a toast to the user.
function getCameraUnit(camUnitList) {
    if (camUnitList.length == 0 || camUnitList[0] == CameraUnit.None) {
        qmlToast.body = "No camera units are available";
        qmlToast.show();

        return null;
    }
    
    // Try to open the rear camera unit first
    // If the rear camera isn't available,  
    // look for the front camera
    for (var i = 0; i < camUnitList.length; ++i) {
        if (camUnitList[i] == CameraUnit.Rear)
            return camUnitList[i];
    }
    
    for (var j = 0; j < camUnitList.length; ++j) {
        if (camUnitList[j] == CameraUnit.Front)
            return camUnitList[j];
    }

    // Neither the rear nor the front camera was found
    qmlToast.body = "No camera units are available";
    qmlToast.show();

    return null;
}

When a camera unit is found using the getCameraUnit() function, you can open it in your QML code as shown below:

Container {
    // ...

    onCreationCompleted: {
        var cameraUnit = getCameraUnit(qmlCameraObj.supportedCameras);
    
        if (cameraUnit != null) {
            // Check to see if this camera is accessible
            // A supported camera might not be accessible
            // because it might be in use
            if (qmlCameraObj.isCameraAccessible(cameraUnit)) {
                // Open the camera unit
                qmlCameraObj.open(cameraUnit);
            }
            // ...
        }
    
        // ...
    }
}                    

You can use the supportedCameras() function to determine which cameras are supported. This function returns a QList containing all of the supported cameras on the device. When you find a supported camera, check if this camera is accessible. A supported camera might not be accessible because it might be in use. You can use the following C++ code in your app's constructor to open a camera unit.

 
// App's constructor
ApplicationUI::ApplicationUI(Application *app) : QObject(app)
{
    // ...

    // Retrieve a list of supported camera units
    QList<CameraUnit::Type> list = m_pCamera->supportedCameras();

    if(list.isEmpty() || list.contains(CameraUnit::None))
    {
        // Use a custom showToast() function to display
        // a message to the user
        showToast("No cameras are accessible");
    }
    // Check to see if the rear camera is 
    // supported and accessible
    else if(list.contains(CameraUnit::Rear) &&
            m_pCamera->isCameraAccessible(CameraUnit::Rear))
    { 
        // The default camera unit is rear
        // so no parameter is necessary
        m_pCamera->open();
    } 
    // Check to see if the front camera is 
    // supported and accessible
    else if (list.contains(CameraUnit::Front) &&
            m_pCamera->isCameraAccessible(CameraUnit::Front))
    { 
        // Open the front camera unit
        m_pCamera->open(CameraUnit::Front);
    }
    else
    {
        // Report error
        showToast("An error has occurred.");
    }

    // ...
}

There are two ways to find the supported cameras on your device using the C API. The camera_get_supported_cameras() function returns a list of all supported cameras and camera_find_capable() allows you to query supported cameras with certain features.

/* Method 1: Find all supported cameras */
unsigned int num;
unsigned int i;
camera_unit_t cams[CAMERA_UNIT_NUM_UNITS];
camera_unit_t unit;
camera_handle_t handle;
camera_error_t err;

err = camera_get_supported_cameras(CAMERA_UNIT_NUM_UNITS,
                                   &num,
                                   cams);
if (err == CAMERA_EOK) {
    for (i = 0; i < num; i++) {
        fprintf(stderr, "found camera unit %d\n", cams[i]);
    }
} else {
    /* Handle error */
}

/* Method 2: Find cameras with a certain feature set */
/* Look for both photo and video capabilities */
camera_feature_t features[] = {
      CAMERA_FEATURE_PHOTO, CAMERA_FEATURE_VIDEO };
camera_unit_t next = CAMERA_UNIT_NONE;
num = 0;

/* Find all cameras with the required feature set,
   checking iteratively over all camera units 
 */
while (camera_find_capable(features,       
                           sizeof(features)/sizeof(*features),
                           next,
                           &next) == CAMERA_EOK) {        
    cams[num++] = next;
    fprintf(stderr, "camera unit %d supports the required features\n",
        next);   
}
/* Open the first camera found using either method above */
if (num == 0) {
    /* No suitable camera was found */
    /* Handle error */
}
unit = cams[0];
fprintf(stderr, "selecting camera unit %d\n", unit);
err = camera_open(unit,
                  CAMERA_MODE_RW|CAMERA_MODE_ROLL,
                  &handle);   
if (err != CAMERA_EOK) { 
    fprintf(stderr, "camera_open() failed: %d\n", err);
    return err;   
}

Start a viewfinder

The Camera service allows your app to use the viewfinder to access low-level graphics hardware on the device. This ability makes the viewfinder useful and efficient for previewing any image being photographed or video being recorded.

Before you can start a viewfinder, you must open an accessible camera unit. To start a viewfinder, your app can call the startViewfinder() function in C++ or the camera_start_viewfinder() function in C.

The cameraOpened() signal indicates that the camera has been successfully opened. Use the onCameraOpened() signal handler to start the viewfinder.

The code sample below shows you how to define a Camera control and set its id property so it can be accessed in the onCameraOpened() signal handler to start the viewfinder.

Camera {
    id: qmlCameraObj
    
    // ...

    onCameraOpened: {
        qmlCameraObj.startViewfinder();
    }
}

Here's a QML and C++ code sample that shows you how to define a Camera control in your QML code and set its objectName property so that it can be accessed from your C++ code in your app's constructor, where you use the onCameraOpened() slot to start the viewfinder.

Camera {
   objectName: "qmlCameraObj"
}

ApplicationUI::ApplicationUI ( bb::cascades::Application *app ) : QObject(app)
{
    // ...

    // Initialize a pointer to the camera object
    m_pCamera = root->findChild<bb::cascades::multimedia::Camera*>("qmlCameraObj");

    // ...

    // Connect the cameraOpened() signal to your onCameraOpened() slot
    bool result = QObject::connect(m_pCamera,
                                   SIGNAL(cameraOpened()),
                                   this,
                                   SLOT(onCameraOpened()));
    Q_ASSERT(result);
} 

void ApplicationUI::onCameraOpened ()
{
    // ...

    // With the camera unit now opened, start a viewfinder
    m_pCamera->startViewfinder();
}

The code samples below use the C Screen API to create a screen window. If your app uses Cascades, you don't need to use the C Screen API; you use the ForeignWindowControl class instead. For an example that uses ForeignWindowControl, see the Best Camera sample app on GitHub.

The code sample below uses callbacks to monitor the progress of viewfinder activities. First, you define the callbacks that you need. Callbacks are optional; however, the status callback is useful for detecting asynchronous events. If you don't need to process viewfinder frame data, you don't need a viewfinder callback, and omitting it avoids additional IPC overhead.

You must not call any API function inside a callback that causes the callback to terminate, for example, camera_stop_viewfinder() or camera_close(), because such an operation would cause your program to deadlock.

The following example uses the viewfinder_callback to record the width and height of the frame, and the status_callback to record the status notification.

static void
viewfinder_callback(camera_handle_t handle,
                    camera_buffer_t* buf,
                    void* arg)
{
    if (buf->frametype == CAMERA_FRAMETYPE_NV12) {
        fprintf(stderr, "viewfinder frame %d x %d\n",
                buf->framedesc.nv12.width,
                buf->framedesc.nv12.height);
    } else {
        fprintf(stderr, "that's odd - not NV12\n");
    }
}

static void
status_callback(camera_handle_t handle,
                camera_devstatus_t status,
                uint16_t extra,
                void* arg)
{
    /* Log the status notification.
       The void* is the user argument passed in
       when the viewfinder is started
     */
    fprintf(stderr, "status notification: %d, %d (user arg: %p)\n",
            (int)status, (int)extra, arg);
}

Before starting a viewfinder window, you need to use the Screen API to create a screen window to be the parent of the child viewfinder window.

const int usage = SCREEN_USAGE_NATIVE;
screen_context_t screen_ctx;
const char *vf_group = "viewfinder_window_group";  /* any unique group name */
screen_window_t screen_win;
screen_buffer_t screen_buf = NULL;
int rect[4] = { 0, 0, 0, 0 };
  
/* Create an application window that acts as a background */
if ( screen_create_context(&screen_ctx, 0) != 0 ) {
    /* Handle error */
}   
if ( screen_create_window(&screen_win, screen_ctx) != 0 ) {
    /* Handle error */
}   
if ( screen_create_window_group(screen_win, vf_group) != 0 ) {
    /* Handle error */
}   
if ( screen_set_window_property_iv(screen_win, 
                              SCREEN_PROPERTY_USAGE, 
                              &usage) != 0 ) {
    /* Handle error */
}   
if ( screen_create_window_buffers(screen_win, 1) != 0 ) {
    /* Handle error */
}   
if ( screen_get_window_property_pv(screen_win, 
                              SCREEN_PROPERTY_RENDER_BUFFERS, 
                              (void**)&screen_buf) != 0 ) {
    /* Handle error */
}   
if ( screen_get_window_property_iv(screen_win, 
                              SCREEN_PROPERTY_BUFFER_SIZE, 
                              &rect[2]) != 0 ) {
    /* Handle error */
}
    
/* Fill the window with black */ 
int attribs[] = { SCREEN_BLIT_COLOR, 0x00000000, SCREEN_BLIT_END };   
if ( screen_fill(screen_ctx, screen_buf, attribs) != 0 ) {
    /* Handle error */
}   
if ( screen_post_window(screen_win, screen_buf, 1, rect, 0) != 0 ) {
    /* Handle error */
}   
        

Before starting the viewfinder, set the viewfinder properties. The code below sets the required properties. To see other properties, refer to camera_set_vf_property(). You can also change the default photo output properties such as width, height, and rotation. Start the viewfinder when all properties are successfully set.

camera_error_t err;

/* This is the minimal required configuration for a viewfinder */
err = camera_set_vf_property(handle,          
              CAMERA_IMGPROP_WIN_GROUPID, vf_group,
              CAMERA_IMGPROP_WIN_ID, "my_viewfinder");
if (err == CAMERA_EOK) {
    /* Change any photo properties,
       or use the default photo properties.
       Here is an example for updating the rotation
     */
    err = camera_set_photo_property(handle,
                               CAMERA_IMGPROP_ROTATION, 180);
    if (err != CAMERA_EOK) {
        /* Handle error */
    }

    /* Start a viewfinder.
       Callbacks are optional.
       Pass in NULL if you don't need a particular callback
     */
    err = camera_start_viewfinder(handle,
                           &viewfinder_callback,
                           &status_callback,
                           (void*)123);  /* user-defined argument */
    if (err != CAMERA_EOK) {
        /* Handle error */
    } 
} else {
    /* Handle error */
}

Handle screen events

In the Camera C API, the camera_start_viewfinder() function creates a viewfinder window that is, by default, not visible. To make the viewfinder window visible, you need to handle the appropriate screen events. You can use BPS events to interact with the screen. When registered with BPS, your app receives notifications regarding screen events, such as a newly created window or a touch on the screen.

Alternatively, as mentioned earlier, if your app is written primarily in QML and C++, you can use the ForeignWindowControl class instead of the BPS and screen APIs. For an example that uses ForeignWindowControl, see the Best Camera sample app on GitHub.

The following steps summarize the API call sequence for receiving and handling screen events:

  1. Initialize BPS by calling bps_initialize().
  2. Register with BPS to receive screen events by calling screen_request_events().
  3. Check for the SCREEN_EVENT_CREATE event. This event indicates that a child window has been created. If the child window is your viewfinder window, modify the window properties to make the window visible by calling screen_set_window_property_iv().


The following code samples demonstrate how to receive screen events. First, you register with BPS to receive events.

#include <bps/bps.h>
#include <bps/event.h>
#include <bps/screen.h>
#include <fcntl.h>
#include <screen/screen.h>
#include <camera/camera_api.h>

/* ... */

static screen_context_t screen_ctx;
int rc;

/* In your app's initialization code
   initialize BPS and 
   register for screen events
 */
rc = bps_initialize();
if (rc != BPS_SUCCESS) {
    /* Handle error */
}
rc = screen_request_events(screen_ctx);   
if (rc != BPS_SUCCESS) {
    /* Handle error */
}  

/* ... */

/* Receive screen events from BPS */
static void handle_event()
{
    int domain;

    bps_event_t *event = NULL;
    rc = bps_get_event(&event, -1);
    if (rc != BPS_SUCCESS) {
        /* Handle error */
    }

    if (event) {
        domain = bps_event_get_domain(event);
        if (domain == screen_get_domain()) {
            handle_screen_event(event);
        } else {
            /* Handle other events */
        }
    }
}    

In the handle_screen_event() function, check the received screen events.

static screen_window_t vf_win;  /* the viewfinder's child window */

/* Check for screen events */
static void
handle_screen_event(bps_event_t *event)
{
    int screen_val;

    screen_event_t screen_event = screen_event_get_event(event);
    if ( screen_get_event_property_iv(screen_event, 
                                      SCREEN_PROPERTY_TYPE, 
                                      &screen_val)!= 0 ) {
        /* Handle error */
        return;
    }

    switch (screen_val) {
    case SCREEN_EVENT_CREATE:
        if (screen_get_event_property_pv(screen_event, 
                                         SCREEN_PROPERTY_WINDOW, 
                                         (void **)&vf_win) == -1) {
            /* Not a viewfinder window */
            perror("screen_get_event_property_pv(SCREEN_PROPERTY_WINDOW)");
        } else {
            fprintf(stderr,"Viewfinder window found!\n");
            
            /* Set viewfinder window properties.
               For example, set the z-order and the visibility
               of the viewfinder window
             */

            /* Set the z-order so that the viewfinder window
               appears in front of the background window
             */
            int i = 1;
            if ( screen_set_window_property_iv(vf_win, 
                                 SCREEN_PROPERTY_ZORDER, 
                                               &i) != 0 ) {
                /* Handle error */
            }

            /* Make the viewfinder window visible */
            if ( screen_set_window_property_iv(vf_win, 
                                SCREEN_PROPERTY_VISIBLE, 
                                               &i) != 0 ) {
                /* Handle error */
            }

            screen_flush_context(screen_ctx, 0);
        }
        break;

    /* Handle other cases */
    /* ... */
    }
}

Close the camera

When your app is finished with the camera, you must stop the viewfinder and then close the camera to release the resources that it currently holds.

If you want to switch to another camera (for example, switching from the rear camera to the front camera), you must close the camera that you are using before opening the new one.

If you are using QML or C++, you can set the Application::autoExit property to false and connect the manualExit() signal to your custom onManualExit() signal handler. In the onManualExit() slot, add code to stop the viewfinder and close the camera.

You can add the signal connection code to the onCreationCompleted() signal handler. The onManualExit() custom signal handler is called when the app is closed.

onCreationCompleted: {
    // Connect the manualExit() signal to your 
    // onManualExit() slot where you clean up
    Application.manualExit.connect(onManualExit)
    
    // Set the autoExit property to false to take 
    // control of the app's exiting procedure
    Application.autoExit = false
}

// ...

// Stop the viewfinder and close the camera
function onManualExit() {
    keepAlive = false;
    qmlCameraObj.stopViewfinder();
    qmlCameraObj.close();

    // Close the app
    Application.quit();
}

You can use the setAutoExit() function in your app's constructor to set the autoExit property to false, and connect the manualExit() signal to your custom onManualExit() slot.

ApplicationUI::ApplicationUI ( bb::cascades::Application *app ) : QObject(app)
{
    // ...
        
    // Connect the manualExit() signal to the 
    // onManualExit() slot where you clean up
    res = QObject::connect(app,
                           SIGNAL(manualExit()),
                           this,
                           SLOT(onManualExit()));    
    Q_ASSERT(res);
    
    // Set the autoExit property to 'false'
    // to take control of the app's exiting
    // procedure
    app->setAutoExit(false);        
    
    // ...
}


// Stop the viewfinder and close the
// camera, and then close the app
void ApplicationUI::onManualExit ()
{
    m_pCamera->stopViewfinder();
    m_pCamera->close();
    keepAlive = false;
    
    // Close the app
    bb::cascades::Application::quit();
}

When you exit the app's main loop, clean up the camera resources:

/* Stop the viewfinder and close the camera */
camera_stop_viewfinder(handle);
camera_close(handle);

Last modified: 2015-05-07


