The Camera APIs

The Camera service provides two levels of API. The Camera C API provides access to the full range of camera capabilities. The Cascades Camera API exposes a subset of those capabilities and provides a convenient way to create a camera control in QML.

Using the Cascades API, you can access commonly used camera functionalities such as:

  • Image and video capture
  • Flash, scene, focus, and shooting modes
  • Frame preview

Using the C API, you can also access more advanced camera features such as:

  • Image data at different stages of image processing
  • Viewfinder modes such as continuous or fixed burst, bracketing, and high-speed video
  • Additional properties such as auto exposure, auto white balance, and zoom
  • Settings such as aperture, ISO, and shutter speed
  • Features such as face detection, focus assistance, and video encoding

If your app is written in C++, you can still incorporate advanced camera capabilities using the Camera C API. However, avoid using both the Cascades API and the C API to operate the camera at the same time in the same app. The Best Camera sample app on GitHub demonstrates how to incorporate the C API in an app written primarily in C++ and QML.

Camera settings

Both the Cascades and the C APIs allow your app to change camera settings.

Using the Cascades API, your app can change settings such as the flash mode, the scene mode, and the capture resolution. The available settings are defined in the CameraSettings class. For new settings to take effect, your app must restart the viewfinder.

The C API provides access to additional camera settings such as auto white balance, exposure bracketing, and viewfinder face detection.

The C API also allows your app to configure various manual settings, such as ISO, shutter speed, white balance, and aperture, while the viewfinder is running. Changes in manual settings might not be visible until several frames later due to latency in the image-processing pipeline.

For example, your app can use an exposure mode (defined in camera_exposuremode_t) that has a manual component, such as CAMERA_EXPOSUREMODE_ISO_PRIORITY. When your app changes the ISO setting (using camera_set_manual_iso()), you might not see the results of the change until several frames later.

Not all cameras support all manual settings. Call camera_get_exposure_modes() to determine which manual exposure settings can be adjusted.
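As an illustrative sketch of that check-then-set pattern (not taken from the official samples): the buffer size and the ISO value of 800 are arbitrary choices, and the signatures are paraphrased from camera/camera_api.h, so verify them against the header for your target device.

```c
#include <camera/camera_api.h>

/* handle: an open camera with the viewfinder already running */
void enable_iso_priority(camera_handle_t handle)
{
    camera_exposuremode_t modes[32];
    int supported = 0;

    /* Ask which exposure modes this camera actually offers. */
    if (camera_get_exposure_modes(handle, 32, &supported, modes) != CAMERA_EOK) {
        return;  /* query failed; leave the camera in its current mode */
    }

    for (int i = 0; i < supported; i++) {
        if (modes[i] == CAMERA_EXPOSUREMODE_ISO_PRIORITY) {
            /* Fix the ISO and let the camera choose the other settings.
             * The new ISO may not be visible for several frames. */
            camera_set_exposure_mode(handle, CAMERA_EXPOSUREMODE_ISO_PRIORITY);
            camera_set_manual_iso(handle, 800);
            break;
        }
    }
}
```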

Image buffers

You can access image buffers using either the Cascades or C API.

The Cascades API defines a preview buffer pool. An app can allocate preview buffers to the pool and then read viewfinder frames from those buffers as they become available.

With the C API, an app can access image buffers at different points in the image creation process, including:

  • Viewfinder buffer: to view a viewfinder frame
  • Postview buffer: to view the postview image, which is a preview-sized version of a captured still image
  • Image buffer: to view the final image
  • Video buffer: to view an uncompressed video frame

Using the C API, your app can access these image buffers through threaded callbacks, through events received in an event loop, or through both. Whether you use callbacks, events, or both depends on the requirements of your app.

Callbacks are simple to use and the callback code runs in its own thread. Callbacks give you read-only access to one buffer at a time. Any memory allocated for the buffer is released when the callback completes.

Events allow you to read and write to the buffers and allow you to select your own threading model. Using events, you can also access more than one buffer at a time because you can manage when the buffer is released. Using events requires that you write your own event loop.


Using callbacks

If you are using the C API, callbacks are one way to asynchronously access camera image data and status information. Using callbacks gives you flexibility to control what occurs in your app when a function executes. For example, you can use callbacks to perform image processing or to save data to disk.

Callback functions execute in a separate thread, so you must be sure that your code is thread-safe by using appropriate thread synchronization primitives (such as mutexes, semaphores, and condvars).

Unlike events, which can be explicitly bound to a specific location in the image datapath, callbacks are implicitly registered when you invoke certain API functions, such as camera_start_viewfinder() and camera_take_photo().

Callbacks are deregistered when the operation started by one of these functions completes. For example, when the camera_stop_viewfinder() function is invoked, any callbacks registered during the camera_start_viewfinder() function call are deregistered.

You must not call any API function inside a callback that causes the callback to terminate, because such an operation would cause your app to deadlock. For example, do not call camera_stop_viewfinder() or camera_close() inside a callback.

The following list describes each callback and the API functions that you can use to register it:

Image callback: This callback is invoked when the final image data becomes available. This image is typically a full-resolution photograph. You can choose to save the image to disk or perform other postprocessing algorithms on the image.

  • camera_start_burst()
  • camera_take_burst()
  • camera_take_photo()

Postview callback: This callback is invoked when the postview image data is available. The image data provided is a preview-sized version of the captured still image. For example, you can display the preview-sized image instead of decompressing and down-sampling the final image.

  • camera_start_burst()
  • camera_take_burst()
  • camera_take_photo()

Shutter callback: This callback is invoked when the shutter activates on the camera.

It's your responsibility to play an audible shutter sound when a picture is taken or a video is recorded. Although you can choose to have no shutter sound, you are responsible for ensuring that your app adheres to the local laws of the regions where you distribute your app.

If you use burst mode to capture images in rapid succession, choose an appropriate moment to play the shutter sound rather than playing the shutter sound repeatedly.

  • camera_start_burst()
  • camera_take_burst()
  • camera_take_photo()

Status callback: This callback is invoked to report nonimage data relevant to the current state of the camera, such as a change in autofocus state or a low disk space warning.

  • camera_start_encode()
  • camera_start_video()
  • camera_start_viewfinder()

Video callback: This callback is invoked when an uncompressed video frame becomes available.

  • camera_start_encode()
  • camera_start_video()

Viewfinder callback: This callback is invoked when a viewfinder buffer becomes available. The viewfinder is rendered to a screen window by the operating system. You aren't required to add display code, unless you need to perform custom output using some other mechanism.

  • camera_start_viewfinder()

Encoded video callback: This callback is invoked when an encoded video frame becomes available.

  • camera_start_encode()

Encoded audio callback: This callback is invoked when an encoded audio frame becomes available.

  • camera_start_encode()


Using events

Using the C API, an app can receive notifications of camera events asynchronously. The Camera service sends two types of events to interested apps: status events and imaging events. A status event reports changes in status information and might indicate that the focus has changed, the shutter has fired, or a video recording has run out of disk space.

Status events don't have buffers associated with them. In comparison, an imaging event signals to the app that a data buffer is available and can be retrieved and processed. For example, an app receives an imaging event when a viewfinder buffer or a still image buffer becomes available.

To use imaging events, an app should take the following steps:

  1. Bind an imaging event to a given point in the camera datapath.
  2. When receiving the event, call the corresponding get-buffer function.
  3. Process the image data.
  4. Release the buffer back to the Camera service using the camera_return_buffer() function.
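The four steps above might look like the following sketch for viewfinder frames. It is not compilable as-is off the device: the pulse code is an arbitrary choice, error checking is abbreviated, and the exact signatures of camera_enable_viewfinder_event() and camera_get_viewfinder_buffers() should be verified against camera/camera_api.h.

```c
#include <camera/camera_api.h>
#include <sys/neutrino.h>

#define VF_PULSE_CODE (_PULSE_CODE_MINAVAIL + 1)  /* arbitrary pulse code */

/* handle: an open camera with the viewfinder already running */
void viewfinder_event_loop(camera_handle_t handle)
{
    int chid = ChannelCreate(0);
    int coid = ConnectAttach(0, 0, chid, _NTO_SIDE_CHANNEL, 0);

    struct sigevent ev;
    SIGEV_PULSE_INIT(&ev, coid, SIGEV_PULSE_PRIO_INHERIT, VF_PULSE_CODE, 0);

    /* 1. Bind the event to the viewfinder stage of the datapath. */
    camera_eventkey_t key;
    camera_enable_viewfinder_event(handle, CAMERA_EVENTMODE_READONLY, &key, &ev);

    struct _pulse pulse;
    while (MsgReceivePulse(chid, &pulse, sizeof(pulse), NULL) == 0) {
        if (pulse.code != VF_PULSE_CODE) continue;

        /* 2. A frame is ready: retrieve the buffer for this event key. */
        camera_buffer_t buf;
        if (camera_get_viewfinder_buffers(handle, key, &buf, NULL) == CAMERA_EOK) {
            /* 3. Process the frame here (read-only in this event mode). */

            /* 4. Hand the buffer back to the Camera service. */
            camera_return_buffer(handle, &buf);
        }
    }
    camera_disable_event(handle, key);
}
```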

To bind an event to a given point in the camera datapath, use the event-enabling function that corresponds to that point (for example, camera_enable_viewfinder_event() for viewfinder frames).

You can bind multiple events to the same point in the datapath, but it's more efficient to dispatch multiple tasks after receiving a single event in your app.

To unbind an event from a given point in the camera datapath, use the camera_disable_event() function.

When receiving a non-status event, such as an image or viewfinder event, you can retrieve the buffer associated with the event by calling the corresponding get-buffer function.

Use caution if your app needs to process frames within a fixed interval. If concurrent events occur where the time to process one event could interfere with the deadline to complete another, consider handling the events in separate threads. For example, you might run an algorithm to detect smiles in the viewfinder frames while, concurrently, the user captures a still image to disk. Because saving the image to disk might take longer than the interframe period of the viewfinder frames, process the image-saving task in a different thread. You can also use callbacks to resolve this problem because callbacks inherently execute in separate threads.

Viewfinder modes

Viewfinder modes are only available in the C API in camera API version 3 (API level 10.3.0) and higher. You can use the camera_get_api_version() function to determine the Camera API version of the device that your app is running on.

Viewfinder modes are defined in camera_vfmode_t of the C API. You can specify the operating mode of the camera by setting the viewfinder mode. For example, by selecting the viewfinder mode of CAMERA_VFMODE_FIXED_BURST, you are indicating that your app needs to capture photos in rapid succession.

Specifying a viewfinder mode enables the operating system to optimize the configuration of the camera hardware to provide the best user experience and image quality. Using viewfinder modes also helps you discover the available camera capabilities, modes, and settings. With a viewfinder mode specified, you can use capability query functions (for example, camera_has_feature() and camera_get_supported_vf_resolutions()) to retrieve camera capabilities that are guaranteed to work with your intended use case.

You must select a viewfinder mode before you can configure and start the viewfinder. You can switch between viewfinder modes only when the viewfinder is not running.

To use viewfinder modes, follow the steps below:

  1. Use camera_set_vf_mode() to select the appropriate mode for your specific use case.
  2. Change the viewfinder settings using camera_set_vf_property().
  3. Start the viewfinder using camera_start_viewfinder().
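Put together, the three steps might look like the following sketch. The camera unit, viewfinder resolution, and burst mode are arbitrary examples, error checking is mostly omitted, and camera_set_vf_property() takes a variadic property list, so verify the usage against camera/camera_api.h.

```c
#include <camera/camera_api.h>

void start_burst_viewfinder(void)
{
    camera_handle_t handle;
    if (camera_open(CAMERA_UNIT_REAR, CAMERA_MODE_RW, &handle) != CAMERA_EOK) {
        return;
    }

    /* Viewfinder modes require Camera API version 3 (10.3.0) or later. */
    if (camera_get_api_version() >= 3) {
        /* 1. Select the mode that matches the use case. */
        camera_set_vf_mode(handle, CAMERA_VFMODE_FIXED_BURST);

        /* 2. Adjust viewfinder settings before starting. */
        camera_set_vf_property(handle,
                               CAMERA_IMGPROP_WIDTH, 1280,
                               CAMERA_IMGPROP_HEIGHT, 720);

        /* 3. Start the viewfinder. */
        camera_start_viewfinder(handle, NULL, NULL, NULL);
    }
}
```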

Releases of the Camera library before Camera API version 3 (API level 10.3.0) don't support viewfinder modes. Instead, the viewfinder functions were divided into photovf and videovf variants (for example, camera_set_photovf_property()). This usage pattern is now deprecated. However, for backwards compatibility purposes, it is still possible to operate the camera using these deprecated functions. When using the deprecated functions, the viewfinder mode is equivalent to CAMERA_VFMODE_DEFAULT. After you change the viewfinder mode using the camera_set_vf_mode() function, you can no longer use the deprecated functions until you close and reopen the camera. If your app doesn't need to use more advanced viewfinder modes, then you can continue using the legacy photovf and videovf functions for backwards compatibility.

Certain advanced functionalities are only available using viewfinder modes. If your app requires any of these functionalities, you must use viewfinder modes, which means your app cannot run on devices running BlackBerry 10 OS version 10.2.1 or earlier. The viewfinder modes listed in camera_vfmode_t include notes that indicate whether the functionality can be accessed in older software versions using alternate configuration means.

Last modified: 2015-03-31
