Message passing

We'll look at the most distinctive feature of BlackBerry 10 OS, message passing. Message passing lies at the heart of the operating system's microkernel architecture, giving the OS its modularity.

A small microkernel and message passing

One of the principal advantages of BlackBerry 10 OS is that it's scalable. By scalable we mean that it can be tailored to work on tiny embedded boxes with tight memory constraints, right up to large networks of multiprocessor SMP boxes with almost unlimited memory. BlackBerry 10 OS achieves its scalability by making each service-providing component modular. This way, you can include only the components you need in the final system. By using threads in the design, you also help to make it scalable to SMP systems.

This is the philosophy that was used during the initial design of the QNX family of operating systems and has been carried through to this day. The key is a small microkernel architecture, with modules that would traditionally be incorporated into a monolithic kernel as optional components.

Diagram showing modular architecture.

You, the system architect, decide which modules you want. Do you need a filesystem in your project? If so, then add one. If you don't need one, then don't bother including one. Do you need a serial port driver? Whether the answer is yes or no, this doesn't affect (nor is it affected by) your previous decision about the filesystem.

At run time, you can decide which system components are included in the running system. You can dynamically remove components from a live system and reinstall them, or others, at some other time. Is there anything special about these drivers? Nope, they're just regular, user-level programs that happen to perform a specific job with the hardware.

The key to accomplishing this is message passing. Instead of having the OS modules bound directly into the kernel, and having some kind of special arrangement with the kernel, under BlackBerry 10 OS the modules communicate via message passing among themselves. The kernel is basically responsible only for thread-level services (for example, scheduling). In fact, message passing isn't used just for this installation and deinstallation trick — it's the fundamental building block for almost all other services (for example, memory allocation is performed by a message to the process manager). Of course, some services are provided by direct kernel calls.

Consider opening a file and writing a block of data to it. This is accomplished by a number of messages sent from the application to an installable component of BlackBerry 10 OS called the filesystem. The message tells the filesystem to open a file, and then another message tells it to write some data (and contains that data). Don't worry though — the BlackBerry 10 OS operating system performs message passing very quickly.

Message passing and client/server

Imagine an application reading data from the filesystem. The application is a client requesting the data from a server. This client/server model introduces several process states associated with message passing (we talked about these in Processes and threads). Initially, the server is waiting for a message to arrive from somewhere. At this point, the server is said to be receive-blocked (also known as the RECV state). Here's some sample pidin output:

pid    tid name               prio STATE       Blocked       
     4   1 devc-pty            10r RECEIVE     1             

In the above sample, the pseudo-tty server (called devc-pty) is process ID 4, has one thread (thread ID 1), is running at priority 10 Round-Robin, and is receive-blocked, waiting for a message to arrive on channel ID 1.

Diagram showing the state transitions of a server.

When a message is received, the server goes into the READY state, and is capable of running. If it happens to be the highest-priority READY process, it gets the CPU and can perform some processing. Since it's a server, it looks at the message it just got and decides what to do about it. At some point, the server completes whatever job the message told it to do, and then replies to the client.

Let's switch over to the client. Initially the client is running along, consuming CPU, until it decides to send a message. The client changes from READY to either send-blocked or reply-blocked, depending on the state of the server that it sent a message to.

Diagram showing state transitions of clients.

Generally, you see the reply-blocked state much more often than the send-blocked state. That's because the reply-blocked state means:

The server has received the message and is now processing it. At some point, the server completes processing and replies to the client. The client is blocked waiting for this reply.

Contrast that with the send-blocked state:

The server hasn't yet received the message, most likely because it was busy handling another message first. When the server gets around to receiving your (client) message, then you can go from the send-blocked state to the reply-blocked state.

In practice, if you see a process that is send-blocked, it means one of two things:

  1. You happened to take a snapshot of the system in a situation where the server was busy servicing a client, and a new request arrived for that server.

    This is a normal situation; you can verify it by running pidin again to get a new snapshot. This time you can probably see that the process is no longer send-blocked.

  2. The server has encountered a bug and for whatever reason isn't listening to requests anymore.

    When this happens, you can see many processes that are send-blocked on one server. To verify this, run pidin again, observing that there's no change in the blocked state of the client processes.

Here's a sample showing a reply-blocked client and the server it's blocked on:

   pid tid name               prio STATE       Blocked      
     1   1 to/x86/sys/procnto   0f READY                    
     1   2 to/x86/sys/procnto  10r RECEIVE     1            
     1   3 to/x86/sys/procnto  10r NANOSLEEP                
     1   4 to/x86/sys/procnto  10r RUNNING                  
     1   5 to/x86/sys/procnto  15r RECEIVE     1            
 16426   1 esh                 10r REPLY       1            

This shows that the program esh (the embedded shell) has sent a message to process number 1 (the kernel and process manager, procnto) and is now waiting for a reply.
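The client/server exchange above maps onto three native kernel calls: the server blocks in MsgReceive() (the RECEIVE state), the client blocks in MsgSend() (SEND, then REPLY state), and MsgReply() unblocks the client. Here's a minimal sketch, assuming the QNX <sys/neutrino.h> API, with the server and client as two threads of one process; in a real system the client would typically find the channel through a name service rather than a shared variable:

#include <stdio.h>
#include <pthread.h>
#include <sys/neutrino.h>

static int chid;    /* channel ID, shared between threads for simplicity */

void *server (void *arg)
{
    char    msg [64];
    int     rcvid;

    /* Server blocks here, in the RECEIVE state */
    rcvid = MsgReceive (chid, msg, sizeof (msg), NULL);
    printf ("Server got: %s\n", msg);

    /* Replying unblocks the client (REPLY -> READY) */
    MsgReply (rcvid, 0, "OK", 3);
    return (NULL);
}

int main (void)
{
    pthread_t   tid;
    int         coid;
    char        reply [64];

    chid = ChannelCreate (0);
    pthread_create (&tid, NULL, server, NULL);

    /* Attach a connection to our own process's channel */
    coid = ConnectAttach (0, 0, chid, _NTO_SIDE_CHANNEL, 0);

    /* Client blocks here (send-blocked, then reply-blocked)
       until the server calls MsgReply() */
    MsgSend (coid, "Hello", 6, reply, sizeof (reply));
    printf ("Client got reply: %s\n", reply);

    pthread_join (tid, NULL);
    return (0);
}

Error checking is omitted for brevity; each of these calls returns -1 on failure.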

Now you know the basics of message passing in a client/server architecture.

Do you have to write special BlackBerry 10 OS message-passing calls just to open a file or write some data?

You don't have to write any message-passing functions, unless you want to get under the hood. Here is some client code that does message passing:

#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

int main (void)
{
    int     fd;

    fd = open ("filename", O_WRONLY);
    write (fd, "This is message passing\n", 24);
    close (fd);

    return (EXIT_SUCCESS);
}

Standard C code, nothing tricky.

The message passing is done by the BlackBerry 10 OS C library. You issue standard POSIX 1003.1 or ANSI C function calls, and the C library does the message-passing work for you.

In the above example, we saw three functions being called and three distinct messages being sent:

  1. open() sent a connect message to establish a connection to the filesystem.

  2. write() sent an I/O message containing the data to be written.

  3. close() sent another I/O message to release the connection.

Let's step back for a moment and contrast this to the way the example would have worked in a traditional operating system.

The client code would remain the same and the differences would be hidden by the C library provided by the vendor. On such a system, the open() function call would invoke a kernel function, which would then call directly into the filesystem, which would execute some code, and return a file descriptor. The write() and close() calls would do the same thing.

Network-distributed message passing

Suppose we want to change our example above to talk to a different node on the network. You might think we'd have to invoke special function calls to talk over the network. Here's the network version's code:

#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

int main (void)
{
    int     fd;

    fd = open ("/net/wintermute/home/rk/filename", O_WRONLY);
    write (fd, "This is message passing\n", 24);
    close (fd);

    return (EXIT_SUCCESS);
}

You're right if you think the code is almost the same in both versions. It is.

In a traditional OS, the C library open() calls into the kernel, which looks at the filename and says oops, this is on a different node. The kernel then calls into the network filesystem (NFS) code, which figures out where /net/wintermute/home/rk/filename actually is. Then, NFS calls into the network driver and sends a message to the kernel on node wintermute, which then repeats the process that we described in our original example. Note that in this case, there are really two filesystems involved; one is the NFS client filesystem, and one is the remote filesystem. Unfortunately, depending on the implementation of the remote filesystem and NFS, certain operations may not work as expected (for example, file locking) due to incompatibilities.

Under BlackBerry 10 OS, the C library open() creates the same message that it would have sent to the local filesystem and sends it to the filesystem on node wintermute. In the local and remote cases, the exact same filesystem is used.

This is another fundamental characteristic of BlackBerry 10 OS: network-distributed operations are essentially free, as the work to decouple the functionality requirements of the clients from the services provided by the servers is already done, by virtue of message passing.

On a traditional kernel there's a double standard where local services are implemented one way, and remote (network) services are implemented in a totally different way.

What it means for you

Message passing is elegant and network-distributed. So what? What does it buy you, the programmer? Well, it means that your programs inherit those characteristics — they too can become network-distributed with far less work than on other systems. You can test software in a nice, modular manner.

You've probably worked on large projects where many people have to provide different pieces of the software. Of course, some of these people finish their pieces sooner than others.

These projects often have problems at two stages: initially at project definition time, when it's hard to decide where one person's development effort ends and another's begins, and then at testing/integration time, when it isn't possible to do full systems integration testing because all the pieces aren't available.

With message passing, the individual components of a project can be decoupled very easily, leading to a very simple design and reasonably simple testing. If you want to think about this in terms of existing paradigms, it's very similar to the concepts used in Object Oriented Programming (OOP).

What this boils down to is that testing can be performed on a piece-by-piece basis. You can set up a simple program that sends messages to your server process, and since the inputs and outputs of that server process are (or should be!) well documented, you can determine if that process is functioning. Heck, these test cases can even be automated and placed in a regression suite that runs periodically!

Last modified: 2014-11-17
