POSIX Shared Memory - Method For Automatic Client Notification

I am investigating POSIX shared memory for IPC in place of a POSIX message queue. I plan to make a shared memory area large enough to hold 50 messages of 750 bytes each. The messages will be sent at random intervals from several cores (servers) to one core (client) that receives the messages and takes action based on the message content.

I have three questions about POSIX shared memory:

(1) Is there a method for automatic client notification when new data are available, similar to the mechanisms available with POSIX pipes and message queues?

(2) What problems would arise using shared memory without a lock where the data are write-once, read-once?

(3) I have read that shared memory is the fastest IPC method because it has the highest bandwidth and data become available to both server and client cores immediately. However, with message queues and pipes the server cores can send their messages and continue working without waiting for a lock. Does the need for a lock make shared memory slower than message queues and pipes in the scenario described above?

Answer

(1) There is no automatic mechanism to notify threads/processes that data has been written to a memory location. You would have to pair the shared memory with some other primitive for notifications, for example a semaphore, a pipe, or a signal.

(2) You have a multiple-producer/single-consumer (MPSC) setup. Implementing a lockless MPSC queue is not trivial: you would have to perform atomic compare-and-swap (CAS) operations in the right order with the correct memory ordering, and you should know how to avoid false sharing of cache lines. See https://en.cppreference.com/w/c/atomic for the atomic operations support in C11, and read up on memory barriers. Another good read is the LMAX Disruptor paper at http://lmax-exchange.github.io/disruptor/files/Disruptor-1.0.pdf.

(3) Your data size (50 × 750 bytes ≈ 37 KB) is small. Chances are it all fits in cache and you will have no bandwidth issues accessing it. As for lock vs. pipe vs. message queue: none of them is free under contention, or when the queue is full or empty.

One benefit of lockless queues is that they can work entirely in user-space. This is a huge benefit when extremely low latency is desired.

source: stackoverflow.com