Introduction to Shared Memory Segments
Shared memory is the fastest form of IPC available. Once the memory region is mapped into the address spaces of the processes sharing it, no kernel involvement is required to pass data between those processes. However, some form of synchronization is usually needed between the processes that store data into and fetch data from the shared region. Mutexes, condition variables, read-write locks, record locks, and semaphores were all covered in Part 3 of this series.
Consider the typical steps in the client-server file-copying application we used to demonstrate the various forms of message passing.
- The server reads from the input file. The kernel copies the file data into its own buffers and then copies it into the server process.
- The server writes this data in a message, using a pipe, FIFO, or message queue. These forms of IPC require the data to be copied from the process into the kernel.
- The client reads the data from the IPC channel, which again copies the data from the kernel into the client process.
- Finally, the client writes the data to the output file, copying it from the client process into the kernel.
In all, four copies of the data are usually required, and each is a copy between the kernel and a process, which is typically expensive (more expensive than copying data within the kernel or within a single process). Figure 1 depicts this movement of the data between the client and server through the kernel.
The difficulty with these forms of IPC (pipes, FIFOs, and message queues) is that information must pass through the kernel for two processes to communicate.
By allowing two or more processes to share a memory space, shared memory provides a workaround. Of course, the processes must work together to coordinate or synchronize their use of the shared memory.
This synchronization can be accomplished using any of those strategies. The steps for the client-server example then become:
- The server gains access to a shared memory object, using a semaphore.
- The server reads from the input file into the shared memory object. The second argument to read, the address of the data buffer, points into the shared memory object.
- When the read is complete, the server notifies the client, using a semaphore.
- The client writes the data from the shared memory object to the output file.
Create and initialize a semaphore:
We create and initialize a semaphore to protect what we think is a shared variable (the global count). Since that assumption turns out to be wrong, this semaphore is not actually needed. Notice that we call sem_unlink to remove the semaphore's name from the system; while this removes the pathname, it has no effect on the semaphore that is already open. We do this so that the pathname is removed from the filesystem even if the program crashes.
Set standard output unbuffered and fork:
We set standard output unbuffered because both the parent and the child will write to it; this keeps the output from the two processes from being intermixed incorrectly. Both the parent and the child run a loop that increments the counter the specified number of times, incrementing the variable only while the semaphore is held.
As can be seen, each process has its own copy of the global count: each starts with this variable at 0 and increments its own copy.
Shared memory is the fastest form of IPC available because a single copy of the data in the shared memory is available to all the threads or processes that share the memory. However, some form of synchronization is usually necessary to coordinate the various threads or processes sharing the memory.
Because memory mapping is one technique for sharing memory between related or unrelated processes, this chapter has focused on the mmap function and the mapping of regular files into memory. To access a file that has been memory-mapped, we no longer need to read, write, or seek; instead, we simply fetch from or store into the memory locations that mmap has mapped to the file.