Examples of shared memory systems

In many operating systems, it is possible for two (or more) processes running on the same machine to ask the OS to allow them to share access to a block of memory, a technique known as shared memory. Usually, this is done by having one process call the OS and ask for a segment of memory (of some specified length) to be allocated and given a name. Other processes wishing to share this memory will have to know the same name. They will then give the OS the name and ask for access to the segment. The memory address settings of both processes will be altered so that they can have access to the shared block of memory.
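As an illustration, the following minimal sketch uses the POSIX shared-memory calls (shm_open, ftruncate, and mmap); the segment name /demo_counter and its size are arbitrary choices made for this example, not anything mandated by the technique.

```c
/* Sketch: one process creates a named shared-memory segment; any other
 * process that knows the same name can map it too. POSIX API assumed. */
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    const char *name = "/demo_counter";      /* name both processes agree on */
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    if (fd == -1) { perror("shm_open"); return 1; }

    if (ftruncate(fd, sizeof(int)) == -1) {  /* set the segment length */
        perror("ftruncate"); return 1;
    }

    /* Map the segment into this process's address space. */
    int *shared = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
    if (shared == MAP_FAILED) { perror("mmap"); return 1; }

    *shared = 8;          /* any process mapping "/demo_counter" sees this */

    munmap(shared, sizeof(int));
    close(fd);
    /* shm_unlink(name) would remove the segment when it is no longer needed. */
    return 0;
}
```

A second process that knows the name /demo_counter would call shm_open without O_CREAT and mmap the same length to gain access to the same block.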

The exact mechanism for coordinating access to the shared block depends on the application. Some applications are very simple and will not need complex synchronization to make sure that the two processes do not interfere with one another. Other systems may require more elaborate synchronization mechanisms to control access to the data in the shared memory block. This topic will be elaborated upon in the next section. You should recall that while separate processes must use such an elaborate mechanism to share memory, threads within a single process always share their memory by definition. A special case of shared memory is sometimes provided in the form of memory-mapped files, a slight modification of the shared memory technique.

In this case, the initiating process calls the OS and gives it the name of a file on secondary storage. The OS will locate the file and will allocate space in the calling process's address space to contain the entire file. However, the file will not immediately be loaded into memory. As parts of the shared file are accessed for the first time, the hardware will signal the OS, which will load the appropriate portion of the file into memory and then resume running the process.
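The same idea can be sketched with mmap on a POSIX system; the file name records.dat is a placeholder used only for the illustration.

```c
/* Sketch: map an existing file into memory. Pages are loaded on demand:
 * the first access to an unmapped page faults, the OS reads that portion
 * of the file into memory, and the process then resumes running. */
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    int fd = open("records.dat", O_RDWR);    /* placeholder file name */
    if (fd == -1) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) == -1) { perror("fstat"); return 1; }

    /* Reserve address space for the whole file; nothing is read yet. */
    char *data = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); return 1; }

    /* Touching a byte for the first time triggers the on-demand load. */
    printf("first byte: %c\n", data[0]);

    munmap(data, st.st_size);
    close(fd);
    return 0;
}
```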

The problem

Now that we have an understanding of why processes need to communicate and some of the mechanisms they can use to do so, we need to turn our attention to a problem that can occur when two processes want to share a piece of data in memory. Consider the following example where two processes, A and B, are using a buffer to communicate and are attempting to update a shared record counter, X, which initially has the value 8. Process A has put a record into the buffer and is trying to increment the counter by adding 1 to X.

Process B has taken a record out of the buffer, so it is trying to decrement the counter by subtracting 1 from X. So process A has an instruction X = X + 1 and process B has an instruction X = X - 1. After these two instructions execute we would expect X to still contain the value 8. However, there is a small potential problem. The high-level language instructions we have shown are normally broken into three separate machine instructions: a Load into a register, an Add or Subtract, and a Store back into memory. Consider the following execution sequence. Process A loads the value of X into register A, so register A contains an 8. Process A is now interrupted because its time slice has finished. Its registers are saved in the PCB for process A.

Now process B gets a time slice. It loads the value of X into register A, so register A contains an 8. Process B subtracts 1 from register A, yielding a 7, and stores it in the memory location for X, leaving X at 7. It continues on. Eventually, process A gets another time slice. The registers for process A are restored from its PCB, so register A once again contains an 8. Process A adds 1 to register A, giving a value of 9, which it stores in the memory location for X, leaving X at 9. This result is not what we were expecting. To make matters even worse, this problem is timing-dependent: there is only a very small window in which it will occur.
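The interleaving just described can be written out step by step. The sketch below simulates it sequentially in a single program purely to make the ordering visible; the variables standing in for the two processes' registers are invented for the illustration.

```c
#include <stdio.h>

int main(void)
{
    int X = 8;              /* the shared record counter                     */
    int regA_A, regA_B;     /* "register A" of process A and of process B    */

    regA_A = X;             /* A: load X        (A's register A == 8)        */
                            /* A is interrupted; its registers go to its PCB */
    regA_B = X;             /* B: load X        (B's register A == 8)        */
    regA_B = regA_B - 1;    /* B: subtract 1    (== 7)                       */
    X = regA_B;             /* B: store         (X == 7)                     */
                            /* A resumes; its registers are restored (8)     */
    regA_A = regA_A + 1;    /* A: add 1         (== 9)                       */
    X = regA_A;             /* A: store         (X == 9, not the expected 8) */

    printf("final X = %d\n", X);   /* prints 9 */
    return 0;
}
```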

Almost all the time these two processes will share this variable perfectly well, assuming this was the only modification they were making to it. This kind of problem is quite hard to debug because it is intermittent. It is called a race condition. A race condition, or race hazard, is a defect in a design whereby the output of the system depends on the sequence or timing of other events. The problem can also occur in a multiprocessor system when process A is running on one CPU and process B is running on another. Regardless of the cause, we need a solution. Although systems with multiple CPUs were once uncommon outside of high-end servers, the focus of the current generation of CPU chips is to put multiple CPU cores on a single chip, so this sort of problem will become more common.
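To see how intermittent the failure is, the sketch below uses two threads (which share memory by definition, so they exhibit the same race) hammering an unprotected counter; the iteration count is arbitrary, and the final value will typically differ from run to run.

```c
/* Sketch: two threads race on an unprotected shared counter.
 * X "should" end at 8, but on most runs it will not, and the exact
 * final value changes from run to run. Compile with -pthread.        */
#include <pthread.h>
#include <stdio.h>

#define ITERATIONS 1000000

int X = 8;                               /* shared, unprotected counter */

void *producer(void *arg)                /* plays the role of process A */
{
    (void)arg;
    for (long i = 0; i < ITERATIONS; i++)
        X = X + 1;                       /* load, add, store */
    return NULL;
}

void *consumer(void *arg)                /* plays the role of process B */
{
    (void)arg;
    for (long i = 0; i < ITERATIONS; i++)
        X = X - 1;                       /* load, subtract, store */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, producer, NULL);
    pthread_create(&b, NULL, consumer, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("X = %d (expected 8)\n", X);
    return 0;
}
```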

Atomic operations

The trick we need is to make those operations atomic. This word means that the operation we are doing is indivisible; specifically, it cannot be interrupted by another process that wants to update the same information. One possible solution might be to compile the statement into a single uninterruptible instruction. For example, some machines can add 1 to (or subtract 1 from) a variable in memory without loading it into a register. However, when we are writing our program in a higher-level language we do not want to worry about such hardware details; we would like to be able to move our program to a different machine that perhaps does not have such an instruction. So we have to use a more general solution.
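In C, for instance, one portable way to express such an atomic update without tying the program to a particular machine instruction is the C11 <stdatomic.h> facility; a minimal sketch, reusing the counter X from the example above:

```c
/* Sketch: the compiler chooses an appropriate atomic read-modify-write
 * sequence for the target machine, so the increment and decrement
 * cannot be torn apart by an ill-timed interleaving.                  */
#include <stdatomic.h>
#include <stdio.h>

atomic_int X = 8;                 /* shared record counter */

void record_put(void)  { atomic_fetch_add(&X, 1); }   /* X = X + 1, atomically */
void record_take(void) { atomic_fetch_sub(&X, 1); }   /* X = X - 1, atomically */

int main(void)
{
    record_put();
    record_take();
    printf("X = %d\n", atomic_load(&X));   /* always 8 */
    return 0;
}
```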

Locks and critical sections

Sometimes our processes will be sharing a single variable and sometimes they will be sharing a more elaborate structure. Sometimes we will be doing a single operation and sometimes we will be doing more complex operations. Sometimes we will have only two processes trying to share a single resource and sometimes we will have many. Sometimes we will be trying to share a resource for which there are multiple instances and sometimes there will be only one instance of the resource. The general technique we will use is a special kind of variable called a lock to control access to the shared variable. A lock is also sometimes called a mutex since it can be used to provide mutually exclusive access to the item the lock is protecting. When we look at a process that is using a lock we can think of the program as having four parts: an entry section, where the process acquires the lock; a critical section, where it accesses the shared data; an exit section, where it releases the lock; and the remainder of the program, which does not touch the shared data.
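A minimal sketch of those four parts, using a POSIX mutex as the lock; the thread function and variable names are invented for the illustration.

```c
/* Sketch: the four parts of a process (here, a thread) that uses a lock. */
#include <pthread.h>

int X = 8;                                            /* shared counter   */
pthread_mutex_t X_lock = PTHREAD_MUTEX_INITIALIZER;   /* the lock (mutex) */

void *worker(void *arg)
{
    (void)arg;

    /* 1. Entry section: acquire the lock (blocks if someone else holds it). */
    pthread_mutex_lock(&X_lock);

    /* 2. Critical section: the code that touches the shared data.           */
    X = X + 1;

    /* 3. Exit section: release the lock so other processes may enter.       */
    pthread_mutex_unlock(&X_lock);

    /* 4. Remainder: code that does not touch the shared data.               */
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);
    pthread_join(t, NULL);
    return 0;
}
```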

Hardware locking instructions

There are many ways that the entry and exit sections can be coded. In a very simple embedded OS or in the kernel of an OS we may use special machine instructions to lock and unlock the locks. These instructions are themselves atomic. There are only a few common variants. One is a Test and Set instruction.

This instruction will set a variable in memory to a nonzero value (i.e., lock it) and will also return a result that tells us whether the lock was already set before we executed this instruction. If the lock was already set then we did not gain access to the lock, so we just retry the operation, as in the sketch below. Note that the scope of the while statement in that loop is null (the ;), so the loop does nothing but wait until the lock is found to be clear when the instruction is executed. This type of coding is sometimes called a spin-lock or busy-waiting. Another common atomic instruction is a Swap instruction. It exchanges the values of two variables in a single step. This instruction is slightly more powerful than the Test and Set instruction in the sense that it can put any value into the lock variable.
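A sketch of both variants, expressed here with the C11 atomic operations (atomic_flag_test_and_set standing in for a hardware Test and Set, atomic_exchange for a hardware Swap):

```c
/* Sketch: busy-waiting (spin-lock) entry and exit sections built on
 * atomic hardware-style primitives, expressed with C11 <stdatomic.h>. */
#include <stdatomic.h>

/* Test and Set variant. */
atomic_flag lock1 = ATOMIC_FLAG_INIT;

void enter1(void)
{
    while (atomic_flag_test_and_set(&lock1))
        ;                    /* null body: spin until the lock was clear */
}

void exit1(void)
{
    atomic_flag_clear(&lock1);   /* unlock */
}

/* Swap (exchange) variant: put 1 into the lock and look at what was there. */
_Atomic int lock2 = 0;

void enter2(void)
{
    while (atomic_exchange(&lock2, 1) != 0)
        ;                    /* keep swapping until the old value was 0 */
}

void exit2(void)
{
    atomic_store(&lock2, 0);     /* unlock */
}
```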
