Even the technique of swapping was not enough, however, and the owners of these expensive machines wanted to get more work done for the money they were spending. So the OS designers began to search for better ways to organize the processing.

[Figure: A multiple-process OS with a fixed number of processes.]

Primary memory was continuing to get cheaper, so designers began thinking about ways to keep multiple programs in primary memory and run more than one by alternating between them without swapping them out, since swapping is an operation that requires a lot of resources. Eventually, they realized that the relocation register could run a program anywhere in memory, not only just above the resident OS, so they moved to a memory organization that takes advantage of this. At first, the base register had been used only to keep applications from harming the OS.

Then the use of this register was changed to a relocation register, primarily to solve the problem of the growth of the OS. Now when the OS is running multiple programs and one program does an I/O operation to some slow device, the OS simply puts the memory address of the second program in the relocation register and starts to run the second program. (It does more than that, but here we are focused just on the memory aspects.)
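The mechanism above can be sketched in a few lines. This is a simplified model, not any real hardware: the point is only that the hardware adds the relocation (base) register to every logical address, so switching between resident programs is just loading a different base value.

```python
# Simplified model of a relocation (base) register.
# Every logical address a program issues is added to the base register,
# so the same program can run anywhere in physical memory, and the OS
# switches programs by reloading the base register.

class CPU:
    def __init__(self):
        self.base = 0  # relocation register

    def load(self, memory, logical_addr):
        # Hardware translates logical -> physical on every access.
        return memory[self.base + logical_addr]

memory = [0] * 200
memory[0:3] = [10, 11, 12]      # program A loaded at physical address 0
memory[100:103] = [20, 21, 22]  # program B loaded at physical address 100

cpu = CPU()
cpu.base = 0                    # dispatch program A
a = cpu.load(memory, 1)         # logical 1 -> physical 1 -> 11
cpu.base = 100                  # A blocks on slow I/O: dispatch program B
b = cpu.load(memory, 1)         # logical 1 -> physical 101 -> 21
print(a, b)                     # 11 21
```

Both programs issue the same logical address 1, yet each reads its own memory, which is exactly what lets the OS run a second program while the first waits on I/O.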

Internal fragmentation

When this type of OS is installed, the administrator decides how much memory to set aside for each program area, or partition. The OS does not change the size of these partitions as the system runs. With the earlier OS models, a program might not use all of the memory; if it didn't, we didn't worry about it. Now the OS is trying to fit more programs into the same memory. If we have set aside 100 KB for each partition and we want to run a program that needs only 6 KB, then we are wasting the rest of the space in that partition. This unused space is called internal fragmentation.
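Using the numbers from the text (100 KB partitions, a 6 KB program), the waste is easy to compute:

```python
# Internal fragmentation in a fixed-partition scheme: the space inside a
# partition that the resident program does not use. Sizes in KB, taken
# from the example in the text.

PARTITION_KB = 100

def internal_fragmentation(program_kb, partition_kb=PARTITION_KB):
    """Unused space inside the partition holding this program."""
    if program_kb > partition_kb:
        raise ValueError("program does not fit in the partition")
    return partition_kb - program_kb

print(internal_fragmentation(6))   # 94 KB wasted in that partition
```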


We might set up a small partition or two to run small quick jobs and a larger partition or two for our big applications. This would tend to minimize the space wasted due to internal fragmentation. If the administrator is clever about setting up the partition sizes, then the programs that are running will come close to filling primary memory and we will have a better chance of keeping that expensive hardware fully utilized.
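The administrator's job of matching jobs to partition sizes can be illustrated with a small sketch (the partition and job sizes here are hypothetical, not from the text): placing each job in the smallest partition that fits it keeps the total internal fragmentation down.

```python
# Best-fit placement of jobs into fixed partitions (sizes in KB).
# Partition sizes are chosen by the administrator at install time and
# never change; the OS only decides which job goes where.

partitions = [8, 8, 64, 128]   # a couple of small and a couple of large
jobs = [6, 50, 7, 120]

free = sorted(partitions)
waste = 0
for job in sorted(jobs, reverse=True):
    # best fit: the smallest free partition that can hold the job
    fit = next(p for p in free if p >= job)
    free.remove(fit)
    waste += fit - job           # internal fragmentation for this job

print(waste)                     # total internal fragmentation: 25 KB
```

With one big 128 KB partition for everything instead, the same four jobs would waste far more space, which is why a clever mix of partition sizes matters.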

Time Sharing

Another case where swapping is utilized is in systems that are designed to support many users at terminals in a mode called time-sharing. When users are interactively editing programs and testing them, the vast majority of the time that process is waiting on the user at the terminal. In this case, the system can swap out the process while the user is thinking or keying. In the case that was described in Section 10.3, there was only one partition and thus only one process actually executing; any other processes could be swapped out to secondary storage.

In the case of time sharing it is more likely that we will have several partitions, perhaps even many partitions. We might keep in memory only the ones that are not waiting for the user to finish entering a line and are either running, ready to run, or waiting on something other than terminal I/O. The fixed size of the partitions wastes memory, of course. Recall the internal fragmentation that we just discussed.
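The swap-out decision described above can be sketched as a simple filter over process states (the state names here are assumptions for illustration, not a real scheduler's vocabulary): processes blocked on terminal input are candidates for swapping out, while the rest stay resident.

```python
# Time-sharing swap policy sketch: keep in memory only the processes
# that can make progress; swap out those waiting on a user at a terminal.

RESIDENT_STATES = {"running", "ready", "waiting_disk"}  # anything but terminal I/O

processes = {
    "editor1": "waiting_terminal",
    "compile": "running",
    "editor2": "waiting_terminal",
    "batch":   "ready",
}

keep = [p for p, s in processes.items() if s in RESIDENT_STATES]
swap_out = [p for p, s in processes.items() if s not in RESIDENT_STATES]

print(keep)      # ['compile', 'batch']
print(swap_out)  # ['editor1', 'editor2']
```

Since a user may think or type for seconds at a time while the CPU works in microseconds, swapping out the two editor processes frees their partitions for jobs that can actually use the processor.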

In that case, we had fragmentation of only a single partition. Now we have internal fragmentation in every partition. We would like to be able to use those fragments; if we saved enough memory, then maybe we could run another program and keep the CPU busier. Although these techniques worked well enough for the time, modern time-sharing systems generally use more sophisticated memory-management techniques.
