A solution to the problem of indefinite blockage of low-priority processes is aging. Aging is a technique of
gradually increasing the priority of processes that wait in the system for a long time. For example, if priorities
range from 127 (low) to 0 (high), we could decrement the priority of a waiting process by 1 every 15 minutes.
Eventually, even a process with an initial priority of 127 would have the highest priority in the system and would
be executed. In fact, it would take no more than 32 hours for a priority 127 process to age to a priority 0 process.
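The aging rule above can be sketched in a few lines. This is a minimal illustration, assuming the priority range 127 (low) to 0 (high) and one priority boost per 15 minutes of waiting, as in the example; the function name is illustrative, not from the text.

```python
def aged_priority(initial_priority: int, minutes_waiting: int) -> int:
    """Return a process's effective priority after aging (0 is highest)."""
    boosts = minutes_waiting // 15          # one boost per 15 minutes waited
    return max(0, initial_priority - boosts)

# A process starting at priority 127 needs 127 boosts, i.e.
# 127 * 15 = 1905 minutes (just under 32 hours) to reach priority 0.
```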
Round-Robin Scheduling
The round-robin (RR) scheduling algorithm is designed especially for timesharing systems. It is similar to
FCFS scheduling, but preemption is added to switch between processes. A small unit of time, called a time
quantum (or time slice), is defined. A time quantum is generally from 10 to 100 milliseconds. The ready queue is
treated as a circular queue. The CPU scheduler goes around the ready queue, allocating the CPU to each
process for a time interval of up to 1 time quantum.
To implement RR scheduling, we keep the ready queue as a FIFO queue of processes. New processes
are added to the tail of the ready queue. The CPU scheduler picks the first process from the ready queue, sets a
timer to interrupt after 1 time quantum, and dispatches the process.
One of two things will then happen. The process may have a CPU burst of less than 1 time quantum. In
this case, the process itself will release the CPU voluntarily. The scheduler will then proceed to the next process
in the ready queue. Otherwise, if the CPU burst of the currently running process is longer than 1 time quantum,
the timer will go off and will cause an interrupt to the operating system. A context switch will be executed, and
the process will be put at the tail of the ready queue. The CPU scheduler will then select the next process in the
ready queue.
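The mechanism just described can be simulated directly: a FIFO ready queue, a quantum-limited run, and preemption back to the tail. The sketch below is illustrative, assuming all processes arrive at time 0 with known burst times; the function and variable names are not from the text.

```python
from collections import deque

def round_robin(bursts: dict, quantum: int) -> dict:
    """Simulate RR scheduling; return each process's total waiting time."""
    remaining = dict(bursts)
    ready = deque(bursts)                  # FIFO ready queue (arrival order)
    waiting = {p: 0 for p in bursts}       # time spent waiting so far
    last_ready = {p: 0 for p in bursts}    # when each process last joined the queue
    clock = 0
    while ready:
        p = ready.popleft()
        waiting[p] += clock - last_ready[p]
        run = min(quantum, remaining[p])   # run for up to one time quantum
        clock += run
        remaining[p] -= run
        if remaining[p] > 0:               # quantum expired: preempt, requeue at tail
            last_ready[p] = clock
            ready.append(p)
    return waiting
```

Running it on the three-process example that follows (bursts 24, 3, and 3 ms with a 4 ms quantum) reproduces the book's waiting times.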
The average waiting time under the RR policy, however, is often quite long.
Consider the following set of processes that arrive at time 0, with the length of the CPU-burst time given
in milliseconds:
    Process   Burst Time
    P1            24
    P2             3
    P3             3
If we use a time quantum of 4 milliseconds, then process P1 gets the first 4 milliseconds. Since it requires
another 20 milliseconds, it is preempted after the first time quantum, and the CPU is given to the next process in
the queue, process P2. Since process P2 does not need 4 milliseconds, it quits before its time quantum expires.
The CPU is then given to the next process, process P3. Once each process has received 1 time quantum, the
CPU is returned to process P1 for an additional time quantum. The resulting RR schedule is:

    | P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
    0    4    7    10   14   18   22   26   30

The average waiting time is 17/3 = 5.66 milliseconds (P1 waits 6 milliseconds, P2 waits 4, and P3 waits 7).
In the RR scheduling algorithm, no process is allocated the CPU for more than 1 time quantum in a row.
If a process' CPU burst exceeds 1 time quantum, that process is preempted and is put back in the ready queue.
The RR scheduling algorithm is preemptive.
If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the
CPU time in chunks of at most q time units. Each process must wait no longer than (n - 1) × q time units until its
next time quantum. For example, if there are five processes, with a time quantum of 20 milliseconds, then each
process will get up to 20 milliseconds every 100 milliseconds.
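The numbers in the example follow directly from the bound. A quick check, with the variable names chosen here for illustration:

```python
# With n processes and time quantum q, a process waits at most
# (n - 1) * q time units between quanta, and one full round of the
# ready queue takes n * q time units.
n, q = 5, 20                  # five processes, 20-millisecond quantum
max_wait = (n - 1) * q        # longest wait until the next quantum: 80 ms
cycle = n * q                 # one full round of the ready queue: 100 ms
```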
The performance of the RR algorithm depends heavily on the size of the time quantum. At one extreme,
if the time quantum is very large (infinite), the RR policy is the same as the FCFS policy. If the time quantum is
very small (say 1 microsecond), the RR approach is called processor sharing, and appears (in theory) to the
users as though each of n processes has its own processor running at 1/n the speed of the real processor. This
approach was used in Control Data Corporation (CDC) hardware to implement 10 peripheral processors with
only one set of hardware and 10 sets of registers. The hardware executes one instruction for one set of registers,
then goes on to the next. This cycle continues, resulting in 10 slow processors rather than one fast processor.
Multilevel Queue Scheduling
Another class of scheduling algorithms has been created for situations in which processes are easily
classified into different groups. For example, a common division is made between foreground (or interactive)