Scheduling of processes/work is done to finish the work on time. CPU scheduling allows one process to use the CPU while another process is delayed (on standby) because some resource, such as I/O, is unavailable, thus making full use of the CPU. The purpose of CPU scheduling is to make the system more efficient, faster, and fairer.
Whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. The selection is carried out by the short-term (CPU) scheduler, which chooses among the processes in memory that are ready to execute and allocates the CPU to one of them.
Criteria of CPU Scheduling
CPU Scheduling has several
criteria. Some of them are mentioned below.
1) CPU Utilization
The main objective of any CPU scheduling algorithm is to keep the CPU as busy as possible. Theoretically, CPU utilization can range from 0 to 100 percent, but in a real system it generally varies from 40 to 90 percent depending on the load upon the system.
2) Throughput
A measure of the work done by
the CPU is the number of processes being executed and completed per unit of
time. This is called throughput. The throughput may vary depending on the
length or duration of the processes.
3) Turnaround Time
For a particular process, an
important criterion is how long it takes to execute that process. The time
elapsed from the time of submission of a process to the time of completion is
known as the turnaround time. Turnaround time is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and waiting for I/O.
Turnaround Time = Completion Time - Arrival Time.
4) Waiting Time
A scheduling algorithm does not
affect the time required to complete the process once it starts execution. It
only affects the waiting time of a process i.e. time spent by a process waiting
in the ready queue.
Waiting Time = Turnaround Time - Burst Time.
5) Response Time
In an interactive system, turnaround time is not the best criterion. A process may produce some output fairly early and continue computing new results while previous results are being output to the user. Thus, another criterion is the time taken from submission of a request until the first response is produced. This measure is called response time.
Response Time = CPU Allocation Time (when the CPU was allocated for the first time) - Arrival Time
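The formulas above can be checked with a small Python sketch; the time values here are made up purely for illustration:

```python
# Illustrative (made-up) values for a single process, all in the same time unit
arrival_time = 0       # when the process entered the ready queue
burst_time = 5         # total CPU time the process needs
first_cpu_at = 2       # when the CPU was allocated for the first time
completion_time = 12   # when the process finished executing

turnaround_time = completion_time - arrival_time  # time from submission to completion
waiting_time = turnaround_time - burst_time       # time spent waiting, not executing
response_time = first_cpu_at - arrival_time       # time until the first response

print(turnaround_time, waiting_time, response_time)  # 12 7 2
```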
6) Completion Time
The completion time is the time
when the process stops executing, which means that the process has completed
its burst time and is completely executed.
7) Priority
If the operating system assigns
priorities to processes, the scheduling mechanism should favor the
higher-priority processes.
8) Predictability
A given process should always run in about the same amount of time under a similar system load.
Scheduling Algorithms:
First-Come First-Served Scheduling (FCFS)
First in, first out (FIFO), also known as first come, first served (FCFS), is the simplest scheduling algorithm. FIFO simply queues processes in the order that they arrive in the ready queue. The process that arrives first is executed first, and the next process starts only after the previous one has fully executed.
Here we are considering that arrival time for all processes is 0.
1. Completion Time:
Time at which process completes its execution.
2. Turn Around Time:
Time Difference between completion time and arrival time. Turn Around Time =
Completion Time – Arrival Time
3. Waiting Time(W.T):
Time Difference between turn around time and burst time.
Waiting Time = Turn Around Time – Burst Time
Since we have assumed arrival times to be 0, the turnaround times and completion times are the same.
Implementation:
1- Input the processes along with their burst time (bt).
2- Find the waiting time (wt) for all processes.
3- The first process to arrive does not need to wait, so its waiting time is 0, i.e. wt[0] = 0.
4- Find the waiting time for all other processes, i.e. for process i: wt[i] = bt[i-1] + wt[i-1].
5- Find the turnaround time = waiting_time + burst_time for all processes.
6- Find the average waiting time = total_waiting_time / no_of_processes.
7- Similarly, find the average turnaround time = total_turn_around_time / no_of_processes.
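The steps above can be sketched in Python; this is a minimal illustration that assumes, as stated above, that all processes arrive at time 0, and the burst times in the usage example are made up:

```python
def fcfs(burst_times):
    """First-Come First-Served: compute waiting and turnaround times.

    Assumes all processes arrive at time 0, in queue order."""
    n = len(burst_times)
    wt = [0] * n                       # the first process never waits
    for i in range(1, n):
        wt[i] = wt[i - 1] + burst_times[i - 1]
    tat = [wt[i] + burst_times[i] for i in range(n)]
    return wt, tat, sum(wt) / n, sum(tat) / n

# Example with three processes and made-up burst times 10, 5, 8
wt, tat, avg_wt, avg_tat = fcfs([10, 5, 8])
print(wt, tat, avg_wt, avg_tat)   # [0, 10, 15] [10, 15, 23] 8.33... 16.0
```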
Shortest Job First (SJF):
Shortest job first (SJF), also known as shortest job next (SJN), is a scheduling policy that selects the waiting process with the smallest execution time to execute next. SJF can be preemptive or non-preemptive.
Characteristics of SJF Scheduling:
· Shortest job first has the advantage of having the minimum average waiting time among all scheduling algorithms.
· It is a greedy algorithm.
· It may cause starvation if shorter processes keep coming. This problem can be solved using the concept of aging.
· It is practically infeasible, as the operating system may not know the burst times and therefore cannot sort the processes. While it is not possible to know execution times in advance, they can be estimated by several methods, such as a weighted average of previous execution times.
· SJF can be used in specialized environments where accurate estimates of running time are available.
Algorithm:
· Sort all the processes according to arrival time.
· Select the process that has the minimum arrival time and the minimum burst time.
· After that process completes, form a pool of the processes that arrived while it was running, and from that pool select the process with the minimum burst time.
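The steps above can be sketched as a non-preemptive SJF simulation; the process tuples in the usage example are made up:

```python
def sjf(processes):
    """Non-preemptive SJF. processes: list of (pid, arrival_time, burst_time)."""
    pending = list(processes)
    time, completion = 0, {}
    while pending:
        # pool of processes that have arrived by the current time
        ready = [p for p in pending if p[1] <= time]
        if not ready:
            time = min(p[1] for p in pending)  # CPU idle: jump to next arrival
            continue
        # pick the process with the minimum burst time (arrival breaks ties)
        job = min(ready, key=lambda p: (p[2], p[1]))
        pending.remove(job)
        time += job[2]                         # run it to completion
        completion[job[0]] = time
    return completion

print(sjf([("P1", 0, 6), ("P2", 2, 2), ("P3", 3, 1)]))
# {'P1': 6, 'P3': 7, 'P2': 9}
```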

Shortest Remaining Time First (SRTF):
Shortest Remaining Time First (SRTF) is the preemptive version of the Shortest Job Next (SJN) algorithm, where the processor is allocated to the job closest to completion.
This algorithm requires advance knowledge of the CPU time needed to process each job, which is not available in an interactive system, and hence it cannot be implemented there. But in a batch system, where it is desirable to give preference to short jobs, the SRT algorithm is used.
However, SRT involves more overhead than SJN, as the OS is required to frequently monitor the CPU time of the jobs in the ready queue and perform context switching.
For the same set of jobs, the SRT algorithm completes execution faster than the SJN algorithm, provided that the overhead charges, i.e., the time required for context switching, are ignored. When a job is preempted, all of its processing information must be saved in its PCB so that it can be continued later, and the contents of the PCB of the job to which the OS is switching are loaded into the CPU registers. This is known as context switching.
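A unit-time SRTF sketch: every time unit the scheduler re-checks which job has the least remaining time, which models preemption. The example processes are made up:

```python
def srtf(processes):
    """Preemptive SJF (SRTF). processes: list of (pid, arrival_time, burst_time)."""
    rem = {pid: bt for pid, at, bt in processes}
    arrival = {pid: at for pid, at, bt in processes}
    time, completion = 0, {}
    while rem:
        ready = [pid for pid in rem if arrival[pid] <= time]
        if not ready:
            time = min(arrival[pid] for pid in rem)  # CPU idle: jump ahead
            continue
        pid = min(ready, key=lambda p: rem[p])  # the job closest to completion
        rem[pid] -= 1   # run it for one time unit, then re-evaluate
        time += 1
        if rem[pid] == 0:
            completion[pid] = time
            del rem[pid]
    return completion

print(srtf([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 2)]))
# {'P3': 4, 'P2': 7, 'P1': 14}
```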
Advantages:
· The SRTF algorithm makes the processing of jobs faster than the SJN algorithm, provided that its overhead of context switching is not counted.
Disadvantages:
· Context switches occur far more often in SRTF than in SJN and consume the CPU's valuable processing time. This adds to the total processing time and diminishes the advantage of fast processing.
Priority CPU Scheduling
Priority scheduling is one of the most common scheduling algorithms in batch systems. Each process is assigned a priority. The process with the highest priority is executed first, and so on. Processes with the same priority are executed on a first-come, first-served basis. Priority can be decided based on memory requirements, time requirements, or any other resource requirement; it can also be based on the ratio of average I/O burst time to average CPU burst time.
Implementation:
1- First input the processes with their burst times and priorities.
2- Sort the processes by priority (keeping each process's burst time with it).
3- Now simply apply the FCFS algorithm.
Note: A major problem with priority scheduling is
indefinite blocking or starvation. A solution to the problem of indefinite
blockage of the low-priority process is aging. Aging is a technique of
gradually increasing the priority of processes that wait in the system for a
long period of time.
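The three steps above can be sketched as follows, assuming (for illustration) that all processes arrive at time 0 and that a smaller number means a higher priority; the process values are made up:

```python
def priority_scheduling(processes):
    """Non-preemptive priority scheduling.

    processes: list of (pid, burst_time, priority); all arrive at time 0,
    and a lower priority number means higher priority (an assumption)."""
    time, completion = 0, {}
    for pid, burst, _prio in sorted(processes, key=lambda p: p[2]):
        time += burst          # FCFS over the priority-sorted order
        completion[pid] = time
    return completion

# Made-up example: (pid, burst, priority)
print(priority_scheduling([("P1", 10, 3), ("P2", 1, 1),
                           ("P3", 2, 4), ("P4", 1, 5), ("P5", 5, 2)]))
# {'P2': 1, 'P5': 6, 'P1': 16, 'P3': 18, 'P4': 19}
```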
Round Robin :
Round Robin is a CPU scheduling
algorithm where each process is assigned a fixed time slot in a cyclic way. It
is basically the preemptive version of First come First Serve CPU Scheduling
algorithm.
· The Round Robin CPU algorithm generally focuses on the time-sharing technique.
· The period of time for which a process or job is allowed to run in a preemptive method is called the time quantum.
· Each process or job present in the ready queue is assigned the CPU for that time quantum; if the execution of the process completes within that time, the process terminates, otherwise the process goes back to the ready queue and waits for its next turn to complete its execution.
Characteristics of Round Robin CPU Scheduling Algorithm
· It is simple, easy to implement, and starvation-free, as all processes get a fair share of the CPU.
· It is one of the most commonly used techniques in CPU scheduling.
· It is preemptive, as processes are assigned the CPU only for a fixed slice of time at most.
· Its disadvantage is the higher overhead of context switching.
Advantages of Round Robin CPU Scheduling Algorithm
· There is fairness, since every process gets an equal share of the CPU.
· Newly created processes are added to the end of the ready queue.
· A round-robin scheduler generally employs time-sharing, allotting each job a time slot or quantum.
· Each process gets a chance to be rescheduled after a particular quantum in this scheduling.
Disadvantages of Round Robin CPU Scheduling Algorithm
· There are larger waiting and response times.
· There is low throughput.
· There are frequent context switches.
· The Gantt chart becomes very large if the time quantum is small relative to the total work (for example, a 1 ms quantum for a long schedule).
· Scheduling is time-consuming for small quanta.
S. No | Process ID | Arrival Time | Burst Time
1     | P1         | 0            | 7
2     | P2         | 1            | 4
3     | P3         | 2            | 15
4     | P4         | 3            | 11
5     | P5         | 4            | 20
6     | P6         | 4            | 9
Assume Time Quantum TQ = 5
Ready Queue (order of dispatch):
P1, P2, P3, P4, P5, P6, P1, P3, P4, P5, P6, P3, P4, P5, P5
Gantt chart:
| P1 | P2 | P3 | P4 | P5 | P6 | P1 | P3 | P4 | P5 | P6 | P3 | P4 | P5 | P5 |
0    5    9    14   19   24   29   31   36   41   46   50   55   56   61   66
Average Completion Time
1. Average Completion Time = ( 31 +9 + 55 +56 +66 + 50 ) / 6
2. Average Completion Time = 267 / 6
3. Average Completion Time = 44.5
Average Waiting Time
1. Average Waiting Time = ( 24 + 4 + 38 + 42 + 42 + 37 ) / 6
2. Average Waiting Time = 187 / 6
3. Average Waiting Time = 31.16667
Average Turn Around Time
1. Average Turn Around Time = ( 31 + 8 + 53 + 53 + 62 + 46 ) / 6
2. Average Turn Around Time = 253 / 6
3. Average Turn Around Time = 42.16667
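The figures above can be reproduced with a small round-robin simulation. This is a sketch; it assumes the common convention that processes arriving during a time slice join the ready queue ahead of the preempted process:

```python
from collections import deque

def round_robin(processes, tq):
    """processes: list of (pid, arrival_time, burst_time). Returns completion times."""
    arrivals = deque(sorted(processes, key=lambda p: p[1]))
    rem = {pid: bt for pid, at, bt in processes}
    queue, time, completion = deque(), 0, {}
    while arrivals or queue:
        if not queue:                       # CPU idle: jump to the next arrival
            time = max(time, arrivals[0][1])
        while arrivals and arrivals[0][1] <= time:
            queue.append(arrivals.popleft()[0])
        pid = queue.popleft()
        run = min(tq, rem[pid])             # run for one quantum (or less)
        time += run
        rem[pid] -= run
        # processes that arrived during the slice enter before the preempted one
        while arrivals and arrivals[0][1] <= time:
            queue.append(arrivals.popleft()[0])
        if rem[pid]:
            queue.append(pid)               # not finished: back of the queue
        else:
            completion[pid] = time
    return completion

ct = round_robin([("P1", 0, 7), ("P2", 1, 4), ("P3", 2, 15),
                  ("P4", 3, 11), ("P5", 4, 20), ("P6", 4, 9)], tq=5)
print(ct)                                   # P1=31, P2=9, P3=55, P4=56, P5=66, P6=50
print(sum(ct.values()) / len(ct))           # 44.5 (average completion time)
```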
Multilevel Queue Scheduling
Each algorithm supports a different type of process, but in a general system some processes require scheduling using a priority algorithm. While some processes are interactive and remain in the system, others are background processes whose execution can be delayed.
The scheduling algorithms used between queues and within queues may differ between systems; a round-robin method with varying time quanta is typically used within the interactive queue. Multilevel queue scheduling is designed for circumstances where the processes can be readily separated into groups. For example, foreground (interactive) processes and background (batch) processes have different response-time and resource requirements and therefore need different scheduling algorithms. Foreground processes take priority over background processes.
With the multilevel queue scheduling technique, the ready queue is partitioned into several separate queues. Processes are assigned to one queue based on some property, such as memory size, process priority, or process type. Each queue has its own scheduling algorithm. Some queues are used for foreground processes, while others are used for background processes. For example, the foreground queue may be scheduled using a round-robin method, while the background queue is scheduled using an FCFS strategy.
Advantages and Disadvantages of Multilevel Queue Scheduling
There are various advantages and disadvantages of
multilevel queue scheduling. Some of the advantages and disadvantages of the
multilevel queue scheduling are as follows:
Advantages
1. You can use multilevel
queue scheduling to apply different scheduling methods to distinct processes.
2. It will have low
overhead in terms of scheduling.
Disadvantages
1. There is a risk of
starvation for lower priority processes.
2. It is rigid in nature.
Example
Let's take an example of a multilevel queue-scheduling
algorithm with five queues to understand how this scheduling works:

1. System process
2. Interactive processes
3. Interactive editing processes
4. Batch processes
5. Student processes
Every queue has absolute priority over the lower-priority queues. No process may execute until all higher-priority queues are empty. In the above
instance, no other process may execute until and unless the queues for system,
interactive, and editing processes are empty. If an interactive editing process
enters the ready queue while a batch process is underway, the batch process
will be preempted.
Here are descriptions of the processes used in the above example:
1) System Process
The OS has
its process to execute, which is referred to as the System Process.
2) Interactive Process
An interactive process is one that interacts frequently with the user and therefore requires a quick response time.
3) Batch Process
Batch
processing is an operating system feature that collects programs and data into
a batch before processing starts.
4) Student Process
The system
process is always given the highest priority, whereas the student processes are
always given the lowest.
Example Problem
Let's take an example of a multilevel queue scheduling (MQS) algorithm that shows how multilevel queue scheduling works. Consider the four processes listed in the table below. The queue number denotes each process's queue.
Process | Arrival Time | CPU Burst Time | Queue Number
P1      | 0            | 4              | 1
P2      | 0            | 3              | 1
P3      | 0            | 8              | 2
P4      | 10           | 5              | 1
Queue 1 has a higher priority than queue 2. Round
Robin is used in queue 1 (Time Quantum = 2), while FCFS is used in
queue 2.

Working:
1. Both queues have processes at the start. Therefore, queue 1 (P1, P2) runs first (due to greater priority) in a round-robin fashion and finishes after 7 units.
2. The process in queue 2 (process P3) then starts running (since there is no process in queue 1), but while it is executing, P4 enters queue 1 and preempts P3. P4 runs to completion, after which P3 regains the CPU and finishes its execution.
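The working above can be reproduced with a unit-time simulation of the two queues. This is a sketch under the stated rules: queue 1 runs round robin with time quantum 2 and absolutely preempts queue 2, which runs FCFS:

```python
from collections import deque

def mlq(processes, tq=2):
    """processes: list of (pid, arrival, burst, queue_number).
    Queue 1: round robin with quantum tq; queue 2: FCFS.
    Queue 1 has absolute priority and preempts any running queue-2 process."""
    rem = {p[0]: p[2] for p in processes}
    level = {p[0]: p[3] for p in processes}
    q1, q2 = deque(), deque()
    admitted, completion = set(), {}
    time, slice_used, current = 0, 0, None
    while len(completion) < len(processes):
        # admit newly arrived processes into their queues
        for pid, at, _bt, q in sorted(processes, key=lambda p: p[1]):
            if pid not in admitted and at <= time:
                admitted.add(pid)
                (q1 if q == 1 else q2).append(pid)
        # a queue-1 arrival preempts a running queue-2 process
        if current is not None and level[current] == 2 and q1:
            q2.appendleft(current)          # it resumes ahead of other FCFS work
            current = None
        if current is None:
            if q1:
                current, slice_used = q1.popleft(), 0
            elif q2:
                current, slice_used = q2.popleft(), 0
            else:
                time += 1                   # CPU idle
                continue
        rem[current] -= 1                   # run for one time unit
        time += 1
        slice_used += 1
        if rem[current] == 0:
            completion[current] = time
            current = None
        elif level[current] == 1 and slice_used == tq:
            q1.append(current)              # quantum expired: back of queue 1
            current = None
    return completion

print(mlq([("P1", 0, 4, 1), ("P2", 0, 3, 1), ("P3", 0, 8, 2), ("P4", 10, 5, 1)]))
# {'P1': 6, 'P2': 7, 'P4': 15, 'P3': 20}
```

Note that P1 and P2 finish at times 6 and 7 (queue 1 done after 7 units), and P3, preempted by P4's arrival at time 10, finishes last, matching the working above.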
Multilevel Feedback Queue Scheduling
Each algorithm supports a different type of process, but in a general system some processes require scheduling using a priority algorithm. In a multilevel queue there are separate queues for foreground and background operations, and processes do not move between queues or change their foreground or background nature; this type of organization has the benefit of low scheduling overhead, but it is inflexible.
The multilevel feedback queue strategy prioritizes operations that are interactive or I/O-bound, separating processes according to the characteristics of their CPU bursts. It enables a process to move between queues. If a process consumes too much processor time, it is moved to a lower-priority queue, while a process waiting too long in a lower-priority queue may be shifted to a higher-priority queue. This form of aging prevents starvation.
The parameters of the multilevel feedback queue scheduler
are as follows:
1. The scheduling algorithm for each queue in the system.
2. The number of queues in the system.
3. The method for determining when a process should be demoted to a lower-priority queue.
4. The method for determining when a process should be upgraded to a higher-priority queue.
5. The method for determining which queue a process will enter when it requires service.
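A toy two-queue sketch of the feedback idea. This is a simplification: all processes arrive at time 0, so aging/promotion is not modeled, and the burst values are made up:

```python
from collections import deque

def mlfq(burst, tq=2):
    """burst: {pid: burst_time}, all arriving at time 0.
    Queue 0 is round robin with quantum tq; a process that uses its whole
    quantum is demoted to queue 1, which is served FCFS (run to completion)."""
    high, low = deque(burst), deque()
    rem = dict(burst)
    time, completion = 0, {}
    while high or low:
        if high:                      # the higher-priority queue always goes first
            pid = high.popleft()
            run = min(tq, rem[pid])
        else:
            pid = low.popleft()
            run = rem[pid]            # FCFS: run to completion
        time += run
        rem[pid] -= run
        if rem[pid] == 0:
            completion[pid] = time
        else:
            low.append(pid)           # demotion: it consumed a full quantum
    return completion

print(mlfq({"A": 3, "B": 6, "C": 8}))
# {'A': 7, 'B': 11, 'C': 17}
```

Process A, the shortest, needs only one extra unit after its first quantum, so it finishes first once the high-priority queue drains; the CPU-hungry processes B and C sink to the lower queue.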
Thread Scheduling
Thread scheduling involves two boundaries of scheduling:
1. Scheduling of kernel-level threads (KLTs) by the system scheduler.
2. Scheduling of user-level threads (ULTs) onto kernel-level threads (KLTs) by means of lightweight processes (LWPs).
Lightweight process (LWP):
A lightweight process is a thread that acts as an interface for user-level threads to access the physical CPU resources. The number of lightweight processes depends on the type of application: for an I/O-bound application the number of LWPs depends on the number of user-level threads, while for a CPU-bound application each thread is connected to a separate kernel-level thread.
For the first boundary of thread scheduling, beyond specifying the scheduling policy and the priority, two controls need to be specified for the user-level threads:
1. Contention scope – Contention scope defines the extent to which contention takes place. Contention refers to the competition among the ULTs to access the KLTs.
Contention scope can be further classified
into Process Contention Scope (PCS) and System
Contention Scope (SCS).
· Process Contention Scope (PCS): contention takes place among the threads within the same process.
· System Contention Scope (SCS): contention takes place among all the threads in the system.
2. Allocation domain – The allocation domain is a set of multiple (or single) resources for which a thread competes.
Advantages of PCS over SCS:
The advantages of PCS over SCS are as follows:
1. It is cheaper.
2. It helps reduce system calls and achieve better performance.
3. If an SCS thread is part of more than one allocation domain, the system has to handle multiple interfaces.
4. A PCS thread can share one or more available LWPs, while every SCS thread needs a separate LWP; therefore, for every system call, a separate KLT will be created.