Why do we require scheduling in an operating system? Briefly explain the various scheduling algorithms and their limitations.

When two or more processes are simultaneously in the ready state and only one CPU is available, a choice has to be made about which process to run next. The part of the operating system that makes this choice is called the scheduler, and the algorithm it uses is called the scheduling algorithm.
In a multiprogramming system, many processes compete for the CPU at the same time, so scheduling is needed to pick the right process to run next.
Scheduling is also important for making efficient use of the CPU, because process switching is expensive; too many process switches per second can consume a substantial amount of CPU time.
The goals of a good scheduling algorithm are:

  • Fairness: give each process a fair share of the CPU
  • Throughput: maximize the number of jobs completed per hour
  • Turnaround time: minimize the time between submission and termination
  • CPU utilization: keep the CPU busy all the time
  • Response time: respond to requests quickly

Scheduling algorithms can be divided into two categories with respect to how they deal with clock interrupts:
Non-preemptive: picks a process to run and then lets it run until it blocks or voluntarily releases the CPU.
Preemptive: picks a process and lets it run for at most some fixed amount of time; if it is still running when that time expires, it is suspended and another process is chosen.
The different types of scheduling algorithms are discussed below:
First-in-first-out:

  • Non-preemptive
  • Average waiting time can be long
  • Processes are kept in a single ready queue
  • The great strength of this algorithm is that it is easy to understand and equally easy to program
  • Disadvantage: a long process at the head of the queue makes every process behind it wait, so a job can take much longer to finish than under a preemptive algorithm

Example:

Process       P1   P2   P3
Burst time    24    3    3

Let the processes arrive in the order P1, P2, P3.
Execution order: P1 (0–24), P2 (24–27), P3 (27–30)
Average waiting time = (0 + 24 + 27) / 3 = 17 units
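
As a minimal sketch (Python, assuming the burst times above and that all processes arrive at time 0), the FCFS waiting times can be computed like this:

```python
# FCFS: processes run in arrival order; a process's waiting time is the
# sum of the bursts of all processes that ran before it.
def fcfs_waiting_times(bursts):
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)   # time spent waiting in the ready queue
        elapsed += burst        # CPU is busy for this process's burst
    return waits

waits = fcfs_waiting_times([24, 3, 3])        # P1, P2, P3
print(waits, sum(waits) / len(waits))         # [0, 24, 27] -> 17.0
```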
Shortest job next scheduling:

  • Non-preemptive
  • Suitable for batch processing
  • Assigns the CPU to the process with the shortest CPU burst requirement

Example:

Process       P1   P2   P3   P4
Burst time     6    8    7    3

Scheduling is done in the order: P4 (0–3), P1 (3–9), P3 (9–16), P2 (16–24)
Average waiting time = (0 + 3 + 9 + 16) / 4 = 7 units
Moving a short job before a long one decreases the waiting time for the short job more than it increases the waiting time for the longer one.
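
A minimal sketch (Python, assuming all jobs are available at time 0, which SJF needs to be optimal): sort the jobs by burst time and then compute waiting times the same way as FCFS.

```python
# SJF: with all jobs available at time 0, run them in order of
# increasing burst time, then accumulate waiting times as in FCFS.
def sjf_waiting_times(bursts):
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waits, elapsed = [0] * len(bursts), 0
    for i in order:
        waits[i] = elapsed
        elapsed += bursts[i]
    return waits

waits = sjf_waiting_times([6, 8, 7, 3])       # P1, P2, P3, P4
print(waits, sum(waits) / len(waits))         # [3, 16, 9, 0] -> 7.0
```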
Problem:

  • It is difficult to determine the length of the next CPU burst for a job
  • The run time has to be known in advance
  • Shortest job first is only optimal when all the jobs are available simultaneously

Shortest remaining time:

  • preemptive

  • Highest priority is given to the process that needs the least time to complete

For example, consider processes P1–P4 with arrival times 0, 1, 2, 3 and burst times 8, 4, 9, 5. The schedule for execution is:
P1 (0–1), P2 (1–5), P4 (5–10), P1 (10–17), P3 (17–26)
 
Here, the run time has to be known in advance. When a new job arrives, its total time is compared to the current process's remaining time. If the new job needs less time to finish than the current process, the current process is suspended and the new job is started.
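
A minimal simulation sketch (Python, using the assumed arrival times 0, 1, 2, 3 and burst times 8, 4, 9, 5 from the example above): step one time unit at a time and always run the arrived process with the least remaining time.

```python
# SRTN: at every time unit, run the arrived process with the least
# remaining time; a newly arrived shorter job preempts the running one.
def srtn_timeline(arrivals, bursts):
    remaining = list(bursts)
    timeline, t = [], 0
    while any(remaining):
        ready = [i for i in range(len(bursts))
                 if arrivals[i] <= t and remaining[i] > 0]
        if not ready:            # no process has arrived yet: CPU stays idle
            timeline.append(None)
        else:
            i = min(ready, key=lambda j: remaining[j])
            timeline.append(i)   # process i occupies this time unit
            remaining[i] -= 1
        t += 1
    return timeline

# Arrivals/bursts assumed from the intervals in the example above.
tl = srtn_timeline([0, 1, 2, 3], [8, 4, 9, 5])
print(tl)   # 0-based indices: P1 runs 0-1, P2 1-5, P4 5-10, P1 10-17, P3 17-26
```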
Round Robin algorithm:

  • Preemptive
  • Preemption is based on time slices (time quanta)
  • All user processes are treated as having the same priority
  • If a process's CPU burst is shorter than one quantum, it releases the CPU voluntarily
  • Problem: performance depends heavily on the size of the time quantum
  • Switching from one process to another requires a certain amount of time for administration: saving and loading registers and memory maps, updating various tables and lists, flushing and reloading the memory cache, and so on. If switches are frequent, more CPU time is wasted on this administrative overhead.
  • Setting the quantum too short causes too many process switches and lowers CPU efficiency, but setting it too long may cause poor response to short interactive requests.
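
A minimal round-robin sketch (Python, with a hypothetical quantum of 4 units and the SJF burst times reused purely for illustration), showing how the ready queue is rotated and waiting time accumulates between turns:

```python
from collections import deque

# Round robin: each process runs for at most one quantum, then goes to
# the back of the ready queue if it still has work left.
def round_robin(bursts, quantum):
    queue = deque(range(len(bursts)))
    remaining = list(bursts)
    waits, t = [0] * len(bursts), 0
    last_ready = [0] * len(bursts)       # when each process last became ready
    while queue:
        i = queue.popleft()
        waits[i] += t - last_ready[i]    # time spent waiting since last run
        run = min(quantum, remaining[i])
        t += run
        remaining[i] -= run
        if remaining[i] > 0:             # quantum expired: re-queue the process
            last_ready[i] = t
            queue.append(i)
    return waits

waits = round_robin([6, 8, 7, 3], quantum=4)
print(waits, sum(waits) / len(waits))    # [11, 13, 17, 12] -> 13.25 here
```

Note that this sketch ignores the context-switch overhead discussed above; with a real, nonzero switch cost the effective CPU efficiency drops as the quantum shrinks.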

Priority scheduling:

  • Preemptive
  • Each process is assigned a priority, and the runnable process with the highest priority is allowed to run
  • Priorities can be assigned to processes statically or dynamically
  • If priorities are not adjusted occasionally, lower-priority classes may all starve to death
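
A minimal sketch (Python, simplified to a non-preemptive run-to-completion variant with hypothetical static priorities, where a larger number means higher priority and all processes are ready at time 0), including a simple aging step so that low-priority processes do not starve:

```python
# Priority scheduling with aging: always pick the highest-priority ready
# process; bump the priority of the processes left waiting so none starves.
def priority_schedule(bursts, priorities, aging=1):
    remaining = list(bursts)
    prio = list(priorities)
    order = []
    while any(remaining):
        ready = [i for i in range(len(bursts)) if remaining[i] > 0]
        i = max(ready, key=lambda j: prio[j])   # larger number = higher priority
        order.append(i)
        remaining[i] = 0                        # run the chosen process to completion
        for j in ready:
            if j != i:
                prio[j] += aging                # age the processes still waiting
    return order

# Hypothetical priorities for P1-P4; prints the order the processes run in.
print(priority_schedule([6, 8, 7, 3], [2, 1, 4, 3]))   # [2, 3, 0, 1]
```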
