
Priority and multilevel queue

Priority


•Priority scheduling is a more general case of SJF, in which each job is assigned a priority and the job with the highest priority gets scheduled first. (SJF uses the inverse of the next expected burst time as its priority – the smaller the expected burst, the higher the priority.)
•Note that in practice, priorities are implemented using integers within a fixed range, but there is no agreed-upon convention as to whether “high” priorities use large numbers or small numbers. This book uses low numbers for high priorities, with 0 being the highest possible priority.
•For example, scheduling the following processes by priority gives the order P2, P5, P1, P3, P4 and an average waiting time of 8.2 ms:

Process   Burst Time (ms)   Priority
P1        10                3
P2        1                 1
P3        2                 4
P4        1                 5
P5        5                 2

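The schedule and average waiting time above can be reproduced with a short simulation. This is a minimal sketch of non-preemptive priority scheduling, assuming all jobs arrive at time 0 and that a lower number means a higher priority (as in this book's convention):

```python
def priority_schedule(jobs):
    """jobs: list of (name, burst, priority) tuples.
    Returns (execution order, average waiting time)."""
    # Highest priority (lowest number) runs first.
    order = sorted(jobs, key=lambda j: j[2])
    time, waits = 0, {}
    for name, burst, _ in order:
        waits[name] = time      # waiting time = start time, since all arrive at t=0
        time += burst
    avg_wait = sum(waits.values()) / len(waits)
    return [j[0] for j in order], avg_wait

jobs = [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4), ("P4", 1, 5), ("P5", 5, 2)]
order, avg = priority_schedule(jobs)
print(order, avg)   # ['P2', 'P5', 'P1', 'P3', 'P4'] 8.2
```

Running this confirms the 8.2 ms average waiting time quoted above.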

•Priorities can be assigned either internally or externally. Internal priorities are assigned by the OS using criteria such as average burst time, ratio of CPU to I/O activity, system resource use, and other factors available to the kernel. External priorities are assigned by users, based on the importance of the job, fees paid, politics, etc.

•Priority scheduling can be either preemptive or non-preemptive.
•Priority scheduling can suffer from a major problem known as indefinite blocking, or starvation, in which a low-priority task can wait forever because there are always other jobs around that have higher priority.
- If this problem is allowed to occur, then processes will either run eventually when the system load lightens (at, say, 2:00 a.m.), or will eventually get lost when the system is shut down or crashes. (There are rumors of jobs that have been stuck for years.)
- One common solution to this problem is aging, in which the priority of a job increases the longer it waits. Under this scheme a low-priority job will eventually get its priority raised high enough that it gets run.
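Aging can be sketched in a few lines. The policy below is an assumption for illustration (decrease each waiting job's priority number by one per clock tick, raising its priority); real systems vary in how fast and how far priorities are boosted:

```python
def age(ready_queue):
    """One aging tick: raise the priority of every waiting job.
    Lower number = higher priority; 0 is the highest possible."""
    for job in ready_queue:
        job["priority"] = max(0, job["priority"] - 1)

# A job stuck at a low priority is eventually boosted to the top.
queue = [{"name": "stuck-job", "priority": 10}]
for _ in range(10):
    age(queue)
print(queue[0]["priority"])  # 0
```

After enough ticks the starved job reaches priority 0 and is guaranteed to be chosen next.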

 

Multilevel Queue

Multilevel Queue Scheduling


•When processes can be readily categorized, multiple separate queues can be established, each implementing whatever scheduling algorithm is most appropriate for that type of job, and/or with different parametric adjustments.
•Scheduling must also be done between queues, that is, scheduling one queue to get time relative to the other queues. Two common options are strict priority (no job in a lower-priority queue runs until all higher-priority queues are empty) and round-robin (each queue gets a time slice in turn, possibly of different sizes).
•Note that under this algorithm jobs cannot switch from queue to queue – once they are assigned a queue, that is their queue until they finish.
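The strict-priority option between queues can be sketched as follows (queue names and the three-level layout are illustrative assumptions, e.g. system, interactive, and batch jobs):

```python
from collections import deque

# Three fixed queues; index 0 is the highest-priority queue.
queues = [deque(), deque(), deque()]

def pick_next(queues):
    """Strict priority between queues: no job in a lower queue runs
    until every higher-priority queue is empty."""
    for q in queues:          # scan from highest-priority queue down
        if q:
            return q.popleft()
    return None               # all queues empty: CPU is idle

queues[2].append("batch-job")     # arrives first, but in the lowest queue
queues[0].append("system-job")    # arrives later, in the highest queue
print(pick_next(queues))  # system-job
```

Note that `pick_next` never looks at the batch queue while the system queue is non-empty, which is exactly why starvation is possible under strict priority.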


 

 

 Multilevel Feedback Queue Scheduling

 
•Multilevel feedback queue scheduling is similar to the ordinary multilevel queue scheduling described above, except jobs may be moved from one queue to another for a variety of reasons:
- If the characteristics of a job change between CPU-intensive and I/O-intensive, then it may be appropriate to switch the job from one queue to another.
- Aging can also be incorporated, so that a job that has waited for a long time can get bumped up into a higher-priority queue for a while.
•Multilevel feedback queue scheduling is the most flexible, because it can be tuned for any situation. But it is also the most complex to implement, because of all the adjustable parameters. Some of the parameters that define such a system include:
-The number of queues
-The scheduling algorithm for each queue
- The methods used to upgrade or demote processes from one queue to another (which may differ between queues).
-The method used to determine which queue a process enters initially.
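Putting those parameters together, here is a minimal multilevel feedback queue sketch. The specific choices are assumptions for illustration: three queues with time quanta of 1, 2, and 4; every job enters the top queue; a job that uses its full quantum without finishing is demoted one level; aging is omitted:

```python
from collections import deque

QUANTA = [1, 2, 4]  # time quantum per queue level (a chosen parameter)

def run_mlfq(jobs):
    """jobs: dict mapping name -> remaining burst time.
    Returns the execution trace as (name, time_slice) pairs."""
    queues = [deque(jobs), deque(), deque()]  # all jobs start in the top queue
    trace = []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest non-empty queue
        name = queues[level].popleft()
        slice_ = min(QUANTA[level], jobs[name])
        jobs[name] -= slice_
        trace.append((name, slice_))
        if jobs[name] > 0:
            # Used its full quantum without finishing: demote one level
            # (jobs in the bottom queue stay there).
            queues[min(level + 1, len(queues) - 1)].append(name)
    return trace

print(run_mlfq({"A": 5, "B": 2}))
```

Notice how the long job A sinks toward the bottom queue while the short job B finishes from a higher queue, which is the intended behavior: CPU-bound jobs drift down, short or I/O-bound jobs stay near the top.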

