What is Process Scheduling in an Operating System? Types

 
             






In computing, a process is an instance of a computer program that is being executed by one or more threads. Scheduling is crucial in many computing settings, and one of its most important aspects is selecting which programs run on the CPU. The computer's operating system (OS) is responsible for this duty, and there are numerous ways it can schedule programs.

Process schedulers are the parts of the operating system that determine which programs the CPU executes and in what order. Put more simply, they control how the CPU divides its time among the activities or tasks that are competing for its attention.

What is Process Scheduling?

Process scheduling is the activity of the process manager that removes the running process from the CPU and selects another process to run, according to a particular strategy.

Process scheduling is an essential part of a multiprogramming operating system. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.

Categories of Scheduling

Scheduling can be classified into two types:

1. Non-Preemptive: In this case, once resources are allocated to a process, the process holds them until it completes its execution or enters the waiting state. Resources are switched only when the running process terminates and moves to a waiting state.


2. Preemptive: In this case, the OS allocates resources to a process for a limited amount of time. The process may move between the running and ready states, or between the waiting and ready states, while resources are being allocated. This switching occurs because a higher-priority process can preempt the currently running lower-priority process.
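To make the contrast concrete, here is a minimal Python sketch that runs the same workload under a non-preemptive policy (FCFS) and a preemptive one (Round Robin). The process names, burst times, and time quantum are made up for illustration.

```python
# Illustrative sketch: non-preemptive FCFS vs. preemptive Round Robin
# on the same hypothetical workload.
from collections import deque

processes = [("P1", 5), ("P2", 3), ("P3", 8)]  # (name, CPU burst in time units)

def fcfs(procs):
    """Non-preemptive: each process keeps the CPU until its burst finishes."""
    time, completion = 0, {}
    for name, burst in procs:
        time += burst
        completion[name] = time
    return completion

def round_robin(procs, quantum=2):
    """Preemptive: the running process is forced off the CPU every `quantum` units."""
    queue = deque(procs)
    time, completion = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        time += run
        remaining -= run
        if remaining:
            queue.append((name, remaining))   # preempted: back to the ready queue
        else:
            completion[name] = time           # finished: leaves the system
    return completion

print("FCFS completion times:", fcfs(processes))
print("Round Robin completion times:", round_robin(processes))
```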


Process Scheduling Queues

The OS maintains all Process Control Blocks (PCBs) in process scheduling queues. It keeps a separate queue for each process state, and the PCBs of all processes in the same execution state are placed in the same queue. When a process's state changes, its PCB is unlinked from its current queue and moved to the queue for its new state.

The operating system maintains the following important process scheduling queues:

Job queue: This queue holds all of the processes in the system.

Ready queue: This queue holds all of the processes residing in main memory that are ready and waiting to execute. New processes are always placed in this queue.

Device queues: These queues hold the processes that are blocked because an I/O device is unavailable.


The OS can use different policies (FIFO, Round Robin, Priority, etc.) to manage each queue. The scheduler decides how processes move between the ready and run queues, and the run queue can have only one entry per processor core in the system.
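As a rough illustration of this bookkeeping, the sketch below models each queue as a simple FIFO and a PCB as just a process name; moving a PCB between queues follows the unlink-and-append step described above. The queue set and process names are illustrative, not tied to any real kernel.

```python
# Minimal sketch of PCBs moving between per-state scheduling queues.
from collections import deque

queues = {
    "ready": deque(),    # in main memory, waiting for the CPU
    "device": deque(),   # blocked on an unavailable I/O device
}

def change_state(pcb, old_state, new_state):
    """Unlink the PCB from its old queue and link it into the new one."""
    if old_state is not None:
        queues[old_state].remove(pcb)
    queues[new_state].append(pcb)

# Hypothetical processes entering the system
change_state("P1", None, "ready")
change_state("P2", None, "ready")
change_state("P1", "ready", "device")   # P1 issues an I/O request and blocks
print({name: list(q) for name, q in queues.items()})
# {'ready': ['P2'], 'device': ['P1']}
```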

Types of Process Schedulers

The operating system schedules processes using a variety of schedulers, as will be explained below.

1. Long-Term Scheduler

The long-term scheduler is also known as the job scheduler. It selects processes from the pool in secondary memory and loads them into the ready queue in primary memory.

The long-term scheduler primarily controls the degree of multiprogramming. Its job is to admit a good mix of CPU-bound and I/O-bound processes from the pool of jobs.


If the job scheduler admits mostly I/O-bound processes, all of the jobs will frequently sit in the blocked state and the CPU will be idle most of the time, reducing the effective degree of multiprogramming. The long-term scheduler's choices are therefore crucial and can have a lasting impact on the system.
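A toy admission sketch follows, assuming a hypothetical job pool and a fixed limit on the degree of multiprogramming; it simply alternates between CPU-bound and I/O-bound jobs to keep the mix balanced. Real long-term schedulers use richer criteria.

```python
# Illustrative long-term (job) scheduler: admit jobs from a secondary-memory
# pool while bounding the degree of multiprogramming and balancing the mix.
MAX_DEGREE = 4   # hypothetical cap on processes admitted to memory at once

job_pool = [
    {"name": "J1", "kind": "cpu"},
    {"name": "J2", "kind": "io"},
    {"name": "J3", "kind": "cpu"},
    {"name": "J4", "kind": "io"},
    {"name": "J5", "kind": "cpu"},
]

def admit(pool, max_degree):
    """Pick jobs alternately from the CPU-bound and I/O-bound groups."""
    cpu_jobs = [j for j in pool if j["kind"] == "cpu"]
    io_jobs = [j for j in pool if j["kind"] == "io"]
    admitted = []
    while len(admitted) < max_degree and (cpu_jobs or io_jobs):
        for group in (cpu_jobs, io_jobs):
            if group and len(admitted) < max_degree:
                admitted.append(group.pop(0))
    return admitted

print([j["name"] for j in admit(job_pool, MAX_DEGREE)])
# ['J1', 'J2', 'J3', 'J4']
```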


2. Short-Term or CPU Scheduler


The short-term scheduler is in charge of selecting one process from the ready queue and scheduling it for execution. Note that it only chooses which process to schedule; it does not itself load the process onto the CPU. All of the scheduling algorithms are applied at this stage. The CPU scheduler must also ensure that processes with long burst times do not cause starvation of other processes.


The dispatcher is in charge of loading the process chosen by the short-term scheduler onto the CPU (moving it from the ready state to the running state). The dispatcher is the component that performs the context switch. A dispatcher does the following:

  • Switching context.
  • Switching to user mode.
  • Jumping to the proper location in the newly loaded program to resume its execution.
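Below is a small sketch of these dispatcher steps, using a hypothetical PCB made of a few dictionary fields. Real dispatchers perform this in architecture-specific kernel code; this only models the save/restore/resume sequence listed above.

```python
# Toy model of a dispatch: save the outgoing context, restore the incoming one,
# switch to user mode, and resume at the saved program counter.
def dispatch(cpu, outgoing_pcb, incoming_pcb):
    # 1. Context switch: save the outgoing process's CPU state into its PCB
    if outgoing_pcb is not None:
        outgoing_pcb["registers"] = dict(cpu["registers"])
        outgoing_pcb["pc"] = cpu["pc"]
        outgoing_pcb["state"] = "ready"

    # 2. Restore the chosen process's context and mark it running
    cpu["registers"] = dict(incoming_pcb["registers"])
    cpu["pc"] = incoming_pcb["pc"]
    cpu["mode"] = "user"                 # switch back to user mode
    incoming_pcb["state"] = "running"    # execution resumes at cpu["pc"]

cpu = {"registers": {"r0": 0}, "pc": 0, "mode": "kernel"}
p1 = {"registers": {"r0": 7}, "pc": 120, "state": "running"}
p2 = {"registers": {"r0": 3}, "pc": 300, "state": "ready"}

dispatch(cpu, p1, p2)
print(cpu["pc"], p1["state"], p2["state"])   # 300 ready running
```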

3. Medium-Term Scheduler

The medium-term scheduler manages swapped-out processes. If a process in the running state needs I/O time before it can finish, its state must be changed from running to waiting.

The medium-term scheduler handles this. To make room for other processes, it suspends the execution of a process and removes it from main memory; such processes are said to be swapped out, and the mechanism is called swapping. The medium-term scheduler is responsible for suspending and later resuming these processes.

Swapping reduces the degree of multiprogramming, but it may be necessary to maintain a good mix of processes in the ready queue.
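The following toy sketch (hypothetical process names, no real memory management) shows only the swap-out and swap-in bookkeeping that the medium-term scheduler performs.

```python
# Toy medium-term scheduler actions: swap a process out of main memory
# to free room, then swap it back in later.
in_memory = ["P1", "P2", "P3"]
swapped_out = []

def swap_out(name):
    """Suspend a process: move it from main memory to the swap area."""
    in_memory.remove(name)
    swapped_out.append(name)

def swap_in(name):
    """Resume a suspended process: bring it back into main memory."""
    swapped_out.remove(name)
    in_memory.append(name)

swap_out("P2")                  # P2 is waiting on slow I/O, so free its memory
print(in_memory, swapped_out)   # ['P1', 'P3'] ['P2']
swap_in("P2")
print(in_memory, swapped_out)   # ['P1', 'P3', 'P2'] []
```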


Process Scheduling's Significance in Operating Systems

The operating system (OS) keeps a queue of many applications waiting to be executed at any given time, as was previously explained. To guarantee smooth functioning, the OS coordinates the start, stop, and transition between various programs. It becomes essential for the OS to be guided by a process scheduler when deciding which program to run, when to pause, and when to transition to another.
The OS effectively allots CPU time to each task through scheduling, which is based on a predetermined plan or algorithm. This method guarantees constant CPU activity, maximizing resource utilization and reducing idle times.
As a result, less time is wasted and program execution begins as soon as possible. Process scheduling is essential for reducing the turnaround time of processes and increasing overall system efficiency. It also improves the system's general responsiveness by reducing program response time.

Some Other Schedulers

I/O schedulers: They are responsible for controlling how I/O operations, including reading and writing to disks or networks, are carried out. To choose the sequence in which I/O operations are carried out, they can employ a number of algorithms, including RR (Round Robin) and FCFS (First-Come, First-Served).

Real-time schedulers: In real-time systems, these schedulers ensure that critical tasks are completed within a predetermined amount of time. They can use algorithms such as RM (Rate Monotonic) or EDF (Earliest Deadline First) to prioritize and schedule jobs.
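As a small illustration of the EDF rule mentioned above, the sketch below picks, from a hypothetical set of ready tasks, the one with the earliest deadline.

```python
# Illustrative Earliest Deadline First (EDF) selection: among the ready tasks,
# always run the one whose deadline is closest. Task data is hypothetical.
tasks = [
    {"name": "T1", "deadline": 50},
    {"name": "T2", "deadline": 20},
    {"name": "T3", "deadline": 35},
]

def pick_edf(ready_tasks):
    """Return the ready task with the earliest absolute deadline."""
    return min(ready_tasks, key=lambda t: t["deadline"])

print(pick_edf(tasks)["name"])   # T2
```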

Context Switching

Context switching is the mechanism of saving and restoring the state (context) of the CPU in the Process Control Block so that a process's execution can be resumed from the same point at a later time. Using this technique, a context switcher enables multiple processes to share a single CPU. Context switching is an essential feature of a multitasking operating system.

When the scheduler switches the CPU from one process to another, the state of the currently running process is saved in its process control block. The state of the next process is then loaded from its own PCB and used to set the program counter, registers, and so on. The second process can then begin executing.


Because register and memory state must be saved and restored, context switches are computationally expensive. Some hardware systems use two or more sets of processor registers to reduce the time spent on context switching. The following data is stored for later use when the process is switched:

  • Program counter
  • Scheduling information
  • Base and limit register values
  • Currently used registers
  • Changed process state
  • I/O state information
  • Accounting information
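A toy Process Control Block holding the data listed above might look like the following sketch; the field names are illustrative and not taken from any particular kernel.

```python
# Sketch of a Process Control Block with the data preserved across a context switch.
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int
    program_counter: int = 0                              # where to resume execution
    registers: dict = field(default_factory=dict)         # currently used registers
    base_register: int = 0                                 # base/limit pair for memory bounds
    limit_register: int = 0
    state: str = "ready"                                   # changed process state
    io_state: dict = field(default_factory=dict)           # open files, pending I/O
    accounting: dict = field(default_factory=dict)         # CPU time used, etc.
    scheduling_info: dict = field(default_factory=dict)    # priority, queue pointers

pcb = PCB(pid=42, program_counter=0x400, registers={"r0": 1})
print(pcb.state, hex(pcb.program_counter))   # ready 0x400
```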

Two-State Process Model

The two-state process model refers to the "running" and "not running" states.

Running: When a new process is created, it enters the system in the running state.

Not Running: Processes that are not currently running are kept in a queue, waiting for their turn to execute. Each entry in the queue is a pointer to a particular process, and the queue is implemented using a linked list. The dispatcher works as follows: when a process is interrupted, it is transferred to the back of the waiting queue; if it has completed or been aborted, it is discarded. In either case, the dispatcher then selects the next process to run from the queue.
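The sketch below models this two-state scheme: a single "not running" FIFO queue plus the currently running process, with the dispatcher pulling from the head of the queue and interrupted processes rejoining at the tail. Process names are hypothetical, and the linked-list queue is modelled with a deque for brevity.

```python
# Toy two-state model: one "not running" queue plus the running process.
from collections import deque

not_running = deque(["P1", "P2", "P3"])   # waiting for the CPU
running = None

def dispatch_next():
    """Dispatcher: take the process at the head of the queue and run it."""
    global running
    running = not_running.popleft() if not_running else None
    return running

def interrupt_current():
    """An interrupted process goes to the back of the not-running queue."""
    global running
    if running is not None:
        not_running.append(running)
    dispatch_next()

dispatch_next()                     # P1 runs
interrupt_current()                 # P1 interrupted -> back of queue; P2 runs
print(running, list(not_running))   # P2 ['P3', 'P1']
```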

Conclusion

In summary, operating systems' process schedulers are essential components that manage how the CPU distributes work across multiple processes. They ensure that operations are completed efficiently, making the most of CPU resources and maintaining system responsiveness. By choosing the right activity to run at the right time, schedulers contribute to improving user experience, optimizing system performance, and ensuring that competing activities have fair access to CPU resources.


What is process scheduling in OS?

Process scheduling is the activity of the process manager that removes the running process from the CPU and selects another process to run, based on a particular strategy. It is an essential part of a multiprogramming operating system.

What are the three types of scheduling in OS?

Operating systems can have up to three different kinds of schedulers: a long-term scheduler (also called an admission scheduler or high-level scheduler), a medium-term (mid-term) scheduler, and a short-term scheduler. The names refer to the relative frequency with which their functions are performed.

What is Inter-Process Communication (IPC)?

IPC is an operating system method that makes it easier for processes to communicate, synchronize, and share data.

What is CPU scheduling in OS?

CPU scheduling, in operating systems, refers to the method by which one process is allowed to use the CPU while the execution of other processes is kept on hold or in the background.

What are the rules of scheduling?

Crucial Scheduling Guidelines

Rule 1: The scheduling methodology has to be approved and documented.
Rule 2: The schedule should cover the entire scope of work.
Rule 3: Level-of-effort activities should not be critical or drive the schedule.
Rule 4: Give each activity a unique name.