I/O scheduling in Operating Systems

Through I/O operations, the operating system provides a centralized point of control over connectivity in active I/O configurations. It lets you view and change the paths between a processor and an input/output device, which may involve dynamic switching, and it also takes part in identifying unusual I/O conditions.

Before understanding I/O scheduling, it is important to get an overview of how I/O operations are performed.

How are I/O operations performed? 

The operating system has a portion of code dedicated to managing input/output in order to improve the reliability and performance of the system. A computer system contains CPUs and more than one device controller connected to a common bus. Device drivers are the software that provides an interface between these I/O devices and the rest of the system, promoting ease of communication with the hardware and providing access to shared memory.

[Figure: the I/O process]

I/O Requests in operating systems

I/O requests are managed by device drivers in collaboration with some system programs in the I/O subsystem. The OS serves these requests using three simple components:

  1. I/O Traffic Controller: Keeps track of the status of all devices, control units, and communication channels.
  2. I/O scheduler: Executes the policies used by OS to allocate and access the device, control units, and communication channels.
  3. I/O device handler: Services the device interrupts and handles the transfer of data.

I/O Scheduling in operating systems

Scheduling is used for the efficient usage of computer resources, avoiding deadlock and serving all processes waiting in the queue. To know more about CPU scheduling, refer to CPU Scheduling in Operating Systems.

The I/O Traffic Controller has three main tasks:

  • The primary task is to check if there’s at least one path available.
  • If there exists more than one path, it must decide which one to select.
  • If all paths are occupied, it must determine which path will become available the earliest (a small selection sketch follows this list).
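
To make these three decisions concrete, here is a minimal Python sketch. The `Path` record, its `busy` flag, and the `eta_ms` estimate are hypothetical names invented for this illustration; a real traffic controller works with channel and control-unit status kept by the OS.

```python
from dataclasses import dataclass

@dataclass
class Path:
    name: str
    busy: bool
    eta_ms: int   # estimated time until this path becomes free (hypothetical field)

def choose_path(paths):
    """Apply the traffic controller's three decisions to a list of candidate paths."""
    free = [p for p in paths if not p.busy]
    if free:                               # task 1: is at least one path available?
        return free[0]                     # task 2: pick one of the available paths
    # task 3: all paths busy -> pick the one expected to become free the earliest
    return min(paths, key=lambda p: p.eta_ms)

paths = [Path("channel-A", busy=True, eta_ms=40), Path("channel-B", busy=True, eta_ms=15)]
print(choose_path(paths).name)             # -> channel-B
```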

Scheduling in computing is the process of allocating resources to carry out tasks. Processors, network connections, or expansion cards are examples of the resources. The tasks could be processes, threads, or data flows.

A process referred to as a scheduler is responsible for scheduling. Schedulers are frequently made to keep all computer resources active (as in load balancing), efficiently divide up system resources among multiple users, or reach a desired level of service.

The I/O scheduler functions similarly to the process scheduler: it allocates the devices, control units, and communication channels. However, under a heavy load of I/O requests, the scheduler must decide which request should be served first, and for that the OS maintains multiple queues. The major difference between a process scheduler and an I/O scheduler is that I/O requests are not preempted: once a channel program has started, it is allowed to run to completion. This is feasible because channel programs are relatively short (50 to 100 ms). Some modern operating systems allow the I/O scheduler to serve higher-priority requests first; in simpler words, if an I/O request has a higher priority, it is served before other I/O requests with lower priority.

The I/O scheduler works in coordination with the I/O traffic controller to keep track of which path is serving the current I/O request. The I/O device handler manages the I/O interrupts (if any) and the scheduling algorithms.
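
As an illustration of how such a non-preemptive, priority-aware request queue might look, here is a small Python sketch. The class name `IORequestQueue` and the string requests are made up for the example; a real scheduler queues richer request descriptors and coordinates with the traffic controller.

```python
import heapq
import itertools

class IORequestQueue:
    """Non-preemptive, priority-aware queue of pending I/O requests (illustrative only)."""

    def __init__(self):
        self._heap = []
        self._order = itertools.count()    # FIFO tie-breaker within the same priority

    def submit(self, priority, request):
        # lower number = higher priority; equal priorities are served in arrival order
        heapq.heappush(self._heap, (priority, next(self._order), request))

    def dispatch(self):
        # once dispatched, a request runs to completion; it is never preempted
        if not self._heap:
            return None
        _, _, request = heapq.heappop(self._heap)
        return request

queue = IORequestQueue()
queue.submit(2, "read block 150")
queue.submit(1, "write block 12")          # higher priority, served first
print(queue.dispatch())                    # -> write block 12
```

Because requests are not preempted, whatever `dispatch` returns is allowed to finish before the next request is chosen.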

A few I/O handling algorithms are:

  1. FCFS [First Come First Serve]
  2. SSTF [Shortest Seek Time First]
  3. SCAN
  4. LOOK
    • N-Step Scan
    • C-SCAN
    • C-LOOK

Every scheduling algorithm aims to minimize arm movement, mean response time, and variance in response time. An overview of each I/O scheduling algorithm is given below:

  1. First Come First Serve [FCFS]: It is one of the simplest device-scheduling algorithms, since it is easy to program and essentially fair to users (I/O devices). Its main drawback is high seek time, because requests are served strictly in arrival order with no attempt to minimize head movement.
  2. Shortest Seek Time First [SSTF]: It uses the same ideology as Shortest Job First in process scheduling, where the shortest jobs are served first and longer jobs have to wait their turn. Applying the SJF concept to I/O scheduling, the request whose track is closest to the one currently being served (the one with the shortest seek distance on disk) is satisfied next. The main advantage over FCFS is that it minimizes overall seek time. It favors easy-to-reach requests and postpones traveling to those that are out of the way.
  3. SCAN Algorithm: SCAN uses a status flag that records the direction of the arm, i.e. whether the arm is moving toward the innermost track of the disk or toward the outer edge. The algorithm moves the arm from one end of the disk toward the innermost track, servicing every request in its way. When it reaches the innermost track, it reverses direction and moves back toward the outer tracks, again servicing every request in its path (the sketch after this list compares FCFS, SSTF, and SCAN on a sample queue).
  4. LOOK [Elevator Algorithm]: It is a variation of the SCAN algorithm; here the arm does not necessarily travel all the way to either end of the disk, it only goes as far as the last pending request in each direction before reversing. It looks ahead for a pending request before moving further. A natural question is: why use LOOK over SCAN? The major advantage of LOOK is that the arm does not waste time sweeping to the ends of the disk when no requests are waiting there.
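
The following Python sketch compares the first three policies on one sample request queue. The track numbers, the starting head position, and the track range are arbitrary example values, not taken from the article. Each function returns the order in which tracks would be visited, and `total_movement` sums the arm travel.

```python
def fcfs(requests, head):
    # serve requests strictly in arrival order
    return list(requests)

def sstf(requests, head):
    # repeatedly pick the pending track closest to the current head position
    pending, order = list(requests), []
    while pending:
        nearest = min(pending, key=lambda t: abs(t - head))
        pending.remove(nearest)
        order.append(nearest)
        head = nearest
    return order

def scan_down(requests, head, lo=0):
    # SCAN with the arm initially moving toward track `lo`; unlike LOOK,
    # the arm touches the edge of the disk before reversing direction
    lower = sorted(t for t in requests if t < head)
    upper = sorted(t for t in requests if t >= head)
    return lower[::-1] + [lo] + upper      # `lo` only marks the arm reaching the edge

def total_movement(order, head):
    moves = 0
    for track in order:
        moves += abs(track - head)
        head = track
    return moves

queue, head = [98, 183, 37, 122, 14, 124, 65, 67], 53
for name, algo in [("FCFS", fcfs), ("SSTF", sstf), ("SCAN", scan_down)]:
    print(name, total_movement(algo(queue, head), head))
# FCFS 640, SSTF 236, SCAN 236 for this particular queue
```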

Other variations of SCAN

  1. N-Step Scan: It holds all the pending requests until the arm starts its way back. New requests are grouped for the next cycle of rotation.
  2. C-SCAN [Circular SCAN]: It provides a uniform wait time, as the arm serves requests only on its inward sweep and returns without servicing on the way back. To know more, refer to the Difference between SCAN and C-SCAN.
  3. C-LOOK [Optimized version of C-SCAN]: The arm does not necessarily return to the lowest-numbered track; it returns only as far as the lowest pending request. This optimizes C-SCAN, since the arm does not travel to the end of the disk when it is not required. To know more, refer to the Difference between C-LOOK and C-SCAN. (A small ordering sketch follows this list.)
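
For completeness, a C-LOOK ordering can be sketched in a few lines of Python, again with made-up track numbers. After serving the highest pending request, the arm jumps back to the lowest pending request rather than to track 0.

```python
def c_look(requests, head):
    """C-LOOK: sweep upward from the head, then jump back to the lowest pending request."""
    upper = sorted(t for t in requests if t >= head)
    lower = sorted(t for t in requests if t < head)
    # after the highest pending request, the arm jumps to the lowest pending
    # request (not to track 0, as C-SCAN would) and continues upward again
    return upper + lower

print(c_look([98, 183, 37, 122, 14, 124, 65, 67], head=53))
# -> [65, 67, 98, 122, 124, 183, 14, 37]
```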

Frequently Asked Questions

Q.1: What is I/O scheduling in operating systems? 

Answer:

I/O scheduling refers to the process of managing and prioritizing input/output (I/O) operations in an operating system. It determines the order in which I/O requests from different processes or devices are serviced by the underlying hardware, such as hard drives or solid-state drives (SSDs).

Q.2: Why is I/O scheduling important?

Answer: 

I/O operations involve reading from or writing to devices, such as disks, network interfaces, or user input/output devices. Efficient I/O scheduling is crucial for improving system performance, ensuring fairness among processes, reducing response time, and optimizing resource utilization.

Q.3: What are the common goals of I/O scheduling? 

Answer: 

The main goals of I/O scheduling include:

  1. Maximizing throughput: Ensuring that the system processes as many I/O requests as possible within a given time period.
  2. Minimizing response time: Reducing the time taken for an I/O request to receive a response.
  3. Fairness: Ensuring that all processes have a reasonable chance of accessing I/O resources, preventing starvation or excessive delays for any particular process.
  4. Prioritization: Allowing certain I/O requests or processes to have higher priority over others based on specific criteria or requirements.

