
Techniques to handle Thrashing


Prerequisite – Virtual Memory 

Thrashing is a condition in which the system spends a major portion of its time servicing page faults, while the amount of actual useful processing done is negligible. 

Causes of thrashing:

  1. High degree of multiprogramming.
  2. Lack of frames.
  3. Page replacement policy.

Thrashing’s Causes

Thrashing causes serious performance problems for the operating system. When CPU utilization is low, the process scheduling mechanism tries to load multiple processes into memory at the same time, increasing the degree of multiprogramming.

In this situation, the number of processes in memory can exceed the number of frames available, since each process is allotted only a limited number of frames to work with.

If a high-priority process arrives in memory and no frame is vacant at that moment, a process currently occupying a frame is moved to secondary storage, and the freed frame is allotted to the higher-priority process.

In other words, as soon as memory fills up, processes start spending long periods swapping in their required pages. Because most of the processes are then waiting for pages, CPU utilization drops again.

As a result, a high degree of multiprogramming and a lack of frames are the two most common causes of thrashing in an operating system.

The basic concept is this: if a process is allocated too few frames, it will suffer too many and too frequent page faults. As a result, no useful work gets done by the CPU and CPU utilization falls drastically. The long-term scheduler then tries to improve CPU utilization by loading more processes into memory, thereby increasing the degree of multiprogramming. This decreases CPU utilization even further, triggering a chain reaction of more page faults followed by further increases in the degree of multiprogramming. This chain reaction is called thrashing. 

Locality Model – 
A locality is a set of pages that are actively used together. The locality model states that as a process executes, it moves from one locality to another. A program is generally composed of several different localities which may overlap. 

For example, when a function is called, it defines a new locality in which memory references are made to the instructions of the function, its local and global variables, and so on. Similarly, when the function returns, the process leaves this locality. 

Techniques to handle: 

1. Working Set Model – 

This model is based on the Locality Model described above. 
The basic principle is that if we allocate enough frames to a process to accommodate its current locality, it will fault only when it moves to a new locality. But if the allocated frames are fewer than the size of the current locality, the process is bound to thrash. 

According to this model, the working set is defined as the set of pages in the most recent 'A' page references, for some parameter A. Hence, all the actively used pages always end up being part of the working set. 

The accuracy of the working set depends on the value of A. If A is too large, the working set may span several localities; if A is too small, it may not cover the current locality entirely. 

If D is the total demand for frames and WSS_i is the working set size for process i, then 

D = \sum_{i} WSS_i

Now, if ‘m’ is the number of frames available in the memory, there are 2 possibilities: 

  • (i) D > m, i.e. the total demand exceeds the number of available frames: thrashing will occur, as some processes will not get enough frames.
  • (ii) D <= m: there will be no thrashing.
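The check above can be sketched directly. The reference strings, the value of A (here `delta`), and the frame count `m` are all assumed for illustration:

```python
from collections import deque

def working_set(refs, delta):
    """Working set after each reference: the distinct pages among the
    most recent `delta` references (delta plays the role of parameter A)."""
    window = deque(maxlen=delta)
    sets = []
    for page in refs:
        window.append(page)
        sets.append(set(window))
    return sets

# Hypothetical reference strings for two processes
refs_p1 = [1, 2, 1, 2, 3, 1, 2]
refs_p2 = [7, 7, 8, 9, 8, 7, 9]

delta = 4
wss1 = len(working_set(refs_p1, delta)[-1])   # current WSS of process 1
wss2 = len(working_set(refs_p2, delta)[-1])   # current WSS of process 2

m = 8                      # frames available in memory (assumed)
D = wss1 + wss2            # total demand D = sum of WSS_i
print("thrashing likely" if D > m else "no thrashing")
```

With these numbers both working sets fit in memory, so the system would not be expected to thrash; growing either reference string's locality or shrinking `m` flips the outcome.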

2. Page Fault Frequency – 

A more direct approach to handling thrashing is the one that uses the Page-Fault Frequency concept. 

The problem associated with Thrashing is the high page fault rate and thus, the concept here is to control the page fault rate. 
If the page fault rate is too high, it indicates that the process has too few frames allocated to it. On the contrary, a low page fault rate indicates that the process has too many frames. 
Upper and lower limits can be established on the desired page fault rate as shown in the diagram. 
If the page fault rate falls below the lower limit, frames can be removed from the process. Similarly, if the page fault rate exceeds the upper limit, more frames can be allocated to the process. 
In other words, the graphical state of the system should be kept limited to the rectangular region formed in the given diagram. 
Here too, if the page fault rate is high with no free frames, then some of the processes can be suspended and frames allocated to them can be reallocated to other processes. The suspended processes can then be restarted later.
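The allocation policy described above can be sketched as a simple control rule. The threshold values and step size below are illustrative assumptions, not part of the original description:

```python
def adjust_frames(frames, page_fault_rate, lower=0.02, upper=0.20, step=2):
    """Sketch of page-fault-frequency control: allocate more frames when
    the fault rate exceeds the upper limit, reclaim frames when it falls
    below the lower limit, and otherwise leave the allocation unchanged.
    Thresholds and step size are illustrative."""
    if page_fault_rate > upper:
        return frames + step              # too few frames: allocate more
    if page_fault_rate < lower:
        return max(1, frames - step)      # too many frames: reclaim some
    return frames
```

In a real system this rule would run periodically per process; if the fault rate is high but no free frames remain, the operating system suspends a process instead and redistributes its frames.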

FAQ:

What is thrashing in virtual memory?
Thrashing is a condition when the system is spending a major portion of its time servicing the page faults, but the actual processing done is very negligible.

What are the causes of thrashing?
The high degree of multiprogramming, lack of frames, and page replacement policy are the main causes of thrashing.

How does the working set model handle thrashing?
The working set model determines a process’s frame allocation based on its current locality. The working set is defined as the set of pages accessed in the most recent ‘A’ page references, for some parameter A. If the allocated frames are too few to accommodate the current locality, the process will thrash.

What is the locality model in virtual memory?
A locality is a group of pages that are actively used together. The locality model states that a process moves from one locality to the next as it executes. Programs may consist of several localities that sometimes overlap.

How does the page fault frequency approach handle thrashing?
The page fault frequency approach regulates the page fault rate of a process directly. If the page fault rate becomes too high, the process has too few frames allocated to it; conversely, a low page fault rate suggests it has too many. Establishing upper and lower limits on the desired page fault rate prevents excessive swapping and thrashing.


Last Updated : 05 May, 2023