Chapter 6: Introduction to Remote Sensing | Class 11 Geography Practical Work

Last Updated : 26 Apr, 2024

Class 11 Geography Ch 6 Introduction to Remote Sensing Notes: Introduction to Remote Sensing is the sixth chapter in CBSE Class 11 Geography, which discusses the concept of Remote Sensing and its various forms. The chapter explains how this technique works, its different methods, and how it’s used in various fields like geography, agriculture, and environmental science.

Students learn about the sensors used to collect data, like cameras and scanners, and how they create images and information about the Earth. They also learn how to analyze this data to understand changes in things like land use, vegetation, and the environment. Overall, the chapter helps students understand how remote sensing helps us learn more about our planet from afar.


Introduction To Remote Sensing

The term “remote sensing” was coined in the early 1960s. It refers to the methods used to gather and measure information about objects and events without directly touching them. Instead, a recording device, called a sensor, is used to capture data from a distance. In remote sensing, there are three main components: the surface of the object being studied, the sensor recording the data, and the energy waves carrying the information. This definition highlights that remote sensing relies on capturing data from a distance using specialized equipment.

Stages in Remote Sensing

Remote sensing involves several processes to gather information about the Earth’s surface. These processes help collect data on the properties of objects and phenomena. Here are the basic steps involved:

  1. Source of Energy: The primary source of energy used in remote sensing is the sun. Artificial sources like flashguns or radar beams are also used to collect information.
  2. Transmission of Energy: Energy from the source travels to the Earth’s surface as electromagnetic radiation (EMR). This radiation includes various types of waves, such as gamma, X-rays, visible light, and microwaves.
  3. Interaction with Earth’s Surface: The energy interacts with objects on the Earth’s surface, leading to absorption, transmission, reflection, or emission of energy. Different objects respond differently based on their composition and properties.
  4. Propagation through Atmosphere: Reflected or emitted energy re-enters the atmosphere, where it interacts with gases, water molecules, and dust particles. These atmospheric constituents can modify the properties of the original energy.
  5. Detection by Sensor: Sensors onboard satellites detect the reflected or emitted energy. These satellites orbit the Earth in near-polar sun-synchronous orbits, collecting data from a distance.
  6. Conversion to Digital Data: The energy received by the sensor is electronically converted into digital images, consisting of digital numbers arranged in rows and columns.
  7. Information Extraction: After receiving the image data, errors are eliminated, and information extraction is performed using digital image processing techniques or visual interpretation methods.
  8. Conversion to Map/Tabular Forms: The interpreted information is then converted into thematic maps, and quantitative measures are taken to generate tabular data.

Sensors

A sensor is a device used in remote sensing to gather electromagnetic radiation, convert it into a signal, and present it in a suitable form for obtaining information about objects. Sensors are classified into photographic (analogue) and non-photographic (digital) sensors based on the form of data output.

Photographic sensors, like cameras, capture images of objects at an instant of exposure. Non-photographic sensors, known as scanners, obtain images in a bit-by-bit form. Multispectral Scanners (MSS) are commonly used as sensors in satellite remote sensing.

MSS sensors are designed to sweep across the field of view while obtaining images of objects. They consist of a reception system with a mirror and detectors. The scanning sensor constructs the scene by recording a series of scan lines, with the motor device oscillating the scanning mirror through the angular field of view, known as swath. The received energy is converted into electrical signals and then into numerical values called Digital Number (DN Values) for recording.
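The conversion of received energy into DN values described above can be sketched as a simple linear quantization. The voltage range and bit depth below are illustrative figures, not taken from any real sensor:

```python
# Sketch: converting received energy (an analogue voltage from the detector)
# into Digital Numbers (DN values), as a scanner's electronics would.
# The 0-5 V range and 8-bit depth are assumptions for illustration.

def to_dn(voltage, v_min=0.0, v_max=5.0, bits=8):
    """Quantize a voltage into a DN on a linear scale of 2**bits levels."""
    levels = 2 ** bits                          # 8 bits -> 256 levels (0-255)
    clipped = min(max(voltage, v_min), v_max)   # keep signal inside the range
    return int((clipped - v_min) / (v_max - v_min) * (levels - 1))

# One simulated scan line of voltages becomes a row of DN values.
scan_line = [0.0, 1.25, 2.5, 3.75, 5.0]
dn_row = [to_dn(v) for v in scan_line]
print(dn_row)  # [0, 63, 127, 191, 255]
```

A scanner records many such rows, one per scan line, building up the rows and columns of the final digital image.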

MSS sensors are further divided into two types:

  1. Whiskbroom Scanners
  2. Pushbroom Scanners

Whiskbroom Scanners use a rotating mirror and a single detector to obtain images in narrow spectral bands. As the mirror rotates, the detector sweeps across the field of view, which typically spans 90° to 120°.

Pushbroom Scanners consist of multiple detectors arranged in a linear array. Each detector collects energy reflected by a ground cell (pixel) at nadir, that is, directly beneath the sensor. The number of detectors is determined by dividing the swath of the sensor by the size of the spatial resolution.
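The detector count for a pushbroom scanner follows directly from the rule above. The swath and resolution figures here are hypothetical, chosen only to show the arithmetic:

```python
# Sketch: number of detectors in a pushbroom array = swath / spatial resolution.
# The 120 km swath and 10 m pixel size are illustrative, not a real sensor.

def pushbroom_detectors(swath_m, resolution_m):
    """Each detector images one ground cell (pixel) across the swath."""
    return swath_m // resolution_m

print(pushbroom_detectors(120_000, 10))  # 12000 detectors in the linear array
```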

Resolving Powers of the Satellites

In satellite remote sensing, the sun-synchronous polar orbit allows for the collection of images at regular intervals, known as the temporal resolution or revisit time. This enables the acquisition of images over the same area of the Earth’s surface at different points in time. These images can be compared to study and record changes occurring in the landscape.
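Revisit time can be worked out as a simple date calculation. The 16-day cycle assumed below is only an example of a revisit interval, not a property of any particular satellite:

```python
# Sketch: temporal resolution (revisit time) determines when repeat images of
# the same area are acquired. A 16-day cycle is assumed purely for illustration.
from datetime import date, timedelta

def acquisition_dates(start, revisit_days, n):
    """Return the first n acquisition dates for a given revisit interval."""
    return [start + timedelta(days=revisit_days * i) for i in range(n)]

dates = acquisition_dates(date(2024, 1, 1), 16, 3)
print([d.isoformat() for d in dates])  # ['2024-01-01', '2024-01-17', '2024-02-02']
```

Comparing the images acquired on these successive dates is what makes change detection over a landscape possible.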

Sensor Resolutions

Remote sensors possess three key characteristics: spatial, spectral, and radiometric resolutions, which are crucial for extracting valuable information about various terrain conditions.

Spatial Resolution: This aspect can be understood by considering how some people use spectacles while reading. Individuals with poor vision may struggle to distinguish between closely spaced letters in a word. By using spectacles, they enhance their vision and resolving power. Similarly, in remote sensing, spatial resolution refers to the sensor’s ability to differentiate between closely spaced object surfaces. Higher spatial resolution allows for the identification of smaller object surfaces.
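A crude rule of thumb for the idea above: an object smaller than one ground pixel cannot be distinguished. The object and pixel sizes here are made-up figures for illustration:

```python
# Sketch: an object is detectable only if it spans at least one ground pixel,
# i.e. it is at least as large as the sensor's spatial resolution.
# The 30 m pixel size and object sizes below are illustrative assumptions.

def is_resolvable(object_size_m, spatial_resolution_m):
    """Rule of thumb: object must be at least one pixel across."""
    return object_size_m >= spatial_resolution_m

print(is_resolvable(50, 30))  # True  - a 50 m building shows up on 30 m pixels
print(is_resolvable(5, 30))   # False - a 5 m hut is lost inside a single pixel
```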

Spectral Resolution: This refers to the sensor’s ability to sense and record electromagnetic radiation (EMR) across different bands. Multispectral images are captured using devices that disperse the incoming radiation and record it using detectors sensitive to specific spectral ranges. This process is akin to the dispersion of light in nature, as seen in rainbows, or using prisms in laboratories. Images obtained in different spectral bands reveal how objects respond differently to varying wavelengths of light. For example, images acquired by IRS-P6 (Resourcesat-1) show that fresh water absorbs strongly in the infrared band, while dry surfaces reflect strongly in the green band.

Radiometric Resolution: This aspect refers to the sensor’s ability to distinguish between two targets based on their radiance. Higher radiometric resolution means that the sensor can detect smaller differences in radiance between two targets.
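Radiometric resolution is usually expressed as bit depth: more bits mean more intensity levels and therefore smaller detectable differences in radiance. The radiance range and bit depths below are illustrative numbers:

```python
# Sketch: radiometric resolution as bit depth. More bits -> more intensity
# levels -> finer distinguishable differences in radiance.
# The radiance range of 100 units is an assumption for illustration.

def intensity_levels(bits):
    """Number of distinct DN levels available at a given bit depth."""
    return 2 ** bits

def smallest_step(radiance_range, bits):
    """Smallest radiance difference one DN step can represent."""
    return radiance_range / (intensity_levels(bits) - 1)

print(intensity_levels(8))        # 256 levels for an 8-bit sensor
print(smallest_step(100.0, 8))    # ~0.392 radiance units per DN step
print(smallest_step(100.0, 11))   # ~0.0489 - an 11-bit sensor resolves finer
```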

Data Products

The detection and recording of electromagnetic energy can be done photographically or electronically. Photographic processes utilize light-sensitive film to detect and record energy variations, while scanning devices capture images in digital format. It’s crucial to differentiate between “images” and “photographs.” An image refers to a pictorial representation, regardless of the energy regions used for detection, whereas a photograph specifically denotes images recorded on photographic film. Thus, all photographs are images, but not all images are photographs.

Remotely sensed data products are broadly categorized into two types based on the detection and recording mechanism:

  • Photographic Images: These are obtained in the optical regions of the electromagnetic spectrum, ranging from 0.3 to 0.9 µm. Different types of light-sensitive film emulsion bases are used, including black and white, color, black and white infrared, and color infrared. In aerial photography, black and white film is commonly used. Photographs can be enlarged without losing information or contrast.
  • Digital Images: A digital image comprises discrete picture elements known as pixels, each with an intensity value and a two-dimensional address. A digital number (DN) represents the average intensity value of a pixel, which depends on the received electromagnetic energy and the intensity levels used. The reproduction of object details in a digital image is influenced by the pixel size. Smaller pixels are beneficial for preserving scene details and digital representation. However, excessive zooming in a digital image can lead to information loss and pixelation. Digital image processing algorithms can manipulate the intensity levels represented by digital numbers in an image.
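A digital image as described above, rows and columns of pixels each holding a DN, can be sketched as a nested list. The linear contrast stretch shown is one common example of a digital image processing algorithm that manipulates intensity levels; the pixel values are invented for illustration:

```python
# Sketch: a tiny 3x3 digital image of DN values, plus a linear contrast
# stretch as an example of manipulating intensity levels. Data are made up.

image = [
    [ 52,  60,  58],
    [ 61, 120, 115],
    [ 55, 118, 200],
]

def stretch(img, new_min=0, new_max=255):
    """Linearly stretch DN values so they span the full output range."""
    old_min = min(min(row) for row in img)
    old_max = max(max(row) for row in img)
    scale = (new_max - new_min) / (old_max - old_min)
    return [[round((dn - old_min) * scale) + new_min for dn in row]
            for row in img]

stretched = stretch(image)
print(stretched[0][0], stretched[2][2])  # 0 255 - DNs now span the full range
```

After the stretch, the darkest original DN (52) maps to 0 and the brightest (200) to 255, which is why a stretched image shows higher contrast.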

Interpretation of Satellite Imageries

The data obtained from remote sensors serve the purpose of extracting information regarding the forms and patterns of objects and phenomena on the Earth’s surface. With different sensors yielding photographic and digital data products, both qualitative and quantitative properties of features can be extracted using visual interpretation methods or digital image processing techniques.

Visual interpretation involves manually reading images for object identification. Conversely, digital images require hardware and software to extract desired information. Due to constraints in time, equipment, and accessories, we’ll focus solely on visual interpretation methods.

Elements of Visual Interpretation:

1. Tone or Colour: Objects reflect energy in various spectral regions, resulting in tones or colours in images. Tone or colour variations depend on surface properties and composition. For instance, healthy vegetation reflects strongly in the infrared region, appearing bright red or light-toned in standard false colour composites.

2. Texture: Texture refers to minor tone or colour variations caused by aggregated smaller features. Different objects exhibit various textures, from smooth to coarse, aiding in their identification. For example, dense residential areas appear fine-textured, while scrubbed lands display coarse texture.

3. Size: Object size, discerned from image resolution or scale, helps identify features such as industrial complexes, stadiums, and settlements.

4. Shape: The outline or form of an object provides important clues for identification. Distinctive shapes aid in identifying features like Sansad Bhawan or railway lines.

5. Shadow: Shadows cast by objects are influenced by the sun’s angle and object height. Some objects, like Qutub Minar or minarets, are identifiable based on their shadows.

6. Pattern: Repetitive spatial arrangements of features form distinct patterns, aiding in identification. Orchards, plantations, or planned residential areas exhibit recognizable patterns.

7. Association: The relationship between objects and their surroundings, along with their geographical location, provides valuable information. Educational institutions, stadiums, or industrial sites often have specific associations with their surroundings.

These elements collectively enhance the interpretation of remote sensing images, enabling a better understanding of the Earth’s surface features and phenomena.

FAQs on Class 11 Geography Ch 6 Introduction to Remote Sensing

What is remote sensing?

Remote sensing refers to the process of collecting data about the Earth’s surface without direct physical contact. It involves using sensors to gather information from a distance, typically from satellites or aircraft.

How does remote sensing work?

Remote sensing works by detecting and measuring electromagnetic radiation reflected or emitted from the Earth’s surface. Sensors onboard satellites or aircraft capture this radiation, which is then processed to create images and extract valuable information about the environment.

What are the applications of remote sensing?

Remote sensing has various applications across different fields, including agriculture, forestry, urban planning, environmental monitoring, disaster management, and climate studies. It is used for mapping land use and land cover, assessing vegetation health, monitoring changes in the environment, and detecting natural disasters.

What are the different types of remote sensing sensors?

Remote sensing sensors can be classified into two main types: photographic and non-photographic (digital) sensors. Photographic sensors, such as cameras, record images on light-sensitive film. Non-photographic sensors, such as scanners, capture images in digital format.

What are the stages involved in remote sensing data acquisition?

The stages in remote sensing data acquisition include:

  • Source of energy (sun/self-emission)
  • Transmission of energy from the source to the Earth’s surface
  • Interaction of energy with the Earth’s surface
  • Propagation of reflected/emitted energy through the atmosphere
  • Detection of energy by the sensor
  • Conversion of energy into photographic/digital form
  • Extraction of information from the data
  • Conversion of information into map/tabular forms
