
Read and Write operations in Memory

Last Updated : 14 May, 2023

A memory unit stores binary information in groups of bits called words. Data input lines provide the information to be stored into the memory, and data output lines carry the information out of the memory. The Read and Write control lines specify the direction of the data transfer. In a memory organization with l address lines there are 2^l memory locations, indexed from 0 to 2^l - 1. The size of the memory in bytes can therefore be written as

N = 2^l bytes

where l is the number of address lines and N is the memory size in bytes. For example, some common sizes expressed with this formula:

 1 kB  = 2^10 bytes
 64 kB = 2^6 x 2^10 bytes = 2^16 bytes
 4 GB  = 2^2 x 2^10 x 2^10 x 2^10 bytes = 2^32 bytes
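
As a quick check of the N = 2^l formula, here is a minimal C sketch (illustrative only; the variable names are not part of the article) that computes the number of bytes addressable with 10, 16, and 32 address lines:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* N = 2^l bytes, where l is the number of address lines. */
        int lines[] = {10, 16, 32};               /* 1 kB, 64 kB, 4 GB */
        for (int i = 0; i < 3; i++) {
            uint64_t n = (uint64_t)1 << lines[i]; /* 2^l */
            printf("l = %2d address lines -> %llu bytes\n",
                   lines[i], (unsigned long long)n);
        }
        return 0;
    }

Running it prints 1024, 65536, and 4294967296 bytes, matching the 1 kB, 64 kB, and 4 GB entries above.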

The Memory Address Register (MAR) holds the address of the memory location on which the operation is performed. The Memory Data Register (MDR) holds the data that is read from, or is to be written to, that location.

  1. Memory Read Operation: A memory read transfers the address of the desired word to the address lines and activates the Read control line. For example, suppose MAR holds the address 2003 and MDR initially holds some garbage value. After the read is executed, the contents of memory location 2003 (say, 3D) are copied into MDR. A small simulation of both sequences is sketched after this list.
  2. Memory Write Operation: A memory write transfers the address of the desired word to the address lines, places the data bits to be stored on the data input lines, and then activates the Write control line. For example, if MAR holds 2003 and MDR holds 3D, then after the write is executed the value 3D is stored at memory location 2003.
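
The following C sketch models these two operations. It is illustrative only and not taken from the article: the names memory, MAR, MDR, mem_read and mem_write are assumptions, and the memory is shrunk to four one-byte locations starting at address 2000.

    #include <stdio.h>
    #include <stdint.h>

    #define BASE_ADDR 2000u
    #define MEM_WORDS 4u

    static uint8_t  memory[MEM_WORDS];  /* locations 2000..2003   */
    static uint32_t MAR;                /* Memory Address Register */
    static uint8_t  MDR;                /* Memory Data Register    */

    /* Read: the location selected by MAR is copied into MDR. */
    static void mem_read(void)  { MDR = memory[MAR - BASE_ADDR]; }

    /* Write: MDR supplies the data stored at the location selected by MAR. */
    static void mem_write(void) { memory[MAR - BASE_ADDR] = MDR; }

    int main(void) {
        MAR = 2003;                /* address of the desired word     */
        MDR = 0x3D;                /* data to be stored               */
        mem_write();               /* memory[2003] <- 3D              */

        MDR = 0;                   /* pretend MDR holds garbage again */
        mem_read();                /* MDR <- memory[2003]             */
        printf("MDR after read from %u: %02X\n", (unsigned)MAR, (unsigned)MDR);
        return 0;
    }

Running it prints "MDR after read from 2003: 3D", mirroring the read example in the list above.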

Advantages of Read Operations:

Speed: Read operations are generally faster than write operations, since retrieving a stored value does not require changing the state of any memory cell.

Efficiency: Read operations are more efficient since they do not require modifying the data in memory.

Non-destructive: Read operations do not modify the data in memory, so they can be performed repeatedly without affecting the stored data.

Disadvantages of Read Operations:

Limited functionality: Read operations only retrieve data from memory, so they cannot be used to modify the data.

Security risks: Read operations can be used to access sensitive data stored in memory, making them a potential security risk.

Advantages of Write Operations:

Flexibility: Write operations allow data to be modified, making them useful for storing and updating information in memory.

Dynamic: Write operations allow data to be changed in real-time, making them essential for many computing applications.

Customization: Write operations allow users to customize and personalize their computing experience by modifying stored data.

Disadvantages of Write Operations:

Slower: Write operations are generally slower than read operations since the data needs to be modified and then written back to memory.

Overwriting risk: Write operations can overwrite existing data in memory, leading to data loss or corruption.

Wear and Tear: Repeated write operations can wear out memory cells in technologies with limited write endurance (such as flash and EEPROM), reducing reliability and lifespan.


