
Memory Buffering in Cisco Switches


A memory buffer is a portion of a switch's memory used to temporarily store data. Network switch interfaces buffer or drop traffic that exceeds their capacity, and the main causes of buffering are traffic bursts, many-to-one traffic patterns, and interface speed mismatches. Ethernet switches use memory buffering to hold frames before forwarding them to their destinations. When a destination port is congested or busy, the switch buffers the frame and holds it until the port is free to transmit. Without an effective memory buffering mechanism, frames would simply be dropped whenever network congestion occurs.

Types of Memory Buffering in Cisco Switches:

There are two methods of memory buffering.

Port-based Memory Buffering:

With port-based memory buffering, frames are held in queues associated with specific incoming and outgoing ports. A frame is transmitted to the outgoing port only after all the frames ahead of it in the queue have been successfully transmitted. As a result, a single frame destined for a busy port can delay the transmission of every frame queued behind it, even when those frames could be delivered to destination ports that are open. This problem is known as head-of-line blocking.
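To make the head-of-line blocking behavior concrete, here is a minimal Python sketch of the idea. This is a hypothetical toy model for illustration only, not Cisco code; the class and method names (`PortBufferedSwitch`, `enqueue`, `service`) are invented for the example.

```python
from collections import deque

class PortBufferedSwitch:
    """Toy model of port-based buffering: one FIFO queue per ingress port.
    Frames are (dst_port, payload) tuples; a busy destination port blocks
    every frame queued behind the head frame (head-of-line blocking)."""

    def __init__(self, num_ports):
        self.queues = {p: deque() for p in range(num_ports)}
        self.busy = set()  # destination ports currently congested

    def enqueue(self, in_port, dst_port, payload):
        self.queues[in_port].append((dst_port, payload))

    def service(self, in_port):
        """Transmit frames from one ingress queue in strict FIFO order.
        Returns the frames actually sent on this pass."""
        sent = []
        q = self.queues[in_port]
        while q:
            dst, _payload = q[0]
            if dst in self.busy:
                break  # head frame blocked -> everything behind it waits
            sent.append(q.popleft())
        return sent

sw = PortBufferedSwitch(num_ports=4)
sw.busy.add(2)               # port 2 is congested
sw.enqueue(0, 2, "frame-A")  # destined for the busy port
sw.enqueue(0, 3, "frame-B")  # port 3 is free, but frame-B is stuck anyway
print(sw.service(0))         # -> [] : frame-A blocks the whole queue
```

Once port 2 frees up (`sw.busy.discard(2)`), the next `service(0)` call delivers both frames, showing that the delay was caused purely by queue ordering, not by a lack of capacity on port 3.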

Shared Memory Buffering:

With shared memory buffering, all frames are deposited into a common memory buffer shared by every port on the switch. Each port is dynamically allocated the amount of buffer memory it needs, and frames in the buffer are dynamically linked to their destination ports. This allows a packet to be received on one port and transmitted on another without being moved to a different queue. The switch maintains a map of frame-to-port links that indicates where each frame should be sent; once a frame is successfully transmitted, its map entry is removed. The number of frames that can be stored is limited by the size of the whole memory buffer rather than by fixed per-port buffers, which allows larger frames to be forwarded with fewer drops. This is especially important for asymmetric switching, where different ports run at different data rates; for example, more bandwidth can be allocated to the port connected to a server.

Functionalities of Memory Buffering:

Port-based Memory Buffering:

Frames are held in queues associated with each incoming and outgoing port before transmission. If the destination port of the frame at the head of a queue is busy, that single frame can delay the transmission of every frame queued behind it. Frames may be dropped when a port runs out of buffer space.

Shared Memory Buffering:

Some early Cisco switches used a shared memory architecture for port buffering. All frames are placed in one memory buffer shared by every port on the switch, and the buffer space a port requires is allocated dynamically. Frames in the buffer are dynamically linked to their destination ports, so a packet can be received on one port and then transmitted on another without being moved to a different queue.

[Figure: Memory Buffering in Cisco Switches]
Last Updated : 30 Nov, 2022