How to Use Yield Keyword for Memory Efficient Python Code

Last Updated : 18 Mar, 2024

In this article, we will see how to use the yield keyword to write memory-efficient Python code, with the help of several examples and their code.

What is the Yield Keyword?

The yield keyword in Python is used in the context of generator functions and is fundamental to creating iterators efficiently. It allows a function to pause its execution and yield a value to the caller without losing its state. When the function is called again, it resumes execution from where it left off.
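To see this pause-and-resume behavior concretely, the short sketch below (the function name count_up_to is just an illustration) steps through a generator manually with the built-in next():

```python
def count_up_to(n):
    """Yield the integers 1 through n, pausing after each one."""
    i = 1
    while i <= n:
        yield i  # execution pauses here until next() is called again
        i += 1


gen = count_up_to(3)
print(next(gen))  # 1 -- runs the body until the first yield
print(next(gen))  # 2 -- resumes right after the previous yield
print(next(gen))  # 3
# One more next(gen) would raise StopIteration: the generator is exhausted
```

The local variable i survives between calls because the generator's frame is kept alive while it is paused; that is the state preservation the paragraph above describes.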

Use Yield Keyword for Memory-Efficient Python Code

Below are some of the examples by which we can understand how to use the yield keyword for memory-efficient Python code:

Memory-Efficient Even Number Filtering

In this example, the filter_even_numbers function uses the yield keyword to lazily generate even numbers from the input data, so the filtered values are produced one at a time instead of being stored in an intermediate list.

Python3
def filter_even_numbers(data):
    for value in data:
        if value % 2 == 0:
            yield value


my_data = [1, 2, 3, 4, 5]
result_generator = filter_even_numbers(my_data)

for value in result_generator:
    print(value)

Output
2
4
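For a simple filter like this, a generator expression gives the same lazy behavior in a single line; the sketch below is an equivalent formulation of the example above:

```python
my_data = [1, 2, 3, 4, 5]

# A generator expression builds the same lazy iterator without a def
even_gen = (value for value in my_data if value % 2 == 0)

for value in even_gen:
    print(value)  # prints 2, then 4
```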

Generating Lazy Sequences Efficiently

In this example, the lazy_sequence generator function produces a sequence of doubled values up to the specified limit, demonstrating memory-efficient iteration when consumed in a for loop.

Python3
def lazy_sequence(limit):
    for i in range(limit):
        yield i * 2

# Using the generator
gen = lazy_sequence(5)

for value in gen:
    print(value)

Output
0
2
4
6
8
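Lazy sequences also compose well: one generator can feed another without ever materializing an intermediate list. A small sketch (the helper keep_multiples_of_four is illustrative, not part of the original example) chains the doubling step with a filter:

```python
def lazy_sequence(limit):
    for i in range(limit):
        yield i * 2


def keep_multiples_of_four(values):
    # Consumes another iterator lazily; nothing is buffered in between
    for v in values:
        if v % 4 == 0:
            yield v


for value in keep_multiples_of_four(lazy_sequence(5)):
    print(value)  # prints 0, 4, 8
```

Each value flows through the whole pipeline one at a time, so peak memory stays constant regardless of the limit.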

Memory-Efficient Fibonacci Sequence Generation

In this example, the fibonacci_sequence generator function yields Fibonacci sequence values up to the specified limit, showcasing memory-efficient iteration for generating and printing the sequence.

Python3
def fibonacci_sequence(limit):
    a, b = 0, 1
    while a < limit:
        yield a
        a, b = b, a + b


fibonacci_gen = fibonacci_sequence(50)

for value in fibonacci_gen:
    print(value)

Output
0
1
1
2
3
5
8
13
21
34
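Because generators are lazy, the limit check can even be dropped entirely: an unbounded Fibonacci generator works fine as long as the caller bounds consumption, for example with itertools.islice. A minimal sketch of that variation:

```python
from itertools import islice


def fibonacci():
    """Yield Fibonacci numbers forever; the caller decides when to stop."""
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b


# Take only the first 10 values from the conceptually infinite sequence
first_ten = list(islice(fibonacci(), 10))
print(first_ten)  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```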

Memory-Efficient Lazy Task Generation

In this example, the parallel_tasks generator yields lambda functions one at a time, so the collection of tasks is produced lazily rather than built up front. Each task is then applied in turn to a given input value (data in this case); note that the tasks still execute sequentially, one per loop iteration.

Python3
def parallel_tasks():
    yield lambda x: x * 2
    yield lambda x: x ** 2
    # Add more tasks as needed

# Using the generator for parallel tasks
tasks_gen = parallel_tasks()
data = 5

for task in tasks_gen:
    result = task(data)
    print(result)

Output
10
25

Memory Efficiency Comparison: Yield vs. Generator

In this example, the code below compares memory usage between a list-based approach (without yield) and a generator-based approach (with yield) for generating a sequence of squares up to a specified limit. The generator-based approach shows far lower memory usage, especially for large limits, because it produces values on the fly instead of storing the entire sequence. Note that sys.getsizeof reports only the size of the object it is given: the list actually holds a million results, while the generator object stores just its paused execution state.

Python3
# Without Yield (List-Based Approach)
import sys


def generate_squares_list(limit):
    squares = [num ** 2 for num in range(limit)]
    return squares

# With Yield (Generator-Based Approach)


def generate_squares_yield(limit):
    for num in range(limit):
        yield num ** 2


# Memory Comparison

limit = 10**6  # Set a large limit for demonstration purposes

# Without Yield (List-Based Approach)
squares_list = generate_squares_list(limit)
memory_usage_list = sys.getsizeof(squares_list)

# With Yield (Generator-Based Approach)
squares_yield = generate_squares_yield(limit)
memory_usage_yield = sys.getsizeof(squares_yield)

print(f"Memory usage without yield: {memory_usage_list} bytes")
print(f"Memory usage with yield: {memory_usage_yield} bytes")

Output
Memory usage without yield: 8697472 bytes
Memory usage with yield: 128 bytes
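Since sys.getsizeof measures only the top-level object, a fuller comparison can use the standard-library tracemalloc module to record peak allocation while each approach actually runs. The sketch below is one way to do this (exact byte counts vary by platform and Python version, so none are shown):

```python
import tracemalloc


def generate_squares_list(limit):
    return [num ** 2 for num in range(limit)]


def generate_squares_yield(limit):
    for num in range(limit):
        yield num ** 2


def peak_bytes(consume):
    """Run consume() and return the peak memory allocated during the call."""
    tracemalloc.start()
    consume()
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return peak


limit = 10**5
list_peak = peak_bytes(lambda: sum(generate_squares_list(limit)))
yield_peak = peak_bytes(lambda: sum(generate_squares_yield(limit)))
print(f"Peak with list: {list_peak} bytes")
print(f"Peak with yield: {yield_peak} bytes")
```

The list version must hold every square at once before summing, while the generator version keeps only one square alive at a time, so its peak stays small no matter how large the limit grows.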

Conclusion

In conclusion, leveraging generators in Python is a powerful strategy for achieving memory-efficient code, particularly in situations where handling large datasets or dynamic data streams is essential. The lazy evaluation approach of generators allows for on-the-fly generation of values, reducing the overall memory footprint and enabling the processing of data in a scalable and efficient manner.


