One of the most important ways to improve application performance in the dynamic world of Python programming is to optimize file access. File caching is one effective method for accomplishing this. By storing frequently used data in a temporary location, developers can greatly reduce the need to repeatedly read or regenerate the same information. This article explores approaches to implementing file caching in Python.
What is File Caching?
File caching means temporarily keeping copies of frequently requested files or data in memory or on disk to speed up future access. This strategy is especially helpful when working with files that are computationally expensive to create or read repeatedly.
There are various types of file caching:
- Memory Caching: This uses the computer's RAM to hold frequently accessed files or data.
- Disk Caching: Storing frequently requested data on a faster storage device, such as an SSD. This lowers the latency associated with conventional hard disk drives (HDDs) and enhances overall system responsiveness.
- Web Caching: To speed up the loading of web pages, web caching stores frequently requested web content locally, either on the user's device or on a proxy server.
- Database Caching: Data is cached in a dedicated store, usually a key-value store such as Redis or Memcached. Database caching is more durable and persists across sessions, but it is slower than in-memory caching.
- Function Caching: Specific to function calls, this kind caches a function's return value so the function does not need to be rerun for the same arguments. This is exactly what the built-in functools module provides.
Implementing File Caching In Python
Below are the possible approaches to implementing file caching in Python.
- Using pickle for Serialization
- Using json for Serialization
- Using shelve for Persistent Storage
- Using functools.lru_cache for Function-Level Caching
Implementing File Caching Using pickle for Serialization
In the approach below, file caching is implemented using the pickle module for serialization. The read_file_pickle function checks whether the file content is already cached in the file_cache dictionary. If not, it reads the file using pickle, stores the content in the cache, and returns the cached content.
import pickle

# dictionary to store cached file content
file_cache = {}

# function to read file content using pickle and implement caching
def read_file_pickle(file_path):
    # check if file content is already in cache
    if file_path not in file_cache:
        # read file using pickle and store content in cache
        with open(file_path, 'rb') as file:
            file_content = pickle.load(file)
        file_cache[file_path] = file_content
    # return the cached file content
    return file_cache[file_path]

# data to be pickled and cached
data = {'website': 'geeksforgeeks'}

# write data to file using pickle
with open('output.pkl', 'wb') as file:
    pickle.dump(data, file)

# read and print cached data from file
result = read_file_pickle('output.pkl')
print(result)
Output:

{'website': 'geeksforgeeks'}
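The pickle-based cache above never notices when the file changes on disk. One way to extend it, sketched here with illustrative names that are not from the original code, is to key each cache entry on the file's modification time:

```python
import os
import pickle

# Cache maps file path -> (modification time, loaded object).
_mtime_cache = {}

def read_pickle_fresh(file_path):
    """Reload the pickle only when the file's modification time changes."""
    mtime = os.path.getmtime(file_path)
    entry = _mtime_cache.get(file_path)
    if entry is None or entry[0] != mtime:   # cache miss or stale entry
        with open(file_path, 'rb') as f:
            _mtime_cache[file_path] = (mtime, pickle.load(f))
    return _mtime_cache[file_path][1]

# Example usage
with open('output.pkl', 'wb') as f:
    pickle.dump({'website': 'geeksforgeeks'}, f)
print(read_pickle_fresh('output.pkl'))  # {'website': 'geeksforgeeks'}
```

Note that modification-time granularity varies by filesystem, so very rapid rewrites may not be detected.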
Implementing File Caching Using json for Serialization
In the approach below, json.load() is used to load JSON data from a file, and json.dump() serializes the data when writing it. Unlike pickle, which produces binary data, JSON is text-based, so the file is opened in text mode ('r'). This makes it appropriate for situations where the stored data must be readable by humans.
import json

# Dictionary to serve as a cache for file contents, preventing redundant file reads
file_cache = {}

def read_file_json(file_path):
    # Check if file content is already in the cache
    if file_path not in file_cache:
        # Open the file in read mode
        with open(file_path, 'r') as file:
            # Load the JSON content from the file
            file_content = json.load(file)
        # Cache the file content for future use
        file_cache[file_path] = file_content
    # Return the file content either from the cache or newly loaded
    return file_cache[file_path]

# Example usage: create a sample JSON file for demonstration purposes
data = {'website': 'geeksforgeeks'}
with open('example.json', 'w') as file:
    json.dump(data, file)

# Read the content of the JSON file using the read_file_json function
result = read_file_json('example.json')

# Print the result obtained from the file read operation
print(result)
Output:

{'website': 'geeksforgeeks'}
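json is also useful in the other direction: caching the result of an expensive computation to disk so that later runs can skip the work entirely. The sketch below is illustrative; the file name, function, and the squares computation are stand-ins, not part of the original example:

```python
import json
import os

def cached_computation(cache_path='squares_cache.json'):
    """Return a dict of squares, loading it from a JSON cache file when one exists."""
    if os.path.exists(cache_path):               # cache hit: skip the computation
        with open(cache_path, 'r') as f:
            return json.load(f)
    result = {str(n): n * n for n in range(5)}   # stand-in for expensive work
    with open(cache_path, 'w') as f:             # cache miss: compute and persist
        json.dump(result, f)
    return result

print(cached_computation()['3'])  # 9
```

Because JSON object keys are always strings, the integer keys are converted with str() before dumping.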
Implementing File Caching Using shelve for Persistent Storage
In the approach below, we implement caching using the shelve module for persistent storage. The read_file_with_cache function reads a file and stores its content in a shelved file that acts as a cache. Subsequent calls check the cache and retrieve the content if the file path is already present, avoiding redundant file reads. The example usage reads a sample text file twice, with the second call retrieving the content from the cache.
import shelve

def read_file_with_cache(file_path):
    # Using a shelve file as a persistent cache
    with shelve.open('file_cache.db') as cache:
        # Check if the file path is already in the cache
        if file_path not in cache:
            # If not in the cache, read the file and store its content in the cache
            with open(file_path, 'r') as file:
                file_content = file.read()
            cache[file_path] = file_content
            print(f"Reading from file: {file_path}")
        else:
            # If in the cache, retrieve the content from the cache
            print(f"Reading from cache: {file_path}")
        # Return the content either from the cache or the newly read file
        return cache[file_path]

# Create a sample text file
with open('example.txt', 'w') as file:
    file.write("This is some example text.")

# Reading the file using the caching function
result1 = read_file_with_cache('example.txt')
result2 = read_file_with_cache('example.txt')

# Displaying the results
print("Result 1:", result1)
print("Result 2:", result2)
Output:
Reading from file: example.txt
Reading from cache: example.txt
Result 1: This is some example text.
Result 2: This is some example text.
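Because the shelve cache persists between runs, stale entries survive until they are removed explicitly. A minimal sketch of invalidating a single entry (the function name here is illustrative):

```python
import shelve

def invalidate(file_path, cache_name='file_cache.db'):
    """Delete one file's entry from the persistent shelve cache, if present."""
    with shelve.open(cache_name) as cache:
        if file_path in cache:
            del cache[file_path]
            return True   # entry found and removed
    return False          # nothing to invalidate
```

Calling invalidate('example.txt') after editing example.txt forces the next read_file_with_cache call to re-read the file from disk.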
Implementing File Caching Using functools.lru_cache for Function-Level Caching
In the approach below, function-level caching is applied with the @functools.lru_cache(maxsize=2) decorator, which keeps an LRU (Least Recently Used) cache of at most two results. The read_file_with_lru_cache function reads a file's contents from the supplied path. When the same file path is requested again, the function returns the content from the cache rather than reading the file again.
import functools

@functools.lru_cache(maxsize=2)
def read_file_with_lru_cache(file_path):
    """
    Function to read the content of a file with function-level caching using an LRU cache.

    Parameters:
    - file_path (str): The path of the file to be read.

    Returns:
    - str: The content of the file.
    """
    # Read the file content
    with open(file_path, 'r') as file:
        file_content = file.read()
    print(f"Reading from file: {file_path}")
    return file_content

result1 = read_file_with_lru_cache('example.txt')
result2 = read_file_with_lru_cache('example.txt')

# Displaying the results
print("Result 1:", result1)
print("Result 2:", result2)
Output:
Reading from file: example.txt
Result 1: This is some example text.
Result 2: This is some example text.
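Functions decorated with lru_cache also expose cache_info() and cache_clear() for inspecting hit/miss counts and invalidating the cache. A short, self-contained sketch (the function name read_file is illustrative):

```python
import functools

@functools.lru_cache(maxsize=2)
def read_file(file_path):
    """Read a file's content, caching the two most recently used results."""
    with open(file_path, 'r') as f:
        return f.read()

with open('example.txt', 'w') as f:
    f.write("This is some example text.")

read_file('example.txt')            # first call: cache miss, reads the file
read_file('example.txt')            # second call: cache hit
info = read_file.cache_info()
print(info.hits, info.misses)       # 1 1
read_file.cache_clear()             # drop all entries, e.g. after files change on disk
```

cache_clear() is the standard way to invalidate an lru_cache when the underlying files may have changed, since the decorator itself never checks the disk.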
Conclusion
In conclusion, we explored several ways to use file caching in Python: serializing data with pickle and json, storing data persistently with shelve, and performing function-level caching with functools.lru_cache. Each strategy has its own benefits and use cases, and the right choice depends on the particular needs of your application. File caching reduces unnecessary file operations and can significantly improve performance and overall responsiveness.