How to implement File Caching in Python
Last Updated: 23 Feb, 2024
Optimizing file access is one of the most important ways to improve application performance in Python. File caching is an effective method for accomplishing this: by storing frequently used data in a temporary location, developers can greatly reduce the need to read or recompute the same information. This article explores approaches to implementing file caching in Python.
What is File Caching?
File caching temporarily keeps copies of frequently requested files or data in memory or on disk to speed up future access. This strategy is especially helpful when working with files that are computationally expensive to create or read repeatedly.
There are various types of file caching:
- Memory Caching: Uses the computer's RAM to hold frequently accessed files or data.
- Disk Caching: Stores frequently requested data on a faster storage device, such as an SSD. This can lower the latency associated with conventional hard disk drives (HDDs) and enhance overall system responsiveness.
- Web Caching: Stores frequently requested web content locally, either on the user's device or on a proxy server, to speed up the loading of web pages.
- Database Caching: Stores data in key-value stores such as Redis or Memcached. Database caching is more durable and spans multiple sessions, but it is slower than in-memory caching.
- Function Caching: Specific to function calls, this kind stores a function's results so the function does not need to be re-run for the same arguments. This is exactly what the built-in functools.lru_cache decorator provides.
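As a minimal illustration of the memory-caching idea, a plain dictionary can serve as the cache. The `expensive_compute` function below is a hypothetical stand-in for any costly operation, not part of the approaches covered later:

```python
# A minimal in-memory cache sketch: a dictionary keyed by the request.
memory_cache = {}

def expensive_compute(key):
    # Hypothetical placeholder for slow work (file read, network call, etc.)
    return key.upper()

def get_cached(key):
    if key not in memory_cache:           # cache miss: compute and store
        memory_cache[key] = expensive_compute(key)
    return memory_cache[key]              # cache hit: return the stored value

print(get_cached('geeks'))  # computed on the first call
print(get_cached('geeks'))  # served from memory_cache on the second call
```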
Implementing File Caching In Python
Below are the possible approaches to implementing file caching in Python.
- Using pickle for Serialization
- Using json for Serialization
- Using shelve for Persistent Storage
- Using functools.lru_cache for Function-Level Caching
Implementing File Caching Using pickle for Serialization
In the code below, file caching is implemented using the pickle module for serialization. The read_file_pickle function checks whether the file content is already cached in the file_cache dictionary. If not, it reads the file using pickle, stores the content in the cache, and returns the cached content.
Python3
import pickle

file_cache = {}

def read_file_pickle(file_path):
    if file_path not in file_cache:
        with open(file_path, 'rb') as file:
            file_content = pickle.load(file)
            file_cache[file_path] = file_content
    return file_cache[file_path]

data = {'website': 'geeksforgeeks'}
with open('output.pkl', 'wb') as file:
    pickle.dump(data, file)

result = read_file_pickle('output.pkl')
print(result)
Output:
{'website': 'geeksforgeeks'}
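One limitation of the simple dictionary cache above is that it never notices when the file changes on disk. A common extension, sketched here as an assumption rather than part of the original example, is to store the file's modification time alongside the content and reload when it changes:

```python
import os
import pickle

# Cache maps file_path -> (modification_time, content).
file_cache = {}

def read_file_pickle(file_path):
    mtime = os.path.getmtime(file_path)
    cached = file_cache.get(file_path)
    if cached is None or cached[0] != mtime:   # cache miss or stale entry
        with open(file_path, 'rb') as file:
            file_cache[file_path] = (mtime, pickle.load(file))
    return file_cache[file_path][1]

data = {'website': 'geeksforgeeks'}
with open('output.pkl', 'wb') as file:
    pickle.dump(data, file)

print(read_file_pickle('output.pkl'))
```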
Implementing File Caching Using json for Serialization
In the code below, json.load() loads JSON data from a file and json.dump() serializes data to it. Unlike pickle, which uses a binary format, JSON is text-based, so the file is opened in text mode ('r') to read the data. This makes it appropriate for situations where the stored data must be human-readable.
Python3
import json

file_cache = {}

def read_file_json(file_path):
    if file_path not in file_cache:
        with open(file_path, 'r') as file:
            file_content = json.load(file)
            file_cache[file_path] = file_content
    return file_cache[file_path]

data = {'website': 'geeksforgeeks'}
with open('example.json', 'w') as file:
    json.dump(data, file)

result = read_file_json('example.json')
print(result)
Output:
{'website': 'geeksforgeeks'}
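Because JSON is text-based, the cache dictionary itself can also be persisted between runs. The sketch below assumes a hypothetical cache_store.json file to hold the saved cache; it is an extension of the example above, not part of it:

```python
import json
import os

CACHE_FILE = 'cache_store.json'  # hypothetical file holding the cache itself

# Load a previously saved cache from disk, if one exists.
if os.path.exists(CACHE_FILE):
    with open(CACHE_FILE, 'r') as f:
        file_cache = json.load(f)
else:
    file_cache = {}

def read_file_json(file_path):
    if file_path not in file_cache:
        with open(file_path, 'r') as file:
            file_cache[file_path] = json.load(file)
        # Persist the updated cache so the next run can reuse it.
        with open(CACHE_FILE, 'w') as f:
            json.dump(file_cache, f)
    return file_cache[file_path]

with open('example.json', 'w') as file:
    json.dump({'website': 'geeksforgeeks'}, file)

print(read_file_json('example.json'))
```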
Implementing File Caching Using shelve for Persistent Storage
In the code below, we implement caching using the shelve module for persistent storage. The read_file_with_cache function reads a file and stores its content in a shelved file that acts as the cache. Subsequent calls check the cache and retrieve the content if the file path is already present, avoiding redundant file reads. The example reads a sample text file twice; the second call retrieves the content from the cache, optimizing file access.
Python3
import shelve

def read_file_with_cache(file_path):
    with shelve.open('file_cache.db') as cache:
        if file_path not in cache:
            with open(file_path, 'r') as file:
                file_content = file.read()
            cache[file_path] = file_content
            print(f"Reading from file: {file_path}")
        else:
            print(f"Reading from cache: {file_path}")
        return cache[file_path]

with open('example.txt', 'w') as file:
    file.write("This is some example text.")

result1 = read_file_with_cache('example.txt')
result2 = read_file_with_cache('example.txt')
print("Result 1:", result1)
print("Result 2:", result2)
Output:
Reading from file: example.txt
Reading from cache: example.txt
Result 1: This is some example text.
Result 2: This is some example text.
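Because shelve persists entries on disk, cache entries survive across program runs and can go stale if the underlying file changes. A small sketch of invalidating a single entry so the next read falls back to the file (reusing the file names from the example above):

```python
import shelve

with open('example.txt', 'w') as file:
    file.write("This is some example text.")

# Simulate a stale entry left over from an earlier run.
with shelve.open('file_cache.db') as cache:
    cache['example.txt'] = "stale content"

# Delete the entry to invalidate it; the next read_file_with_cache call
# for this path would then re-read the file from disk.
with shelve.open('file_cache.db') as cache:
    if 'example.txt' in cache:
        del cache['example.txt']
    print('example.txt' in cache)  # False: the entry is gone
```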
Implementing File Caching Using functools.lru_cache for Function-Level Caching
In the code below, function-level caching is applied using the @functools.lru_cache(maxsize=2) decorator, an LRU (Least Recently Used) cache holding at most two entries. The read_file_with_lru_cache function reads a file's contents from the supplied file path. When the same file path is requested again, the function retrieves the content from the cache rather than reading the file again.
Python3
import functools

@functools.lru_cache(maxsize=2)
def read_file_with_lru_cache(file_path):
    with open(file_path, 'r') as file:
        file_content = file.read()
    print(f"Reading from file: {file_path}")
    return file_content

result1 = read_file_with_lru_cache('example.txt')
result2 = read_file_with_lru_cache('example.txt')
print("Result 1:", result1)
print("Result 2:", result2)
Output:
Reading from file: example.txt
Result 1: This is some example text.
Result 2: This is some example text.
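functools.lru_cache also attaches cache_info() and cache_clear() methods to the decorated function, which can be used to inspect hit/miss counts and to invalidate the cache when the underlying files change. A short sketch:

```python
import functools

@functools.lru_cache(maxsize=2)
def read_file(file_path):
    with open(file_path, 'r') as file:
        return file.read()

with open('example.txt', 'w') as file:
    file.write("This is some example text.")

read_file('example.txt')   # miss: reads from disk
read_file('example.txt')   # hit: served from the cache
print(read_file.cache_info())  # CacheInfo(hits=1, misses=1, maxsize=2, currsize=1)

# Drop all cached entries, e.g. after the files on disk have changed.
read_file.cache_clear()
print(read_file.cache_info())  # CacheInfo(hits=0, misses=0, maxsize=2, currsize=0)
```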
Conclusion
In conclusion, we explored several ways to implement file caching in Python: serializing data with pickle and json, storing data persistently with shelve, and performing function-level caching with functools.lru_cache. Each strategy has its own benefits and applications, and the right choice depends on the particular needs of your application. File caching reduces unnecessary file operations and boosts overall responsiveness, which can greatly improve performance.