Generally, in any caching system, one of these algorithms (LRU or LFU) is used as the eviction policy for cached items.
- What is LFU Cache?
- How Does LFU Cache Work?
- Why is LFU Cache Needed?
- Where is LFU Cache Used?
- Real-World Examples
- Implementation Details
- How to Use the Code
LFU (Least Frequently Used) Cache is a caching algorithm that removes the least frequently accessed items to make space for new entries when the cache reaches its capacity. Unlike other caching mechanisms like LRU (Least Recently Used), LFU prioritizes items based on their usage frequency.
- Frequency-Based Eviction: Items used less often are evicted first.
- Dynamic Frequency Updates: Accessing an item increases its usage frequency.
- Efficient Operations: Optimized to perform `get` and `put` operations in near-constant time.
- Structure:
  - A map (`cache`) stores key-value pairs and links them to nodes that keep track of the frequency of access.
  - A frequency map (`frequencyMap`) groups nodes by their access frequency.
- Operations:
  - `get(key)`:
    - If the key exists in the cache, return the value and increase its access frequency.
    - If the key does not exist, return `-1`.
  - `put(key, value)`:
    - If the key already exists, update its value and frequency.
    - If the key does not exist:
      - If the cache is full, evict the least frequently used item.
      - Insert the new item with a frequency of `1`.
- Eviction Policy:
  - If two items have the same frequency, the Least Recently Used (LRU) item among them is evicted.
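The get/put and eviction rules above can be sketched compactly in Java. Note this is an illustrative sketch, not the article's implementation: it swaps the per-frequency doubly linked lists described later for `LinkedHashSet` buckets (their insertion order provides the LRU tie-break), and all class and field names here are assumptions.

```java
import java.util.HashMap;
import java.util.LinkedHashSet;
import java.util.Map;

// Compact LFU sketch: one LinkedHashSet "bucket" per frequency. Insertion
// order within a bucket doubles as recency order, so a bucket's first
// element is the LRU key at that frequency (the eviction tie-breaker).
class SimpleLFUCache {
    private final int capacity;
    private int minFreq = 0;
    private final Map<Integer, Integer> values = new HashMap<>();
    private final Map<Integer, Integer> freq = new HashMap<>();
    private final Map<Integer, LinkedHashSet<Integer>> buckets = new HashMap<>();

    SimpleLFUCache(int capacity) { this.capacity = capacity; }

    public int get(int key) {
        if (!values.containsKey(key)) return -1;
        touch(key);                          // accessing a key bumps its frequency
        return values.get(key);
    }

    public void put(int key, int value) {
        if (capacity <= 0) return;
        if (values.containsKey(key)) {       // existing key: update value and frequency
            values.put(key, value);
            touch(key);
            return;
        }
        if (values.size() == capacity) {     // full: evict LRU key of the min-frequency bucket
            int evict = buckets.get(minFreq).iterator().next();
            buckets.get(minFreq).remove(evict);
            values.remove(evict);
            freq.remove(evict);
        }
        values.put(key, value);              // new key starts at frequency 1
        freq.put(key, 1);
        buckets.computeIfAbsent(1, f -> new LinkedHashSet<>()).add(key);
        minFreq = 1;
    }

    private void touch(int key) {            // move key to the next frequency bucket
        int f = freq.get(key);
        buckets.get(f).remove(key);
        if (f == minFreq && buckets.get(f).isEmpty()) minFreq++;
        freq.put(key, f + 1);
        buckets.computeIfAbsent(f + 1, g -> new LinkedHashSet<>()).add(key);
    }
}

public class Main {
    public static void main(String[] args) {
        SimpleLFUCache c = new SimpleLFUCache(2);
        c.put(1, 10);
        c.put(2, 20);
        System.out.println(c.get(1)); // 10 (key 1 now has frequency 2)
        c.put(3, 30);                 // tie on frequency: key 2 is least recent, evicted
        System.out.println(c.get(2)); // -1
        System.out.println(c.get(3)); // 30
    }
}
```

Every operation here touches only hash maps and set insert/remove calls, which is how the near-constant-time bound is achieved.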
Modern applications often need to manage limited resources, like memory or disk space, efficiently. LFU Cache provides a mechanism to ensure that frequently accessed data stays in memory while less important data is evicted.
- Optimal for Repeated Access Patterns: Keeps frequently used data readily available.
- Improved Performance: Reduces latency by avoiding unnecessary computations or data retrievals.
- Minimized Resource Usage: Makes efficient use of limited cache space.
- Database Query Caching: Frequently queried data is cached to minimize expensive database lookups.
- Web Browsers: Store frequently accessed web pages or assets to reduce load times.
- Content Delivery Networks (CDNs): Cache popular content to reduce server load and improve response times.
- Machine Learning Models: Cache frequently used model parameters or results for large-scale computations.
- YouTube or Netflix: Frequently watched videos or series can be cached on edge servers close to users.
- E-commerce Websites: Frequently viewed product details or recommendations are cached to provide faster page loads.
- Social Media Platforms: Popular posts or trending topics are cached for quicker rendering.
- Online Gaming: Frequently accessed assets like textures or maps are cached to minimize load times.
Let’s understand the LFU Cache algorithm with an example:
- Capacity: 3
- put(1, 10)
  - Cache is empty; insert key 1 with value 10 and frequency 1.
  - Cache: {1: 10 (freq: 1)}
- put(2, 20)
  - Cache has space; insert key 2 with value 20 and frequency 1.
  - Cache: {1: 10 (freq: 1), 2: 20 (freq: 1)}
- put(3, 30)
  - Cache has space; insert key 3 with value 30 and frequency 1.
  - Cache: {1: 10 (freq: 1), 2: 20 (freq: 1), 3: 30 (freq: 1)}
- put(4, 40)
  - Cache is full; remove the least frequently used block.
  - All blocks have the same frequency (1), so remove the block added first (1).
  - Insert key 4 with value 40 and frequency 1.
  - Cache: {2: 20 (freq: 1), 3: 30 (freq: 1), 4: 40 (freq: 1)}
- get(3)
  - Fetch value 30 for key 3 and increment its frequency.
  - Cache: {2: 20 (freq: 1), 3: 30 (freq: 2), 4: 40 (freq: 1)}
- frequencyMap:
  - Purpose: Tracks all cache blocks for each frequency.
  - Key: Frequency count (integer).
  - Value: A Doubly Linked List containing all blocks with the same frequency.
  - Why a Doubly Linked List? It allows efficient insertion and removal of nodes.
- cache:
  - Purpose: Maps keys to their corresponding DLLNode for quick access.
  - Key: Cache key (integer).
  - Value: Node containing key, value, and frequency.
- Size counter:
  - Purpose: Tracks the current size of the cache.
- Minimum frequency:
  - Purpose: Tracks the minimum frequency of any block in the cache to efficiently identify eviction candidates.
- Time:
  - `get`: O(1)
  - `put`: O(1)
- Space: O(n), where `n` is the cache capacity.
- LFUCache: Main class for the LFU cache.
- DLLNode: Represents a single node in the Doubly Linked List.
- DoubleLinkedList: Manages the list of nodes for each frequency.
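A minimal sketch of these three classes follows. The article's exact source isn't shown, so the field and method names (beyond `LFUCache`, `DLLNode`, `DoubleLinkedList`, `cache`, and `frequencyMap`) are assumptions:

```java
import java.util.HashMap;
import java.util.Map;

// One cache entry plus its access frequency.
class DLLNode {
    int key, value, frequency = 1;
    DLLNode prev, next;
    DLLNode(int key, int value) { this.key = key; this.value = value; }
}

// Doubly linked list with sentinel head/tail; newest nodes sit at the front,
// so the node just before the tail is the least recently used one.
class DoubleLinkedList {
    private final DLLNode head = new DLLNode(0, 0);
    private final DLLNode tail = new DLLNode(0, 0);
    int size = 0;
    DoubleLinkedList() { head.next = tail; tail.prev = head; }

    void addFirst(DLLNode node) {
        node.next = head.next;
        node.prev = head;
        head.next.prev = node;
        head.next = node;
        size++;
    }

    void remove(DLLNode node) {
        node.prev.next = node.next;
        node.next.prev = node.prev;
        size--;
    }

    DLLNode removeLast() {                  // LRU node within this frequency
        DLLNode last = tail.prev;
        remove(last);
        return last;
    }
}

class LFUCache {
    private final int capacity;
    private int minFrequency = 0;
    private final Map<Integer, DLLNode> cache = new HashMap<>();
    private final Map<Integer, DoubleLinkedList> frequencyMap = new HashMap<>();

    LFUCache(int capacity) { this.capacity = capacity; }

    public int get(int key) {
        DLLNode node = cache.get(key);
        if (node == null) return -1;
        promote(node);
        return node.value;
    }

    public void put(int key, int value) {
        if (capacity <= 0) return;
        DLLNode node = cache.get(key);
        if (node != null) {                 // existing key: update value, bump frequency
            node.value = value;
            promote(node);
            return;
        }
        if (cache.size() == capacity) {     // evict the LRU node of the min-frequency list
            DLLNode evicted = frequencyMap.get(minFrequency).removeLast();
            cache.remove(evicted.key);
        }
        node = new DLLNode(key, value);     // new key starts at frequency 1
        cache.put(key, node);
        frequencyMap.computeIfAbsent(1, f -> new DoubleLinkedList()).addFirst(node);
        minFrequency = 1;
    }

    // Move a node from its current frequency list into the next one.
    private void promote(DLLNode node) {
        DoubleLinkedList list = frequencyMap.get(node.frequency);
        list.remove(node);
        if (node.frequency == minFrequency && list.size == 0) minFrequency++;
        node.frequency++;
        frequencyMap.computeIfAbsent(node.frequency, f -> new DoubleLinkedList())
                    .addFirst(node);
    }
}

public class Main {
    public static void main(String[] args) {
        LFUCache cache = new LFUCache(3);
        cache.put(1, 10);
        cache.put(2, 20);
        cache.put(3, 30);
        cache.put(4, 40);                 // all at frequency 1; evicts key 1 (LRU)
        System.out.println(cache.get(1)); // -1
        System.out.println(cache.get(3)); // 30 (key 3 now has frequency 2)
        cache.put(5, 50);                 // evicts key 2 (frequency 1, least recent)
        System.out.println(cache.get(2)); // -1
        System.out.println(cache.get(5)); // 50
    }
}
```

The `main` method replays the worked example from earlier in the article (capacity 3, inserts 1 through 4, then a lookup), showing the frequency-then-recency eviction order.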
```java
LFUCache cache = new LFUCache(capacity);
```
- `get(int key)`: Retrieve the value associated with the key.

  ```java
  int value = cache.get(key);
  ```

- `put(int key, int value)`: Insert a new key-value pair or update an existing one.

  ```java
  cache.put(key, value);
  ```
```java
LFUCache cache = new LFUCache(2);  // Capacity is 2
cache.put(1, 10);                  // Cache: {1=10}
cache.put(2, 20);                  // Cache: {1=10, 2=20}
System.out.println(cache.get(1)); // Output: 10 (increases frequency of key 1)
cache.put(3, 30);                 // Evicts key 2; Cache: {1=10, 3=30}
System.out.println(cache.get(2)); // Output: -1 (key 2 was evicted)
System.out.println(cache.get(3)); // Output: 30
```
LFU Cache is a powerful caching mechanism that ensures frequently used data remains available, improving application performance and resource utilization. Its use in real-world applications like web caching, databases, and CDNs highlights its importance in optimizing modern systems. The provided Java implementation offers an efficient way to integrate LFU caching into your projects.