Cache family
The goal of this project is to implement an LRU cache myself, with typing support and without relying on the existing standard-library implementation. For the priority queue I implemented a linked-list version.
The LRU cache itself is a decorator class.
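To illustrate what such a decorator class might look like, here is a minimal sketch. Note the assumptions: the project uses its own linked-list `PriorityQueue` for recency tracking, while this sketch substitutes `collections.OrderedDict` purely for brevity, and the `max_capacity` name follows the snippets later in this document, not the project's confirmed interface.

```python
from collections import OrderedDict


class LRU:
    """Minimal LRU decorator sketch (interface is an assumption).

    The real project tracks recency with a linked-list PriorityQueue;
    here an OrderedDict stands in for it.
    """

    def __init__(self, func, max_capacity: int = 10):
        self.func = func
        self.max_capacity = max_capacity
        self.cache: OrderedDict = OrderedDict()  # key -> result, ordered by recency

    def __call__(self, *args):
        key = str(args)
        if key in self.cache:
            self.cache.move_to_end(key)      # mark as most recently used
            return self.cache[key]
        result = self.func(*args)
        self.cache[key] = result
        if len(self.cache) > self.max_capacity:
            self.cache.popitem(last=False)   # evict the least recently used
        return result


@LRU
def fib(n: int) -> int:
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```

Even with only 10 slots, the memoized recursion stays fast, because `fib(n - 2)` is always still cached when `fib(n - 1)` finishes.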
Evaluation time T (in seconds) for computing the n-th Fibonacci number:
| Number | T without cache | T with cache |
|---|---|---|
| 32 | 0.28653 | 0.01014 |
| 42 | 33.25495 | 0.02694 |
| 48 | 581.86346 | 0.03312 |
Keep in mind that these values were measured with the default LRU memory size of 10.
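Timings of this shape can be reproduced along these lines (a sketch: the plain recursive `fib` below corresponds to the "without cache" column, and the timing helper is my own illustration, not part of the project):

```python
import time


def fib(n: int) -> int:
    # Naive exponential recursion: this is the "without cache" case.
    return n if n < 2 else fib(n - 1) + fib(n - 2)


def timed(f, n: int) -> float:
    """Return wall-clock seconds for a single call f(n)."""
    start = time.perf_counter()
    f(n)
    return time.perf_counter() - start


print(f"fib(28) without cache: {timed(fib, 28):.5f} s")
```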
Converting to MRU requires only one change in the LRU class:

```diff
- self.pqueue: PriorityQueue[datetime] = PriorityQueue[datetime](reverse=True)
+ self.pqueue: PriorityQueue[datetime] = PriorityQueue[datetime]()
```
Simple, right?
Important

In order to make this code easy to adapt to different priority types in `PriorityNode`, I let the `peek` method return both the node's value and its priority.
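A minimal sketch of what that `peek` contract might look like. The linked-list internals and the `enqueue` ordering below are my assumptions, not the project's actual code; here `reverse=False` keeps the smallest priority at the head and `reverse=True` the largest:

```python
from dataclasses import dataclass
from typing import Generic, Optional, Tuple, TypeVar

P = TypeVar("P")  # priority type: datetime for LRU/MRU, int for LFU


@dataclass
class PriorityNode(Generic[P]):
    value: str
    priority: P
    next: "Optional[PriorityNode[P]]" = None


class PriorityQueue(Generic[P]):
    def __init__(self, reverse: bool = False):
        self.head: Optional[PriorityNode[P]] = None
        self.reverse = reverse

    def peek(self) -> Tuple[str, P]:
        # Return both value and priority so callers (LRU/MRU/LFU)
        # can reuse the priority whatever its concrete type is.
        assert self.head is not None, "peek on empty queue"
        return self.head.value, self.head.priority

    def enqueue(self, value: str, priority: P) -> None:
        # Sorted insert so the head is always the next candidate;
        # `reverse` flips which end of the ordering sits at the head.
        node = PriorityNode(value, priority)
        def before(a: P, b: P) -> bool:
            return a > b if self.reverse else a < b
        if self.head is None or before(priority, self.head.priority):
            node.next = self.head
            self.head = node
            return
        cur = self.head
        while cur.next is not None and before(cur.next.priority, priority):
            cur = cur.next
        node.next = cur.next
        cur.next = node
```

Because `peek` returns a `(value, priority)` pair, swapping the priority type (timestamps vs. hit counts) never forces a change to the queue itself.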
Converting to LFU likewise requires only changes in the LRU class:

```diff
- self.pqueue: PriorityQueue[datetime] = PriorityQueue[datetime](reverse=True)
+ self.pqueue: PriorityQueue[int] = PriorityQueue[int](reverse=True)
```
```diff
  if str_f in self.HT:
-     self.pqueue.set(str_f, datetime.now())
+     _, p = self.pqueue.peek()
+     self.pqueue.set(str_f, p + 1)
  else:
      self.HT[str_f] = f()
-     self.pqueue.enqueue(str_f, datetime.now())
+     self.pqueue.enqueue(str_f, 1)
  if len(self.pqueue) > self.max_capacity:
      hp_val, _ = self.pqueue.peek()
      del self.HT[hp_val]
      self.pqueue.dequeue()
  return self.HT[str_f]
```
Maybe it's not optimal, but it's straightforward enough.
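For comparison, the same LFU behavior can be sketched end-to-end without the project's `PriorityQueue`, using a plain dict of access counts. All names here (`LFU`, `HT`, `max_capacity`) are illustrative, echoing the snippets above rather than the project's confirmed code:

```python
class LFU:
    """Minimal LFU decorator sketch: evicts the least frequently used key.

    A plain dict of counts stands in for the project's PriorityQueue.
    """

    def __init__(self, func, max_capacity: int = 10):
        self.func = func
        self.max_capacity = max_capacity
        self.HT: dict = {}    # key -> cached result
        self.freq: dict = {}  # key -> access count

    def __call__(self, *args):
        key = str(args)
        if key in self.HT:
            self.freq[key] += 1               # hit: bump the frequency
        else:
            self.HT[key] = self.func(*args)
            self.freq[key] = 1                # miss: start counting at 1
            if len(self.HT) > self.max_capacity:
                victim = min(self.freq, key=self.freq.get)  # lowest count
                del self.HT[victim]
                del self.freq[victim]
        return self.HT[key]
```

The linear `min` scan makes eviction O(n) per miss; the linked-list priority queue in the project keeps the eviction candidate at the head instead, trading insertion cost for constant-time peeks.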