Given the following code skeleton, implement an LRU cache with a variable capacity. Both the get and put operations should run in O(1) time.
In general, a cache is just a map that assigns values to keys. What makes a cache special is that values get evicted, either after a certain time or when they have not been used recently. In the case of an LRU cache, the least recently used element is deleted once the cache reaches its predefined capacity.
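Before building the O(1) version, the eviction behavior itself can be illustrated with Python's `OrderedDict` (this is just a sketch of the semantics, not the solution discussed below; `capacity = 2` is an arbitrary choice):

```python
from collections import OrderedDict

cache = OrderedDict()
capacity = 2

def put(key, value):
    if key in cache:
        cache.move_to_end(key)  # mark as most recently used
    cache[key] = value
    if len(cache) > capacity:
        cache.popitem(last=False)  # evict the least recently used entry

def get(key):
    if key not in cache:
        return -1
    cache.move_to_end(key)  # a read also counts as a use
    return cache[key]

put(1, 1)
put(2, 2)
get(1)         # touches key 1, so key 2 is now least recently used
put(3, 3)      # capacity exceeded: key 2 is evicted
print(get(2))  # -1, key 2 was evicted
print(get(1))  # 1
```

Note that both reads and writes count as "uses": calling `get(1)` is what saved key 1 from eviction here.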
To solve this problem, we need to consider two operations: inserting and retrieving values. For the insert operation, we write the value into the map at the key's position, which is O(1) on average. If the key already exists, we update its value.
This map unfortunately carries no information about when an item was last used. For this reason we need another data structure that allows us to look up, move, insert and delete an item in O(1). A doubly linked list allows us to do exactly this: we can store a reference to each list node in the map, unlink the node with a few pointer operations, append it to the end, and we also have constant-time access to the beginning of the list. Unfortunately, Python has no doubly linked list built in, so we need to create one ourselves.
I chose to create a dummy head and tail element to be able to skip several if/else checks. The solution then looks as follows:
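A sketch along these lines, with dummy head and tail nodes as described, might look like this (the class and method names follow LeetCode's `LRUCache` skeleton; the helper names `_remove` and `_append` are my own):

```python
class Node:
    """Doubly linked list node holding one cache entry."""
    def __init__(self, key=0, value=0):
        self.key = key
        self.value = value
        self.prev = None
        self.next = None


class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.map = {}  # key -> Node
        # Dummy head and tail nodes avoid if/else checks for empty
        # or single-element lists when linking and unlinking.
        self.head = Node()
        self.tail = Node()
        self.head.next = self.tail
        self.tail.prev = self.head

    def _remove(self, node):
        # Unlink the node from the list in O(1).
        node.prev.next = node.next
        node.next.prev = node.prev

    def _append(self, node):
        # Insert the node just before the tail (most recently used end).
        node.prev = self.tail.prev
        node.next = self.tail
        self.tail.prev.next = node
        self.tail.prev = node

    def get(self, key: int) -> int:
        if key not in self.map:
            return -1
        node = self.map[key]
        # A read counts as a use: move the node to the tail end.
        self._remove(node)
        self._append(node)
        return node.value

    def put(self, key: int, value: int) -> None:
        if key in self.map:
            self._remove(self.map[key])
        node = Node(key, value)
        self.map[key] = node
        self._append(node)
        if len(self.map) > self.capacity:
            # The least recently used entry sits right after the dummy head.
            lru = self.head.next
            self._remove(lru)
            del self.map[lru.key]
```

Every operation here is a constant number of dictionary lookups and pointer updates, which is what gives get and put their O(1) bound.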
Runtime: 949 ms, faster than 84.93% of Python3 online submissions for LRU Cache. Memory Usage: 75 MB, less than 82.05% of Python3 online submissions for LRU Cache.
About the author
Friedrich Ewald is an experienced Software Engineer with a Master's degree in Computer Science. He started this website in late 2015, mostly as a digital business card. He is interested in Go, Python, Ruby, SQL and NoSQL databases, machine learning and AI, and is experienced in building scalable, distributed systems and microservices at multiple larger and smaller companies.