Being an in-memory datastore, Redis needs ways to keep the size of the dataset within the available memory. Two techniques, setting expiration timestamps on keys and evicting keys under memory pressure, can be used to manage the memory footprint of Redis.
A key in Redis can be given a timeout so that the key is automatically deleted from the datastore once the timeout expires.
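A minimal sketch of setting a key with a timeout, assuming a redis-py style client (the key name and the 300-second TTL are illustrative choices, not values from this text):

```python
def cache_with_ttl(client, key, value, ttl_seconds=300):
    """Store a value and schedule its automatic deletion after ttl_seconds.

    `client` is assumed to be a redis-py style client (e.g. redis.Redis()).
    """
    # redis-py's set() accepts ex= to attach an expiration in a single
    # round trip, equivalent to issuing SET followed by EXPIRE.
    client.set(key, value, ex=ttl_seconds)
    return key
```

The remaining lifetime of the key can then be inspected with `client.ttl(key)`, which returns the seconds left before deletion.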
Redis supports a variant of the least recently used (LRU) approach to removing old data as new data is added to a Redis datastore. When Redis is used as an application cache, LRU eviction prevents the datastore from exceeding the available memory.
Redis offers a number of LRU options that are set in redis.conf. First, the maxmemory configuration option must be set. If maxmemory is set to 0 (the default), Redis does not limit memory itself but is constrained by the operating system's available RAM on 64-bit systems and by an implicit 3GB limit on 32-bit systems.
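The memory cap can be set in redis.conf or changed at runtime; the 100mb value below is purely illustrative:

```
# redis.conf
maxmemory 100mb
```

```
127.0.0.1:6379> CONFIG SET maxmemory 100mb
OK
```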
The noeviction policy is the Redis default: when the datastore reaches maxmemory, Redis evicts nothing and instead returns an error to any write command that would consume more memory.
127.0.0.1:6379> CONFIG SET maxmemory-policy noeviction
OK
The other maxmemory policies are set in the same way:
127.0.0.1:6379> CONFIG SET maxmemory-policy allkeys-lru
OK
127.0.0.1:6379> CONFIG SET maxmemory-policy volatile-lru
OK
127.0.0.1:6379> CONFIG SET maxmemory-policy allkeys-random
OK
127.0.0.1:6379> CONFIG SET maxmemory-policy volatile-random
OK
127.0.0.1:6379> CONFIG SET maxmemory-policy volatile-ttl
OK
The Redis LRU algorithm is not exact: Redis does not automatically choose the best candidate key for eviction, that is, the key least recently accessed. Instead, Redis's default behavior is to take a sample of five keys and evict the least recently used of those five. If we want to increase the accuracy of the LRU algorithm, we can change the maxmemory-samples directive either in redis.conf or at runtime with the CONFIG SET maxmemory-samples command. Increasing the sample size to 10 improves the accuracy of the Redis LRU so that it approaches a true LRU algorithm, but with the side effect of more CPU computation. Decreasing the sample size to 3 reduces the accuracy of Redis LRU but with a corresponding increase in processing speed.
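For example, raising the sample size at runtime (10 here matches the accuracy trade-off discussed above):

```
127.0.0.1:6379> CONFIG SET maxmemory-samples 10
OK
```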
volatile-lru Policy
Under the volatile-lru policy, Redis evicts only keys that have an expiration set with EXPIRE. If there are no such keys to evict when Redis reaches maxmemory, an OOM error is returned to the client. Note: under this policy, once Redis reaches maxmemory it will start evicting keys that have an expiration set even if their time limits haven't been reached yet.
127.0.0.1:6379> FLUSHDB
OK
127.0.0.1:6379> CONFIG SET maxmemory-policy volatile-lru
OK
A Python function to add keys and set an expiration on some of them:
>>> import uuid
>>> def add_id_expire(redis_instance):
...     count = redis_instance.incr("global:uuid")
...     redis_key = "uuid:{}".format(count)
...     redis_instance.set(redis_key, uuid.uuid4())
...     if count <= 75:
...         redis_instance.expire(redis_key, 300)
allkeys-lru Policy
The allkeys-lru policy evicts the least recently used keys regardless of whether an expiration is set. Because allkeys-lru can delete ANY key in Redis, there is no way to restrict which keys are to be deleted. If your application needs to persist some Redis keys (say, for configuration or reference look-ups), DON'T use the allkeys-lru policy!
127.0.0.1:6379> FLUSHDB
OK
127.0.0.1:6379> CONFIG SET maxmemory-policy allkeys-lru
OK
Running the add_id function in an infinite while loop with a counter:
>>> count = 1
>>> while 1:
...     add_id(local_redis)
...     count += 1
Running the INFO stats command will show us the status of the Redis cache, including how many keys have been evicted.
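The same counters can be read from Python. This is a minimal sketch, assuming a redis-py style client whose info("stats") call returns the INFO stats section as a dict; the particular fields pulled out here are illustrative:

```python
def eviction_stats(client):
    """Return eviction-related counters from the INFO stats section.

    `client` is assumed to be a redis-py style client (e.g. redis.Redis()),
    whose info("stats") mirrors the INFO stats command shown above.
    """
    stats = client.info("stats")
    # Pick out the counters relevant to cache sizing and eviction.
    return {
        "evicted_keys": stats.get("evicted_keys", 0),
        "expired_keys": stats.get("expired_keys", 0),
        "keyspace_misses": stats.get("keyspace_misses", 0),
    }
```

Watching evicted_keys grow while the loop above runs confirms that the allkeys-lru policy is discarding old keys to stay under maxmemory.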
Redis offers two other, non-LRU maxmemory policies: volatile-random and allkeys-random. These mirror the previous policies, but instead of tracking how recently keys were used, keys are evicted at random: volatile-random picks random keys from among those with an expiration set, while allkeys-random picks random keys from the entire datastore.