How to resolve the Redis timeout exception: redis.clients.jedis.exceptions.JedisConnectionException: java.net.SocketTimeoutException: Read timed out

How I fixed the "Read timed out" error in my Amazon ElastiCache setup.

I encountered this exception in my cache implementation, which uses Amazon ElastiCache for Redis:

java.net.SocketTimeoutException: Read timed out; nested exception is redis.clients.jedis.exceptions.JedisConnectionException: java.net.SocketTimeoutException: Read timed out

Java version: 8

jedis version: 2.9.3

spring-data-redis version: 2.0.14.RELEASE

After some debugging, I found that the issue occurred when I was trying to get all keys from the cache in order to delete them one by one:

....
redisTemplate.opsForHash().keys(REDIS_CACHE_NAME)
....
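
For context, the clean-up logic looked roughly like the sketch below. The class and method names are made up for illustration; REDIS_CACHE_NAME is just the Redis key of the hash that holds the cached entries.

import java.util.Set;

import org.springframework.data.redis.core.RedisTemplate;

public class CacheCleaner {

    private final RedisTemplate<String, Object> redisTemplate;

    public CacheCleaner(RedisTemplate<String, Object> redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    // Fetch every field of the hash (HKEYS, which is O(N) in the number of
    // fields and returns them all in a single reply), then delete the fields
    // one at a time, i.e. one HDEL round trip per field.
    public void clearCache(String cacheName) {
        Set<Object> hashKeys = redisTemplate.opsForHash().keys(cacheName);
        for (Object hashKey : hashKeys) {
            redisTemplate.opsForHash().delete(cacheName, hashKey);
        }
    }
}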

I tried increasing the connection timeout for the cache to 60 seconds, but it still did not help. This was a bit confusing, since getting all keys from the cache should not take that long.
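
For anyone trying the same thing with spring-data-redis 2.0.x, the timeouts can be raised on the Jedis connection factory roughly as shown below. This is a sketch, not my exact configuration, and the endpoint is a placeholder. Note that the "Read timed out" error is governed by the read (socket) timeout rather than the connect timeout.

import java.time.Duration;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisStandaloneConfiguration;
import org.springframework.data.redis.connection.jedis.JedisClientConfiguration;
import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;

@Configuration
public class RedisConfig {

    @Bean
    public JedisConnectionFactory jedisConnectionFactory() {
        // Placeholder endpoint; use your ElastiCache primary endpoint here.
        RedisStandaloneConfiguration server =
                new RedisStandaloneConfiguration("my-cache.example.cache.amazonaws.com", 6379);

        JedisClientConfiguration clientConfig = JedisClientConfiguration.builder()
                .connectTimeout(Duration.ofSeconds(60)) // time allowed to establish the connection
                .readTimeout(Duration.ofSeconds(60))    // socket read timeout behind "Read timed out"
                .build();

        return new JedisConnectionFactory(server, clientConfig);
    }
}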

After a lot of googling and experimenting, I found that upgrading my Redis and Jedis versions should resolve the issue, since this was a recurring problem in earlier versions of Jedis and the bug has since been fixed.

But that upgrade would also have required a Spring version upgrade for my project, which would have been a lot of effort because of some internal frameworks used in the project.

Ultimately, this is what resolved the issue for me:

I discovered that the data I was storing in the cache was quite large, and since the exception was thrown every time I tried to clear the cache, I suspected I was running into memory issues. This was confirmed by another exception I was seeing in the logs:

Exception in thread "Thread-228" redis.clients.jedis.exceptions.JedisDataException: OOM command not allowed when used memory > 'maxmemory'.
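
If you want to check whether you are close to the memory limit yourself, the memory section of INFO can be read through the same RedisTemplate. This is a minimal sketch; ElastiCache restricts the CONFIG command, but INFO is allowed.

import java.util.Properties;

import org.springframework.data.redis.connection.RedisConnection;
import org.springframework.data.redis.core.RedisTemplate;

public class CacheMemoryCheck {

    // Prints how much memory Redis is using versus its configured maxmemory.
    public static void printMemoryInfo(RedisTemplate<String, Object> redisTemplate) {
        RedisConnection connection = redisTemplate.getConnectionFactory().getConnection();
        try {
            Properties memory = connection.info("memory");
            System.out.println("used_memory_human = " + memory.getProperty("used_memory_human"));
            System.out.println("maxmemory_human   = " + memory.getProperty("maxmemory_human"));
            System.out.println("maxmemory_policy  = " + memory.getProperty("maxmemory_policy"));
        } finally {
            connection.close();
        }
    }
}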

Since I was using AWS ElastiCache, I suspected the cache node itself was running out of capacity. So I upgraded the node type of my cache from t2.micro (0.555 GiB of memory) to m6g.large (6.38 GiB).
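
For reference, the node-type change can also be scripted. Below is a rough sketch using the AWS SDK for Java v1; the cluster ID is a placeholder, and note that ElastiCache node type names carry the cache. prefix (cache.t2.micro, cache.m6g.large).

import com.amazonaws.services.elasticache.AmazonElastiCache;
import com.amazonaws.services.elasticache.AmazonElastiCacheClientBuilder;
import com.amazonaws.services.elasticache.model.ModifyCacheClusterRequest;

public class ScaleCacheNode {

    public static void main(String[] args) {
        AmazonElastiCache elastiCache = AmazonElastiCacheClientBuilder.defaultClient();

        // Move the cluster to a larger node type; ApplyImmediately avoids
        // waiting for the next maintenance window.
        ModifyCacheClusterRequest request = new ModifyCacheClusterRequest()
                .withCacheClusterId("my-cache-cluster") // placeholder cluster ID
                .withCacheNodeType("cache.m6g.large")
                .withApplyImmediately(true);

        elastiCache.modifyCacheCluster(request);
    }
}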

This seems to have solved the issue, as I haven't encountered the same exception since.
