After migrating to Cloudflare, I'm seeing a number of Redis session errors. Essentially, Cloudflare appears to be maxing out our Redis concurrent connections.
What would be the reason for Cloudflare to max out our Redis concurrent connections?
Is it related to site crawling? If so, can the crawl rate be lowered or slowed so as not to overwhelm the connections?
Or is it because Redis is seeing all traffic as one user (Cloudflare)?
What errors are you getting, and what do they look like?
What does your Redis configuration look like?
Is it open to the public, or only to internal connections?
Are you using a login/passphrase for each Redis connection from your app?
Could it be rate limiting, if you have it enabled? Or the HTTP Keep-Alive header / TCP keepalive settings?
What is the value of the fs.file-max parameter in /etc/sysctl.conf on your server? (See the snippet after these questions for how to check it.)
Are you using NodeJS or something else for your app?
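For reference, here is how you could check and raise that limit (a sketch; the 100000 figure is purely illustrative, tune it for your server):

```
# Check the current system-wide file descriptor limit:
sysctl fs.file-max

# To raise it persistently, add a line like the following to /etc/sysctl.conf:
#   fs.file-max = 100000
# then reload the settings:
sudo sysctl -p
```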
Mostly, it depends on how the Redis client is implemented and on the Redis config file.
On the Redis server side, all commands are executed serially, so a single connection is not a problem for the Redis server.
On the client side, if your framework uses blocking TCP connections, having only a few connections will be a bottleneck, since commands will block on the client while waiting for an idle connection.
But if the client is async, a single connection is fine, since it can be reused across threads.
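To illustrate the difference, here is a minimal Node/TypeScript sketch, assuming the ioredis client (an example choice; your framework may use another): one shared async client is reused across concurrent requests, while the per-request anti-pattern below it is the kind of thing that can exhaust concurrent connections.

```ts
import Redis from "ioredis";

// Shared async client: ioredis pipelines commands over a single TCP
// connection, so concurrent callers reuse it instead of opening new sockets.
const redis = new Redis({ host: "127.0.0.1", port: 6379 });

async function getSession(sessionId: string): Promise<string | null> {
  // Many concurrent callers can await this; commands are queued and
  // multiplexed on the one shared connection.
  return redis.get(`session:${sessionId}`);
}

// Anti-pattern, shown for contrast: a brand-new connection per request.
async function getSessionLeaky(sessionId: string): Promise<string | null> {
  const conn = new Redis(); // opens a fresh TCP connection on every call
  try {
    return await conn.get(`session:${sessionId}`);
  } finally {
    conn.disconnect(); // even with cleanup, the churn can hit the connection limit
  }
}
```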
Additionally, thousands of concurrent users are not a problem for Redis itself, but you should also look at your web service if you are running on a single web server.
As of Redis 2.6, the default limit is 10000 clients, but it can be overridden in redis.conf.
If the configured number is higher than the maximum number of file descriptors the process is allowed to open, Redis lowers the maximum number of clients/connections to what it can realistically handle.
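The directive lives in redis.conf (the 20000 here is just an illustrative value):

```
# redis.conf
maxclients 20000
```

You can also check what the server actually applied at runtime, since it may silently lower the limit when file descriptors are scarce:

```
redis-cli CONFIG GET maxclients
redis-cli INFO clients
```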
I believe this should be related to the Redis config at your origin.
Or are you using Cloudflare Railgun? Could that be what's going on in your case?