Why would I get an SSL error during a DDoS? Does NGINX limit connections?

I was able to successfully block an attack and respond within a second to any request mid-attack. This took a few tries to sort out, however.

Something I noticed was that my NGINX was not coping well and was throwing SSL issues, specifically handshake errors.

Is this due to a connection limit, or is there something more to it, like TLS resource exhaustion?

Could you post a screenshot of this error?

Have you checked your log files at the origin host / server?
May I ask, are the DNS records in the DNS tab of the Cloudflare dashboard for your domain name proxied (orange cloud)?

I couldn’t capture it, but I think it was a 526 error. I keep logging minimal to ensure fast responses.
NGINX is set to no logs unfortunately.

So I was wondering why the handshake error would occur?

Yes proxied 100%.

I know my CPU was 100% on the server and perhaps that error was due to a timeout?
As soon as I was able to slow responses down with the firewall this resolved immediately.
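For reference, a server-side analogue of that firewall slowdown can be sketched with nginx’s own rate limiting. The zone name and limits below are illustrative, not from this thread:

```nginx
# Define a shared zone keyed by client IP: 10 MB of state, 10 requests/sec.
limit_req_zone $binary_remote_addr zone=ddos_guard:10m rate=10r/s;

server {
    location / {
        # Allow short bursts of 20 requests, reject the rest with 429.
        limit_req zone=ddos_guard burst=20 nodelay;
        limit_req_status 429;
    }
}
```

This rejects excess requests cheaply at the edge of nginx, before they reach TLS-expensive upstream work.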

This doesn’t make much sense as logging is done in parallel.

This is generally a bad practice; your case is an obvious example as you can’t know what happened exactly.

Possibly because a connection is dropped before the handshake completes.

NGINX, by default, isn’t tuned for handling many concurrent requests per second. The best advice I can give is to check their documentation and write your configuration according to your needs.
Logging must be done in chunks of data to avoid saturating the I/O and blocking code.
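As a starting point, the main knobs for concurrency look something like this. The values are illustrative only; size them to your hardware and traffic:

```nginx
worker_processes auto;           # one worker per CPU core

events {
    worker_connections 4096;     # the default (512) is easily exhausted in an attack
    multi_accept on;             # accept as many pending connections as possible
}

http {
    keepalive_timeout 15s;       # shorter keep-alive frees slots faster under load
    keepalive_requests 1000;
}
```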

Cloudflare isn’t that good at analyzing traffic in real time or in detail.
I’d advise creating a different reverse proxy using NGINX, Haproxy, or LiteSpeed to filter/analyze traffic in real-time.
You can produce the logs in the reverse proxy and send them to your database of preference. Grafana would be an optimal bet for handling the visualization of the data.
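Shipping the logs off-box can be done natively in nginx via syslog; the collector address below is a placeholder:

```nginx
http {
    # Ship access logs to a remote collector (e.g. one feeding your
    # database and a Grafana dashboard).
    access_log syslog:server=203.0.113.10:514,tag=nginx,severity=info combined;
    error_log  syslog:server=203.0.113.10:514 warn;
}
```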


Look up how to configure Nginx buffered logging - it will allow you to continue logging without impacting disk I/O as much. Though buffered logging will impact the effectiveness of fail2ban-type implementations, which require timely log writes/inspection and may help with DDoS attacks as well.
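Buffered logging is a one-line change; the buffer size and flush interval below are illustrative:

```nginx
http {
    # Buffer up to 64 KB of log lines in memory, flushing at least every 5 s.
    # Larger buffers reduce disk I/O but delay fail2ban-style log inspection.
    access_log /var/log/nginx/access.log combined buffer=64k flush=5s;
}
```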

Depending on the size of layer 7 (application-level) attacks, nginx may need tuning to better handle concurrent traffic loads, as out-of-the-box nginx defaults are really not enough. And if the attack is impacting the php-fpm side, that would be your bottleneck.


I derped: my nginx had a limit of 500 concurrent connections, and this, combined with a small glitch in my keep-alive settings, threw errors. So I think it’s safe to say that connections timed out and my SSL handshakes could not take place.

I had over 2K unique connections in a burst, so that seems to be the case.
