I know the CPU on the server was at 100%, so perhaps that error was due to a timeout?
As soon as I was able to slow responses down with the firewall, this resolved immediately.
This doesn’t make much sense as logging is done in parallel.
This is generally a bad practice; your case is an obvious example, since you can’t know exactly what happened.
Possibly because a connection is dropped before the handshake completes.
NGINX, by default, isn’t tuned for handling many concurrent requests per second. The best advice I can give is to check their documentation and write your configuration according to your needs.
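To give an idea of what that tuning looks like, here’s a minimal sketch; the numbers are assumptions and should be sized to your hardware and traffic:

```nginx
# nginx.conf (main context) -- illustrative values only
worker_processes auto;          # one worker per CPU core
worker_rlimit_nofile 8192;      # raise the file-descriptor limit per worker

events {
    worker_connections 4096;    # stock defaults are far lower and cap concurrency
    multi_accept on;            # accept as many pending connections as possible per wake-up
}
```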
Logging must be done in chunks of data to avoid saturating the I/O and blocking code.
Cloudflare isn’t that good for analyzing traffic in real time or in detail.
I’d advise setting up a separate reverse proxy using NGINX, HAProxy, or LiteSpeed to filter and analyze traffic in real time.
You can produce the logs in the reverse proxy and send them to the database of your choice. Grafana would be a solid bet for visualizing the data.
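As a rough sketch (the log format name and fields are my own picks, not anything official), you could have nginx emit JSON access logs and let a shipper such as Promtail or Filebeat forward them to Loki or Elasticsearch, with Grafana on top for dashboards:

```nginx
# http {} context -- JSON access log most log shippers can ingest as-is
# (escape=json needs nginx 1.11.8+)
log_format json_analytics escape=json
    '{'
        '"time":"$time_iso8601",'
        '"remote_addr":"$remote_addr",'
        '"request":"$request",'
        '"status":"$status",'
        '"request_time":"$request_time",'
        '"upstream_response_time":"$upstream_response_time",'
        '"user_agent":"$http_user_agent"'
    '}';

access_log /var/log/nginx/analytics.json json_analytics;
```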
Look up how to configure Nginx buffered logging - it will let you keep logging without hitting disk I/O as hard. Be aware, though, that buffered logging reduces the effectiveness of fail2ban-type setups, which depend on timely log writes and inspection and can also help against DDoS attacks.
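For reference, buffered access logging is a one-line change; the buffer and flush values below are just examples:

```nginx
# Writes hit disk only when the 64k buffer fills or 5s pass, whichever comes first.
# Trade-off: entries show up late, which is what hurts fail2ban-style tooling.
access_log /var/log/nginx/access.log combined buffer=64k flush=5s;
```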
Depending on the size of the layer 7 (application-level) attacks, nginx may need tuning to better handle concurrent traffic loads - the out-of-the-box nginx defaults really aren’t enough. And if the attack is hitting the php-fpm side, that will be your bottleneck.
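If you want to shield php-fpm, here’s a rough sketch of per-IP request limiting in nginx; the zone name, rate, and burst values are assumptions you’d tune against your real traffic:

```nginx
# http {} context: one 10 MB zone tracking request rate per client IP
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    location ~ \.php$ {
        # Allow short bursts, reject the excess before it ever reaches php-fpm
        limit_req zone=perip burst=20 nodelay;
        limit_req_status 429;

        # ... your usual fastcgi_pass / php-fpm config goes here ...
    }
}
```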
I derped - my nginx had a limit of 500 concurrent connections, and that’s what threw the errors, along with a small glitch in my keep-alive settings. So I think it’s safe to say that connections timed out and my SSL handshakes could not take place.
I had over 2K unique connections in a burst, so that seems to be the case.