Latency of HTTPS through Cloudflare higher than expected

Dear all,
I would like to discuss a high-latency issue my domain is facing.
We see much higher end-to-end latency when calling our HTTPS endpoint (we connect through Cloudflare).

Command: `curl -s -w "Time to connect: %{time_connect} Time to first byte: %{time_starttransfer}\n" -o /dev/null [address] -o /dev/null [address]` (the address is passed twice so that the second request reuses the connection)

HTTP example (using [address] = http://3.137.85.72/):

  • Time to connect: 0.404773 Time to first byte: 0.714566
  • Time to connect: 0.000023 Time to first byte: 0.304737

HTTPS example (using [address] = https://hive.space; hive.space is mapped to 3.12.187.2 through Cloudflare):

  • Time to connect: 0.033559 Time to first byte: 0.995849
  • Time to connect: 0.000036 Time to first byte: 0.688093

Based on the direct connection to the node's IP above, I would expect the latency of the HTTPS endpoint via Cloudflare to be around ~350 ms on the second request (thanks to keep-alive), the same as the HTTP connection, but in reality it is much slower and can exceed 1 s. We noticed that other websites using HTTPS with Cloudflare show significantly lower latency.

Is my expectation correct? What do you think causes this issue, and do you have suggestions on where to look?
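For a finer breakdown, curl's `-w` format also exposes `time_appconnect` (the moment the TLS handshake finishes), which separates handshake cost from server think time. A sketch using the endpoint from this thread (substitute your own URL):

```shell
# Two URLs + two -o's: the second fetch reuses the connection (keep-alive),
# so its DNS/TCP/TLS timings drop to ~0 and only per-request latency remains.
curl -s -o /dev/null -o /dev/null \
  -w 'DNS: %{time_namelookup}  TCP: %{time_connect}  TLS: %{time_appconnect}  TTFB: %{time_starttransfer}\n' \
  https://hive.space https://hive.space
```

If `TLS` minus `TCP` dominates the first line, the handshake is the cost; if `TTFB` stays high even on the second line, the origin (or proxy path) is the cost.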

Currently our backend is hosted on EC2 behind an Apache server with this configuration:

    <VirtualHost *:443>
        ServerName hive.space
        ServerAdmin [email protected]
        DocumentRoot /var/www/html

        ErrorLog ${APACHE_LOG_DIR}/error.log
        CustomLog ${APACHE_LOG_DIR}/access.log combined

        SSLEngine on
        SSLProxyEngine on
        SSLCertificateFile /etc/apache2/cert/tls.crt
        SSLCertificateKeyFile /etc/apache2/cert/tls.key

        ProxyPass "/api/app/pubsub" "ws://localhost:8082/pubsub"
        ProxyPassReverse "/api/app/pubsub" "ws://localhost:8082/pubsub"
        ProxyPass /api/app/ http://localhost:8082/
        ProxyPassReverse /api/app/ http://localhost:8082/
        ProxyPass /api/search/ http://localhost:8100/
        ProxyPassReverse /api/search/ http://localhost:8100/
        ProxyPass / http://localhost:8101/
        ProxyPassReverse / http://localhost:8101/
    </VirtualHost>

    <VirtualHost *:443>
        ServerName test.hive.space
        Redirect 307 / https://hive.space/
    </VirtualHost>

    <VirtualHost *:80>
        Redirect permanent / https://hive.space/
    </VirtualHost>

This test, in my opinion, does not make sense: you want to compare SSL vs. non-SSL, but the non-SSL request gets redirected to SSL anyway, so on this "non-SSL" URL you are again resolving an SSL certificate. Just not on the first connection.

See: http://3.137.85.72 ==(301)==> https://dev.hive.space
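One quick way to verify that redirect chain yourself (the IP is the one from this thread; `-I` fetches headers only, so no body download skews the result):

```shell
# Show the status line and headers; a 301 plus a Location: header
# means the "plain IP" test is really measuring a redirected HTTPS request.
curl -sI http://3.137.85.72/ | head -n 5
```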

Also, if you run such a test, please keep the number of variables as low as possible; you have not done this here.
Both requests fetch an HTML page, and those pages are different and have to be generated dynamically (on the first call), so you are actually measuring your server's performance and not Cloudflare. Cloudflare is the only variable in this test that is fixed and did not change between the test cases.

What I would recommend:

  1. Put the same file "testfile.txt" with identical content on both domains (domain.tld/testfile.txt).
  2. Prevent your plain IP from being redirected to a domain, or use the domain directly.
  3. Prevent changes/redirects to another protocol (HTTP ==> HTTPS).
  4. Prevent redirects in general.
  5. Then run your test again on the static file, as this excludes server performance and the dynamic parts from the test.
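The steps above can be sketched as follows (assuming shell access to the origin; the domains are the thread's, and `testfile.txt` is the file from step 1):

```shell
# On the origin server: create an identical static file for both vhosts
# (the posted config serves DocumentRoot /var/www/html).
echo 'cloudflare latency test' | sudo tee /var/www/html/testfile.txt

# From the client: fetch each URL twice in one curl invocation so the
# second fetch reuses the connection; then compare connect and TTFB times.
for url in https://hive.space/testfile.txt http://dev.hive.space/testfile.txt; do
  curl -s -o /dev/null -o /dev/null \
    -w "$url  connect: %{time_connect}  ttfb: %{time_starttransfer}\n" \
    "$url" "$url"
done
```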

Now run your test on these URLs:

  1. https://hive.space/testfile.txt
  2. http://dev.hive.space/testfile.txt (make sure there is no redirection to HTTPS!!!)

Now compare them.
Result:

Yes, you will notice that HTTPS is slower, as it has to resolve and validate the SSL certificate. This is normal. It happens only once and should not take longer than 50 ms (for me it takes 37 ms, which is awesome!).

Also: to compare the TTFB of non-SSL vs. SSL, you can simply open a new (incognito) tab in Chrome and see how long your SSL certificate takes to resolve. That is exactly the difference between SSL and non-SSL. Simple as that.
See: [Screenshot: Chrome DevTools network timing, SSL portion highlighted]

36.98ms for me. This is really good.

Hi Martin,

Thanks for the reply and the tips. In the end, we decided on a temporary workaround: increasing the keep-alive connection duration for the time being.
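(For reference, that keep-alive tuning maps to these Apache directives; the values below are examples only, not our exact settings:)

```apache
# apache2.conf (or a conf-available snippet) -- example values only
KeepAlive On
MaxKeepAliveRequests 500    # requests allowed per persistent connection
KeepAliveTimeout 60         # seconds an idle connection is kept open
```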

Thanks.

This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.