Latency to proxied DNS entries high since 2024-01-30T07:09:15CET

F*****g sh*t, it did help. Hmm, so Cloudflare's motivation to investigate the issue is probably zero, assuming it isn't even intentional in the first place (expect the evil) :thinking:.

The route(s) changed completely, and immediately:

traceroute to (, 64 hops max, 52 byte packets
1 ( 2.569 ms 3.447 ms 2.991 ms
2 ( 10.488 ms 10.968 ms 11.332 ms
3 ( 14.499 ms ( 13.758 ms ( 14.771 ms
4 ( 22.685 ms 24.881 ms 22.182 ms
5 ( 22.758 ms 26.160 ms 24.951 ms
6 * ( 37.535 ms 15.109 ms
7 ( 18.801 ms ( 13.919 ms ( 15.448 ms
8 ( 16.189 ms 15.275 ms 14.002 ms
Tracing route to []
over a maximum of 30 hops:

  1     1 ms     1 ms     4 ms []
  2     3 ms     2 ms     4 ms  speedport.ip []
  3    12 ms    10 ms    10 ms []
  4    12 ms    11 ms    10 ms  b-eh3-i.B.DE.NET.DTAG.DE []
  5    23 ms    13 ms    10 ms
  6    30 ms    26 ms    27 ms []
  7    30 ms    27 ms    28 ms
  8    28 ms    29 ms    27 ms
  9    30 ms    29 ms    29 ms

Trace complete.
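For anyone wanting to compare such traces numerically rather than by eye, a quick sanity check is to average the three RTT samples on each hop line. A minimal sketch (the hop-line format is assumed from the Windows tracert output above; "<1 ms" is counted as 1 ms, "*" timeouts are skipped):

```python
import re

# Average the RTT samples on one Windows tracert hop line.
# Returns None for hops that only timed out.
def avg_hop_rtt(line: str):
    samples = [int(n) for n in re.findall(r"(\d+)\s*ms", line)]
    return sum(samples) / len(samples) if samples else None

print(avg_hop_rtt("  4    12 ms    11 ms    10 ms"))                      # → 11.0
print(avg_hop_rtt("  2     *        *        *     Request timed out."))  # → None
```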

Is this intentional, to motivate free users to move to the pro plan? And why is there not a single hint in the pro/business plans that routing is, or can be, better?

It’s not a secret, but it isn’t stated on the marketing pages either.
There was a blog post last year showing how Cloudflare manages traffic, and that the priority of the traffic present at each colo is determined by the plan.


Thanks for the info, that is generally understandable. I would still consider this multi-second routing Germany > USA > Germany when a specific ISP is used an unintended quirk. For visitors with other ISPs, nothing seems to have changed significantly, from what I can tell so far. I personally (on another ISP) have/had the issue that sometimes, when connecting to our website or server for the first time in a day, or after some hours, the very first request took a few seconds, but then everything was fine for that session and for follow-up sessions soon after. If this never happens again, then something changed for the better for me as well. Probably negotiating the route took longer at some point and was then cached, or something like that :thinking:.

For me, it’s usually related to time of day. But the worst issue isn’t that the requests are delayed by 1-2 seconds, but that a significant number of packets are just lost in transit. Packet loss is through the roof in the evening.
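Packet loss and those 1-2 second delays may actually be the same symptom: a lost TCP SYN is only retransmitted after the initial retransmission timeout (about 1 s per RFC 6298, doubling on each retry), so even modest loss turns into occasional whole-second stalls. A rough back-of-the-envelope model (assumed idealized behaviour, not a measurement of this path):

```python
# Expected extra connect delay from SYN retransmissions, given a
# per-packet loss probability p, an initial retransmission timeout
# `rto` (~1 s per RFC 6298) that doubles on each retry, and a capped
# number of retries. Toy model: losses are assumed independent.
def expected_syn_delay(p: float, rto: float = 1.0, retries: int = 6) -> float:
    delay, waited, timeout = 0.0, 0.0, rto
    for k in range(1, retries + 1):
        waited += timeout                      # total time waited before attempt k+1
        delay += (p ** k) * (1 - p) * waited   # k losses, then a success
        timeout *= 2                           # exponential backoff
    return delay

print(expected_syn_delay(0.0))   # no loss → no extra delay
print(expected_syn_delay(0.2))   # heavy evening loss → ~0.33 s extra on average
```

So at 20 % loss, roughly one connection in five eats a one-second-plus stall, which matches the "through the roof in the evening" experience.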

I’m from Poland and I’m using T-Mobile Poland ISP (which belongs to Deutsche Telekom) and I too have been experiencing high pings with Cloudflare servers lately.

The address was mentioned above and I actually have a lower ping to it:

Tracing route to over a maximum of 30 hops

  1    <1 ms    <1 ms    <1 ms
  2     *        *        *     Request timed out.
  3    32 ms    32 ms    30 ms
  4     *        *        *     Request timed out.
  5     *        *        *     Request timed out.
  6     *        *        *     Request timed out.
  7     *        *        *     Request timed out.
  8    48 ms    61 ms    58 ms
  9    65 ms    56 ms    52 ms  vie-sb5-i.VIE.AT.NET.DTAG.DE []
 10    48 ms    47 ms    56 ms
 11    58 ms    46 ms    49 ms
 12    86 ms    53 ms    60 ms
 13    48 ms    48 ms    47 ms
Tracing route to over a maximum of 30 hops

  1    <1 ms    <1 ms    <1 ms
  2     *        *        *     Request timed out.
  3    30 ms    27 ms    29 ms
  4     *        *        *     Request timed out.
  5     *        *        *     Request timed out.
  6     *        *        *     Request timed out.
  7     *        *        *     Request timed out.
  8    68 ms    52 ms    55 ms
  9   152 ms   148 ms   151 ms  nyc-sb5-i.NYC.US.NET.DTAG.DE []
 10   145 ms   159 ms   161 ms
 11   151 ms   156 ms   144 ms []
 12   163 ms   160 ms   168 ms
 13   280 ms   150 ms   149 ms
 14   153 ms   149 ms   156 ms

Indeed, it is the very same issue, with the route going over a Deutsche Telekom node in the USA. Is your origin server located in Poland?

Not sure about the chances, but it could make sense to contact your ISP about this issue. I mean, Cloudflare is very widely used, and it seems that every Deutsche Telekom (or subsidiary) customer in Europe who accesses any origin server located in Europe behind the Cloudflare free-plan edge has this issue. That might be quite a lot of cases. Of course, visitors usually expect the website provider, not the ISP, to be at fault when they experience slow page loads, so DTAG might not have much motivation to invest time. But the chances are probably still better than asking Cloudflare for a solution.

Btw, adding some info: we sadly forgot to check the edge servers (IPs) we were offered on the free plan, but after switching to the pro plan, all of us get the same 3 IPv4 and 3 IPv6 addresses from DNS, which differ from the two IPs in the two traceroutes I posted before. But the subnets overlap, so it seems our proxied domains are now served by a different set of Cloudflare edge servers from the same subnets. What I was wondering about is that this change happened within a minute. I literally hit the pro plan button, got a confirmation immediately, told my team in a video chat that I had done it, and they got proper low response times and the different route right away. This was faster than the 5-minute (Auto) TTL that those DNS entries should have :thinking:: Time to Live (TTL) · Cloudflare DNS docs

Proxied records

By default, all proxied records have a TTL of Auto, which is set to 300 seconds.
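One possible explanation for the instant switch, sketched below as a toy resolver-cache model (class and field names, and the 198.51.100.0/24 documentation addresses, are illustrative assumptions): a 300 s TTL only delays clients whose resolver already holds a cached copy of the old answer. A resolver that hasn't queried the name within the last 5 minutes sees the new records on its very first lookup.

```python
# Toy model of a caching resolver: a record fetched at `fetched_at`
# with a 300 s TTL is served from cache until it expires; a change at
# the authoritative server only becomes visible after expiry -- or
# immediately, if nothing was cached in the first place.
class CachedRecord:
    def __init__(self, value, ttl, fetched_at):
        self.value, self.ttl, self.fetched_at = value, ttl, fetched_at

    def fresh(self, now) -> bool:
        return now - self.fetched_at < self.ttl

old = CachedRecord(["198.51.100.1"], ttl=300, fetched_at=0)
print(old.fresh(now=299))  # True: old answer still served from cache
print(old.fresh(now=301))  # False: next query hits the authority and gets the new IPs
```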

I am connected to “free” sites via EWR (Newark, NJ, United States). I reported the problem to T-Mobile Poland; unfortunately, they said it was not their problem and even pounced on me. I work around the problem with my own DNS server on an ad-hoc basis: when some website takes a long time to load, I change its IP to VIE or FRA. T-Mobile has mostly connected me there anyway, so I don’t even know any addresses for WAW (Warsaw).
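In the spirit of that ad-hoc workaround, the selection step can be sketched as: ping a few candidate edge IPs, then pin the fastest one in the local DNS server or hosts file. A minimal sketch of the selection (the 192.0.2.0/24 addresses are documentation-range placeholders, and the RTTs are made-up example measurements, not real Cloudflare edge data):

```python
# Given measured RTTs (in ms) to a few candidate edge IPs,
# pick the lowest-latency one to pin locally.
def fastest_edge(rtts_ms: dict) -> str:
    return min(rtts_ms, key=rtts_ms.get)

measured = {"192.0.2.10": 152.0, "192.0.2.20": 31.0, "192.0.2.30": 48.0}
print(fastest_edge(measured))  # → 192.0.2.20
```

Note that pinning an IP bypasses Cloudflare's own steering entirely, so it needs re-checking whenever routing changes.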

What the heck, was that community members or really T-Mobile Poland staff? The vast majority of the response delay is added by the hop from a DTAG node in Europe to a DTAG node in the USA, and no other provider has this issue, so it is pretty damn clearly an issue with DTAG. Probably not only theirs, and Cloudflare also contributes to the issue, but obviously both ends need to sort it out. If they had just said “we do not care”, okay, but going into defensive mode and blaming the people who report the issue is unprofessional, childish and stupid (when seen as a quality issue for customers). But well, sometimes it’s the largest companies that have the worst customer support. Neither you nor the affected people on my end will change their provider because of this, so why would they care …

I have some good news (at least for me and for now): at the moment the latency seems to be back to normal…

Let’s wait and see how long this lasts.

From my point of view, traffic to Free plan sites still goes through DTAG US/NYC → AS6453 TATA → Cloudflare, while Pro plan sites go through DTAG DE → AS3356 Level3 DUS → Cloudflare.


Seems like they are indeed working out their peering agreements. Peering for websites behind a Free plan seems to be fixed on the network of Magyar Telekom (the Hungarian subsidiary of Deutsche Telekom).

And it’s back.