Simple ping test to aid in 1.1.1.1 hijacking discovery



I’ve used this ping diagnostic test to help determine whether 1.1.1.1 may have been hijacked by an ISP or an upstream network peering partner.

The theory behind the test is that 1.1.1.1 and 1.0.0.1 are answered by the same Cloudflare edge nodes, so the ping results for the two addresses should be very similar, if not identical. I’ve verified this on hosts in Canada, the US and the UK that reach the real 1.1.1.1, and it holds in every case.

While not definitive, a clear discrepancy between the two would raise my suspicion that the node answering on 1.1.1.1 is not the real Cloudflare endpoint.


Results from a host that can reach the real 1.1.1.1

ping -c10 -M do 1.1.1.1 -s 1472 && ping -c10 -M do 1.0.0.1 -s 1472

PING 1.1.1.1 (1.1.1.1) 1472(1500) bytes of data.
1480 bytes from 1.1.1.1: icmp_seq=1 ttl=57 time=5.90 ms
1480 bytes from 1.1.1.1: icmp_seq=2 ttl=57 time=5.96 ms
1480 bytes from 1.1.1.1: icmp_seq=3 ttl=57 time=5.57 ms
1480 bytes from 1.1.1.1: icmp_seq=4 ttl=57 time=6.34 ms
1480 bytes from 1.1.1.1: icmp_seq=5 ttl=57 time=6.16 ms
1480 bytes from 1.1.1.1: icmp_seq=6 ttl=57 time=5.58 ms
1480 bytes from 1.1.1.1: icmp_seq=7 ttl=57 time=5.55 ms
1480 bytes from 1.1.1.1: icmp_seq=8 ttl=57 time=6.65 ms
1480 bytes from 1.1.1.1: icmp_seq=9 ttl=57 time=5.58 ms
1480 bytes from 1.1.1.1: icmp_seq=10 ttl=57 time=5.54 ms

--- 1.1.1.1 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9015ms
rtt min/avg/max/mdev = 5.547/5.886/6.659/0.383 ms

PING 1.0.0.1 (1.0.0.1) 1472(1500) bytes of data.
1480 bytes from 1.0.0.1: icmp_seq=1 ttl=57 time=5.96 ms
1480 bytes from 1.0.0.1: icmp_seq=2 ttl=57 time=6.29 ms
1480 bytes from 1.0.0.1: icmp_seq=3 ttl=57 time=6.01 ms
1480 bytes from 1.0.0.1: icmp_seq=4 ttl=57 time=7.47 ms
1480 bytes from 1.0.0.1: icmp_seq=5 ttl=57 time=6.00 ms
1480 bytes from 1.0.0.1: icmp_seq=6 ttl=57 time=5.95 ms
1480 bytes from 1.0.0.1: icmp_seq=7 ttl=57 time=6.00 ms
1480 bytes from 1.0.0.1: icmp_seq=8 ttl=57 time=8.09 ms
1480 bytes from 1.0.0.1: icmp_seq=9 ttl=57 time=5.99 ms
1480 bytes from 1.0.0.1: icmp_seq=10 ttl=57 time=5.98 ms

--- 1.0.0.1 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9014ms
rtt min/avg/max/mdev = 5.959/6.378/8.099/0.727 ms
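For reference, -M do sets the Don’t Fragment bit and -s 1472 pads the payload so each probe is a full 1500-byte packet, which means the test also exercises path MTU along the way. Comparing two ten-line runs by eye gets tedious, so here is a small helper (a sketch that assumes Linux iputils ping output like the runs shown here) which boils a run down to the reply TTL and the average RTT:

```shell
#!/bin/sh
# Summarize a ping run read on stdin: pull out the reply TTL and the
# average RTT. Field positions assume Linux iputils ping output.
summarize() {
    awk '
        /ttl=/ { sub(/.*ttl=/, ""); sub(/ .*/, ""); ttl = $0 }
        /^rtt/ { split($4, t, "/"); avg = t[2] }
        END    { print "ttl=" ttl " avg=" avg "ms" }
    '
}

# Usage: run both probes and compare the two summaries:
#   ping -c10 -M do -s 1472 1.1.1.1 | summarize
#   ping -c10 -M do -s 1472 1.0.0.1 | summarize
```

On a host reaching the real endpoints, the two summary lines should come out nearly identical.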



Results from a host that could not reach the real 1.1.1.1

ping -c10 -M do 1.1.1.1 -s 1472 && ping -c10 -M do 1.0.0.1 -s 1472

PING 1.1.1.1 (1.1.1.1) 1472(1500) bytes of data.
1480 bytes from 1.1.1.1: icmp_seq=1 ttl=252 time=3.29 ms
1480 bytes from 1.1.1.1: icmp_seq=2 ttl=252 time=3.57 ms
1480 bytes from 1.1.1.1: icmp_seq=3 ttl=252 time=3.56 ms
1480 bytes from 1.1.1.1: icmp_seq=4 ttl=252 time=5.21 ms
1480 bytes from 1.1.1.1: icmp_seq=5 ttl=252 time=3.78 ms
1480 bytes from 1.1.1.1: icmp_seq=6 ttl=252 time=3.25 ms
1480 bytes from 1.1.1.1: icmp_seq=7 ttl=252 time=6.28 ms
1480 bytes from 1.1.1.1: icmp_seq=8 ttl=252 time=3.81 ms
1480 bytes from 1.1.1.1: icmp_seq=9 ttl=252 time=21.1 ms
1480 bytes from 1.1.1.1: icmp_seq=10 ttl=252 time=5.61 ms

--- 1.1.1.1 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9014ms
rtt min/avg/max/mdev = 3.258/5.957/21.164/5.168 ms

PING 1.0.0.1 (1.0.0.1) 1472(1500) bytes of data.
1480 bytes from 1.0.0.1: icmp_seq=1 ttl=57 time=1.27 ms
1480 bytes from 1.0.0.1: icmp_seq=2 ttl=57 time=1.29 ms
1480 bytes from 1.0.0.1: icmp_seq=3 ttl=57 time=1.29 ms
1480 bytes from 1.0.0.1: icmp_seq=4 ttl=57 time=1.28 ms
1480 bytes from 1.0.0.1: icmp_seq=5 ttl=57 time=1.29 ms
1480 bytes from 1.0.0.1: icmp_seq=6 ttl=57 time=1.28 ms
1480 bytes from 1.0.0.1: icmp_seq=7 ttl=57 time=1.27 ms
1480 bytes from 1.0.0.1: icmp_seq=8 ttl=57 time=1.28 ms
1480 bytes from 1.0.0.1: icmp_seq=9 ttl=57 time=1.31 ms
1480 bytes from 1.0.0.1: icmp_seq=10 ttl=57 time=1.27 ms

--- 1.0.0.1 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9013ms
rtt min/avg/max/mdev = 1.277/1.287/1.311/0.046 ms


The TTL was a pretty good tell in the second test above: 1.1.1.1 answered with ttl=252 while 1.0.0.1 answered with ttl=57, which suggests the two replies originated from very different places in the network. That evidence got one service provider’s tech support to take the issue more seriously instead of blaming the problem on Cloudflare.
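If anyone wants to script the TTL comparison, something like the sketch below flags a large TTL gap between the two probes. The 2-hop threshold is an arbitrary choice of mine, and ttl_of again assumes iputils-style ping output:

```shell
#!/bin/sh
# Extract the first reply's ttl=NN value from ping output on stdin.
ttl_of() {
    awk -F'ttl=' '/ttl=/ { split($2, a, " "); print a[1]; exit }'
}

# Compare two reply TTLs; warn when they differ by more than a couple
# of hops, which hints the replies come from different network locations.
compare_ttls() {
    t1=$1; t2=$2
    d=$(( t1 > t2 ? t1 - t2 : t2 - t1 ))
    if [ "$d" -gt 2 ]; then
        echo "SUSPECT: ttl $t1 vs $t2 (gap $d)"
    else
        echo "OK: ttl $t1 vs $t2"
    fi
}

# Usage (live):
#   t1=$(ping -c3 1.1.1.1 | ttl_of); t2=$(ping -c3 1.0.0.1 | ttl_of)
#   compare_ttls "$t1" "$t2"
```

A matching TTL doesn’t prove anything on its own, but a big gap like the 252-vs-57 case above is worth escalating.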