Load Balancing Origin - IPv4 Vs IPv6

So when setting up my origin servers in the load balancer, wouldn’t it be quicker for Cloudflare to connect to my origins via IPv6?

If an origin host has both an IPv4 and an IPv6 address, we default to IPv4. There is no difference in speed. The Happy Eyeballs RFC (which you’re probably thinking of when you ask about speed) is a client-side protocol. When we do our lookup, we don’t introduce a delay on ourselves to favor v6 or v4, so it’s not a factor.
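For anyone curious what the Happy Eyeballs behavior mentioned above actually looks like: here’s a minimal sketch of the RFC 8305 idea, where a *client* staggers its connection attempts so the preferred address family (IPv6) gets a small head start. This is purely illustrative, not Cloudflare’s code; the addresses and the injectable `connect` callable are hypothetical.

```python
import queue
import threading
import time


def happy_eyeballs(candidates, connect, head_start=0.25):
    """Race connection attempts with staggered starts (RFC 8305 sketch).

    `candidates` is an ordered address list (IPv6 first, per the RFC);
    `connect` is any callable that returns a connection object or raises
    OSError. Each later attempt waits `head_start` seconds longer, so the
    preferred family wins unless it is slow or unreachable.
    """
    results = queue.Queue()

    def attempt(addr, delay):
        time.sleep(delay)
        try:
            results.put(("ok", connect(addr)))
        except OSError as exc:
            results.put(("err", exc))

    for i, addr in enumerate(candidates):
        threading.Thread(target=attempt, args=(addr, i * head_start),
                         daemon=True).start()

    errors = []
    for _ in candidates:
        status, value = results.get()
        if status == "ok":
            return value
        errors.append(value)
    raise OSError(f"all attempts failed: {errors}")
```

The point of the sketch is that the staggering happens on the client side; a resolver or proxy doing its own origin lookup (as described above) has no such delay built in.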


…but all other things being equal I hope we change at some point to prefer IPv6 just because.


Thanks for the reply. My website/server/nginx does support IPv6. Wouldn’t it be faster, when Cloudflare gets content from my origin, for it to connect to my server over IPv6?

I’m not quite sure what you mean when you say you default to v4. In the load balancer settings, in each origin where you input the server IP, I’ve put in the IPv6 address. Wouldn’t that force Cloudflare to connect to my servers via IPv6, and would that be faster than IPv4?

If you have an entry for both an IPv4 and an IPv6 address, we will default to IPv4. There is no speed difference when connecting to a site over IPv4 or IPv6. If you’ve entered a v6 address alone, we will use that, but it won’t be any faster or slower than the equivalent v4 address.


So even if IPv6 is faster, IPv4 is still picked, as it’s the default?

At the moment, I’ve noticed IPv4 routing is faster in most cases anyway. But this might change in the future, once network admins focus on optimizing IPv6 routing tables.

Just curious: how do you configure load balancing with IPv4 and IPv6?
I only have a field to enter one IP address. Support told me to buy extra origins, but it seems a bit silly to have to pay more because Cloudflare didn’t implement proper dual-stack A and AAAA records in the load balancer add-on.

How many origins do you have today? I implemented mine using an A and an AAAA record for each and just ‘named’ them server-a and server-b. The load balancer only uses DNS-like functionality. It behaves enough like DNS that we show the parent record in the DNS tab, but under the covers we do enough differently that the interaction with DNS is definitely not one-to-one (and in most cases this is a good thing).
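In plain zone-file terms, the per-origin A + AAAA naming described above might look something like this (hypothetical names and RFC 5737/3849 documentation-prefix addresses, just to illustrate the shape of the setup):

```
; illustrative zone fragment — server-a / server-b are example names
server-a.example.com.  300  IN  A     192.0.2.10
server-a.example.com.  300  IN  AAAA  2001:db8::10
server-b.example.com.  300  IN  A     192.0.2.20
server-b.example.com.  300  IN  AAAA  2001:db8::20
```

Each named origin then resolves over whichever family the connecting side prefers, without needing a separate paid origin slot per address family.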

And I guess I should amend my earlier answer about choosing between IPv4 and IPv6: that applies to regular DNS queries to the origin, whereas load balancing is purely percentage-based round robin today. Load balancing isn’t really about speed; it’s primarily about balancing load. We have additional functionality coming in the load-balancing arena soon which may make that statement less true, but balancing load was the primary focus of the feature.


I have two origins.

Both origins have an A record specified in the load balancer feature.
I can’t specify an additional address (e.g. an AAAA) unless I buy extra origins and add each address as a separate origin (according to the support staff).

I don’t know that it would make a difference unless the IPv4 network where you’re hosting failed. But yes, today when an IP is specified for a host, it’s a single IP, which sort of makes sense since the feature is more about balancing load than anything else.

There’s really no speed difference between IPv4 and IPv6; it’s just an addressing protocol. Some providers do build in a delay when looking up IPv4 addresses, but that’s a non-issue for load balancing: we don’t build in any delay when looking up an origin, and at our edge we advertise both an IPv6 and an IPv4 address for an orange-clouded record regardless of which IP scheme the origin server uses.

The only reason I used the scheme I described above (server-a and server-b) was to weight one server more heavily in my pool, since today we only support equal distribution across hosts in a pool.
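The duplicate-entry workaround above amounts to weighted round robin on top of an equal-distribution pool: list an origin twice and it gets twice the traffic. A minimal sketch of that idea (hypothetical origin names, not Cloudflare’s implementation):

```python
import itertools


def weighted_round_robin(pool):
    """Expand (origin, weight) pairs into a flat equal-share rotation.

    Mimics the duplicate-entry trick: an origin listed `weight` times in
    an equal-distribution pool receives `weight` shares of the traffic.
    """
    expanded = [origin for origin, weight in pool for _ in range(weight)]
    return itertools.cycle(expanded)


# e.g. server-a gets 2/3 of the requests, server-b gets 1/3
rotation = weighted_round_robin([("server-a", 2), ("server-b", 1)])
first_six = [next(rotation) for _ in range(6)]
# first_six == ["server-a", "server-a", "server-b",
#               "server-a", "server-a", "server-b"]
```

The trade-off is granularity: weights are limited to whole-number ratios of pool entries, which is exactly why native weighted balancing is a nicer feature to have.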


Any plans to support weighted DNS load balancing like Dyn or NS1 on your paid plans?

Definitely on the roadmap. Not sure as to exact timing. We did recently release session affinity for LB, so the feature is under active development for sure.


Awesome, thanks for the update!