Proxied LB vs Unproxied LB w/ Argo


#1

I have a question about proxied load balancer performance with and without Argo enabled.

Assumptions (please help confirm these assumptions are all correct):

  • We have 1 pool of servers in the US that serve dynamic REST responses (non-static and not web pages).
  • For a connection to our REST service from Japan, if using proxied (orange cloud) load balancing, the entire REST request will get proxied through the closest CloudFlare Japan PoP.
  • The CloudFlare Japan PoP would end up proxying all requests from the Japan PoP to our servers in the US. Because the content is purely a string of dynamically generated JSON, no caching takes place (a quick way to check this is sketched after this list).
  • If the Japan PoP’s connection to our load balanced servers in the US is ideal, then the end user will end up with a better experience, as the CloudFlare network path to our LB servers in the US is probably a better connection than if the end user in Japan connected directly to our servers in the US.
  • Using Argo, will the Japan PoP find even better/ideal routes to the load balanced servers? If this were true, then the end user experience is further improved, because they have a close-to-last-mile connection with the Japanese PoP and the Japanese PoP’s connection is optimally routed via Argo to the actual load balanced servers in the US. The end user in Japan may feel/think that the server they are connecting to is situated in Japan rather than 6,000 miles away.
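
For reference, a minimal sketch (assuming a hypothetical endpoint URL) of how the no-caching assumption can be spot-checked: Cloudflare reports its cache handling in the CF-Cache-Status response header, and for uncached dynamic responses it is typically absent or something like DYNAMIC/BYPASS rather than HIT.

    # Hypothetical endpoint URL -- replace with your own proxied REST endpoint.
    import requests

    URL = "https://api.example.com/v1/report"

    resp = requests.get(URL, timeout=10)

    # CF-Cache-Status tells you what the edge did with this response:
    # DYNAMIC/BYPASS/MISS mean the JSON came from the origin, HIT means it was cached.
    print("CF-Cache-Status:", resp.headers.get("CF-Cache-Status"))
    print("Cache-Control:  ", resp.headers.get("Cache-Control"))
    print("CF-Ray:         ", resp.headers.get("CF-Ray"))  # suffix is the serving PoP's code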

Since this was a bit of a drawn-out question, I asked it in the form of assumptions I’ve formulated and want confirmed. Please help confirm or correct these assumptions.

Thanks!


#2

The connection will be proxied to the closest POP to the user based on BGP routing from their ISP. This will likely, but not necessarily, be in Japan. For the sake of this discussion let’s simplify and say that it is coming from Japan.
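
If you want to confirm which PoP a given client actually lands on, one quick option is the /cdn-cgi/trace endpoint Cloudflare exposes on proxied hostnames; its colo= line is the IATA-style code of the serving data center. A minimal sketch, assuming a hypothetical hostname:

    # Hypothetical hostname -- substitute your own orange-clouded domain.
    import requests

    trace = requests.get("https://www.example.com/cdn-cgi/trace", timeout=10).text

    # The response is simple key=value lines, e.g. colo=NRT, loc=JP.
    fields = dict(line.split("=", 1) for line in trace.strip().splitlines())
    print("Serving PoP:", fields.get("colo"))
    print("Client country:", fields.get("loc"))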

See this article for best practices on configuring your API to ensure you are getting the behavior you desire:

Maybe? In reality the connection through a Cloudflare POP could be slower than going directly, but we are also providing layer 3/4 DDoS protection, address obfuscation and potentially other services to the application overall, which may make the slight speed tradeoff acceptable. Or it could be that the ECC cert + fast TLS resumption evens this out. To the extent ‘probably’ is the key word in your question above… probably is the right answer as well. 😄
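
A rough way to see how much of that handshake cost the nearby edge absorbs is to time the TCP connect and TLS handshake separately against the proxied hostname and against the origin directly. A minimal sketch, assuming hypothetical hostnames:

    # Hypothetical hostnames -- the proxied hostname and a direct-to-origin hostname.
    import socket, ssl, time

    def connect_timings_ms(host, port=443):
        """Return (tcp_connect_ms, tls_handshake_ms) for a fresh connection."""
        ctx = ssl.create_default_context()
        t0 = time.perf_counter()
        with socket.create_connection((host, port), timeout=10) as sock:
            t_tcp = time.perf_counter()
            with ctx.wrap_socket(sock, server_hostname=host):
                t_tls = time.perf_counter()
        return (t_tcp - t0) * 1000, (t_tls - t_tcp) * 1000

    for host in ("www.example.com", "origin.example.com"):
        tcp_ms, tls_ms = connect_timings_ms(host)
        print(f"{host}: TCP {tcp_ms:.0f} ms, TLS handshake {tls_ms:.0f} ms")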

Quoting daniel.huang (post #1):

  • Using Argo, will the Japan PoP find even better/ideal routes to the load balanced servers? If this were true, then the end user experience is further improved, because they have a close-to-last-mile connection with the Japanese PoP and the Japanese PoP’s connection is optimally routed via Argo to the actual load balanced servers in the US. The end user in Japan may feel/think that the server they are connecting to is situated in Japan rather than 6,000 miles away.

It will try to find a better route. If there is one, the majority of traffic would then use this better route. Performance would improve by whatever % the better route might provide. That could be a significant improvement, or it might not be. On average we have seen some great numbers for customers, but from a given PoP there may be little or no performance improvement… and this could change over time as internet congestion/failures occur or are resolved. Argo is not faster than the speed of light, however, so the time it takes for a data packet to make a 12,000-mile round trip can only be improved so much. 12,000 miles adds ~60 ms to a round trip request, plus whatever other latency exists on the network (the latter may be minimized by Argo if it finds a faster route).
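
For reference, that propagation floor can be worked out directly; a back-of-the-envelope sketch (the ~60 ms figure is roughly the vacuum speed-of-light limit, and real fiber at about two-thirds of c puts the floor closer to ~95 ms before any routing or queueing overhead):

    # Back-of-the-envelope latency floor for a 12,000-mile round trip.
    ROUND_TRIP_MILES = 12_000
    KM_PER_MILE = 1.609344
    C_VACUUM_KM_S = 299_792                 # speed of light in vacuum
    C_FIBER_KM_S = C_VACUUM_KM_S * 2 / 3    # light travels ~1/3 slower in glass

    distance_km = ROUND_TRIP_MILES * KM_PER_MILE   # ~19,300 km

    print(f"Vacuum floor: {distance_km / C_VACUUM_KM_S * 1000:.0f} ms")  # ~64 ms
    print(f"Fiber floor:  {distance_km / C_FIBER_KM_S * 1000:.0f} ms")   # ~97 ms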

Quoting daniel.huang (post #1):

  Since this was a bit of a drawn-out question, I asked it in the form of assumptions I’ve formulated and want confirmed. Please help confirm or correct these assumptions.

Did that help?


#3

Yes, very helpful. We’ve started doing side-by-side internal tests (w/out Argo for now).

Our findings so far:

  • Without Cloudflare LB, the fastest we get is ~160 ms to first byte from our server in the US to Japan. The latency jumps around quite a bit, though, especially when accessing from a mobile network.
  • With Cloudflare LB, at its fastest, we see about 200 ms to first byte from the US to Japan. However, the latency stays rock solid at 200 ms and seems very predictable. (A minimal way to reproduce this kind of measurement is sketched below.)
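
A minimal sketch of this kind of time-to-first-byte comparison, assuming hypothetical URLs for the proxied and direct paths; requests’ r.elapsed covers the span from sending the request until the response headers are parsed, which is close enough to TTFB for a rough comparison:

    # Hypothetical URLs -- the orange-clouded hostname and a direct-to-origin one.
    import statistics
    import requests

    URLS = {
        "via Cloudflare LB": "https://api.example.com/v1/ping",
        "direct to origin":  "https://origin.example.com/v1/ping",
    }

    for label, url in URLS.items():
        samples = []
        for _ in range(20):
            r = requests.get(url, timeout=10)
            # r.elapsed: request sent -> response headers parsed (~TTFB).
            samples.append(r.elapsed.total_seconds() * 1000)
        print(f"{label}: min {min(samples):.0f} ms, "
              f"median {statistics.median(samples):.0f} ms, "
              f"max {max(samples):.0f} ms")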

We’ll give Argo a try to see if it can bring the latency down and keep it even more stable.