I am in AU, and as you can see, the storage zone they use is in Singapore. Based on download speeds and a ~300ms response time to my location, they're definitely not in Australia, which is understandable given the cost of bandwidth here.
But I don't believe the 'globally distributed object storage' claim holds up. And if it doesn't, Wasabi is a better solution, not only on storage costs but also because it has no retrieval or bandwidth charges.
Brazil & India are getting terrible results. I used a VPN to Brazil, but the storage uses the USA, so my testing is right: there are three main server locations. Cloudflare makes you think it will be distributed to 100+ PoPs. It won't.
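For reference, the inference I'm making is simple: probe the same bucket from several vantage points and treat the lowest round-trip time as the likely storage region. A minimal sketch of that logic (the RTT numbers below are illustrative, not my actual measurements):

```javascript
// Pick the vantage point with the lowest round-trip time; that's the
// best guess for where the bucket actually lives. In practice you'd
// fill rttsMs by timing a HEAD request to the bucket from each probe.
function likelyRegion(rttsMs) {
  return Object.entries(rttsMs).reduce((best, cur) =>
    cur[1] < best[1] ? cur : best
  )[0];
}

// Illustrative numbers only (roughly matching what I saw from AU).
const samples = { Sydney: 300, Singapore: 15, Frankfurt: 180 };
console.log(likelyRegion(samples)); // Singapore
```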
It’s already been said in the first reply to you - you’re benchmarking an open beta product which has been open for less than 70 days. Performance is the primary focus of the R2 team for where they are on the roadmap at the moment, but that’s still a work in progress.
If you’re not happy with beta performance, don’t use it until it’s in general availability?
It's obvious that it will be limited to zones where 100Gbps+ links are cheap, i.e. Singapore, the EU, and the USA, because egress is unlimited. Which means regions like India, Korea, Australia, and Brazil are out.
I don’t think the performance will increase that much.
The weird thing is that egress is unlimited and free, but only from 3 regions (in GA), and then the CDN doesn't allow video, large files, etc.
We have additional low-level network performance optimization work happening in the next quarter & we hope to cut that worst case by another 25-50% (not much of a difference for most people since access I think tends to be localized). Public buckets will have faster & more stable TTFB times than the S3 endpoint even at launch. Long tail access will drop dramatically by the end of the year & we’re hoping to feed this through to S3 as well. Long distance download speeds should be increasing in the next few weeks/month as the TCP stack is tuned on the underlying storage nodes.
As you can tell, performance is a primary focus for this quarter. Availability is a parallel effort with the bulk of the gains probably H1 2023 but lots of interim gains along the way.
You clearly have no insight into what's on the roadmap for R2, so why do you try to be authoritative about judging the performance of an open beta product?
Files aren't delivered via the CDN. The Worker sets the read/write policy; it doesn't actually cache anything. You can't cache using a Worker; you need a custom domain to use the main CDN, which doesn't allow large files or videos.
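To illustrate what I mean by "the Worker sets the policy": a Worker in front of R2 can decide which reads are allowed, but it isn't a cache. A rough sketch using the shape of the Workers R2 binding API (the BUCKET binding name, the private/ prefix rule, and the in-memory mock are my own stand-ins):

```javascript
// The Worker only enforces read policy; every allowed GET still goes
// to the bucket. Nothing here is cached at the edge.
async function handleGet(path, env) {
  const key = path.replace(/^\//, "");
  if (key.startsWith("private/")) {
    return { status: 403, body: "Forbidden" }; // policy decision
  }
  const object = await env.BUCKET.get(key);    // R2 binding lookup
  if (object === null) {
    return { status: 404, body: "Not found" };
  }
  return { status: 200, body: object.body };   // served, not cached
}

// Stand-in for an R2 binding so the sketch runs outside Workers;
// a real Worker would return Response objects instead of plain ones.
const env = {
  BUCKET: {
    get: async (key) => (key === "video.mp4" ? { body: "<bytes>" } : null),
  },
};

handleGet("/video.mp4", env).then((r) => console.log(r.status)); // 200
```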
It's not a good replacement for S3, as most of the compute and actual processing will be done in a container/service/cloud environment such as AWS, GCP, or Azure, where data egress is not going to be free.
Cloudflare does a good job of trashing AWS over egress costs and other performance metrics, but in the end, actually doing anything with the data in the bucket will incur egress costs and high latency, whereas AWS EC2 to S3 is same-zone.
I dunno, feels like I am doing more testing than you, lol. I'm probably the only person to have shown that there are only 3 data centres, not the 100s they've been claiming there will be.
My understanding is that R2 is vital for improving several Cloudflare products.
As an example, the cache in each PoP location is small and expires very fast.
They will use R2 as a secondary cache where images and other static data can be kept
for a very long time, close to the PoPs.
That way, uncached requests don't need to go to the origin for a static file but can be
fetched from the R2 cache.
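The lookup order described above could look roughly like this (all names and the in-memory stand-ins are hypothetical; this sketches the flow, not Cloudflare's actual implementation):

```javascript
// Tiered read path: small, fast PoP cache -> long-lived R2 secondary
// cache -> origin, repopulating the faster tiers on the way back.
async function tieredGet(key, popCache, r2, origin) {
  if (popCache.has(key)) {
    return { source: "pop", value: popCache.get(key) };
  }
  let value = await r2.get(key);
  if (value !== null) {
    popCache.set(key, value);        // warm the small PoP cache
    return { source: "r2", value };
  }
  value = await origin.fetch(key);   // only now pay the origin round trip
  await r2.put(key, value);          // keep a long-lived copy near the PoPs
  popCache.set(key, value);
  return { source: "origin", value };
}

// In-memory stand-ins for the three tiers.
const popCache = new Map();
const r2 = {
  store: new Map(),
  get: async (k) => (r2.store.has(k) ? r2.store.get(k) : null),
  put: async (k, v) => r2.store.set(k, v),
};
const origin = { fetch: async (k) => `contents of ${k}` };

tieredGet("logo.png", popCache, r2, origin).then((r) => console.log(r.source)); // origin
```

On a second request for the same key, the PoP cache answers; if the PoP copy has expired but the R2 copy hasn't, R2 answers without touching the origin.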
I think they're working this out at the moment, as the round-trip latency is still too high.
Also, it looks like R2 will be used for upcoming storage solutions like the SQL database D2.
This distributed storage is not easy to do, as a lot of traffic is needed to keep everything in sync.
However, it presents a big opportunity when solved right, as a lot of people are looking
for low-latency geo-distributed storage, and if Cloudflare does it right, they will be the global winner.