Consistent Miss Count - Am I doing something wrong?

In the analytics console, with a MISS filter applied, I am seeing each of these killlistrow misses; there are no query parameters on the requests. What strikes me as odd is the consistency of the MISS numbers for the killlistrow endpoint. Why is this happening, and what can I do to bring the MISS number down?

I even have it set up so that the clients spread their requests to that endpoint over ~30 seconds after the notification.
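Roughly, each client just waits a random 0–30 seconds after the notification before fetching, something like this (the endpoint URL here is a placeholder, not my exact code):

```ts
// Rough sketch of the client-side jitter: a random 0-30 s delay after the
// notification, then a single fetch of the endpoint. URL is a placeholder.
const jitterMs = Math.random() * 30_000;
setTimeout(() => {
  fetch("https://example.com/cache/1hour/killlistrow").catch(console.error);
}, jitterMs);
```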

The /cache/1hour… path has the following Page Rule set up:
Cache Level: Cache everything
Edge Cache TTL: 1 hour
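For completeness, the same rule could also be created through the Page Rules API rather than the dashboard; a rough sketch, with the zone ID, API token, and URL pattern as placeholders:

```ts
// Sketch only: creating an equivalent Page Rule via the Cloudflare v4 API.
// ZONE_ID, API_TOKEN, and the URL pattern are placeholders.
const ZONE_ID = "your-zone-id";
const API_TOKEN = "your-api-token";

const resp = await fetch(
  `https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/pagerules`,
  {
    method: "POST",
    headers: {
      Authorization: `Bearer ${API_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      targets: [
        {
          target: "url",
          constraint: { operator: "matches", value: "example.com/cache/1hour*" },
        },
      ],
      actions: [
        { id: "cache_level", value: "cache_everything" },
        { id: "edge_cache_ttl", value: 3600 }, // one hour, in seconds
      ],
      status: "active",
    }),
  }
);
console.log(await resp.json());
```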

Have you tried a HIT filter to see what that’s like?

The first visit through each datacenter will always be a MISS. You’ve set the TTL to one hour, but if you’re using the Cache Analytics defaults, that’s a 24-hour view. How does it look if you switch to a 30-minute view?

The 30 minute view is essentially the same:

If I select just one endpoint, this is what I see:

That looks like they’re all happening at the exact same time in that hour. Or maybe, since it’s all within a 30-second window, that’s just how the graph will look.

And it’s better than a 50% cache hit ratio. Not horrible. Where are these clients located? Same geographic area?

Roughly USA, Europe, and Russia

Each edge node has its own cache unless you’re using Argo tiered caching. Considering this is a once-an-hour run with that TTL, and your clients are pretty geographically widespread, this isn’t abnormal. You’re definitely getting a lot of HITs, and that might be as good as it gets, considering the infrastructure.

Well then, that’s disappointing considering I have hundreds of these, sometimes thousands, each hour.

I’ll look into Argo and see what it can do for me. Thanks @sdayman

What’s the main concern? Is it load speed, or the hits to your origin?

Every MISS hits the origin, yes. It isn’t so much that the origin gets hit, but that there are so many MISSes. I do have caching set up within nginx that helps, so nginx really only passes a request through to the upstream once, and then serves each remaining request (within the hour) from its cache.
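The nginx side is essentially a standard proxy cache with a one-hour validity; a rough sketch of that kind of setup (the zone name, cache path, and upstream address are placeholders, not my actual config):

```nginx
# Sketch only: goes inside the http {} context. Zone name, cache path, and
# upstream address are placeholders.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=onehour:10m inactive=1h;

server {
    listen 80;

    location /cache/1hour/ {
        proxy_pass        http://127.0.0.1:8080;
        proxy_cache       onehour;
        proxy_cache_valid 200 1h;   # serve cached 200 responses for one hour
        proxy_cache_lock  on;       # collapse concurrent misses into one upstream request
        add_header        X-Proxy-Cache $upstream_cache_status;
    }
}
```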

Server caching certainly helps, as it’s the single point of entry regardless of visitor location. At least it appears that Cloudflare is cutting those hits in half.

Other than Argo’s tiered caching, I can’t think of a way to further offload hits to your server, short of using Workers KV, but that takes a fair amount of coding and could get expensive with that much traffic.
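To give a rough idea of what that Workers KV approach would involve (the namespace binding, key scheme, and one-hour TTL here are only illustrative, not a drop-in solution):

```ts
// Sketch of a Worker that answers from KV when possible and only goes to the
// origin on a KV miss. Assumes a KV namespace bound as KILLLIST_KV; all names
// and the 1h TTL are illustrative.
export default {
  async fetch(request: Request, env: { KILLLIST_KV: KVNamespace }): Promise<Response> {
    const key = new URL(request.url).pathname;

    // Serve from KV if a copy was stored within the last hour.
    const cached = await env.KILLLIST_KV.get(key);
    if (cached !== null) {
      return new Response(cached, { headers: { "X-Worker-Cache": "HIT" } });
    }

    // Otherwise fetch from the origin once and store the body with a 1h expiration.
    const originResponse = await fetch(request);
    const body = await originResponse.text();
    await env.KILLLIST_KV.put(key, body, { expirationTtl: 3600 });

    return new Response(body, {
      headers: {
        "X-Worker-Cache": "MISS",
        "Content-Type": originResponse.headers.get("Content-Type") ?? "application/json",
      },
    });
  },
};
```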

This topic was automatically closed after 30 days. New replies are no longer allowed.