Why is telex.blog so fast, considering I've read in a lot of places that the read speed is around 250-300ms and a write takes about 0.5s to propagate globally?
Is telex.blog the norm, or is it the anomaly?
However, thebestgoats.com seemed slow.
The cold read speed of telex that I've tested is very fast; it's using pure JS, I think.
But thebestgoats.com is using wasm-bind for its KV access ← is this the reason for the slowdown?
I'm trying to understand this part, as I need to write the app logic around what I can expect from the KV store. Please help. Thanks.
Anything involving KV adds about 1 second for writes and 200ms for reads in most of our own tests; only when a KV value is cached or requested often enough to be moved to an edge location will you see speeds under 50ms.
I was also under that impression when I started using KV, but AFAIK values do not propagate globally.
So here goes my layman's approximation. When you write to KV from a Worker, the write happens in only a single data center, or point of presence (PoP). The value is copied to another PoP only when a Worker there reads that key. So the first read at a PoP is slower, and subsequent reads from that particular PoP will be faster.
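A minimal sketch of what that looks like from a Worker, assuming a module-syntax Worker with a KV namespace bound as `MY_KV` and a key `greeting` (both names are my own, not from the thread):

```js
// Sketch: the first get() of a key at a given PoP is a "cold" read; repeating
// the request against the same PoP is a "hot" read served from that PoP's
// local copy and is much faster.
export default {
  async fetch(request, env) {
    const start = Date.now();

    let value = await env.MY_KV.get("greeting"); // defaults to type "text"
    if (value === null) {
      // Nothing stored yet: write it. The put() is acknowledged here; other
      // PoPs only see the new value after a Worker there reads the key.
      value = "hello";
      await env.MY_KV.put("greeting", value);
    }

    const elapsed = Date.now() - start;
    return new Response(`${value} (KV round trip: ${elapsed}ms)`);
  },
};
```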
“Cold” vs “hot” performance. Cold reads are much slower; how slow depends on where the request is coming from.
How big the value you're storing is. Smaller values are faster.
Which kind of value you're reading it back as. From fastest to slowest: "stream", "arrayBuffer", "text", and "json" (see the sketch after this list).
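For illustration, a hedged sketch of that last point, again assuming a KV binding named `MY_KV` and a key `large-asset` (both hypothetical); the less post-processing `get()` has to do before returning, the faster it is:

```js
// Sketch: the optional `type` controls how get() hands the value back.
// "stream" returns a ReadableStream with no buffering (fastest),
// "arrayBuffer" and "text" buffer the whole value, and "json" buffers
// and parses it (slowest).
export default {
  async fetch(request, env) {
    const body = await env.MY_KV.get("large-asset", { type: "stream" });
    if (body === null) {
      return new Response("not found", { status: 404 });
    }
    // Pass the stream straight through to the client without buffering it.
    return new Response(body);
  },
};
```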
These numbers don’t seem right to me. Cold reads may take that long, but hot reads should be much, much faster. Also, propagation does not take 0.5 seconds; it may take up to 60 seconds, depending on a bunch of details.
It’s entirely pull based, not push based. New values will eventually make it to the edge if you’ve requested them, but if you don’t request a key, it doesn’t make it to the edge until you ask for it. Hence cold reads being slower than hot ones.
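As a rough sketch of how that pull-based behaviour shows up in code (the `MY_KV` binding, the `settings` key, and the TTL value are my own assumptions): a longer `cacheTtl` keeps a key "hot" at the edge longer, at the cost of seeing a write made elsewhere later.

```js
// Sketch: cacheTtl (minimum 60 seconds) controls how long a pulled value is
// kept at the responding PoP. A longer TTL means more hot reads, but also a
// longer window in which a write made elsewhere won't be visible here.
export default {
  async fetch(request, env) {
    const settings = await env.MY_KV.get("settings", {
      type: "json",
      cacheTtl: 300, // keep this key warm at this PoP for 5 minutes
    });
    return new Response(JSON.stringify(settings ?? {}), {
      headers: { "content-type": "application/json" },
    });
  },
};
```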