Just kinda looking at Workers and thinking of getting started… can someone shed some light on a few initial questions?
Let’s say I want to structure things more like a traditional server with routing as opposed to one-project-per-endpoint… what exactly are the limits I need to be concerned about in this scenario? Specifically, is memory usage a problem?
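To illustrate what I mean by "traditional routing" in a single project: one entry point that dispatches on method + path, rather than a separate deployment per endpoint. This is just a toy sketch, the route table and handlers are made up:

```javascript
// Toy dispatch table: "METHOD /path" -> handler.
// (Hypothetical routes, purely to show the shape I have in mind.)
const routes = {
  'GET /users': () => ({ status: 200, body: 'user list' }),
  'GET /health': () => ({ status: 200, body: 'ok' }),
};

// Single entry point: look up the handler, fall back to 404.
function handle(method, path) {
  const handler = routes[`${method} ${path}`];
  if (!handler) return { status: 404, body: 'not found' };
  return handler();
}
```

So everything for the app would live behind one `handle`-style entry point, which is why I'm wondering about per-script limits like memory.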
Kinda building on the above: let’s say I want to use a relational DB like Google Cloud SQL… is there a way I can take advantage of connection pooling somehow? Are there articles on how best to set this up?
Here’s the popular node API that I’d like to use: https://github.com/mysqljs/mysql#pooling-connections
If this is not possible or beneficial, what is the recommended approach?
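To be concrete about what I mean by "pooling", independent of the mysqljs implementation: reuse a fixed set of long-lived connections instead of opening a fresh one per request. Here's a toy sketch (all names made up, not real Workers or mysqljs code):

```javascript
// Minimal illustrative pool: hand out idle connections, create new ones
// up to a limit, and take released connections back for reuse.
class Pool {
  constructor(createConn, limit) {
    this.createConn = createConn; // factory for new connections
    this.limit = limit;           // max connections ever created
    this.idle = [];               // released connections awaiting reuse
    this.total = 0;               // connections created so far
  }
  acquire() {
    if (this.idle.length > 0) return this.idle.pop(); // reuse first
    if (this.total >= this.limit) throw new Error('pool exhausted');
    this.total += 1;
    return this.createConn();
  }
  release(conn) {
    this.idle.push(conn);
  }
}
```

In Node this is what `mysql.createPool` handles for you; my question is really whether a Worker can hold this kind of long-lived state (and the underlying TCP sockets) between requests at all.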
Benefit vs. Server
Assuming the above: does the latency of going from the Workers at the edge to the db and back ruin the benefit I get from distributing the Workers in the first place?
In other words, there are two scenarios to compare:
- Client <-> cloudflare (edge/server) <-> google (db)
- Client <-> google (server) <-> google (db)
On the one hand, I’d imagine route 1 is faster, since the client hits a nearby edge and Cloudflare presumably has better connectivity to Google than the client does. On the other hand, maybe it’s not faster in the end, since with route 2 the server/db traffic all stays in geographic proximity?
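The trade-off I'm trying to reason about can be framed with a toy model. All the numbers below are completely made up, just to show how the answer flips depending on how many db round trips a request needs:

```javascript
// Toy per-request latency model (milliseconds, entirely hypothetical):
// total = client<->server RTT + (db round trips x server<->db RTT).
function totalMs(clientToServer, serverToDb, dbRoundTrips) {
  return clientToServer + dbRoundTrips * serverToDb;
}

// Route 1: client<->edge is short (15ms), but edge<->db is long (60ms).
const route1 = totalMs(15, 60, 3); // 15 + 3*60 = 195

// Route 2: client<->server is long (80ms), server<->db is short (2ms).
const route2 = totalMs(80, 2, 3); // 80 + 3*2 = 86
```

With these made-up numbers, route 2 wins whenever a request needs several db round trips, while route 1 would win for requests that hit the db once or not at all. Is that the right way to think about it?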