Is there any chance we could add a feature to this service? Basically, the proxy would add a protocol layer (call it the proxy batching protocol) and forward batched TCP packets (for the sockets coming into the proxy) to the destination via a single TCP socket, or via something like UDT (https://en.wikipedia.org/wiki/UDP-based_Data_Transfer_Protocol).
This would be a nice perf optimization, in that user/kernel context-switch overhead could be dramatically reduced and controlled (i.e., more data throughput per context switch). Given that context-switch overhead has recently gotten much worse due to the Spectre and Meltdown patches, I'm sure this sort of functionality would be very welcome in many stacks. Customers could dramatically decrease their VM instance counts as well, rather than increasing them to absorb the extra CPU overhead from the latest security patches.
With respect to UDT, recvmmsg() and sendmmsg() would be the magic calls on Linux; on Windows, RIO (Registered I/O) does the same thing. Either way, the point is to send/receive many packets per context switch. DTLS could possibly be used to encrypt the datagrams.
BTW, doing the same thing for the WebSocket proxy would be useful as well.
Talk about a massive value add!