I'm looking for a non-blocking WebSocket and TLS library (non-blocking in the readiness sense, not async) for use on an event loop (a mix of epoll and io_uring). I definitely need TLS, but I can live with implementing my own WebSocket code (I've done it before).
What gets me to this point is that there are certain latency penalties in async Rust that are a little too expensive. I'm trying the current-thread executor from tokio, but it doesn't appear to make much of a difference.
Some of the issues I'm having with the current code:
- await has a non-negligible cost to pass through, and most of the time I don't want to wait for the send. I have more computation to do before I'm willing to give up the thread.
- However, I want to attempt the send now, not wait to be scheduled, because most of the time it will succeed immediately and then I can continue with what I was doing. This is how it works in the C++ version.
- I cannot resize buffers (kernel or userspace). I haven't seen any library that gives the ability to resize its network buffers, so maybe I have to hack one up?
On the buffer-resize issue, there are also problems with socket options. Passing in a pre-configured socket doesn't work for accepted sockets, and some sockopts aren't inherited from the listening socket, so you need to dig the descriptor out and set them after accept.
Maybe it's the always-async nature of the system? I often want to make a quick attempt first, but lower-priority I/O like admin messages will always just be scheduled for later.
And sends are different from receives. You need the data from the recv in order to process it, but with a send you are just looking for the OS to ack it back to you, and you can retry later without it affecting much.
It seems everything has moved to async. Readiness-based loops no longer seem to be supported in the Rust ecosystem.
The tokio-tungstenite crate internally works by using the non-async tungstenite crate with the socket configured to be non-blocking. It should be possible for you to do the same thing.
I'm not familiar with the internals of the crate, so I don't know if it satisfies all of the requirements you listed, but it is my best bet.
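For what it's worth, a sketch of driving tungstenite's synchronous API over a non-blocking socket might look like the following. This assumes tungstenite's server-side `accept`, `read`, and `Error::Io` as of recent versions (older versions call it `read_message`); it is untested scaffolding, not a verified implementation.

```rust
use std::io;
use std::net::TcpListener;

use tungstenite::{accept, Error, Message};

fn main() -> io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:9001")?;
    let (stream, _) = listener.accept()?;

    // Blocking handshake for brevity; a real event loop would drive
    // the MidHandshake state that accept() returns on a non-blocking
    // socket until it completes.
    let mut ws = accept(stream).expect("handshake");
    ws.get_ref().set_nonblocking(true)?;

    loop {
        match ws.read() {
            Ok(Message::Text(t)) => println!("got: {t}"),
            Ok(_) => {}
            // Not ready yet: return to the epoll loop instead of parking.
            Err(Error::Io(e)) if e.kind() == io::ErrorKind::WouldBlock => break,
            Err(e) => panic!("{e}"),
        }
    }
    Ok(())
}
```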
This appears to no longer be true. Tungstenite is async under the hood (it seems to me), but rustls does appear to expose a sync side.
The WS lib, though, is in need of some heavy love: the choice of TLS or no TLS is done with cfg, so you would need to compile it a second time if you needed both encrypted and unencrypted connections; the TCP read buffers are a compile-time constant of 4K; etc.
So no ws for non-async
Where do you see that? Grepping through its source code for "async", "await", "future" and "poll" yields no results, except for the following sentence in the readme:
It allows for both synchronous (like TcpStream) and asynchronous usage and is easy to integrate into any third-party event loops including MIO.
Here I note that mio is a thin-ish cross-platform wrapper around epoll and friends. It is not async.
As far as I can tell, enabling the tls feature flags does not remove support for unencrypted websockets. See e.g. the
(This is also the convention for Rust feature flags: enabling a feature flag should never break code that compiles without it.)
This appears to be correct.
I find these issues interesting: the async interface does support synchronous completion reporting.
Perhaps OP is on Windows, which apparently has overhead due to differing readiness vs completion semantics? If so, the overhead would be coming from tokio, not from async itself.
Thanks. I misunderstood you and was looking at the wrong crate to see if it was async.
The statements guarded by the cfgs, along with the instructions to pick one, don't make sense to me then, but the cfgs themselves seem to imply both are fine. Not sure; I'll have to look further. The additive rule seems to be broken often for things like runtimes anyway.
Like I said, the WebSocket layer doesn't worry me; I can redo that in a day. It's the lack of a non-blocking SSL layer I'm more worried about. And the inflexibility of some of these libs with regard to higher-performance needs (e.g., TCP buffer sizes) is also a concern. There used to be one, but some of the warnings about a bifurcated networking stack seem to be coming true.
Regarding SSL I have a similar response. The tokio TLS crates internally depend on non-async TLS crates that support non-blocking use. For example, there's the native-tls crate.
I have never used it myself, so I don't know how difficult it would be to use directly, but the fact that tokio-native-tls uses it is proof that it can be used in a non-blocking manner.
Runtime choice is the primary exception where people don't follow the convention.
Rustls is neither sync nor async in itself. You push encrypted data into it on one side and get decrypted data out on the other, and vice versa. You can ask it whether it needs to read from or write to the socket. It has some convenience functions to wire it up to std::io::Read/std::io::Write, but if the underlying socket is set to non-blocking, rustls won't block. It is still your responsibility to poll in that case, though, unless you use a crate to glue it to a specific async runtime like tokio.
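A sketch of what that pumping loop looks like, assuming rustls's `wants_read`/`wants_write`, `read_tls`, `process_new_packets`, and `write_tls` names (roughly the 0.21-era API); this is an untested illustration of the pattern, not a drop-in implementation:

```rust
use std::io::{self, Read};

// `sock` is a TcpStream already set to non-blocking mode.
fn pump(conn: &mut rustls::ClientConnection, sock: &mut std::net::TcpStream) -> io::Result<()> {
    // Drain outgoing TLS records first; WouldBlock just means "retry
    // when the event loop says the socket is writable".
    while conn.wants_write() {
        match conn.write_tls(sock) {
            Ok(_) => {}
            Err(e) if e.kind() == io::ErrorKind::WouldBlock => break,
            Err(e) => return Err(e),
        }
    }
    // Feed incoming ciphertext into the state machine, then decrypt.
    if conn.wants_read() {
        match conn.read_tls(sock) {
            Ok(0) => return Err(io::ErrorKind::UnexpectedEof.into()),
            Ok(_) => {
                conn.process_new_packets()
                    .map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e))?;
                // Any plaintext is now available from conn.reader().
                let mut buf = [0u8; 4096];
                if let Ok(n) = conn.reader().read(&mut buf) {
                    println!("{n} plaintext bytes");
                }
            }
            Err(e) if e.kind() == io::ErrorKind::WouldBlock => {}
            Err(e) => return Err(e),
        }
    }
    Ok(())
}
```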
Hopefully it fixes all the craziness that OpenSSL has (like having to pass in the same pointer on a partial write, a dozen ways to init, super wonky handling of the connection depending on whether the BIO was marked blocking or not; anybody who's had to deal with it could keep going).
I did a non-blocking OpenSSL client, but it wasn't simple. I first tried the BIO interfaces, since I already had shared network socket and buffer objects for zero copy across the protocol stack, but I couldn't get it to work the way I wanted (and there were some rules about the user pointer I still don't understand). So I punted and used an intermediate memory BIO from OpenSSL and handled the socket myself.
It was a fucking mess of an API.
I always hope modern implementations don't just blindly follow OpenSSL's API. It's an ugly mix of ifdefs, abstraction violations, slavish attachment to the current API structure, and other self-inflicted wounds. The crypto lib is useful; the SSL side is a dead end (I hope).
Unless it just slept for no reason, I'm not sure how it would even do that.
The two pains were figuring out what the engine wanted without knowing the protocol too specifically (does that WANT_READ actually want you to write? usually it does) and, if it held the socket with edge triggering, making sure it drained the read. These I could never do while it held the socket.
It would simply report the read or write as successful with 0 bytes if the underlying socket wasn't ready yet, just like trying to read from or write to the underlying socket directly.
rustls::StreamOwned encapsulates a rustls connection and a socket (or whatever implements both Read and Write) and handles all these details. It implements Read and Write itself and transparently handles reading from and writing to the socket as necessary.
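Usage is roughly the following, assuming rustls 0.21-era names (`ClientConnection::new`, `StreamOwned::new`, a `ServerName` built via `try_into`); the host, port, and request bytes are placeholders and this sketch is untested:

```rust
use std::io::{Read, Write};
use std::net::TcpStream;
use std::sync::Arc;

fn fetch(cfg: Arc<rustls::ClientConfig>) -> std::io::Result<()> {
    let server = "example.com".try_into().expect("valid server name");
    let conn = rustls::ClientConnection::new(cfg, server).expect("tls config");
    let sock = TcpStream::connect("example.com:443")?;

    // StreamOwned bundles the connection state machine and the socket,
    // and itself implements Read + Write over plaintext.
    let mut tls = rustls::StreamOwned::new(conn, sock);
    tls.write_all(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")?;

    let mut buf = [0u8; 1024];
    let n = tls.read(&mut buf)?;
    println!("read {n} plaintext bytes");
    Ok(())
}
```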
I still have to look at it and see how I control the buffering. I don't always want every call to ssl_write to generate a write on the socket. And I have rather large read buffers (4 MB or larger) and need to see how to get it to use them.
I was going to move to io_uring for writes anyway (keeping epoll for reads).