I have a function that takes a 1D segment represented by the coordinates of each end. I want to split it into one or more segments (of a fixed maximum length) and return those new segments from this function. In Python I would have just used a generator.
From what I found, generators are still unstable in Rust. What should I use instead? Build a Vec<(f64, f64)> and then return it? This is my naive answer, but I wanted to know if there isn't a better solution.
Specifically, you need to have a struct containing the data you'll need (in this case it looks like you'll need source, target, length and current_point) and then impl Iterator on it. This is the relevant documentation page for implementing Iterator.
Generator syntax is just sugar for automatically making a struct that captures the needed variables and implementing Iterator on it. Just like in Python, where generator syntax is sugar for generating a class with the necessary captures and the appropriate __iter__ method.
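To make that concrete, here is a minimal sketch of the struct-plus-impl-Iterator approach for the segment-splitting question. The names (SegmentSplitter, max_len) and the assumption that source < target are mine, not from the original post:

```rust
// Iterator that splits the 1D segment [source, target] into
// consecutive sub-segments of at most max_len each.
// Assumes source < target and max_len > 0.
struct SegmentSplitter {
    current: f64,
    target: f64,
    max_len: f64,
}

impl SegmentSplitter {
    fn new(source: f64, target: f64, max_len: f64) -> Self {
        SegmentSplitter { current: source, target, max_len }
    }
}

impl Iterator for SegmentSplitter {
    // Each item is one sub-segment, as (start, end) coordinates.
    type Item = (f64, f64);

    fn next(&mut self) -> Option<Self::Item> {
        if self.current >= self.target {
            return None; // the whole segment has been emitted
        }
        // The sub-segment ends max_len further on, clamped to the target.
        let end = (self.current + self.max_len).min(self.target);
        let segment = (self.current, end);
        self.current = end;
        Some(segment)
    }
}

fn main() {
    // Split [0.0, 2.5] into pieces of length at most 1.0.
    let parts: Vec<(f64, f64)> = SegmentSplitter::new(0.0, 2.5, 1.0).collect();
    assert_eq!(parts, vec![(0.0, 1.0), (1.0, 2.0), (2.0, 2.5)]);
}
```

The caller gets a lazy iterator, just like a Python generator, and can collect() it into a Vec only if it actually needs one.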
I have some code that sounds like it does something vaguely similar:
An endless stream of bytes is received from a serial port.
Those bytes get passed to a parser function that identifies data frames in the stream, verifies message types, checksums and so on. It stitches the chunks of bytes from the stream into message frames.
When a full frame is detected, that parser function returns a frame struct. Otherwise None.
It works but does not look so nice somehow.
Would it make sense to make that parser function into an iterator that produced frames from the stream?
I think that effectively you can create an iterator for this. To do so, you put all the variables that are outside of the main loop in a struct, then you implement Iterator<Item = Option<Frame>> for that struct. And finally you return an instance of that struct.
struct State {
    // anything needed to store the state
}

impl State {
    fn new(/* ... */) -> Self { /* ... */ }
}

impl Iterator for State {
    type Item = Option<Frame>;

    fn next(&mut self) -> Option<Self::Item> {
        // read the input from the serial port
        if let Some(data) = self.serial_port.next() {
            // try to decode a frame from the new data plus any leftover bytes
            let (decoded_data, data_for_next_frame): (Option<Frame>, _) =
                decode(self.data_for_next_frame, data);
            // update the current state
            self.data_for_next_frame = data_for_next_frame;
            // if decoded_data is Some(frame), we have a new frame, otherwise it is None
            return Some(decoded_data);
        } else {
            // no more data to read from the serial port
            return None;
        }
    }
}

return State::new(/* ... */);
The output of each call to next() is Option<Option<Frame>>.
If it's None, this means that the endless stream from the serial port has ended (maybe the device was unplugged?).
If it's Some(None), this means that we have read zero or more bytes from the serial port, but there is no complete dataframe yet.
If it's Some(Some(frame)), this means that we have read bytes from the serial port and were able to find a new dataframe. If there was more than one dataframe in the available data from the serial port, the next dataframes will simply be returned by the next calls to next().
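To illustrate that three-level return value, here is a small sketch on the consumer side. The Frame type and the hard-coded item sequence are made up for illustration; they stand in for whatever next() would actually produce:

```rust
// Hypothetical Frame type, just for illustration.
#[derive(Debug, PartialEq)]
struct Frame(u8);

fn main() {
    // Simulated sequence of items as next() would produce them:
    // Some(None) = bytes read but no complete frame yet,
    // Some(Some(..)) = a complete frame. The iterator stopping
    // entirely corresponds to the outer None.
    let items = vec![Some(None), Some(Some(Frame(1))), Some(None), Some(Some(Frame(2)))];

    // Option implements IntoIterator, so two flatten() calls strip
    // both Option layers and keep only the complete frames.
    let frames: Vec<Frame> = items.into_iter().flatten().flatten().collect();
    assert_eq!(frames, vec![Frame(1), Frame(2)]);
}
```

So a caller that only cares about complete frames never has to match on the nested Options by hand.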
Note that the thing you are describing is exactly what Tokio's codec module does. The codec module wraps the function that detects frames in a type that turns it into an async-enabled iterator, and you could probably do the same in sync code if you want an Iterator of frames.
The point is that when you have time-expensive IO operations, async/.await is the best approach.
You can implement the Stream trait from async-std to get async Iterator-like behavior for your type.
I'm quite aware of the motivation for async. In my mind I summarize it as: threads are good for getting work done on many cores, async is good for waiting on lots of I/O.
I have been using this event driven approach in node.js since there was a node.js.
So far, I can't make heads or tails of the Rust async world, which seems to require abandoning most of the standard library, adopting one of a dozen async frameworks instead, and getting to grips with all the bizarre syntax. I don't get the feeling that async in Rust is ready for prime time.
I'm not saying that you should use Tokio's codec module if you're not already using async/await, but it has the same pattern as the one you described, and it could be translated to the sync world.
The async world does indeed require abandoning the IO code in the standard library, because to take advantage of async/await, you must use OS APIs such as epoll to listen on IO resources, and the blocking API exposed by TcpStream in std does not use those APIs. The requirement that IO resources integrate with the event loop like this is also why the async world is fragmented between Tokio and async-std, as they both implement their own integration with their own event loops.
Edit: A quick sketch of how it would look to translate it: playground