which lends itself to an API that is easy to use on the caller side thanks to the try operator:
match lexer.next()? {
    Token::Eof => break,
    Token::SomethingElse => ...,
}
But I'm now cleaning up the code, and one thing that caught my attention was a Clippy warning about my next() implementation, because it should really be provided through the Iterator trait. Which sounds like a good idea, because then I can rely on Peekable and generally get more natural iteration behavior.
The "problem" is that in converting to an iterator, next becomes:
fn next(&mut self) -> Option<Fallible<Token>>;
And I feel that this has worse ergonomics on the caller side, because now I have to do things like:
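Something like this (a rough sketch rather than my exact code, assuming the surrounding function returns a Result so that the early return works):

loop {
    match lexer.next() {
        Some(Ok(token)) => {
            // handle the token
        }
        Some(Err(err)) => return Err(err),
        None => break, // end of input, no Eof token involved
    }
}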
The inability to use the try operator feels annoying. On the other hand, this design doesn't require a fictitious Eof token and it integrates better with the common interfaces for iteration.
So I'm curious: what do you think? Is there any good reason to not go for the iterator-based approach?
I had seen that, but I'm trying to avoid as many dependencies as possible and to fit things into the "standard" interfaces. (I know, I know, my mention of Fallible doesn't exactly align with what I just said.)
My current favorite interface for getting the next token is fn next_token(&mut self) -> Token.
I don't find it useful to single out the eof condition. In the parser, you generally handle eof and any other unexpected token the same way, so it's cleaner to have one unified error path for both conditions, with a dedicated Eof token.
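For example, the "expect a specific token" path in a parser can look roughly like this (a sketch with made-up names, using bare token kinds as stand-ins for full tokens, just to show that eof needs no special casing):

#[derive(Debug, Clone, Copy, PartialEq)]
enum TokenKind { Ident, Number, Eof }

struct Parser { /* owns the lexer, current position, etc. */ }

impl Parser {
    // Stand-in for the real lexer call; a real implementation scans the source.
    fn next_token(&mut self) -> TokenKind {
        TokenKind::Eof
    }

    // Eof is just another unexpected kind: one error path covers both
    // "wrong token" and "ran out of input".
    fn expect(&mut self, wanted: TokenKind) -> Result<TokenKind, String> {
        let found = self.next_token();
        if found == wanted {
            Ok(found)
        } else {
            Err(format!("expected {:?}, found {:?}", wanted, found))
        }
    }
}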
I think it's better not to pollute the lexer with io::Errors. Working with &str is usually fine: source files are usually smaller than 10 MB, so it's not a problem to keep them in memory. If I had a requirement to work with an arbitrary Read, I'd probably wrap it in a type that just returns eof on error and communicates the actual error via a side channel.
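Roughly this shape (names are made up, purely to illustrate the idea):

use std::io::{self, Read};

// A reader that never surfaces errors to the lexer: on failure it records
// the error (the side channel) and pretends the input simply ended.
struct EofOnError<R> {
    inner: R,
    error: Option<io::Error>,
}

impl<R: Read> Read for EofOnError<R> {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        match self.inner.read(buf) {
            Ok(n) => Ok(n),
            Err(err) => {
                self.error = Some(err); // stash the real error for later
                Ok(0)                   // look like eof to the caller
            }
        }
    }
}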
Similarly, I think it's a good idea not to report lexer errors (malformed tokens) as a Result. Instead, one can return an explicit Error token, emit the error via a diagnostic/side channel, or store error flags (like an unclosed quote) in the token itself; that way the lexer doesn't need to stop after the first error.
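Concretely, the token type itself can carry all of this; something like (again, names are only for illustration):

#[derive(Debug, Clone, PartialEq)]
enum TokenKind {
    Ident,
    Number,
    Str { terminated: bool }, // error flag stored in the token itself
    Error,                    // malformed input becomes a token, not an Err
    Eof,                      // end of input is also just a token
}

#[derive(Debug, Clone, PartialEq)]
struct Token {
    kind: TokenKind,
    len: usize, // length in the source text; a span works just as well
}

// The lexer never fails and never stops early:
//     fn next_token(&mut self) -> Token
// Errors surface through the Error kind, the flags, or a diagnostics sink.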
Oh, thanks, encoding malformed input as an explicit Error token sounds like a good idea. And splitting those conditions from I/O errors has already uncovered a couple of subtle issues in my error handling.