Clean parser functions to access tokens

I'm trying to write a parser, but it's coming out pretty redundant. Here's some code to better explain what I mean:

pub struct Located<T> {
    pos: Position,
    value: T,
}

impl<T> Located<T> {
    pub fn at_eof(value: T) -> Self {
        Self {
            pos: Position::Eof,
            value,
        }
    }

    pub fn co_locate<L>(&self, value: L) -> Located<L> {
        Located {
            pos: self.pos.clone(),
            value,
        }
    }

    pub fn value(&self) -> &T {
        &self.value
    }
}

I implemented a lexer as an iterator whose Item is Located<Token> (pos being the token's coordinates in the file). The parser calls next() and peek() on the lexer to access consecutive tokens. Note that Token is an enum whose variants may carry associated values (e.g. Token::Number(i32)).
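In case it helps to see the shape concretely, here's a self-contained toy version of that setup; the Position::At(line, column) variant, the Token variants, and the Vec-backed lexer are all assumptions made for the demo, not the real implementation:

```rust
use std::iter::Peekable;
use std::vec::IntoIter;

#[derive(Clone, Debug, PartialEq)]
pub enum Position {
    At(usize, usize), // assumed (line, column) variant; only Eof appears in the post
    Eof,
}

#[derive(Debug, PartialEq)]
pub enum Token {
    Number(i32),
    Plus,
}

#[derive(Debug, PartialEq)]
pub struct Located<T> {
    pos: Position,
    value: T,
}

impl<T> Located<T> {
    pub fn value(&self) -> &T {
        &self.value
    }
}

// The lexer is just any iterator of Located<Token>; wrapping it in
// Peekable gives the parser peek() alongside next().
fn make_lexer(tokens: Vec<Located<Token>>) -> Peekable<IntoIter<Located<Token>>> {
    tokens.into_iter().peekable()
}

fn demo() -> (bool, Option<Token>) {
    let mut lexer = make_lexer(vec![Located {
        pos: Position::At(1, 1),
        value: Token::Number(42),
    }]);
    // peek() inspects without consuming; next() hands out the token.
    let saw_number = matches!(lexer.peek().map(|l| l.value()), Some(Token::Number(_)));
    (saw_number, lexer.next().map(|l| l.value))
}
```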
My parsing methods all look something like this and share roughly the same patterns:

fn parse_rule(&mut self) -> Result<Stmt, Located<SyntaxError>> {
    let Some(l) = self.lexer.peek() else {
        return Err(Located::at_eof(SyntaxError::SomeError));
    };
    let Token::SomeToken = l.value() else {
        return Err(l.co_locate(SyntaxError::SomeError));
    };
    self.lexer.next();
    // ...
    match self.lexer.peek().map(|l| l.value()) {
        Some(Token::SomeToken1) => {
            self.lexer.next();
            self.some_other_rule1()
        }
        Some(Token::SomeToken2) => self.some_other_rule2(),
        // ...
        _ => Ok(Stmt::SomeStmt(self.some_other_rule3()?)),
    }
}

As you can see, Located is used to localize errors as well (Located<T: Error> implements Error); their location is usually that of the faulty token, or eof when there are no tokens left.
Using let ... else instead of if let or match for the above two-step check does make the code a little cleaner, but it's still boilerplate, and inserting these self.lexer.next() lines by hand seems error-prone, so I tried implementing some methods like

fn next_token_if(
    &mut self,
    func: impl FnOnce(&Located<Token>) -> bool,
) -> Option<Located<Token>> {
    // peek() to check, then next() to consume only on a match
    if self.lexer.peek().is_some_and(|l| func(l)) {
        self.lexer.next()
    } else {
        None
    }
}
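For genuinely optional tokens that helper is workable; here's a self-contained toy of the intended use, with the types re-declared so it runs on its own and a Token::Comma variant assumed for the demo:

```rust
use std::iter::Peekable;
use std::vec::IntoIter;

#[derive(Clone, Debug, PartialEq)]
pub enum Position {
    At(usize, usize), // assumed (line, column) variant
    Eof,
}

#[derive(Debug, PartialEq)]
pub enum Token {
    Number(i32),
    Comma, // assumed variant, just for the demo
}

#[derive(Debug, PartialEq)]
pub struct Located<T> {
    pos: Position,
    value: T,
}

struct Parser {
    lexer: Peekable<IntoIter<Located<Token>>>,
}

impl Parser {
    // Same helper: peek() to check, next() to consume only when the
    // predicate matches.
    fn next_token_if(
        &mut self,
        func: impl FnOnce(&Located<Token>) -> bool,
    ) -> Option<Located<Token>> {
        if self.lexer.peek().is_some_and(|l| func(l)) {
            self.lexer.next()
        } else {
            None
        }
    }
}

fn demo() -> (bool, bool) {
    let tokens = vec![
        Located { pos: Position::At(1, 1), value: Token::Number(1) },
        Located { pos: Position::At(1, 3), value: Token::Comma },
    ];
    let mut p = Parser { lexer: tokens.into_iter().peekable() };
    // First call consumes the number; the second finds a comma,
    // matches nothing, and consumes nothing.
    let first = p.next_token_if(|l| matches!(l.value, Token::Number(_))).is_some();
    let second = p.next_token_if(|l| matches!(l.value, Token::Number(_))).is_some();
    (first, second)
}
```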

but this doesn't distinguish between a present-but-non-matching token and an absent one, so it's difficult to map to an error. I also tried others like

fn next_token_if_map_or_error<T>(
    &mut self,
    func: impl FnOnce(&Located<Token>) -> Option<T>,
    error: SyntaxError,
) -> Result<T, Located<SyntaxError>> {
    if let Some(l) = self.lexer.peek() {
        if let Some(res) = func(l) {
            self.lexer.next();
            Ok(res)
        } else {
            Err(l.co_locate(error))
        }
    } else {
        Err(Located::at_eof(error))
    }
}

to be used like this:

let number = self.next_token_if_map_or_error(
    |t| if let Token::Number(n) = t.value() {
        Some(*n)
    } else {
        None
    },
    SyntaxError::ExpectedNumber,
)?;

But this still doesn't look much better than before to me.

To summarize, I'm discussing two problems:

  1. The two-step match keeping the information from both steps, since the first is used to localize the error and to tell whether there was no token left or just a wrong one
  2. Manually inserting self.lexer.next() after each token match

What do you guys think? Are these problems the result of poor design? Should I change something radically? Should I stick with the let ... else boilerplate and manual self.lexer.next() calls, or can you think of a better solution?
I'd like to add that one of my previous solutions used a tuple (usize, usize, T) to store the equivalent of Located<T>, but it had other drawbacks; do you think struct destructuring could solve problem 1?

You can have is_some_variant-style methods on your Token:

pub enum Token {
    SomeToken(i32),
    SomeToken1,
    SomeToken2,
}

impl Token {
    // You can macro the definition of these up, see playground below
    fn is_some_token(&self) -> Result<i32, SyntaxError> {
        if let Self::SomeToken(i) = self {
            Ok(*i)
        } else {
            Err(SyntaxError::SomeError)
        }
    }
}
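As a sketch of what "macro the definition of these up" could look like (the macro and its shape are assumptions here; the playground version presumably differs in detail):

```rust
#[derive(Debug, PartialEq)]
pub enum SyntaxError {
    SomeError,
}

pub enum Token {
    SomeToken(i32),
    SomeToken1,
    SomeToken2,
}

// Generates one checked accessor per payload-carrying variant.
// Assumes the payload type is Copy (because of the `*v`).
macro_rules! token_accessor {
    ($name:ident, $variant:ident, $ty:ty, $err:expr) => {
        fn $name(&self) -> Result<$ty, SyntaxError> {
            if let Self::$variant(v) = self {
                Ok(*v)
            } else {
                Err($err)
            }
        }
    };
}

impl Token {
    token_accessor!(is_some_token, SomeToken, i32, SyntaxError::SomeError);
}
```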

That gets you to here:

    // I also rewrote this part to use `ok_or_else`
    let loc = self.lexer.peek().ok_or_else(|| Located::at_eof(SyntaxError::SomeError))?;
    let int = loc.value().is_some_token().map_err(|e| loc.co_locate(e))?;
    self.lexer.next();

You could make a trait and put these steps together by implementing the trait on your iterator:

trait NextToken {
    fn some_token(&mut self) -> Result<i32, Located<SyntaxError>>;
}

impl<Iter: Iterator<Item = Located<Token>>> NextToken for Peekable<Iter> {
    // also macro-able
    fn some_token(&mut self) -> Result<i32, Located<SyntaxError>> {
        let loc = self.peek().ok_or_else(|| Located::at_eof(SyntaxError::SomeError))?;
        let int = loc.value().is_some_token().map_err(|e| loc.co_locate(e))?;
        self.next();
        Ok(int)
    }
}

And that gets you to here:

    let int = self.lexer.some_token()?;
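The "also macro-able" note could be realized along these lines; a self-contained sketch with all the pieces re-declared so it runs on its own (the macro name and demo types are assumptions):

```rust
use std::iter::Peekable;

#[derive(Clone, Debug, PartialEq)]
pub enum Position {
    At(usize, usize), // assumed (line, column) variant
    Eof,
}

pub enum Token {
    SomeToken(i32),
    SomeToken1,
}

#[derive(Debug, PartialEq)]
pub enum SyntaxError {
    SomeError,
}

#[derive(Debug, PartialEq)]
pub struct Located<T> {
    pos: Position,
    value: T,
}

impl<T> Located<T> {
    pub fn at_eof(value: T) -> Self {
        Self { pos: Position::Eof, value }
    }
    pub fn co_locate<L>(&self, value: L) -> Located<L> {
        Located { pos: self.pos.clone(), value }
    }
    pub fn value(&self) -> &T {
        &self.value
    }
}

impl Token {
    fn is_some_token(&self) -> Result<i32, SyntaxError> {
        if let Self::SomeToken(i) = self {
            Ok(*i)
        } else {
            Err(SyntaxError::SomeError)
        }
    }
}

trait NextToken {
    fn some_token(&mut self) -> Result<i32, Located<SyntaxError>>;
}

// Expands to the peek -> check -> advance body shown above.
macro_rules! next_token_method {
    ($name:ident, $ty:ty, $accessor:ident) => {
        fn $name(&mut self) -> Result<$ty, Located<SyntaxError>> {
            let loc = self
                .peek()
                .ok_or_else(|| Located::at_eof(SyntaxError::SomeError))?;
            let v = loc.value().$accessor().map_err(|e| loc.co_locate(e))?;
            self.next();
            Ok(v)
        }
    };
}

impl<I: Iterator<Item = Located<Token>>> NextToken for Peekable<I> {
    next_token_method!(some_token, i32, is_some_token);
}

fn demo() -> (Result<i32, Located<SyntaxError>>, Result<i32, Located<SyntaxError>>) {
    let mut lexer = vec![Located {
        pos: Position::At(1, 1),
        value: Token::SomeToken(5),
    }]
    .into_iter()
    .peekable();
    let ok = lexer.some_token(); // consumes the token
    let eof = lexer.some_token(); // nothing left
    (ok, eof)
}
```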

If you're doing the ok_or_else, map_err dance with Located a lot, you can make it briefer by doing something like

struct Eof;
struct At<T>(T);
impl<T> From<(T, At<Eof>)> for Located<T> {
    fn from((value, _): (T, At<Eof>)) -> Self {
        Self::at_eof(value)
    }
}
impl<T, L> From<(L, At<&Located<T>>)> for Located<L> {
    fn from((value, At(loc)): (L, At<&Located<T>>)) -> Self {
        loc.co_locate(value)
    }
}

which shortens things up like so:

        let loc = self.peek().ok_or((SyntaxError::SomeError, At(Eof)))?;
        let int = loc.value().is_some_token().map_err(|e| (e, At(loc)))?;

...but I don't know if I'd consider that worth it for this case, you're not saving a ton.

I added the macros as an afterthought, but they can cut down on a ton of repetition. (Rust is often boilerplate-heavy.) Instead of defining all these methods and so on, you might be able to just macro up the 5 lines or so I addressed above in such a way that it works everywhere you have that pattern.