As a side note, the other classical way to build a lexer is to transform a set of regular expressions into a DFA (Deterministic Finite Automaton, i.e. a finite state machine) and, from there, either translate the state machine directly into source code or drive a generic lexer from a transition table. The best source I've found on the subject is the Dragon Book (Compilers: Principles, Techniques, & Tools, 2nd Edition, by Aho, Lam, Sethi, and Ullman; chapter 3, in particular section 3.9.5).
Typically, those regular expressions come from a human-readable file describing the lexicon (e.g. Id: [a-zA-Z][a-zA-Z]*), which a tool parses for you. Lexer generators like lex / flex do that transformation and generate source code (they're quite old and produce C; I've written one that produces Rust, but it's not public just yet).
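To make the table-driven option concrete, here's a minimal, hand-written sketch of what such a recognizer for the Id rule above could look like (the states, table, and helper names are made up for illustration; a generator would emit them for you):

```rust
// Character classes: the DFA's alphabet is reduced to a few classes
// so the transition table stays small.
enum Class {
    Letter, // [a-zA-Z]
    Other,
}

fn classify(c: char) -> Class {
    if c.is_ascii_alphabetic() { Class::Letter } else { Class::Other }
}

// States: 0 = start, 1 = inside an identifier (accepting), 2 = dead.
const DEAD: usize = 2;
const TRANSITIONS: [[usize; 2]; 3] = [
    // Letter, Other
    [1, DEAD],    // 0: start
    [1, DEAD],    // 1: in identifier
    [DEAD, DEAD], // 2: dead
];
const ACCEPTING: [bool; 3] = [false, true, false];

/// Returns the byte length of the longest Id prefix of `input`, if any.
fn longest_id_match(input: &str) -> Option<usize> {
    let mut state = 0;
    let mut last_accept = None;
    for (i, c) in input.char_indices() {
        state = TRANSITIONS[state][classify(c) as usize];
        if state == DEAD {
            break;
        }
        if ACCEPTING[state] {
            last_accept = Some(i + c.len_utf8());
        }
    }
    last_accept
}
```

A generator would produce one row per state and one column per character class, plus a token kind per accepting state; the direct-to-source-code variant turns each row into a match arm instead of indexing a table.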
The advantages are that you can modify that lexicon without having to change all your code, and you have the option of a table-based lexer. You also have a clear view of your lexicon, since the source is human-readable, which lowers the risk of errors, especially when the lexicon changes over time. The disadvantage is that it leaves less room for customization if you need to do something special which isn't supported by the lexer generator.
There's also some debate about whether a state machine generated as source code or a table-driven lexer is quicker. I think it depends on the size of the lexer, the extent of its alphabet, and the context (data types, respective CPU cache sizes, etc.). But it's clear that if you're hand-writing the code instead, you can optimize it at will.
However, I wouldn't bother with all that for an assembly language, whose lexicon is rather straightforward. I think the way you did it is simple and worry-free, but it's worth keeping in mind in case you have to extend the lexer repeatedly or if some tokens become cumbersome to handle with a match.
For the code itself:
- I don't know LALRPOP, but doesn't it support errors? I see you're skipping any illegal character, which may not be a good thing. It's better to produce an error so that the user isn't confused if something goes wrong because of a typo.
- Potential improvement: your code panics if the input is incorrect, for example on a malformed hex number. Again, if an error path exists, it's best to use it and report the malformed number. Chances are the parser is able to recover and parse more of the source file to report further errors to the user, allowing them to correct the whole file in one go rather than seeing a succession of crashes. There's a small sketch of this (and of the previous point) after this list.
- Nit-picking: it would seem more logical to rename is_letter to is_letter_or_underscore, and safer to call it from is_letter_or_digit_or_underscore rather than re-writing the same code. I'm not entirely sure the compiler will inline this automatically, so that's something to check or to test on Compiler Explorer, maybe.
- You could also make all those helpers a trait implemented for char rather than C-like free functions. Actually, I'm not sure why you redefine some of them, like is_hex_digit and is_digit, rather than using the native char methods, and why you don't follow the existing naming convention for the ones you do define. There's a sketch of both ideas after this list.
- For the tests, I see several unit tests, if not all of them, following the same pattern: you could merge them into one test by creating a let tests: Vec<(&str, Vec<Tok>)> = ..., i.e. a list of inputs and their expected tokens (see the sketch below). It will give you a clearer view of the tests, and it will encourage you to add more of them, since you don't have to write a whole function for each input.
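On the error-handling points: I haven't checked what LALRPOP's lexer interface expects, so the types below (the Tok variants, LexError) are made up, but the idea is to return a Result per token instead of skipping or panicking:

```rust
// Hypothetical token and error types, just to illustrate the shape.
#[derive(Debug, PartialEq)]
enum Tok {
    Number(u32),
    // ... other variants ...
}

#[derive(Debug, PartialEq)]
enum LexError {
    IllegalChar { pos: usize, ch: char },
    MalformedNumber { pos: usize, text: String },
}

/// A malformed hex literal becomes an error value the caller can report
/// and recover from, instead of a panic.
fn lex_hex(pos: usize, digits: &str) -> Result<Tok, LexError> {
    u32::from_str_radix(digits, 16)
        .map(Tok::Number)
        .map_err(|_| LexError::MalformedNumber {
            pos,
            text: digits.to_string(),
        })
}

/// Likewise, an unexpected character is reported rather than silently skipped.
fn lex_illegal(pos: usize, ch: char) -> Result<Tok, LexError> {
    Err(LexError::IllegalChar { pos, ch })
}
```

If LALRPOP accepts an iterator of Results from a custom lexer (parser generators usually do, but check its documentation), each of these errors can then flow through the normal error-reporting path.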
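For the character-classification helpers, here's a rough sketch of the trait idea, reusing the std methods where they already exist (the trait name and exact predicates are only suggestions):

```rust
/// Extension trait so the predicates read like the standard
/// char::is_ascii_digit / char::is_ascii_hexdigit methods.
trait CharExt {
    fn is_letter_or_underscore(self) -> bool;
    fn is_letter_or_digit_or_underscore(self) -> bool;
}

impl CharExt for char {
    #[inline]
    fn is_letter_or_underscore(self) -> bool {
        self.is_ascii_alphabetic() || self == '_'
    }

    #[inline]
    fn is_letter_or_digit_or_underscore(self) -> bool {
        // Reuse the previous predicate instead of repeating its body.
        self.is_letter_or_underscore() || self.is_ascii_digit()
    }
}

fn main() {
    assert!('_'.is_letter_or_underscore());
    assert!('7'.is_letter_or_digit_or_underscore());
    assert!(!'-'.is_letter_or_digit_or_underscore());
}
```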
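And for the tests, this is the table-driven shape I have in mind, with a made-up lex function and Tok variants standing in for yours:

```rust
// Made-up minimal lexer and token type, only to show the test structure.
#[derive(Debug, PartialEq)]
enum Tok {
    Id(String),
    Num(u32),
}

fn lex(input: &str) -> Vec<Tok> {
    input
        .split_whitespace()
        .map(|word| match word.parse::<u32>() {
            Ok(n) => Tok::Num(n),
            Err(_) => Tok::Id(word.to_string()),
        })
        .collect()
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn lexer_cases() {
        // One row per input: adding a case is one line, not one function.
        let tests: Vec<(&str, Vec<Tok>)> = vec![
            ("", vec![]),
            ("42", vec![Tok::Num(42)]),
            ("mov r1", vec![Tok::Id("mov".into()), Tok::Id("r1".into())]),
        ];
        for (input, expected) in tests {
            assert_eq!(lex(input), expected, "input: {input:?}");
        }
    }
}
```

If a case fails, the assertion message tells you which input it was, so you don't lose the granularity of having one function per case.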