so i'm writing a parser in Python for some language (specifics don't matter), and most of my sub-parsers end up looking like this:
consume_whitespace_and_comments()
keyword = parse_keyword()
consume_whitespace_and_comments()
identifier = parse_identifier()
consume_whitespace_and_comments()
operator = parse_operator()
consume_whitespace_and_comments()
...
it's an (admittedly somewhat scuffed) recursive descent parser, so i figured i could skip having a separate tokenization phase and just eat whitespace and comments as i go while parsing. it works, but having to write consume_whitespace_and_comments() before every single step gets really annoying.
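for concreteness, here's a tiny self-contained sketch with the same shape — the Parser class, parse_word, parse_let_statement, the toy "let" statement, and the "#" comments are all made up for illustration (my real code is bigger and messier), but the interleaving is the same:

class Parser:
    def __init__(self, src):
        self.src = src
        self.pos = 0

    def consume_whitespace_and_comments(self):
        # skip runs of whitespace and (pretend) line comments starting with '#'
        while self.pos < len(self.src):
            ch = self.src[self.pos]
            if ch.isspace():
                self.pos += 1
            elif ch == "#":
                while self.pos < len(self.src) and self.src[self.pos] != "\n":
                    self.pos += 1
            else:
                break

    def parse_word(self):
        # stand-in for parse_keyword / parse_identifier: grab one identifier-ish run
        start = self.pos
        while self.pos < len(self.src) and (self.src[self.pos].isalnum() or self.src[self.pos] == "_"):
            self.pos += 1
        return self.src[start:self.pos]

    def parse_operator(self):
        # single-character operators only, to keep the sketch short
        op = self.src[self.pos]
        self.pos += 1
        return op

    def parse_let_statement(self):
        # every single step needs the whitespace/comment skip in front of it
        self.consume_whitespace_and_comments()
        keyword = self.parse_word()
        self.consume_whitespace_and_comments()
        identifier = self.parse_word()
        self.consume_whitespace_and_comments()
        operator = self.parse_operator()
        self.consume_whitespace_and_comments()
        value = self.parse_word()
        return (keyword, identifier, operator, value)

print(Parser("let total = 42  # running sum").parse_let_statement())
# ('let', 'total', '=', '42')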
does anybody have a good pattern for avoiding this? if the answer is "just tokenize it" that's alright, i'm just wondering about other approaches.

eggbug enjoyer