The current documentation seems a bit inconsistent in its use of the
terms "token" and "lexeme". Most of the text uses "lexeme"
throughout, which is inconsistent with the fact that the term
"token" is exposed by ts_token_type() and friends. But there are
also a few places that use "lexeme" specifically to mean something
returned by a dictionary.
I was considering trying to adopt these conventions:
* What a parser returns is a "token".
* When a dictionary recognizes a token, what it returns is a "lexeme".
This would make the phrase "normalized lexeme" redundant, since we
don't call it a lexeme at all unless it's been normalized.
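To illustrate the proposed distinction, the two concepts are already
visible side by side in ts_debug()'s output: the "token" column is what
the parser produced, and the "lexemes" column is what the dictionary
returned for it. A sketch (exact output depends on the text search
configuration in use):

```sql
-- Parser output ("token") vs. dictionary output ("lexemes")
-- under the proposed convention:
SELECT token, dictionary, lexemes
FROM ts_debug('english', 'The cats are running');
-- With the stock english configuration, the parser token "cats"
-- would come back with lexeme {cat}, and a stop word like "The"
-- with an empty lexeme list {} -- i.e., no lexeme exists until
-- a dictionary has recognized (and normalized) the token.
```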
Comments?
regards, tom lane