
Lines Matching refs:lexer

91         # LEXER FIELDS (must be in same state object to avoid casting
92 # constantly in generated code and Lexer object) :(
95 ## The goal of all lexer rules/methods is to create a token object.
98 # matching lexer rule(s). If you subclass to allow multiple token
131 lexer, parser, and tree grammars. This is all the parsing
156 ## State of a lexer, parser, or tree parser are collected into a state
403 Get number of recognition errors (lexer, parser, tree parser). Each
404 recognizer tracks its own number. So parser and lexer each have
431 your token objects because you don't have to go modify your lexer
777 If you change what tokens must be created by the lexer,
849 TODO: move to a utility class or something; weird having lexer call
1001 Errors from the lexer are never passed to the parser. Either you want
1046 class Lexer(BaseRecognizer, TokenSource):
1048 @brief Baseclass for generated lexer classes.
1050 A lexer is a recognizer that draws input symbols from a character stream.
1051 lexer grammars result in a subclass of this object. A Lexer object
1060 # Where is the lexer drawing characters from?
1075 # wack Lexer state variables
1132 Instruct the lexer to skip creating a token for current lexer rule
1134 a lexer rule finishes with token set to SKIP_TOKEN. Recall that
1143 """This is the lexer entry point that sets instance var 'token'"""
1150 """Set the char stream and reset the lexer"""
1208 self.recover(mte) # don't really recover; just consume in lexer
1270 ## TODO: not thought about recovery in lexer yet.
1367 def __init__(self, lexer, state=None):
1370 self.input = lexer
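The excerpts above describe the contract of the runtime's Lexer class: it draws characters from an input stream, every lexer rule's goal is to produce a token object, and a rule can call skip() so that it finishes with token set to SKIP_TOKEN instead of emitting one. The following is an illustrative sketch of that contract only, not the actual antlr3 API; the class and token names here (MiniLexer, Token, next_token) are invented for the example.

```python
# Minimal sketch of the lexer contract described in the excerpts above.
# NOT the antlr3 API -- names here are hypothetical.
class Token:
    def __init__(self, type_, text):
        self.type = type_
        self.text = text

# sentinel assigned by skip(); the driver loop discards it (cf. SKIP_TOKEN)
SKIP_TOKEN = Token("SKIP", None)

class MiniLexer:
    def __init__(self, input_):
        self.input = input_      # where the lexer draws characters from
        self.pos = 0
        self.token = None        # instance var set by next_token()

    def skip(self):
        # instruct the lexer to skip creating a token for the current rule
        self.token = SKIP_TOKEN

    def next_token(self):
        # the lexer entry point that sets the instance var 'token'
        while self.pos < len(self.input):
            c = self.input[self.pos]
            if c.isspace():
                # whitespace rule finishes with token set to SKIP_TOKEN
                self.pos += 1
                self.skip()
                continue
            start = self.pos
            while self.pos < len(self.input) and self.input[self.pos].isalnum():
                self.pos += 1
            if self.pos > start:
                self.token = Token("WORD", self.input[start:self.pos])
            else:
                self.pos += 1
                self.token = Token("CHAR", c)
            return self.token
        return Token("EOF", None)

# Drive the lexer as a token source, collecting token text until EOF.
lexer = MiniLexer("hi there")
tokens = []
t = lexer.next_token()
while t.type != "EOF":
    tokens.append(t.text)
    t = lexer.next_token()
# tokens is now ["hi", "there"]; the whitespace rule produced no token
```

The shape mirrors line 1367's pattern as well: a token stream would wrap the lexer and store it as `self.input`, pulling tokens via `next_token()`.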