
Lines Matching refs:TOKEN

59   /// or token type sequence (such as for interpretation).
74 /// Negative indexes are allowed. LA(-1) is previous token (token just matched).
75 /// LA(-i) where i is before first token should yield -1, invalid char or EOF.
248 /// <summary>The line number on which this token was matched; line=1..N</summary>
256 /// <summary>The line number on which this token was matched; line=1..N</summary>
260 /// An index from 0..N-1 of the token object in the input stream
267 /// <summary>The text of the token</summary>
281 /// to keep going or you do not upon token recognition error. If you do not
286 /// requested a token. Keep lexing until you get a valid one. Just report
287 /// errors and keep going, looking for a valid token.
297 /// Returns a Token object from the input stream (usually a CharStream).
323 /// Get Token at current input pointer + I ahead (where I=1 is next
324 /// Token).
325 /// I &lt; 0 indicates tokens in the past. So -1 is previous token and -2 is
326 /// two tokens ago. LT(0) is undefined. For I>=N, return Token.EOFToken.
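The LT(i) semantics described above (1-based lookahead, negative indexes for already-matched tokens, LT(0) undefined, EOF at or past the end) can be sketched as follows. This is an illustrative model, not the runtime's actual API; the `BufferedTokens` class and its field names are hypothetical.

```python
EOF_TOKEN = "<EOF>"

class BufferedTokens:
    """Hypothetical buffered token stream illustrating LT(i) semantics."""

    def __init__(self, tokens):
        self.tokens = list(tokens)
        self.p = 0                      # index of the next token to consume

    def consume(self):
        if self.p < len(self.tokens):
            self.p += 1

    def LT(self, i):
        """Token at current pointer + i; LT(1) is next, LT(-1) previous."""
        if i == 0:
            return None                 # LT(0) is undefined
        if i < 0:
            idx = self.p + i            # LT(-1) is the token just matched
            return self.tokens[idx] if idx >= 0 else None
        idx = self.p + i - 1
        return self.tokens[idx] if idx < len(self.tokens) else EOF_TOKEN
```

Note that any lookahead at or beyond the end of the buffer collapses to the EOF sentinel rather than raising, matching the "for I>=N, return Token.EOFToken" rule.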
333 /// Get a token at an absolute index I; 0..N-1. This is really only
334 /// needed for profiling and debugging and token stream rewriting.
349 /// <summary>Because the user is not required to use a token with an index stored
350 /// in it, we must provide a means for two token objects themselves to
526 /// Tracks the set of token types that can follow any rule invocation.
535 /// matched a token. Prevents generation of more than one error message
545 /// but no token is consumed during recovery...another error is found,
547 /// one token/tree node is consumed for two errors.
552 /// In lieu of a return value, this indicates that a rule or token
553 /// has failed to match. Reset to false upon valid token match.
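The error-suppression policy described above (emit at most one message per resync, clear the flag on a valid match) can be sketched like this. The class and method names are illustrative, not the runtime's.

```python
class ErrorState:
    """Hypothetical sketch of the errorRecovery flag discipline."""

    def __init__(self):
        self.error_recovery = False     # set on first error, cleared on match
        self.last_error_index = -1

    def report_error(self, token_index, emit):
        if self.error_recovery:
            return                      # suppress spurious follow-on messages
        self.error_recovery = True
        self.last_error_index = token_index
        emit("error at token %d" % token_index)

    def token_matched(self):
        self.error_recovery = False     # a valid token match ends recovery mode
```

Tracking `last_error_index` is what lets a recognizer detect the "no token consumed since the last error" case and force a single-token consume to guarantee progress.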
570 /// the stop token index for each rule.
574 /// For key RuleStartIndex, you get back the stop token for
587 /// Token object normally returned by NextToken() after matching lexer rules.
590 /// The goal of all lexer rules/methods is to create a token object.
592 /// create a single token. NextToken will return this object after
593 /// matching lexer rule(s). If you subclass to allow multiple token
594 /// emissions, then set this to the last token to be matched or
595 /// something nonnull so that the auto token emit mechanism will not
596 /// emit another token.
598 property Token: IToken read GetToken write SetToken;
601 /// What character index in the stream did the current token start at?
604 /// Needed, for example, to get the text for current token. Set at
610 /// The line on which the first character of the token resides
617 /// <summary>The channel number for the current token</summary>
620 /// <summary>The token type for the current token</summary>
624 /// You can set the text for the current token to override what is in
650 /// A Token object like we'd use in ANTLR 2.x; has an actual string created
652 /// tree nodes that have payload objects. We need to create a Token object
653 /// that has a string; the tree node will point at this token. CommonToken
710 /// single token insertion or deletion error recovery. If
714 /// To turn off single token insertion or deletion error
730 /// <summary>A hook to listen in on the token consumption during error recovery.
743 /// a token (after a resync). So it will go:
747 /// 3. consume until token found in resynch set
789 /// How should a token be displayed in an error message? The default
793 /// token). This is better than forcing you to override a method in
794 /// your token objects because you don't have to go modify your lexer
807 /// single token insertion and deletion, this will usually not
809 /// token that the Match() routine could not recover from.
819 /// <summary>Consume tokens until one matches the given token set </summary>
850 /// Convert a List&lt;Token&gt; to List&lt;String&gt;
855 /// Given a rule number and a start token index number, return
859 /// It returns the index of the last token matched by the rule.
870 /// input stream? Return the stop token index or MEMO_RULE_UNKNOWN.
876 /// 1 past the stop token matched for this rule last time.
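The memoization scheme described above (per rule, map a start token index to the stop token index, then resume one past the stop token) might look like this sketch. The names `MemoTable` and `already_parsed` are hypothetical stand-ins for the runtime's memoization hooks.

```python
MEMO_RULE_FAILED = -2
MEMO_RULE_UNKNOWN = -1

class MemoTable:
    """Hypothetical rule-memoization table: rule -> {start index: stop index}."""

    def __init__(self):
        self.memo = {}

    def memoize(self, rule, start, stop):
        self.memo.setdefault(rule, {})[start] = stop

    def get_rule_memoization(self, rule, start):
        return self.memo.get(rule, {}).get(start, MEMO_RULE_UNKNOWN)

    def already_parsed(self, input_stream, rule):
        """If this rule parsed at the current position before, skip ahead."""
        stop = self.get_rule_memoization(rule, input_stream.p)
        if stop == MEMO_RULE_UNKNOWN:
            return False
        if stop != MEMO_RULE_FAILED:
            input_stream.p = stop + 1   # jump to one past the stop token
        return True
```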
909 /// an error and next valid token match
930 /// Used to print out token names like ID during debugging and
938 /// The most common stream of tokens is one where every token is buffered up
943 /// TODO: how to access the full token stream? How to track all tokens matched per rule?
949 /// A simple filter mechanism whereby you can tell this token stream
969 /// the token type BitSet. Return null if no tokens were found. This
1059 /// Return a token from this source; i.e., Match a token on the char stream.
1064 /// Instruct the lexer to skip creating a token for current lexer rule and
1065 /// look for another token. NextToken() knows to keep looking when a lexer
1066 /// rule finishes with token set to SKIP_TOKEN. Recall that if token==null
1067 /// at end of any token rule, it creates one for you and emits it.
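The Skip/NextToken interaction described above (keep invoking rules past SKIP_TOKEN results; a nil token at the end of a rule means "auto-emit one for me") can be modeled with this sketch. The driver and its sentinel values are illustrative, not the runtime's implementation.

```python
SKIP_TOKEN = object()                   # sentinel: suppress token emission

def next_token(rule_results):
    """Keep pulling lexer-rule results until one yields a real token.

    A rule yields SKIP_TOKEN to suppress emission (e.g. whitespace);
    yielding None means 'auto-emit a default token for me'."""
    for result in rule_results:
        if result is SKIP_TOKEN:
            continue                    # keep looking for another token
        if result is None:
            return "<auto-emitted>"     # runtime conjures one automatically
        return result
    return "<EOF>"                      # input exhausted
```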
1071 /// <summary>This is the lexer entry point that sets instance var 'token' </summary>
1080 procedure Emit(const Token: IToken); overload;
1083 /// The standard method called to automatically emit a token at the
1084 /// outermost lexical rule. The token object should point into the
1086 /// use that to set the token's text.
1088 /// <remarks><para>Override this method to emit custom Token objects.</para>
1101 /// a token, so do the easy thing and just kill a character and hope
1123 /// Gets or sets the 'lexeme' for the current token.
1127 /// The getter returns the text matched so far for the current token or any
1131 /// The setter sets the complete text of this token. It overrides/wipes any
1153 /// <summary>Set the token stream and reset the parser </summary>
1173 /// <summary>Return the start token or tree </summary>
1176 /// <summary>Return the stop token or tree </summary>
1226 /// screw up the token index values. That is, an insert operation at token
1230 /// the original token stream back without undoing anything. Since
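The rewrite idea described above can be sketched as deferred operations: edits are recorded keyed by token index and only applied when rendering, so the original token stream, and every token's index, stays intact and can be reproduced at any time. This minimal class is hypothetical and covers only insert-before.

```python
class RewriteStream:
    """Hypothetical sketch: record edits, apply them only at render time."""

    def __init__(self, tokens):
        self.tokens = list(tokens)      # original stream, never mutated
        self.inserts = {}               # token index -> text inserted before it

    def insert_before(self, index, text):
        self.inserts[index] = self.inserts.get(index, "") + text

    def to_string(self):
        out = []
        for i, tok in enumerate(self.tokens):
            if i in self.inserts:
                out.append(self.inserts[i])
            out.append(tok)
        return "".join(out)
```

Because an insert never shifts existing entries in `tokens`, a second operation at token index 5 still means the same token it did before the first edit.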
1357 /// exceptions are built with the expected token type.
1373 /// state can change before the exception is reported so current token index
1375 /// perhaps print an entire line of input not just a single token, for example.
1387 /// What is index of token/char were we looking at when the error occurred?
1392 /// The current Token when an error occurred. Since not all streams
1393 /// can retrieve the ith Token, we have to track the Token object.
1422 /// for most recent token with line/col info, but notify getErrorHeader()
1428 /// Returns the current Token when the error occurred (for parsers
1429 /// although a tree parser might also set the token)
1431 property Token: IToken read FToken write FToken;
1453 /// Returns the token type or char of the unexpected input element
1463 /// Returns the token/char index in the stream when the error occurred
1469 /// A mismatched char or Token or tree node.
1492 /// We were expecting a token but it's not found. The current token
1848 /// <summary>What token number is this from 0..n-1 tokens; &lt; 0 implies invalid index </summary>
1851 /// <summary>The char position into the input buffer where this token starts </summary>
1854 /// <summary>The char position into the input buffer where this token stops </summary>
1898 /// <summary>What token number is this from 0..n-1 tokens </summary>
1957 /// In an action, a lexer rule can set token to this SKIP_TOKEN and ANTLR
1958 /// will avoid creating a token for this symbol and try to fetch another.
1986 // copies from Token object for convenience in actions
2007 /// into the label for the associated token ref; e.g., x=ID. Token
2017 /// Factor out what to do upon token mismatch so tree parsers can behave
2019 /// to get single token insertion and deletion. Use this to turn off
2020 /// single token insertion and deletion. Override mismatchRecover
2027 /// Attempt to Recover from a single missing or extra token.
2030 /// EXTRA TOKEN
2032 /// LA(1) is not what we are looking for. If LA(2) has the right token,
2033 /// however, then assume LA(1) is some extra spurious token. Delete it
2037 /// MISSING TOKEN
2039 /// If current token is consistent with what could come after
2040 /// ttype then it is ok to "insert" the missing token, else throw
2053 /// mismatched token error. To Recover, it sees that LA(1)==';'
2054 /// is in the set of tokens that can follow the ')' token
2061 /// Conjure up a missing token during error recovery.
2068 /// $x points at that token. If that token is missing, but
2069 /// the next token in the stream is what we want we assume that
2070 /// this token is missing and we keep going. Because we
2071 /// have to return some token to replace the missing token,
2077 /// a CommonToken of the appropriate type. The text will be the token.
2092 /// This is set of token types that can follow a specific rule
2129 /// At the "3" token, you'd have a call chain of
2137 /// You want the exact viable token set when recovering from a
2138 /// token mismatch. Upon token mismatch, if LA(1) is member of
2139 /// the viable next token set, then you know there is most likely
2140 /// a missing token in the input stream. "Insert" one by just not
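The two recovery cases above (delete a spurious extra token when LA(2) is what we want; "insert" a missing token when LA(1) is in the viable follow set) can be sketched as a single decision function. Names and the `(tokens, p)` representation are illustrative, not the runtime's.

```python
def recover_from_mismatch(tokens, p, expected, follow):
    """Return (matched_token, new_position), or raise if unrecoverable."""
    if p + 1 < len(tokens) and tokens[p + 1] == expected:
        # EXTRA TOKEN: LA(1) is spurious; delete it and match LA(2)
        return tokens[p + 1], p + 2
    if p < len(tokens) and tokens[p] in follow:
        # MISSING TOKEN: conjure one up and do not consume any input
        return ("<missing>", expected), p
    raise SyntaxError("cannot recover from mismatch")
```

For the "(3;" example: at the ';', with ')' expected and ';' in the follow set, the function "inserts" the missing ')' and leaves the position untouched.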
2160 * input might just be missing a token--you might consume the
2201 * we resync'd to that token, we'd consume until EOF. We need to
2206 * At this point, it gets a mismatched token error and throws an
2207 * exception (since LA(1) is not in the viable following token
2214 * for the token that was a member of the recovery set.
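The resync step referred to above reduces to a simple loop: consume tokens until one appears in the computed recovery set, then resume. A sketch, with the recovery set passed in rather than computed from the follow-set stack:

```python
def consume_until(tokens, p, resync_set):
    """Advance p past tokens until one is a member of resync_set (or EOF)."""
    while p < len(tokens) and tokens[p] not in resync_set:
        p += 1
    return p
```

The quality of recovery hinges entirely on what goes into `resync_set`; too small a set and, as noted above, a resync can consume all the way to EOF.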
2300 /// <summary>Record every single token pulled from the source so we can reproduce
2321 /// The index into the tokens list of the current token (next token
2326 /// <summary>Load all tokens from the token source and put in tokens.
2328 /// set some token type / channel overrides before filling buffer.
2336 /// token.
2461 procedure Emit(const Token: IToken); overload; virtual;
2557 /// Return the index of the next token to operate on.
2572 // Token buffer index
2744 /// Return a map from token index to operation.
2915 // node created from real token
2925 FToken := CommonTree.Token;
2982 if (Token = nil) then
2985 Result := 'UnwantedTokenException(found=' + Token.Text + Exp + ')'
3977 (* Override the text for this token. The property getter
3981 * string in the token object.
4192 Input.Seek(StopIndex + 1); // jump to one past stop token
4355 Result := 'missing ' + TokenName + ' at ' + GetTokenErrorDisplay(E.Token);
4364 Result := 'mismatched input ' + GetTokenErrorDisplay(E.Token)
4386 Result := 'no viable alternative at input ' + GetTokenErrorDisplay(E.Token);
4393 + GetTokenErrorDisplay(E.Token);
4397 Result := 'mismatched input ' + GetTokenErrorDisplay(E.Token)
4403 Result := 'mismatched input ' + GetTokenErrorDisplay(E.Token)
4552 // a single token and hope for the best
4567 // if current token is consistent with what could come after set
4568 // then we know we're missing a token; error recovery is free to
4569 // "insert" the missing token
4610 // uh oh, another error at same token index; must be a case
4611 // where LT(1) is in the recovery token set so nothing is
4612 // consumed; consume a single token so at least to prevent
4628 // we don't know how to conjure up a token for sets yet
4633 // TODO do single token deletion like above for Token mismatch
4644 // if next token is what we are looking for then "delete" this token
4649 Input.Consume; // simply delete extra token
4651 ReportError(E); // report after consuming so AW sees the token in the exception
4652 // we want to return the token we're actually matching
4654 Input.Consume; // move past ttype token as if all were ok
4658 // can't recover with single token deletion, try insertion
4664 ReportError(E); // report after inserting so AW sees the token in the exception
4676 // if we've already reported an error and have not matched a token
4712 Token: IToken;
4719 for Token in Tokens do
4720 Result.Add(Token.Text);
4753 FP := SkipOffTokenChannels(FP); // leave p on valid token
4795 // is there a channel override for token type?
4815 // leave p pointing at first token on channel
4920 I := SkipOffTokenChannelsReverse(I - 1); // leave p on valid token
4952 I := SkipOffTokenChannels(I + 1); // leave p on valid token
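The SkipOffTokenChannels helper used above can be sketched as: advance the index past tokens whose channel differs from the stream's channel (e.g. hidden whitespace or comments), leaving it on the next on-channel token. Tokens are modeled here as hypothetical `(text, channel)` pairs.

```python
DEFAULT_CHANNEL = 0

def skip_off_token_channels(tokens, i, channel=DEFAULT_CHANNEL):
    """Return the first index >= i whose token is on `channel` (or len)."""
    while i < len(tokens) and tokens[i][1] != channel:
        i += 1
    return i
```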
5353 procedure TLexer.Emit(const Token: IToken);
5355 FState.Token := Token;
5525 FState.Token := nil;
5539 if (FState.Token = nil) then
5542 if (FState.Token = TToken.SKIP_TOKEN) then
5544 Exit(FState.Token);
5579 FState.Token := nil;
5602 FState.Token := TToken.SKIP_TOKEN;
6073 // whole token buffer so no lazy eval issue with any templates
6240 // no operation at that index, just dump token
6242 Inc(I); // move to next token
6253 // Scan any remaining operations after last token