
Lines Matching refs:tokens

72   parsers with the means to sequentially walk through a series of tokens.
79 In a similar fashion to CommonTokenStream, CommonTreeNodeStream feeds tokens
82 the two-dimensional shape of the tree using special UP and DOWN tokens. The
99 is the <i>integer token type of the token</i> <tt>k</tt> tokens ahead of the
108 <b>TokenStreams</b>, this is the <i>full token structure</i> <tt>k</tt> tokens
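A rough sketch of the peek/look distinction described above (the Calculator::Lexer class and the "35 * 2" input are purely illustrative assumptions, not taken from this file):

  lexer  = Calculator::Lexer.new( "35 * 2" )        # hypothetical lexer
  stream = ANTLR3::CommonTokenStream.new( lexer )
  stream.peek     # => an Integer -- the token type of the next on-channel token
  stream.look     # => the full token object for "35"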
282 sequence of tokens. Unlike simple character-based streams, such as StringStream,
287 <i>channel</i> feature, which allows you to hold on to all tokens of interest
288 while only presenting a specific set of interesting tokens to a parser. For
291 whitespace to channel value HIDDEN as it creates the tokens.
295 yield tokens that have the same value for <tt>channel</tt>. The stream skips
296 over any non-matching tokens in between.
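A brief sketch of what the channel filtering means in practice, continuing the hypothetical calculator lexer above (which is assumed to send whitespace to channel HIDDEN): off-channel tokens stay in the buffer, but the navigation methods step over them.

  stream.tokens.map { |t| t.text }   # every buffered token's text, whitespace included
  stream.look( 2 ).text              # => "*" -- #look skips the hidden whitespace token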
312 # all tokens in the stream were retrieved
329 # :method: to_s(start=0,stop=tokens.length-1)
330 # should take the tokens between start and stop in the sequence, extract their text
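A minimal sketch of that method, assuming the same hypothetical token buffer as above:

  stream.to_s( 0, 2 )   # concatenated text of buffer tokens 0 through 2, e.g. "35 *"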
729 tokens will be filtered out by the #peek, #look, and #consume methods.
736 tokens = ANTLR3::CommonTokenStream.new(lexer)
738 # assume this grammar defines whitespace as tokens on channel HIDDEN
739 # and numbers and operations as tokens on channel DEFAULT
740 tokens.look # => 0 INT['35'] @ line 1 col 0 (0..1)
741 tokens.look(2) # => 2 MULT["*"] @ line 1 col 2 (3..3)
742 tokens.tokens(0, 2)
746 # notice the #tokens method does not filter off-channel tokens
775 # # discard all WHITE_SPACE tokens
789 tokens = stream.tokens.map { | t | t.dup }
794 tokens = @token_source.to_a
797 @tokens = block_given? ? tokens.select { | t | yield( t, self ) } : tokens
798 @tokens.each_with_index { |t, i| t.index = i }
800 if first_token = @tokens.find { |t| t.channel == @channel }
801 @tokens.index( first_token )
802 else @tokens.length
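A hedged sketch of constructing a stream with a selection block, as the initializer code above applies it: the block receives each token plus the stream and only tokens for which it returns true enter the buffer. The lexer variable, the WHITE_SPACE type name, and the #name-based check are assumptions for illustration.

  tokens = ANTLR3::CommonTokenStream.new( lexer ) do | token, stream |
    token.name != 'WHITE_SPACE'   # assumed type name -- whitespace tokens never enter the buffer
  end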
810 # then clear the token buffer and attempt to harvest new tokens. Identical in
811 # behavior to CommonTokenStream.new: if a block is provided, tokens will be
819 @tokens = block_given? ? @token_source.select { |token| yield( token ) } :
821 @tokens.each_with_index { |t, i| t.index = i }
824 if first_token = @tokens.find { |t| t.channel == @channel }
825 @tokens.index( first_token )
826 else @tokens.length
842 @tokens.empty? ? CommonToken : @tokens.first.class
848 @tokens.length
860 @position += 1 while token = @tokens[ @position ] and
902 token = @tokens[ @position ] || EOF_TOKEN
903 if @position < @tokens.length
904 @position = future?( 2 ) || @tokens.length
915 @position = index.to_i.bound( 0, @tokens.length )
921 # the current token. +k+ greater than 1 represents upcoming on-channel tokens. A negative
922 # value of +k+ returns previous on-channel tokens consumed, where <tt>k = -1</tt> is the last
934 @tokens.fetch( index, EOF_TOKEN )
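Putting the +k+ semantics above together in a short sketch (same hypothetical stream as before):

  stream.look( 1 )    # the current on-channel token, not yet consumed
  stream.look( 2 )    # the on-channel token after that
  stream.consume      # advance past the current token
  stream.look( -1 )   # the most recently consumed on-channel token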
944 # on-channel tokens exist
955 # tokens, the stream can't just go to the
957 # over off-channel tokens
960 tk = @tokens.at( cursor += 1 ) or return( cursor )
970 # on-channel tokens exist before the current token
983 tk = @tokens.at( cursor -= 1 ) or return( nil )
992 # yields each token in the stream (including off-channel tokens)
994 # #each accepts the same arguments as #tokens
998 tokens( *args ).each { |token| yield( token ) }
1009 for token in @tokens
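A small usage sketch for #each, which as noted accepts the same slicing arguments as #tokens:

  stream.each { |token| puts token.text }           # every buffered token, hidden ones included
  stream.each( 0, 2 ) { |token| puts token.text }   # only buffer indices 0 through 2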
1037 # returns a copy of the token buffer. If +start+ and +stop+ are provided, tokens
1039 # are converted to integers with their <tt>to_i</tt> methods, and thus tokens
1040 # can be provided to specify start and stop. If a block is provided, tokens are
1044 def tokens( start = nil, stop = nil )
1045 stop.nil? || stop >= @tokens.length and stop = @tokens.length - 1
1047 tokens = @tokens[ start..stop ]
1050 tokens.delete_if { |t| not yield( t ) }
1053 return( tokens )
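A sketch of the #tokens variants described above (the numeric-text filter is just an illustration):

  stream.tokens                           # a fresh array covering the whole buffer
  stream.tokens( 0, 2 )                   # tokens at buffer indices 0 through 2
  stream.tokens { |t| t.text =~ /\d/ }    # only the tokens the block approves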
1058 @tokens.at i
1065 @tokens[ i, *args ]
1071 [ self.class, @token_source.class, @position, @tokens.length ]
1078 # fetches the text content of all tokens between +start+ and +stop+ and
1081 def extract_text( start = 0, stop = @tokens.length - 1 )
1083 stop = stop.to_i.at_most( @tokens.length )
1084 @tokens[ start..stop ].map! { |t| t.text }.join( '' )
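And a closing sketch of #extract_text with the same hypothetical buffer:

  stream.extract_text( 0, 2 )   # => "35 *" -- text of tokens 0 through 2, hidden whitespace included
  stream.extract_text           # the text of every buffered token joined back together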