    Searched full:lexer (Results 726 - 750 of 1025)


  /external/owasp/sanitizer/src/main/org/owasp/html/
HtmlChangeReporter.java 49 * notices differences between the events from the lexer and those from the
CssTokens.java 86 Lexer lexer = new Lexer(css); local
87 lexer.lex();
88 return lexer.build();
293 private static final class Lexer {
329 Lexer(String css) {
    [all...]
  /external/qemu/qobject/
qjson.c 14 #include "qapi/qmp/json-lexer.h"
  /external/smali/smali/src/test/antlr3/org/jf/smali/
expectedTokensTestGrammar.g 72 @lexer::header {
  /external/antlr/antlr-3.4/runtime/Python/antlr3/
streams.py 79 rewind(mark()) should not affect the input cursor. The Lexer
175 pass the buck all the way to the lexer who can ask its input stream
184 @brief A source of characters for an ANTLR lexer.
210 lexer code. I'd prefer to return a char here type-wise, but it's
329 directly. Every method call counts in the lexer.
337 to parse. If you pass in a byte string, the Lexer will choke on
619 @param tokenSource A TokenSource instance (usually a Lexer) to pull
    [all...]
exceptions.py 48 the various reporting methods in Parser and Lexer can be overridden
98 # generated from a lexer. We need to track this since the
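The streams.py lines above describe the contract for ANTLR character streams: mark()/rewind() restore the input cursor, and the lexer expects unicode text rather than a byte string. A minimal sketch of that contract against the antlr3 Python runtime's ANTLRStringStream, assuming that runtime is importable; the input string is made up for illustration:

    import antlr3

    # Character stream over unicode text; as the streams.py excerpt warns,
    # feeding a byte string to a generated lexer is likely to fail.
    stream = antlr3.ANTLRStringStream(u"def foo(x): return x")

    stream.consume()           # step past 'd'
    marker = stream.mark()     # remember this position
    stream.consume()           # step past 'e'
    print(stream.LA(1))        # lookahead: the ord() of the next character
    stream.rewind(marker)      # restore the position saved by mark()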
  /external/chromium_org/ppapi/generators/
idl_parser.py 124 # The Parser inherits from the Lexer to provide PLY with the tokenizing
999 # Attempts to parse the current data loaded in the lexer.
1006 return self.yaccobj.parse(lexer=self)
1015 # Loads a new file into the lexer and attempts to parse it.
    [all...]
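The idl_parser.py excerpt notes that the parser class inherits from the lexer so PLY can be handed a single object via parse(lexer=self). Separately from Chromium's IDLParser itself, a self-contained ply.lex tokenizer looks roughly like this; the token names and rules are invented for illustration:

    import ply.lex as lex

    tokens = ('IDENT', 'NUMBER')        # token names PLY exposes on the lexer

    t_IDENT = r'[A-Za-z_][A-Za-z0-9_]*'
    t_ignore = ' \t\n'                  # characters to skip silently

    def t_NUMBER(t):
        r'\d+'
        t.value = int(t.value)
        return t

    def t_error(t):
        t.lexer.skip(1)                 # drop one bad character and continue

    lexer = lex.lex()                   # build the lexer from this module's rules
    lexer.input('interface Foo 42')
    for tok in lexer:
        print(tok.type, tok.value)

A ply.yacc parser can then be given this lexer explicitly, or, as in the excerpt, one class can play both roles and pass itself as lexer=.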
  /external/chromium_org/third_party/jinja2/
environment.py 20 from jinja2.lexer import get_lexer, TokenStream
271 # lexer / parser information
368 lexer = property(get_lexer, doc="The lexer for this environment.") variable in class:Environment
473 return self.lexer.tokeniter(source, name, filename)
488 for all the extensions. Returns a :class:`~jinja2.lexer.TokenStream`.
491 stream = self.lexer.tokenize(source, name, filename, state)
    [all...]
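The environment.py lines show that a Jinja2 Environment exposes its lexer as a property and tokenizes template source through it. The public side of that is Environment.lex(), which yields (lineno, token_type, value) tuples; a small usage sketch with a made-up template string:

    from jinja2 import Environment

    env = Environment()

    # lex() runs the environment's lexer over the source without parsing it.
    for lineno, token_type, value in env.lex("Hello {{ name }}!"):
        print(lineno, token_type, repr(value))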
  /external/chromium_org/third_party/yasm/source/patched-yasm/tools/python-yasm/pyxelator/
ir.py 15 #from lexer import Lexer
1039 self.lexer.rmtypedef( name )
1041 self.lexer.lex( cstr )
1042 #print self.lexer.err_string()
1043 declaration.parse( self.lexer, Symbols() ) # use new name-space
1044 #declaration.parse( Lexer( cstr ), Symbols() )
    [all...]
  /external/llvm/docs/tutorial/
OCamlLangImpl3.rst 16 work to build a lexer and parser than it is to generate LLVM IR code. :)
540 <{lexer,parser}.ml>: use_camlp4, pp(camlp4of)
557 * Lexer Tokens
560 (* The lexer returns these 'Kwd' if it is an unknown character, otherwise one of
572 lexer.ml:
576 * Lexer
    [all...]
OCamlLangImpl4.rst 437 <{lexer,parser}.ml>: use_camlp4, pp(camlp4of)
460 * Lexer Tokens
463 (* The lexer returns these 'Kwd' if it is an unknown character, otherwise one of
475 lexer.ml:
479 * Lexer
    [all...]
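The two OCaml tutorial excerpts describe the Kaleidoscope lexer's token policy: identifiers, numbers and keywords get their own tokens, and any other character comes back wrapped as a 'Kwd' token. A rough Python analogue of that policy (not the tutorial's Camlp4 stream lexer), with the token kinds simplified for illustration:

    def tokenize(src):
        """Yield (kind, value) pairs; unrecognized characters fall through
        as ('kwd', ch), mirroring the Kwd fallback in the excerpt above."""
        i = 0
        while i < len(src):
            ch = src[i]
            if ch.isspace():
                i += 1
            elif ch.isalpha():                      # identifier or keyword
                j = i
                while j < len(src) and src[j].isalnum():
                    j += 1
                yield ('ident', src[i:j])
                i = j
            elif ch.isdigit():                      # numeric literal
                j = i
                while j < len(src) and (src[j].isdigit() or src[j] == '.'):
                    j += 1
                yield ('number', float(src[i:j]))
                i = j
            else:                                   # anything else: Kwd-style
                yield ('kwd', ch)
                i += 1

    print(list(tokenize("def f(x) x+1")))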
  /external/antlr/antlr-3.4/runtime/C/src/
antlr3inputstream.c 477 * elements of the lexer state.
490 /** \brief Rewind the lexer input to the state specified by the last produced mark.
503 /** \brief Rewind the lexer input to the state specified by the supplied mark.
542 /** \brief Rewind the lexer input to the state specified by the supplied mark.
562 /** \brief Rewind the lexer input to the state specified by the supplied mark.
706 // of lexer->parser->tree->treeparser and so on.
    [all...]
  /external/antlr/antlr-3.4/tool/src/main/java/org/antlr/tool/
NFAFactory.java 191 /** For a non-lexer, just build a simple token reference atom.
192 * For a lexer, a string is a sequence of char to match. That is,
198 if ( nfa.grammar.type==Grammar.LEXER ) {
312 * in the case of a lexer grammar, an EOT token when the conversion
318 if ( nfa.grammar.type==Grammar.LEXER ) {
  /external/bison/data/
lalr1.java 113 public interface Lexer {
146 b4_lexer_if([[private class YYLexer implements Lexer {
147 ]b4_percent_code_get([[lexer]])[
151 private Lexer yylexer;
169 b4_lexer_if([[protected]], [[public]]) b4_parser_class_name[ (]b4_parse_param_decl([[Lexer yylexer]])[) {
  /external/antlr/antlr-3.4/tool/src/test/java/org/antlr/test/
TestAttributes.java     [all...]
  /external/eclipse-basebuilder/basebuilder-3.6.2/org.eclipse.releng.basebuilder/plugins/org.eclipse.jdt.debug_3.6.1.v20100715_r361/
jdimodel.jar 
  /external/chromium_org/third_party/polymer/components-chromium/core-component-page/
core-component-page-extracted.js 822 * Block Lexer
825 function Lexer(options) {
844 Lexer.rules = block;
850 Lexer.lex = function(src, options) {
851 var lexer = new Lexer(options);
852 return lexer.lex(src);
859 Lexer.prototype.lex = function(src) {
873 Lexer.prototype.token = function(src, top, bq) {
1239 * Inline Lexer & Compile
    [all...]
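The core-component-page excerpt is the block lexer of the markdown renderer bundled with that component: Lexer.lex(src, options) is a static helper that builds a Lexer instance and delegates to its lex method. A Python sketch of the same construct-and-delegate pattern (the block rules themselves are stubbed out, since only the shape of the API is being illustrated):

    class BlockLexer:
        def __init__(self, options=None):
            self.options = options or {}
            self.tokens = []

        @classmethod
        def lex(cls, src, options=None):
            # Mirror of the excerpt's static Lexer.lex: build, then delegate.
            return cls(options).run(src)

        def run(self, src):
            # A real block lexer would apply per-block grammar rules here;
            # this stub just emits one paragraph token per blank-line chunk.
            for chunk in src.split("\n\n"):
                if chunk.strip():
                    self.tokens.append({"type": "paragraph", "text": chunk.strip()})
            return self.tokens

    print(BlockLexer.lex("first paragraph\n\nsecond paragraph"))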
  /external/antlr/antlr-3.4/runtime/CSharp2/Sources/Antlr3.Runtime/Antlr.Runtime/
BaseRecognizer.cs 51 * lexer, parser, and tree grammars. This is all the parsing
68 * State of a lexer, parser, or tree parser are collected into a state
320 * Get number of recognition errors (lexer, parser, tree parser). Each
321 * recognizer tracks its own number. So parser and lexer each have
345 * your token objects because you don't have to go modify your lexer
693 * If you change what tokens must be created by the lexer,
    [all...]
  /external/antlr/antlr-3.4/runtime/CSharp3/Sources/Antlr3.Runtime/
BaseRecognizer.cs 52 * lexer, parser, and tree grammars. This is all the parsing
70 * State of a lexer, parser, or tree parser are collected into a state
373 * Get number of recognition errors (lexer, parser, tree parser). Each
374 * recognizer tracks its own number. So parser and lexer each have
405 * your token objects because you don't have to go modify your lexer
    [all...]
  /external/clang/lib/Lex/
Pragma.cpp 288 // Make and enter a lexer object so that we lex and expand the tokens just
290 Lexer *TL = Lexer::Create_PragmaLexer(TokLoc, PragmaLoc, RParenLoc,
359 // Get the current file lexer we're looking at. Ignore _Pragma 'files' etc.
365 assert(CurPPLexer && "No current lexer?");
423 // Get the current file lexer we're looking at. Ignore _Pragma 'files' etc.
    [all...]
  /external/owasp/sanitizer/distrib/lib/
owasp-java-html-sanitizer.jar 
  /external/antlr/antlr-3.4/gunit/src/main/java/org/antlr/gunit/
JUnitCodeGen.java 109 String lexerName = grammarInfo.getGrammarName()+"Lexer";
207 // need to determine whether it's a test for parser rule or lexer rule
  /external/antlr/antlr-3.4/runtime/C/include/
antlr3tokenstream.h 75 /// lexer rule said to just skip the generated token altogether.
88 * name from whence the tokens were produced by the lexer. This pointer is a
  /external/antlr/antlr-3.4/runtime/JavaScript/tests/functional/
t042ast.html 41 lexer = new TLexer(cstream),
42 tstream = new org.antlr.runtime.CommonTokenStream(lexer),
  /external/antlr/antlr-3.4/runtime/Python/tests/
t042ast.py 29 self.lexer = self.getLexer(cStream)
30 tStream = antlr3.CommonTokenStream(self.lexer)
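Both t042ast test excerpts (the JavaScript and Python versions) wire things up the same way: a character stream feeds the generated lexer, the lexer feeds a CommonTokenStream, and that token stream feeds the parser. A sketch of that wiring with the antlr3 Python runtime, assuming a grammar named T for which the ANTLR tool has generated TLexer and TParser modules (the input string and start rule are placeholders):

    import antlr3
    from TLexer import TLexer      # generated from T.g (assumed to exist)
    from TParser import TParser    # generated from T.g (assumed to exist)

    cstream = antlr3.ANTLRStringStream("1 + 2")   # characters in
    lexer = TLexer(cstream)                       # lexer consumes characters
    tstream = antlr3.CommonTokenStream(lexer)     # buffers the lexer's tokens
    parser = TParser(tstream)                     # parser consumes tokens
    # parser.<start_rule>() would run the parse; the rule name depends on T.g.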

