This is doc/cppinternals.info, produced by makeinfo version 4.13 from
/Volumes/androidtc/androidtoolchain/./src/build/../gcc/gcc-4.6/gcc/doc/cppinternals.texi.

INFO-DIR-SECTION Software development
START-INFO-DIR-ENTRY
* Cpplib: (cppinternals).      Cpplib internals.
END-INFO-DIR-ENTRY

   This file documents the internals of the GNU C Preprocessor.

   Copyright 2000, 2001, 2002, 2004, 2005, 2006, 2007 Free Software Foundation, Inc.

   Permission is granted to make and distribute verbatim copies of this manual provided the copyright notice and this permission notice are preserved on all copies.

   Permission is granted to copy and distribute modified versions of this manual under the conditions for verbatim copying, provided also that the entire resulting derived work is distributed under the terms of a permission notice identical to this one.

   Permission is granted to copy and distribute translations of this manual into another language, under the above conditions for modified versions.


File: cppinternals.info,  Node: Top,  Next: Conventions,  Up: (dir)

The GNU C Preprocessor Internals
********************************

1 Cpplib--the GNU C Preprocessor
********************************

The GNU C preprocessor is implemented as a library, "cpplib", so it can be easily shared between a stand-alone preprocessor, and a preprocessor integrated with the C, C++ and Objective-C front ends.  It is also available for use by other programs, though this is not recommended as its exposed interface has not yet reached a point of reasonable stability.

The library has been written to be re-entrant, so that it can be used to preprocess many files simultaneously if necessary.  It has also been written with the preprocessing token as the fundamental unit; the preprocessor in previous versions of GCC would operate on text strings as the fundamental unit.

This brief manual documents the internals of cpplib, and explains some of the tricky issues.  It is intended that, along with the comments in the source code, a reasonably competent C programmer should be able to figure out what the code is doing, and why things have been implemented the way they have.

* Menu:

* Conventions::      Conventions used in the code.
* Lexer::            The combined C, C++ and Objective-C Lexer.
* Hash Nodes::       All identifiers are entered into a hash table.
* Macro Expansion::  Macro expansion algorithm.
* Token Spacing::    Spacing and paste avoidance issues.
* Line Numbering::   Tracking location within files.
* Guard Macros::     Optimizing header files with guard macros.
* Files::            File handling.
* Concept Index::    Index.


File: cppinternals.info,  Node: Conventions,  Next: Lexer,  Prev: Top,  Up: Top

Conventions
***********

cpplib has two interfaces--one is exposed internally only, and the other is for both internal and external use.

The convention is that functions and types that are exposed to multiple files internally are prefixed with `_cpp_', and are to be found in the file `internal.h'.  Functions and types exposed to external clients are in `cpplib.h', and prefixed with `cpp_'.  For historical reasons this is no longer quite true, but we should strive to stick to it.

We are striving to reduce the information exposed in `cpplib.h' to the bare minimum necessary, and then to keep it there.
This makes clear exactly what external clients are entitled to assume, and allows us to change internals in the future without worrying whether library clients are perhaps relying on some kind of undocumented implementation-specific behavior.


File: cppinternals.info,  Node: Lexer,  Next: Hash Nodes,  Prev: Conventions,  Up: Top

The Lexer
*********

Overview
========

The lexer is contained in the file `lex.c'.  It is a hand-coded lexer, and not implemented as a state machine.  It can understand C, C++ and Objective-C source code, and has been extended to allow reasonably successful preprocessing of assembly language.  The lexer does not make an initial pass to strip out trigraphs and escaped newlines, but handles them as they are encountered in a single pass of the input file.  It returns preprocessing tokens individually, not a line at a time.

It is mostly transparent to users of the library, since the library's interface for obtaining the next token, `cpp_get_token', takes care of lexing new tokens, handling directives, and expanding macros as necessary.  However, the lexer does expose some functionality so that clients of the library can easily spell a given token, such as `cpp_spell_token' and `cpp_token_len'.  These functions are useful when generating diagnostics, and for emitting the preprocessed output.

Lexing a token
==============

Lexing of an individual token is handled by `_cpp_lex_direct' and its subroutines.  In its current form the code is quite complicated, with read ahead characters and such-like, since it strives to not step back in the character stream in preparation for handling non-ASCII file encodings.  The current plan is to convert any such files to UTF-8 before processing them.  This complexity is therefore unnecessary and will be removed, so I'll not discuss it further here.

The job of `_cpp_lex_direct' is simply to lex a token.  It is not responsible for issues like directive handling, returning lookahead tokens directly, multiple-include optimization, or conditional block skipping.  It necessarily has a minor role to play in memory management of lexed lines.  I discuss these issues in a separate section (*note Lexing a line::).

The lexer places the token it lexes into storage pointed to by the variable `cur_token', and then increments it.  This variable is important for correct diagnostic positioning.  Unless a specific line and column are passed to the diagnostic routines, they will examine the `line' and `col' values of the token just before the location that `cur_token' points to, and use that location to report the diagnostic.

The lexer does not consider whitespace to be a token in its own right.  If whitespace (other than a new line) precedes a token, it sets the `PREV_WHITE' bit in the token's flags.  Each token has its `line' and `col' variables set to the line and column of the first character of the token.  This line number is the line number in the translation unit, and can be converted to a source (file, line) pair using the line map code.
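To make the description concrete, here is a toy sketch of such a token record and of the `cur_token' convention.  The structure and names below are invented for illustration only; the real `cpp_token' declared in `cpplib.h' differs in detail.

     /* Illustrative only: a much-simplified token, not the real cpp_token.  */
     #include <stddef.h>

     #define PREV_WHITE (1 << 0)   /* token was preceded by whitespace */

     enum toy_ttype { TOY_NAME, TOY_NUMBER, TOY_PUNCT, TOY_EOF };

     struct toy_token
     {
       enum toy_ttype type;
       unsigned int line, col;   /* first character, in translation-unit lines */
       unsigned short flags;     /* PREV_WHITE and friends */
       const char *spelling;     /* points into permanent spelling storage */
       size_t len;
     };

     /* The lexer fills in *cur_token and advances it; the diagnostic
        routines look at the token just before cur_token for a default
        location to report.  */
     static struct toy_token *
     toy_lex_one (struct toy_token *cur_token, int preceded_by_space)
     {
       if (preceded_by_space)
         cur_token->flags |= PREV_WHITE;
       /* ... fill in type, line, col and spelling here ...  */
       return cur_token + 1;
     }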
The first token on a logical, i.e. unescaped, line has the flag `BOL' set for beginning-of-line.  This flag is intended for internal use, both to distinguish a `#' that begins a directive from one that doesn't, and to generate a call-back to clients that want to be notified about the start of every non-directive line with tokens on it.  Clients cannot reliably determine this for themselves: the first token might be a macro, and the tokens of a macro expansion do not have the `BOL' flag set.  The macro expansion may even be empty, and the next token on the line certainly won't have the `BOL' flag set.

New lines are treated specially; exactly how the lexer handles them is context-dependent.  The C standard mandates that directives are terminated by the first unescaped newline character, even if it appears in the middle of a macro expansion.  Therefore, if the state variable `in_directive' is set, the lexer returns a `CPP_EOF' token, which is normally used to indicate end-of-file, to indicate end-of-directive.  In a directive a `CPP_EOF' token never means end-of-file.  Conveniently, if the caller was `collect_args', it already handles `CPP_EOF' as if it were end-of-file, and reports an error about an unterminated macro argument list.

The C standard also specifies that a new line in the middle of the arguments to a macro is treated as whitespace.  This white space is important in case the macro argument is stringified.  The state variable `parsing_args' is nonzero when the preprocessor is collecting the arguments to a macro call.  It is set to 1 when looking for the opening parenthesis to a function-like macro, and 2 when collecting the actual arguments up to the closing parenthesis, since these two cases need to be distinguished sometimes.  One such time is here: the lexer sets the `PREV_WHITE' flag of a token if it meets a new line when `parsing_args' is set to 2.  It doesn't set it if it meets a new line when `parsing_args' is 1, since then code like

     #define foo() bar
     foo
     baz

would be output with an erroneous space before `baz':

     foo
      baz

This is a good example of the subtlety of getting token spacing correct in the preprocessor; there are plenty of tests in the testsuite for corner cases like this.

The lexer is written to treat each of `\r', `\n', `\r\n' and `\n\r' as a single new line indicator.  This allows it to transparently preprocess MS-DOS, Macintosh and Unix files without their needing to pass through a special filter beforehand.

We also decided to treat a backslash, either `\' or the trigraph `??/', separated from one of the above newline indicators by non-comment whitespace only, as intending to escape the newline.  It tends to be a typing mistake, and cannot reasonably be mistaken for anything else in any of the C-family grammars.  Since handling it this way is not strictly conforming to the ISO standard, the library issues a warning wherever it encounters it.

Handling newlines like this is made simpler by doing it in one place only.  The function `handle_newline' takes care of all newline characters, and `skip_escaped_newlines' takes care of arbitrarily long sequences of escaped newlines, deferring to `handle_newline' to handle the newlines themselves.
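As an illustration of this single-point newline handling, here is a minimal, self-contained sketch (not cpplib's actual code) of a `handle_newline'-style helper that treats `\r', `\n', `\r\n' and `\n\r' as one logical newline and bumps the line counter.

     #include <stdio.h>

     /* Toy buffer cursor: CUR points at a newline character ('\r' or '\n')
        inside a NUL-terminated buffer.  LINE is the translation-unit line
        counter.  Returns a pointer just past the newline sequence, counting
        "\r", "\n", "\r\n" and "\n\r" each as a single new line.  */
     static const char *
     toy_handle_newline (const char *cur, unsigned int *line)
     {
       char first = *cur++;
       /* A following newline character of the other kind belongs to the
          same logical newline.  */
       if ((*cur == '\r' || *cur == '\n') && *cur != first)
         cur++;
       ++*line;
       return cur;
     }

     int main (void)
     {
       const char *buf = "a\r\nb\n\rc\nd";
       unsigned int line = 1;
       for (const char *p = buf; *p; )
         if (*p == '\r' || *p == '\n')
           p = toy_handle_newline (p, &line);
         else
           p++;
       printf ("%u\n", line);   /* prints 4: three logical newlines seen */
       return 0;
     }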
The most painful aspect of lexing ISO-standard C and C++ is handling trigraphs and backslash-escaped newlines.  Trigraphs are processed before any interpretation of the meaning of a character is made, and unfortunately there is a trigraph representation for a backslash, so it is possible for the trigraph `??/' to introduce an escaped newline.

Escaped newlines are tedious because theoretically they can occur anywhere--between the `+' and `=' of the `+=' token, within the characters of an identifier, and even between the `*' and `/' that terminates a comment.  Moreover, you cannot be sure there is just one--there might be an arbitrarily long sequence of them.

So, for example, the routine that lexes a number, `parse_number', cannot assume that it can scan forwards until the first non-number character and be done with it, because this could be the `\' introducing an escaped newline, or the `?' introducing the trigraph sequence that represents the `\' of an escaped newline.  If it encounters a `?' or `\', it calls `skip_escaped_newlines' to skip over any potential escaped newlines before checking whether the number has been finished.

Similarly, code in the main body of `_cpp_lex_direct' cannot simply check for a `=' after a `+' character to determine whether it has a `+=' token; it needs to be prepared for an escaped newline of some sort.  Such cases use the function `get_effective_char', which returns the first character after any intervening escaped newlines.

The lexer needs to keep track of the correct column position, including counting tabs as specified by the `-ftabstop=' option.  This should be done even within C-style comments; they can appear in the middle of a line, and we want to report diagnostics in the correct position for text appearing after the end of the comment.

Some identifiers, such as `__VA_ARGS__' and poisoned identifiers, may be invalid and require a diagnostic.  However, if they appear in a macro expansion we don't want to complain with each use of the macro.  It is therefore best to catch them during the lexing stage, in `parse_identifier'.  In both cases, whether a diagnostic is needed or not is dependent upon the lexer's state.  For example, we don't want to issue a diagnostic for re-poisoning a poisoned identifier, or for using `__VA_ARGS__' in the expansion of a variable-argument macro.  Therefore `parse_identifier' makes use of state flags to determine whether a diagnostic is appropriate.  Since we change state on a per-token basis, and don't lex whole lines at a time, this is not a problem.

Another place where state flags are used to change behavior is whilst lexing header names.  Normally, a `<' would be lexed as a single token.  After a `#include' directive, though, it should be lexed as a single token as far as the nearest `>' character.  Note that we don't allow the terminators of header names to be escaped; the first `"' or `>' terminates the header name.

Interpretation of some character sequences depends upon whether we are lexing C, C++ or Objective-C, and on the revision of the standard in force.  For example, `::' is a single token in C++, but in C it is two separate `:' tokens and almost certainly a syntax error.  Such cases are handled by `_cpp_lex_direct' based upon command-line flags stored in the `cpp_options' structure.
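Returning to escaped newlines: to make the `get_effective_char' idea above concrete, here is a small, self-contained sketch (a simplified, invented helper, not the real cpplib routine).  It peeks past any run of escaped newlines, where the escape may be a plain backslash or the `??/' trigraph, so a caller can decide questions such as whether `+' is followed by `='.

     #include <stdio.h>

     /* Return nonzero if P starts an escaped newline: a backslash (written
        either as '\' or as the trigraph "??/") immediately followed by a
        newline character.  *LEN is set to the number of characters consumed.
        Simplified: the real code also tolerates trailing whitespace before
        the newline and the two-character newline forms.  */
     static int
     toy_escaped_newline (const char *p, int *len)
     {
       int backslash_len = 0;
       if (p[0] == '\\')
         backslash_len = 1;
       else if (p[0] == '?' && p[1] == '?' && p[2] == '/')
         backslash_len = 3;
       else
         return 0;
       if (p[backslash_len] == '\n' || p[backslash_len] == '\r')
         {
           *len = backslash_len + 1;
           return 1;
         }
       return 0;
     }

     /* Toy get_effective_char: the first character after any run of
        escaped newlines starting at P.  */
     static char
     toy_get_effective_char (const char *p)
     {
       int len;
       while (toy_escaped_newline (p, &len))
         p += len;
       return *p;
     }

     int main (void)
     {
       /* "+", backslash-newline, "??/"-newline, "=" lexes as `+=' here.  */
       const char *after_plus = "\\\n??/\n=1;";
       if (toy_get_effective_char (after_plus) == '=')
         puts ("saw +=");
       return 0;
     }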
Once a token has been lexed, it leads an independent existence.  The spelling of numbers, identifiers and strings is copied to permanent storage from the original input buffer, so a token remains valid and correct even if its source buffer is freed with `_cpp_pop_buffer'.  The storage holding the spellings of such tokens remains until the client program calls `cpp_destroy', probably at the end of the translation unit.

Lexing a line
=============

When the preprocessor was changed to return pointers to tokens, one feature I wanted was some sort of guarantee regarding how long a returned pointer remains valid.  This is important to the stand-alone preprocessor, the future direction of the C family front ends, and even to cpplib itself internally.

Occasionally the preprocessor wants to be able to peek ahead in the token stream.  For example, after the name of a function-like macro, it wants to check the next token to see if it is an opening parenthesis.  Another example is that, after reading the first few tokens of a `#pragma' directive and not recognizing it as a registered pragma, it wants to backtrack and allow the user-defined handler for unknown pragmas to access the full `#pragma' token stream.  The stand-alone preprocessor wants to be able to test the current token with the previous one to see if a space needs to be inserted to preserve their separate tokenization upon re-lexing (paste avoidance), so it needs to be sure the pointer to the previous token is still valid.  The recursive-descent C++ parser wants to be able to perform tentative parsing arbitrarily far ahead in the token stream, and then to be able to jump back to a prior position in that stream if necessary.

The rule I chose, which is fairly natural, is to arrange that the preprocessor lex all tokens on a line consecutively into a token buffer, which I call a "token run", and when meeting an unescaped new line (newlines within comments do not count either), to start lexing back at the beginning of the run.  Note that we do _not_ lex a line of tokens at once; if we did that `parse_identifier' would not have state flags available to warn about invalid identifiers (*note Invalid identifiers::).

In other words, accessing tokens that appeared earlier in the current line is valid, but since each logical line overwrites the tokens of the previous line, tokens from prior lines are unavailable.  In particular, since a directive only occupies a single logical line, this means that the directive handlers like the `#pragma' handler can jump around in the directive's tokens if necessary.

Two issues remain: what about tokens that arise from macro expansions, and what happens when we have a long line that overflows the token run?

Since we promise clients that we preserve the validity of pointers that we have already returned for tokens that appeared earlier in the line, we cannot reallocate the run.  Instead, on overflow it is expanded by chaining a new token run on to the end of the existing one.
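The overflow-by-chaining scheme can be pictured with a small sketch.  This is an illustrative simplification with invented names and sizes (cpplib's real token run structure in `internal.h' is similar only in spirit): each run holds a fixed block of tokens, and when a line needs more, a fresh run is linked onto the end rather than reallocating, so previously returned pointers stay valid.

     #include <stdlib.h>

     struct toy_token { int type; unsigned int line, col; };

     /* One fixed-size block of tokens; blocks are chained, never
        reallocated, so pointers handed out for earlier tokens on the line
        stay valid.  */
     struct toy_run
     {
       struct toy_token tokens[256];
       struct toy_token *limit;      /* one past the last usable slot */
       struct toy_run *next;         /* lazily allocated continuation */
     };

     static struct toy_run *
     toy_new_run (void)
     {
       struct toy_run *run = calloc (1, sizeof *run);
       if (!run)
         abort ();
       run->limit = run->tokens + 256;
       return run;
     }

     /* Return a slot for the next token, moving to (and allocating, if
        needed) the next run in the chain when the current one is full.
        *CUR_RUN and *CUR_TOKEN together play the role of the lexer's
        position.  */
     static struct toy_token *
     toy_next_slot (struct toy_run **cur_run, struct toy_token **cur_token)
     {
       if (*cur_token == (*cur_run)->limit)
         {
           if ((*cur_run)->next == NULL)
             (*cur_run)->next = toy_new_run ();
           *cur_run = (*cur_run)->next;
           *cur_token = (*cur_run)->tokens;
         }
       return (*cur_token)++;
     }

At the end of a logical line the position is simply reset to the first run's token array, which is what gives lexed tokens their one-line lifetime.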
The tokens forming a macro's replacement list are collected by the `#define' handler, and placed in storage that is only freed by `cpp_destroy'.  So if a macro is expanded in the line of tokens, the pointers to the tokens of its expansion that are returned will always remain valid.  However, macros are a little trickier than that, since they give rise to three sources of fresh tokens.  They are the built-in macros like `__LINE__', and the `#' and `##' operators for stringification and token pasting.  I handled this by allocating space for these tokens from the lexer's token run chain.  This means they automatically receive the same lifetime guarantees as lexed tokens, and we don't need to concern ourselves with freeing them.

Lexing into a line of tokens solves some of the token memory management issues, but not all.  The opening parenthesis after a function-like macro name might lie on a different line, and the front ends definitely want the ability to look ahead past the end of the current line.  So cpplib only moves back to the start of the token run at the end of a line if the variable `keep_tokens' is zero.  Line-buffering is quite natural for the preprocessor, and as a result the only time cpplib needs to increment this variable is whilst looking for the opening parenthesis to, and reading the arguments of, a function-like macro.  In the near future cpplib will export an interface to increment and decrement this variable, so that clients can share full control over the lifetime of token pointers too.

The routine `_cpp_lex_token' handles moving to new token runs, calling `_cpp_lex_direct' to lex new tokens, or returning previously-lexed tokens if we stepped back in the token stream.  It also checks each token for the `BOL' flag, which might indicate a directive that needs to be handled, or require a start-of-line call-back to be made.  `_cpp_lex_token' also handles skipping over tokens in failed conditional blocks, and invalidates the control macro of the multiple-include optimization if a token was successfully lexed outside a directive.  In other words, its callers do not need to concern themselves with such issues.


File: cppinternals.info,  Node: Hash Nodes,  Next: Macro Expansion,  Prev: Lexer,  Up: Top

Hash Nodes
**********

When cpplib encounters an "identifier", it generates a hash code for it and stores it in the hash table.  By "identifier" we mean tokens with type `CPP_NAME'; this includes identifiers in the usual C sense, as well as keywords, directive names, macro names and so on.  For example, all of `pragma', `int', `foo' and `__GNUC__' are identifiers and hashed when lexed.

Each node in the hash table contains various information about the identifier it represents, such as its length and type.  At any one time, each identifier falls into exactly one of three categories:

   * Macros

     These have been declared to be macros, either on the command line or with `#define'.  A few, such as `__TIME__', are built-ins entered in the hash table during initialization.  The hash node for a normal macro points to a structure with more information about the macro, such as whether it is function-like, how many arguments it takes, and its expansion.  Built-in macros are flagged as special, and instead contain an enum indicating which of the various built-in macros it is.

   * Assertions

     Assertions are in a separate namespace to macros.  To enforce this, cpp actually prepends a `#' character before hashing and entering it in the hash table.  An assertion's node points to a chain of answers to that assertion.
   * Void

     Everything else falls into this category--an identifier that is not currently a macro, or a macro that has since been undefined with `#undef'.

     When preprocessing C++, this category also includes the named operators, such as `xor'.  In expressions these behave like the operators they represent, but in contexts where the spelling of a token matters they are spelt differently.  This spelling distinction is relevant when they are operands of the stringizing and pasting macro operators `#' and `##'.  Named operator hash nodes are flagged, both to catch the spelling distinction and to prevent them from being defined as macros.

The same identifiers share the same hash node.  Since each identifier token, after lexing, contains a pointer to its hash node, this is used to provide rapid lookup of various information.  For example, when parsing a `#define' statement, CPP flags each argument's identifier hash node with the index of that argument.  This makes duplicated argument checking an O(1) operation for each argument.  Similarly, for each identifier in the macro's expansion, lookup to see if it is an argument, and which argument it is, is also an O(1) operation.  Further, each directive name, such as `endif', has an associated directive enum stored in its hash node, so that directive lookup is also O(1).


File: cppinternals.info,  Node: Macro Expansion,  Next: Token Spacing,  Prev: Hash Nodes,  Up: Top

Macro Expansion Algorithm
*************************

Macro expansion is a tricky operation, fraught with nasty corner cases and situations that render what you thought was a nifty way to optimize the preprocessor's expansion algorithm wrong in quite subtle ways.

I strongly recommend you have a good grasp of how the C and C++ standards require macros to be expanded before diving into this section, let alone the code!  If you don't have a clear mental picture of how things like nested macro expansion, stringification and token pasting are supposed to work, damage to your sanity can quickly result.

Internal representation of macros
=================================

The preprocessor stores macro expansions in tokenized form.  This saves repeated lexing passes during expansion, at the cost of a small increase in memory consumption on average.  The tokens are stored contiguously in memory, so a pointer to the first one and a token count is all you need to get the replacement list of a macro.

If the macro is a function-like macro the preprocessor also stores its parameters, in the form of an ordered list of pointers to the hash table entry of each parameter's identifier.  Further, in the macro's stored expansion each occurrence of a parameter is replaced with a special token of type `CPP_MACRO_ARG'.  Each such token holds the index of the parameter it represents in the parameter list, which allows rapid replacement of parameters with their arguments during expansion.  Despite this optimization it is still necessary to store the original parameters to the macro, both for dumping with e.g., `-dD', and to warn about non-trivial macro redefinitions when the parameter names have changed.
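A toy version of this representation, purely for illustration (the structures and field names here are invented, not cpplib's): parameter identifiers are flagged on their hash nodes with an argument index while the `#define' is parsed, which makes both duplicate-parameter checking and the replacement of parameters by `CPP_MACRO_ARG'-style tokens a constant-time lookup per token.

     #define MAX_PARAMS 16
     #define MAX_REPL   64

     /* Toy hash node: arg_index is 0 when the identifier is not currently
        a macro parameter, otherwise a 1-based index into the parameter
        list.  */
     struct toy_node { const char *name; int arg_index; };

     enum toy_kind { TOK_NAME, TOK_MACRO_ARG, TOK_OTHER };

     struct toy_tok
     {
       enum toy_kind kind;
       struct toy_node *node;   /* for TOK_NAME */
       int arg_index;           /* for TOK_MACRO_ARG, 1-based */
     };

     struct toy_macro
     {
       struct toy_node *params[MAX_PARAMS];
       int n_params;
       struct toy_tok repl[MAX_REPL];   /* replacement list, parameters
                                           already turned into arg tokens */
       int n_repl;
     };

     /* Record a parameter; returns 0 on a duplicate, detected in O(1) by
        looking at the hash node instead of scanning the parameter list.
        Bounds checks omitted for brevity.  */
     static int
     toy_add_param (struct toy_macro *m, struct toy_node *id)
     {
       if (id->arg_index != 0)
         return 0;                      /* duplicate parameter name */
       id->arg_index = ++m->n_params;
       m->params[m->n_params - 1] = id;
       return 1;
     }

     /* Append one replacement-list token, turning parameter names into
        MACRO_ARG tokens that just carry the parameter's index.  */
     static void
     toy_add_repl (struct toy_macro *m, struct toy_node *id)
     {
       struct toy_tok *t = &m->repl[m->n_repl++];
       if (id && id->arg_index != 0)
         {
           t->kind = TOK_MACRO_ARG;
           t->arg_index = id->arg_index;
         }
       else
         {
           t->kind = id ? TOK_NAME : TOK_OTHER;
           t->node = id;
         }
     }

Once the directive has been parsed, the indexes would be cleared again, since the same identifiers may occur as ordinary names elsewhere.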
Macro expansion overview
========================

The preprocessor maintains a "context stack", implemented as a linked list of `cpp_context' structures, which together represent the macro expansion state at any one time.  The `struct cpp_reader' member variable `context' points to the current top of this stack.  The top normally holds the unexpanded replacement list of the innermost macro under expansion, except when cpplib is about to pre-expand an argument, in which case it holds that argument's unexpanded tokens.

When there are no macros under expansion, cpplib is in "base context".  All contexts other than the base context contain a contiguous list of tokens delimited by a starting and ending token.  When not in base context, cpplib obtains the next token from the list of the top context.  If there are no tokens left in the list, it pops that context off the stack, and subsequent ones if necessary, until an unexhausted context is found or it returns to base context.  In base context, cpplib reads tokens directly from the lexer.

If it encounters an identifier that is both a macro and enabled for expansion, cpplib prepares to push a new context for that macro on the stack by calling the routine `enter_macro_context'.  When this routine returns, the new context will contain the unexpanded tokens of the replacement list of that macro.  In the case of function-like macros, `enter_macro_context' also replaces any parameters in the replacement list, stored as `CPP_MACRO_ARG' tokens, with the appropriate macro argument.  If the standard requires that the parameter be replaced with its expanded argument, the argument will have been fully macro expanded first.

`enter_macro_context' also handles special macros like `__LINE__'.  Although these macros expand to a single token which cannot contain any further macros, for reasons of token spacing (*note Token Spacing::) and simplicity of implementation, cpplib handles these special macros by pushing a context containing just that one token.

The final thing that `enter_macro_context' does before returning is to mark the macro disabled for expansion (except for special macros like `__TIME__').  The macro is re-enabled when its context is later popped from the context stack, as described above.  This strict ordering ensures that a macro is disabled whilst its expansion is being scanned, but that it is _not_ disabled whilst any arguments to it are being expanded.
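The push/pop discipline and the enable/disable ordering can be sketched as follows.  This is a deliberately simplified model with invented names and no argument pre-expansion, not the real `cpp_context' code, but it shows why a macro stays disabled exactly while its own replacement list is on the stack.

     #include <stddef.h>

     struct toy_tok { int type; };

     struct toy_macro
     {
       const char *name;
       int disabled;                      /* nonzero while its expansion is live */
       const struct toy_tok *repl;        /* replacement list */
       size_t n_repl;
     };

     struct toy_context
     {
       const struct toy_tok *cur, *limit; /* remaining tokens of this context */
       struct toy_macro *macro;           /* macro to re-enable on pop, or NULL */
       struct toy_context *prev;          /* towards base context */
     };

     /* Push the replacement list of MACRO and disable it, as
        enter_macro_context is described to do above.  */
     static struct toy_context *
     toy_push (struct toy_context *top, struct toy_context *fresh,
               struct toy_macro *macro)
     {
       fresh->cur = macro->repl;
       fresh->limit = macro->repl + macro->n_repl;
       fresh->macro = macro;
       fresh->prev = top;
       macro->disabled = 1;
       return fresh;
     }

     /* Fetch the next token.  Note the order: the last token of a context
        is returned while that context is still on the stack; only the
        *next* request pops it and re-enables the macro.  */
     static const struct toy_tok *
     toy_next (struct toy_context **top)
     {
       while (*top && (*top)->cur == (*top)->limit)
         {
           if ((*top)->macro)
             (*top)->macro->disabled = 0;  /* re-enable on pop */
           *top = (*top)->prev;            /* drop to the lower context */
         }
       if (*top == NULL)
         return NULL;                      /* base context: would call the lexer */
       return (*top)->cur++;
     }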
Scanning the replacement list for macros to expand
==================================================

The C standard states that, after any parameters have been replaced with their possibly-expanded arguments, the replacement list is scanned for nested macros.  Further, any identifiers in the replacement list that are not expanded during this scan are never again eligible for expansion in the future, if the reason they were not expanded is that the macro in question was disabled.

Clearly this latter condition can only apply to tokens resulting from argument pre-expansion.  Other tokens never have an opportunity to be re-tested for expansion.  It is possible for identifiers that are function-like macros to not expand initially but to expand during a later scan.  This occurs when the identifier is the last token of an argument (and therefore originally followed by a comma or a closing parenthesis in its macro's argument list), and when it replaces its parameter in the macro's replacement list, the subsequent token happens to be an opening parenthesis (itself possibly the first token of an argument).

It is important to note that when cpplib reads the last token of a given context, that context still remains on the stack.  Only when looking for the _next_ token do we pop it off the stack and drop to a lower context.  This makes backing up by one token easy, but more importantly ensures that the macro corresponding to the current context is still disabled when we are considering the last token of its replacement list for expansion (or indeed expanding it).  As an example, which illustrates many of the points above, consider

     #define foo(x) bar x
     foo(foo) (2)

which fully expands to `bar foo (2)'.  During pre-expansion of the argument, `foo' does not expand even though the macro is enabled, since it has no following parenthesis [pre-expansion of an argument only uses tokens from that argument; it cannot take tokens from whatever follows the macro invocation].  This still leaves the argument token `foo' eligible for future expansion.  Then, when re-scanning after argument replacement, the token `foo' is rejected for expansion, and marked ineligible for future expansion, since the macro is now disabled.  It is disabled because the replacement list `bar foo' of the macro is still on the context stack.

If instead the algorithm looked for an opening parenthesis first and then tested whether the macro were disabled it would be subtly wrong.  In the example above, the replacement list of `foo' would be popped in the process of finding the parenthesis, re-enabling `foo' and expanding it a second time.

Looking for a function-like macro's opening parenthesis
=======================================================

Function-like macros only expand when immediately followed by a parenthesis.  To do this cpplib needs to temporarily disable macros and read the next token.  Unfortunately, because of spacing issues (*note Token Spacing::), there can be fake padding tokens in-between, and if the next real token is not a parenthesis cpplib needs to be able to back up that one token as well as retain the information in any intervening padding tokens.

Backing up more than one token when macros are involved is not permitted by cpplib, because in general it might involve issues like restoring popped contexts onto the context stack, which are too hard.  Instead, searching for the parenthesis is handled by a special function, `funlike_invocation_p', which remembers padding information as it reads tokens.  If the next real token is not an opening parenthesis, it backs up that one token, and then pushes an extra context just containing the padding information if necessary.

Marking tokens ineligible for future expansion
==============================================

As discussed above, cpplib needs a way of marking tokens as unexpandable.  Since the tokens cpplib handles are read-only once they have been lexed, it instead makes a copy of the token and adds the flag `NO_EXPAND' to the copy.
For efficiency and to simplify memory management by avoiding having to remember to free these tokens, they are allocated as temporary tokens from the lexer's current token run (*note Lexing a line::) using the function `_cpp_temp_token'.  The tokens are then re-used once the current line of tokens has been read in.

This might sound unsafe.  However, token runs are not re-used at the end of a line if it happens to be in the middle of a macro argument list, and cpplib only wants to back up more than one lexer token in situations where no macro expansion is involved, so the optimization is safe.


File: cppinternals.info,  Node: Token Spacing,  Next: Line Numbering,  Prev: Macro Expansion,  Up: Top

Token Spacing
*************

First, consider an issue that only concerns the stand-alone preprocessor: there needs to be a guarantee that re-reading its preprocessed output results in an identical token stream.  Without taking special measures, this might not be the case because of macro substitution.  For example:

     #define PLUS +
     #define EMPTY
     #define f(x) =x=
     +PLUS -EMPTY- PLUS+ f(=)
             ==> + + - - + + = = =
             _not_
             ==> ++ -- ++ ===

One solution would be to simply insert a space between all adjacent tokens.  However, we would like to keep space insertion to a minimum, both for aesthetic reasons and because it causes problems for people who still try to abuse the preprocessor for things like Fortran source and Makefiles.

For now, just notice that when tokens are added (or removed, as shown by the `EMPTY' example) from the original lexed token stream, we need to check for accidental token pasting.  We call this "paste avoidance".  Token addition and removal can only occur because of macro expansion, but accidental pasting can occur in many places: both before and after each macro replacement, each argument replacement, and additionally each token created by the `#' and `##' operators.

Look at how the preprocessor gets whitespace output correct normally.  The `cpp_token' structure contains a flags byte, and one of those flags is `PREV_WHITE'.  This is flagged by the lexer, and indicates that the token was preceded by whitespace of some form other than a new line.  The stand-alone preprocessor can use this flag to decide whether to insert a space between tokens in the output.

Now consider the result of the following macro expansion:

     #define add(x, y, z) x + y +z;
     sum = add (1,2, 3);
             ==> sum = 1 + 2 +3;

The interesting thing here is that the tokens `1' and `2' are output with a preceding space, and `3' is output without a preceding space, but when lexed none of these tokens had that property.  Careful consideration reveals that `1' gets its preceding whitespace from the space preceding `add' in the macro invocation, _not_ replacement list.  `2' gets its whitespace from the space preceding the parameter `y' in the macro replacement list, and `3' has no preceding space because parameter `z' has none in the replacement list.

Once lexed, tokens are effectively fixed and cannot be altered, since pointers to them might be held in many places, in particular by in-progress macro expansions.
So instead of modifying the two tokens above, the preprocessor inserts a special token, which I call a "padding token", into the token stream to indicate that spacing of the subsequent token is special.  The preprocessor inserts padding tokens in front of every macro expansion and expanded macro argument.  These point to a "source token" from which the subsequent real token should inherit its spacing.  In the above example, the source tokens are `add' in the macro invocation, and `y' and `z' in the macro replacement list, respectively.

It is quite easy to get multiple padding tokens in a row, for example if a macro's first replacement token expands straight into another macro.

     #define foo bar
     #define bar baz
     [foo]
             ==> [baz]

Here, two padding tokens are generated with sources the `foo' token between the brackets, and the `bar' token from foo's replacement list, respectively.  Clearly the first padding token is the one to use, so the output code should contain a rule that the first padding token in a sequence is the one that matters.

But what if a macro expansion is left?  Adjusting the above example slightly:

     #define foo bar
     #define bar EMPTY baz
     #define EMPTY
     [foo] EMPTY;
             ==> [ baz] ;

As shown, now there should be a space before `baz' and the semicolon in the output.

The rules we decided above fail for `baz': we generate three padding tokens, one per macro invocation, before the token `baz'.  We would then have it take its spacing from the first of these, which carries source token `foo' with no leading space.

It is vital that cpplib get spacing correct in these examples since any of these macro expansions could be stringified, where spacing matters.

So, this demonstrates that not just entering macro and argument expansions, but leaving them requires special handling too.  I made cpplib insert a padding token with a `NULL' source token when leaving macro expansions, as well as after each replaced argument in a macro's replacement list.  It also inserts appropriate padding tokens on either side of tokens created by the `#' and `##' operators.  I expanded the rule so that, if we see a padding token with a `NULL' source token, _and_ that source token has no leading space, then we behave as if we have seen no padding tokens at all.  A quick check shows this rule will then get the above example correct as well.

Now a relationship with paste avoidance is apparent: we have to be careful about paste avoidance in exactly the same locations we have padding tokens in order to get white space correct.  This makes implementation of paste avoidance easy: wherever the stand-alone preprocessor is fixing up spacing because of padding tokens, and it turns out that no space is needed, it has to take the extra step to check that a space is not needed after all to avoid an accidental paste.  The function `cpp_avoid_paste' advises whether a space is required between two consecutive tokens.  To avoid excessive spacing, it tries hard to only require a space if one is likely to be necessary, but for reasons of efficiency it is slightly conservative and might recommend a space where one is not strictly needed.
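As a rough illustration of how a stand-alone preprocessor might combine these two concerns when writing its output, here is a small sketch.  It is not cpplib's code: the paste check below handles only a few token pairs by comparing spellings, whereas the real `cpp_avoid_paste' works on token types and is deliberately conservative.

     #include <stdio.h>
     #include <string.h>

     struct out_tok { const char *spelling; int prev_white; };

     /* Would writing A immediately followed by B re-lex as a different
        token sequence?  Only a handful of illustrative cases are checked
        here.  */
     static int
     toy_avoid_paste (const struct out_tok *a, const struct out_tok *b)
     {
       const char *pairs[][2] = {
         { "+", "+" }, { "-", "-" }, { "+", "=" }, { "-", "=" },
         { "=", "=" }, { "<", "<" }, { ">", ">" }, { "/", "*" },
       };
       for (size_t i = 0; i < sizeof pairs / sizeof pairs[0]; i++)
         if (!strcmp (a->spelling, pairs[i][0])
             && !strcmp (b->spelling, pairs[i][1]))
           return 1;
       return 0;
     }

     /* Emit a token, inserting a space when the token asked for one
        (PREV_WHITE, possibly inherited from a padding token's source) or
        when omitting it would cause an accidental paste with the previous
        token.  */
     static void
     toy_emit (FILE *out, const struct out_tok *prev, const struct out_tok *tok)
     {
       if (prev && (tok->prev_white || toy_avoid_paste (prev, tok)))
         fputc (' ', out);
       fputs (tok->spelling, out);
     }

     int main (void)
     {
       /* `+PLUS' with PLUS expanding to `+': two `+' tokens, neither with
          PREV_WHITE, but a space must still be printed to avoid `++'.  */
       struct out_tok plus1 = { "+", 0 }, plus2 = { "+", 0 };
       toy_emit (stdout, NULL, &plus1);
       toy_emit (stdout, &plus1, &plus2);
       fputc ('\n', stdout);   /* prints "+ +" */
       return 0;
     }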

File: cppinternals.info,  Node: Line Numbering,  Next: Guard Macros,  Prev: Token Spacing,  Up: Top

Line numbering
**************

Just which line number anyway?
==============================

There are three reasonable requirements a cpplib client might have for the line number of a token passed to it:

   * The source line it was lexed on.

   * The line it is output on.  This can be different to the line it was lexed on if, for example, there are intervening escaped newlines or C-style comments.  For example:

          foo /* A long
          comment */ bar \
          baz
          =>
          foo bar baz

   * If the token results from a macro expansion, the line of the macro name, or possibly the line of the closing parenthesis in the case of function-like macro expansion.

The `cpp_token' structure contains `line' and `col' members.  The lexer fills these in with the line and column of the first character of the token.  Consequently, but maybe unexpectedly, a token from the replacement list of a macro expansion carries the location of the token within the `#define' directive, because cpplib expands a macro by returning pointers to the tokens in its replacement list.  The current implementation of cpplib assigns tokens created from built-in macros and the `#' and `##' operators the location of the most recently lexed token.  This is because they are allocated from the lexer's token runs, and because of the way the diagnostic routines infer the appropriate location to report.

The diagnostic routines in cpplib display the location of the most recently _lexed_ token, unless they are passed a specific line and column to report.  For diagnostics regarding tokens that arise from macro expansions, it might also be helpful for the user to see the original location in the macro definition that the token came from.  Since that is exactly the information each token carries, such an enhancement could be made relatively easily in future.

The stand-alone preprocessor faces a similar problem when determining the correct line to output the token on: the position attached to a token is fairly useless if the token came from a macro expansion.  All tokens on a logical line should be output on its first physical line, so the token's reported location is also wrong if it is part of a physical line other than the first.

To solve these issues, cpplib provides a callback that is generated whenever it lexes a preprocessing token that starts a new logical line other than a directive.  It passes this token (which may be a `CPP_EOF' token indicating the end of the translation unit) to the callback routine, which can then use the line and column of this token to produce correct output.

Representation of line numbers
==============================

As mentioned above, cpplib stores with each token the line number that it was lexed on.  In fact, this number is not the number of the line in the source file, but instead bears more resemblance to the number of the line in the translation unit.

The preprocessor maintains a monotonically increasing line count, which is incremented at every new line character (and also at the end of any buffer that does not end in a new line).  Since a line number of zero is useful to indicate certain special states and conditions, this variable starts counting from one.

This variable therefore uniquely enumerates each line in the translation unit.  With some simple infrastructure, it is straightforward to map from this to the original source file and line number pair, saving space whenever line number information needs to be saved.  The code that implements this mapping lies in the files `line-map.c' and `line-map.h'.
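Here is a minimal sketch of the kind of mapping such infrastructure provides.  It is a simplified, invented structure, not the real line map API: every time the presented file or line changes (entering an included file, returning from one, a `#line' directive), a new entry records which translation-unit line the change starts at; looking up a token's line is then a search for the last entry at or below it.

     #include <stdio.h>

     /* One entry per change of presented location: starting at TU line
        FROM_LINE, lines belong to TO_FILE, whose first mapped line is
        TO_LINE.  */
     struct toy_map
     {
       unsigned int from_line;
       const char *to_file;
       unsigned int to_line;
     };

     /* Map a translation-unit line back to a (file, line) pair using the
        last applicable entry.  MAPS must be sorted by from_line, N >= 1.  */
     static void
     toy_lookup (const struct toy_map *maps, unsigned int n,
                 unsigned int tu_line, const char **file, unsigned int *line)
     {
       unsigned int i = 0;
       while (i + 1 < n && maps[i + 1].from_line <= tu_line)
         i++;
       *file = maps[i].to_file;
       *line = maps[i].to_line + (tu_line - maps[i].from_line);
     }

     int main (void)
     {
       /* main.c occupies TU lines 1-3, includes "a.h" on its line 3; a.h
          occupies TU lines 4-8; main.c resumes at its line 4 on TU line 9.  */
       const struct toy_map maps[] = {
         { 1, "main.c", 1 },
         { 4, "a.h",    1 },
         { 9, "main.c", 4 },
       };
       const char *file;
       unsigned int line;
       toy_lookup (maps, 3, 6, &file, &line);
       printf ("%s:%u\n", file, line);   /* prints a.h:3 */
       return 0;
     }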
Command-line macros and assertions are implemented by pushing a buffer containing the right hand side of an equivalent `#define' or `#assert' directive.  Some built-in macros are handled similarly.  Since these are all processed before the first line of the main input file, it will typically have an assigned line closer to twenty than to one.


File: cppinternals.info,  Node: Guard Macros,  Next: Files,  Prev: Line Numbering,  Up: Top

The Multiple-Include Optimization
*********************************

Header files are often of the form

     #ifndef FOO
     #define FOO
     ...
     #endif

to prevent the compiler from processing them more than once.  The preprocessor notices such header files, so that if the header file appears in a subsequent `#include' directive and `FOO' is defined, then it is ignored and it doesn't preprocess or even re-open the file a second time.  This is referred to as the "multiple include optimization".

Under what circumstances is such an optimization valid?  If the file were included a second time, it can only be optimized away if that inclusion would result in no tokens to return, and no relevant directives to process.  Therefore the current implementation imposes requirements and makes some allowances as follows:

  1. There must be no tokens outside the controlling `#if'-`#endif' pair, but whitespace and comments are permitted.

  2. There must be no directives outside the controlling directive pair, but the "null directive" (a line containing nothing other than a single `#' and possibly whitespace) is permitted.

  3. The opening directive must be of the form

          #ifndef FOO

     or

          #if !defined FOO     [equivalently, #if !defined(FOO)]

  4. In the second form above, the tokens forming the `#if' expression must have come directly from the source file--no macro expansion must have been involved.  This is because macro definitions can change, and tracking whether or not a relevant change has been made is not worth the implementation cost.

  5. There can be no `#else' or `#elif' directives at the outer conditional block level, because they would probably contain something of interest to a subsequent pass.

First, when pushing a new file on the buffer stack, `_stack_include_file' sets the controlling macro `mi_cmacro' to `NULL', and sets `mi_valid' to `true'.  This indicates that the preprocessor has not yet encountered anything that would invalidate the multiple-include optimization.  As described in the next few paragraphs, these two variables having these values effectively indicates top-of-file.

When about to return a token that is not part of a directive, `_cpp_lex_token' sets `mi_valid' to `false'.  This enforces the constraint that tokens outside the controlling conditional block invalidate the optimization.
The `do_if', when appropriate, and `do_ifndef' directive handlers pass the controlling macro to the function `push_conditional'.  cpplib maintains a stack of nested conditional blocks, and after processing every opening conditional this function pushes an `if_stack' structure onto the stack.  In this structure it records the controlling macro for the block, provided there is one and we're at top-of-file (as described above).  If an `#elif' or `#else' directive is encountered, the controlling macro for that block is cleared to `NULL'.  Otherwise, it survives until the `#endif' closing the block, upon which `do_endif' sets `mi_valid' to true and stores the controlling macro in `mi_cmacro'.

`_cpp_handle_directive' clears `mi_valid' when processing any directive other than an opening conditional and the null directive.  With this, and requiring top-of-file to record a controlling macro, and no `#else' or `#elif' for it to survive and be copied to `mi_cmacro' by `do_endif', we have enforced the absence of directives outside the main conditional block for the optimization to be on.

Note that whilst we are inside the conditional block, `mi_valid' is likely to be reset to `false', but this does not matter since the closing `#endif' restores it to `true' if appropriate.

Finally, since `_cpp_lex_direct' pops the file off the buffer stack at `EOF' without returning a token, if the `#endif' directive was not followed by any tokens, `mi_valid' is `true' and `_cpp_pop_file_buffer' remembers the controlling macro associated with the file.  Subsequent calls to `stack_include_file' result in no buffer being pushed if the controlling macro is defined, effecting the optimization.

A quick word on how we handle the

     #if !defined FOO

case.  `_cpp_parse_expr' and `parse_defined' take steps to see whether the three stages `!', `defined-expression' and `end-of-directive' occur in order in a `#if' expression.  If so, they return the guard macro to `do_if' in the variable `mi_ind_cmacro', and otherwise set it to `NULL'.  `enter_macro_context' sets `mi_valid' to false, so if a macro was expanded whilst parsing any part of the expression, then the top-of-file test in `push_conditional' fails and the optimization is turned off.
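The bookkeeping above amounts to a small state machine per file.  The following sketch restates it with invented names and a simplified event interface, purely to illustrate the logic; the real implementation spreads this work across `_cpp_lex_token', `push_conditional', `do_endif' and `_cpp_pop_file_buffer'.

     #include <stddef.h>

     /* Per-file state for the multiple-include optimization.  */
     struct toy_mi
     {
       int valid;            /* nothing has spoiled the optimization yet */
       const char *cmacro;   /* controlling macro once the #endif closes */
       const char *pending;  /* guard macro of the open outer conditional */
       int depth;            /* nesting depth of conditionals */
     };

     static void toy_file_start (struct toy_mi *s)
     { s->valid = 1; s->cmacro = NULL; s->pending = NULL; s->depth = 0; }

     /* A token was returned outside any directive.  */
     static void toy_token (struct toy_mi *s)
     { s->valid = 0; }

     /* An #ifndef GUARD (or the equivalent #if !defined form) was seen.
        Only the outermost one, seen at "top of file", can become the
        controlling macro.  */
     static void toy_open_conditional (struct toy_mi *s, const char *guard)
     {
       if (s->depth++ == 0 && s->valid && s->cmacro == NULL)
         s->pending = guard;
     }

     /* #else or #elif at the outer level spoils the controlling macro.  */
     static void toy_else_or_elif (struct toy_mi *s)
     { if (s->depth == 1) s->pending = NULL; }

     /* Any other directive outside the conditional spoils the whole thing
        (the null directive would simply not call this).  */
     static void toy_other_directive (struct toy_mi *s)
     { if (s->depth == 0) s->valid = 0; }

     static void toy_endif (struct toy_mi *s)
     {
       if (--s->depth == 0 && s->pending)
         {
           s->valid = 1;            /* the closing #endif re-validates */
           s->cmacro = s->pending;
         }
     }

     /* At end of file: if still valid, CMACRO guards the whole file, and
        the file need not even be reopened while that macro is defined.  */
     static const char *toy_file_end (const struct toy_mi *s)
     { return s->valid ? s->cmacro : NULL; }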

File: cppinternals.info,  Node: Files,  Next: Concept Index,  Prev: Guard Macros,  Up: Top

File Handling
*************

Fairly obviously, the file handling code of cpplib resides in the file `files.c'.  It takes care of the details of file searching, opening, reading and caching, for both the main source file and all the headers it recursively includes.

The basic strategy is to minimize the number of system calls.  On many systems, the basic `open ()' and `fstat ()' system calls can be quite expensive.  For every `#include'-d file, we need to try all the directories in the search path until we find a match.  Some projects, such as glibc, pass twenty or thirty include paths on the command line, so this can rapidly become time consuming.

For a header file we have not encountered before we have little choice but to do this.  However, it is often the case that the same headers are repeatedly included, and in these cases we try to avoid repeating the filesystem queries whilst searching for the correct file.

For each file we try to open, we store the constructed path in a splay tree.  This path first undergoes simplification by the function `_cpp_simplify_pathname'.  For example, `/usr/include/bits/../foo.h' is simplified to `/usr/include/foo.h' before we enter it in the splay tree and try to `open ()' the file.  CPP will then find subsequent uses of `foo.h', even as `/usr/include/foo.h', in the splay tree and save system calls.

Further, it is likely the file contents have also been cached, saving a `read ()' system call.  We don't bother caching the contents of header files that are re-inclusion protected, and whose re-inclusion macro is defined when we leave the header file for the first time.  If the host supports it, we try to map suitably large files into memory, rather than reading them in directly.

The include paths are internally stored on a null-terminated singly-linked list, starting with the `"header.h"' directory search chain, which then links into the `<header.h>' directory chain.

Files included with the `<foo.h>' syntax start the lookup directly in the second half of this chain.  However, files included with the `"foo.h"' syntax start at the beginning of the chain, but with one extra directory prepended.  This is the directory of the current file; the one containing the `#include' directive.  Prepending this directory on a per-file basis is handled by the function `search_from'.

Note that a header included with a directory component, such as `#include "mydir/foo.h"' and opened as `/usr/local/include/mydir/foo.h', will have the complete path minus the basename `foo.h' as the current directory.

Enough information is stored in the splay tree that CPP can immediately tell whether it can skip the header file because of the multiple include optimization, whether the file didn't exist or couldn't be opened for some reason, or whether the header was flagged not to be re-used, as it is with the obsolete `#import' directive.

For the benefit of MS-DOS filesystems with an 8.3 filename limitation, CPP offers the ability to treat various include file names as aliases for the real header files with shorter names.  The map from one to the other is found in a special file called `header.gcc', stored in the command line (or system) include directories to which the mapping applies.  This may be higher up the directory tree than the full path to the file minus the base name.
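To illustrate the path simplification step, here is a small, self-contained function in the spirit of `_cpp_simplify_pathname'.  It is a simplified, invented version: it only collapses `.' and `..' components of a Unix-style absolute path in place, and ignores the corner cases and DOS-path issues the real routine has to care about.  The simplified string is what would serve as the cache key.

     #include <stdio.h>
     #include <string.h>

     /* Collapse "." and ".." components of PATH in place, e.g.
        "/usr/include/bits/../foo.h" becomes "/usr/include/foo.h".  */
     static void
     toy_simplify_pathname (char *path)
     {
       char *components[64];
       int n = 0;
       char *p = strtok (path + 1, "/");   /* skip the leading '/' */
       for (; p != NULL; p = strtok (NULL, "/"))
         {
           if (!strcmp (p, "."))
             continue;
           else if (!strcmp (p, ".."))
             { if (n > 0) n--; }
           else if (n < 64)
             components[n++] = p;
         }
       /* Rebuild the path from the surviving components; the result is
          never longer than the original.  */
       char buf[1024];
       size_t len = 0;
       for (int i = 0; i < n; i++)
         len += (size_t) snprintf (buf + len, sizeof buf - len, "/%s",
                                   components[i]);
       if (n == 0)
         { buf[0] = '/'; len = 1; }
       memcpy (path, buf, len);
       path[len] = '\0';
     }

     int main (void)
     {
       char path[] = "/usr/include/bits/../foo.h";
       toy_simplify_pathname (path);
       puts (path);   /* prints /usr/include/foo.h */
       return 0;
     }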

File: cppinternals.info,  Node: Concept Index,  Prev: Files,  Up: Top

Concept Index
*************

[index]
* Menu:

* assertions:                          Hash Nodes.
* controlling macros:                  Guard Macros.
* escaped newlines:                    Lexer.
* files:                               Files.
* guard macros:                        Guard Macros.
* hash table:                          Hash Nodes.
* header files:                        Conventions.
* identifiers:                         Hash Nodes.
* interface:                           Conventions.
* lexer:                               Lexer.
* line numbers:                        Line Numbering.
* macro expansion:                     Macro Expansion.
* macro representation (internal):     Macro Expansion.
* macros:                              Hash Nodes.
* multiple-include optimization:       Guard Macros.
* named operators:                     Hash Nodes.
* newlines:                            Lexer.
* paste avoidance:                     Token Spacing.
* spacing:                             Token Spacing.
* token run:                           Lexer.
* token spacing:                       Token Spacing.