========================================
Kaleidoscope: Code generation to LLVM IR
========================================

.. contents::
   :local:

Chapter 3 Introduction
======================

Welcome to Chapter 3 of the "`Implementing a language with
LLVM <index.html>`_" tutorial. This chapter shows you how to transform
the `Abstract Syntax Tree <LangImpl2.html>`_, built in Chapter 2, into
LLVM IR. This will teach you a little bit about how LLVM does things, as
well as demonstrate how easy it is to use. It's much more work to build
a lexer and parser than it is to generate LLVM IR code. :)

**Please note**: the code in this chapter and later requires LLVM 2.2 or
later. LLVM 2.1 and earlier will not work with it. Also note that you
need to use a version of this tutorial that matches your LLVM release:
if you are using an official LLVM release, use the version of the
documentation included with your release or on the `llvm.org releases
page <http://llvm.org/releases/>`_.

Code Generation Setup
=====================

In order to generate LLVM IR, we want some simple setup to get started.
First we define virtual code generation (codegen) methods in each AST
class:

.. code-block:: c++

    /// ExprAST - Base class for all expression nodes.
    class ExprAST {
    public:
      virtual ~ExprAST() {}
      virtual Value *Codegen() = 0;
    };

    /// NumberExprAST - Expression class for numeric literals like "1.0".
    class NumberExprAST : public ExprAST {
      double Val;
    public:
      NumberExprAST(double val) : Val(val) {}
      virtual Value *Codegen();
    };
    ...

The ``Codegen()`` method says to emit IR for that AST node along with
all the things it depends on, and they all return an LLVM Value object.
"Value" is the class used to represent a "`Static Single Assignment
(SSA) <http://en.wikipedia.org/wiki/Static_single_assignment_form>`_
register" or "SSA value" in LLVM. The most distinct aspect of SSA values
is that their value is computed as the related instruction executes, and
it does not get a new value until (and if) the instruction re-executes.
In other words, there is no way to "change" an SSA value. For more
information, please read up on `Static Single
Assignment <http://en.wikipedia.org/wiki/Static_single_assignment_form>`_
- the concepts are really quite natural once you grok them.

Note that instead of adding virtual methods to the ExprAST class
hierarchy, it could also make sense to use a `visitor
pattern <http://en.wikipedia.org/wiki/Visitor_pattern>`_ or some other
way to model this. Again, this tutorial won't dwell on good software
engineering practices: for our purposes, adding a virtual method is
simplest.

The second thing we want is an "Error" method like the one we used for
the parser, which will be used to report errors found during code
generation (for example, use of an undeclared parameter):

.. code-block:: c++

    Value *ErrorV(const char *Str) { Error(Str); return 0; }

    static Module *TheModule;
    static IRBuilder<> Builder(getGlobalContext());
    static std::map<std::string, Value*> NamedValues;

The static variables will be used during code generation. ``TheModule``
is the LLVM construct that contains all of the functions and global
variables in a chunk of code. In many ways, it is the top-level
structure that the LLVM IR uses to contain code.
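
None of the codegen routines create ``TheModule`` themselves. As a point
of reference, here is a minimal sketch of how it might be set up before
any ``Codegen()`` call is made; the full listing at the end of this
chapter does the equivalent in ``main()``, and "my cool jit" is just the
module name used throughout this tutorial's examples:

.. code-block:: c++

    // Early in main(), before parsing and code generation begin:
    LLVMContext &Context = getGlobalContext();
    TheModule = new Module("my cool jit", Context);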

The ``Builder`` object is a helper object that makes it easy to generate
LLVM instructions. Instances of the
```IRBuilder`` <http://llvm.org/doxygen/IRBuilder_8h-source.html>`_
class template keep track of the current place to insert instructions
and have methods to create new instructions.

The ``NamedValues`` map keeps track of which values are defined in the
current scope and what their LLVM representation is. (In other words, it
is a symbol table for the code.) In this form of Kaleidoscope, the only
things that can be referenced are function parameters. As such, function
parameters will be in this map when generating code for their function
body.

With these basics in place, we can start talking about how to generate
code for each expression. Note that this assumes that the ``Builder``
has been set up to generate code *into* something. For now, we'll assume
that this has already been done, and we'll just use it to emit code.

Expression Code Generation
==========================

Generating LLVM code for expression nodes is very straightforward: less
than 45 lines of commented code for all four of our expression nodes.
First we'll do numeric literals:

.. code-block:: c++

    Value *NumberExprAST::Codegen() {
      return ConstantFP::get(getGlobalContext(), APFloat(Val));
    }

In the LLVM IR, numeric constants are represented with the
``ConstantFP`` class, which holds the numeric value in an ``APFloat``
internally (``APFloat`` has the capability of holding floating point
constants of Arbitrary Precision). This code basically just creates
and returns a ``ConstantFP``. Note that in the LLVM IR, constants
are all uniqued together and shared. For this reason, the API uses the
"foo::get(...)" idiom instead of "new foo(..)" or "foo::Create(..)".
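
To see what "uniqued" means in practice, here is a small illustrative
sketch (not part of the tutorial code): asking for the same constant
value twice hands back the same object.

.. code-block:: c++

    Value *X = ConstantFP::get(getGlobalContext(), APFloat(1.0));
    Value *Y = ConstantFP::get(getGlobalContext(), APFloat(1.0));
    // X and Y are the same pointer: both refer to the one shared
    // ConstantFP for 1.0 owned by the global LLVMContext.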

.. code-block:: c++

    Value *VariableExprAST::Codegen() {
      // Look this variable up in the function.
      Value *V = NamedValues[Name];
      return V ? V : ErrorV("Unknown variable name");
    }

References to variables are also quite simple using LLVM. In the simple
version of Kaleidoscope, we assume that the variable has already been
emitted somewhere and its value is available. In practice, the only
values that can be in the ``NamedValues`` map are function arguments.
This code simply checks to see that the specified name is in the map (if
not, an unknown variable is being referenced) and returns the value for
it. In future chapters, we'll add support for `loop induction
variables <LangImpl5.html#for>`_ in the symbol table, and for `local
variables <LangImpl7.html#localvars>`_.

.. code-block:: c++

    Value *BinaryExprAST::Codegen() {
      Value *L = LHS->Codegen();
      Value *R = RHS->Codegen();
      if (L == 0 || R == 0) return 0;

      switch (Op) {
      case '+': return Builder.CreateFAdd(L, R, "addtmp");
      case '-': return Builder.CreateFSub(L, R, "subtmp");
      case '*': return Builder.CreateFMul(L, R, "multmp");
      case '<':
        L = Builder.CreateFCmpULT(L, R, "cmptmp");
        // Convert bool 0/1 to double 0.0 or 1.0
        return Builder.CreateUIToFP(L, Type::getDoubleTy(getGlobalContext()),
                                    "booltmp");
      default: return ErrorV("invalid binary operator");
      }
    }

Binary operators start to get more interesting. The basic idea here is
that we recursively emit code for the left-hand side of the expression,
then the right-hand side, and then we compute the result of the binary
expression. In this code, we do a simple switch on the opcode to create
the right LLVM instruction.

In the example above, the LLVM builder class is starting to show its
value. IRBuilder knows where to insert the newly created instruction;
all you have to do is specify which instruction to create (e.g. with
``CreateFAdd``), which operands to use (``L`` and ``R`` here), and
optionally provide a name for the generated instruction.

One nice thing about LLVM is that the name is just a hint. For instance,
if the code above emits multiple "addtmp" variables, LLVM will
automatically provide each one with an increasing, unique numeric
suffix. Local value names for instructions are purely optional, but they
make it much easier to read the IR dumps.

`LLVM instructions <../LangRef.html#instref>`_ are constrained by strict
rules: for example, the Left and Right operands of an `add
instruction <../LangRef.html#i_add>`_ must have the same type, and the
result type of the add must match the operand types. Because all values
in Kaleidoscope are doubles, this makes for very simple code for add,
sub and mul.

On the other hand, LLVM specifies that the `fcmp
instruction <../LangRef.html#i_fcmp>`_ always returns an 'i1' value (a
one bit integer). The problem with this is that Kaleidoscope wants the
value to be 0.0 or 1.0. In order to get these semantics, we
combine the fcmp instruction with a `uitofp
instruction <../LangRef.html#i_uitofp>`_. This instruction converts its
input integer into a floating point value by treating the input as an
unsigned value. In contrast, if we used the `sitofp
instruction <../LangRef.html#i_sitofp>`_, the Kaleidoscope '<' operator
would return 0.0 and -1.0, depending on the input value.

.. code-block:: c++

    Value *CallExprAST::Codegen() {
      // Look up the name in the global module table.
      Function *CalleeF = TheModule->getFunction(Callee);
      if (CalleeF == 0)
        return ErrorV("Unknown function referenced");

      // If argument mismatch error.
      if (CalleeF->arg_size() != Args.size())
        return ErrorV("Incorrect # arguments passed");

      std::vector<Value*> ArgsV;
      for (unsigned i = 0, e = Args.size(); i != e; ++i) {
        ArgsV.push_back(Args[i]->Codegen());
        if (ArgsV.back() == 0) return 0;
      }

      return Builder.CreateCall(CalleeF, ArgsV, "calltmp");
    }

Code generation for function calls is quite straightforward with LLVM.
The code above initially does a function name lookup in the LLVM
Module's symbol table. Recall that the LLVM Module is the container that
holds all of the functions we are JIT'ing. By giving each function the
same name as what the user specifies, we can use the LLVM symbol table
to resolve function names for us.

Once we have the function to call, we recursively codegen each argument
that is to be passed in, and create an LLVM `call
instruction <../LangRef.html#i_call>`_. Note that LLVM uses the native C
calling conventions by default, allowing these calls to also call into
standard library functions like "sin" and "cos", with no additional
effort.

This wraps up our handling of the four basic expressions that we have so
far in Kaleidoscope. Feel free to go in and add some more. For example,
by browsing the `LLVM language reference <../LangRef.html>`_ you'll find
several other interesting instructions that are really easy to plug into
our basic framework.
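
As a concrete (hypothetical) example of such an extension: if you teach
the parser about a '/' operator (a one-line precedence entry, like the
ones added for '+', '-' and '*' in Chapter 2), supporting division in
codegen is a one-line addition to the switch in
``BinaryExprAST::Codegen``, using the builder's ``CreateFDiv``:

.. code-block:: c++

      // Hypothetical extra case in the switch above:
      case '/': return Builder.CreateFDiv(L, R, "divtmp");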

Function Code Generation
========================

Code generation for prototypes and functions must handle a number of
details, which makes their code less beautiful than expression code
generation, but it allows us to illustrate some important points. First,
let's talk about code generation for prototypes: they are used both for
function bodies and external function declarations. The code starts
with:

.. code-block:: c++

    Function *PrototypeAST::Codegen() {
      // Make the function type: double(double,double) etc.
      std::vector<Type*> Doubles(Args.size(),
                                 Type::getDoubleTy(getGlobalContext()));
      FunctionType *FT = FunctionType::get(Type::getDoubleTy(getGlobalContext()),
                                           Doubles, false);

      Function *F = Function::Create(FT, Function::ExternalLinkage, Name, TheModule);

This code packs a lot of power into a few lines. Note first that this
function returns a "Function\*" instead of a "Value\*". Because a
"prototype" really talks about the external interface for a function
(not the value computed by an expression), it makes sense for it to
return the LLVM Function it corresponds to when codegen'd.

The call to ``FunctionType::get`` creates the ``FunctionType`` that
should be used for a given Prototype. Since all function arguments in
Kaleidoscope are of type double, the first line creates a vector of "N"
LLVM double types. It then uses the ``FunctionType::get`` method to
create a function type that takes "N" doubles as arguments, returns one
double as a result, and that is not vararg (the false parameter
indicates this). Note that Types in LLVM are uniqued just like Constants
are, so you don't "new" a type, you "get" it.

The final line above actually creates the function that the prototype
will correspond to. This indicates the type, linkage and name to use, as
well as which module to insert into. "`external
linkage <../LangRef.html#linkage>`_" means that the function may be
defined outside the current module and/or that it is callable by
functions outside the module. The Name passed in is the name the user
specified: since "``TheModule``" is specified, this name is registered
in "``TheModule``"'s symbol table, which is used by the function call
code above.

.. code-block:: c++

      // If F conflicted, there was already something named 'Name'.  If it has a
      // body, don't allow redefinition or reextern.
      if (F->getName() != Name) {
        // Delete the one we just made and get the existing one.
        F->eraseFromParent();
        F = TheModule->getFunction(Name);

The Module symbol table works just like the Function symbol table when
it comes to name conflicts: if a new function is created with a name
that was previously added to the symbol table, the new function will get
implicitly renamed when added to the Module. The code above exploits
this fact to determine if there was a previous definition of this
function.
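
If the implicit renaming isn't obvious, here is an illustrative sketch
(not part of the tutorial code) of what happens when a second function
is created with an already-taken name:

.. code-block:: c++

    // Assuming FT and TheModule are set up as in the code above:
    Function *First  = Function::Create(FT, Function::ExternalLinkage, "foo", TheModule);
    Function *Second = Function::Create(FT, Function::ExternalLinkage, "foo", TheModule);
    // First keeps the name "foo"; Second is implicitly renamed to
    // something like "foo1", so Second->getName() != "foo".

That mismatch between the requested name and the actual name is exactly
what the ``F->getName() != Name`` check detects.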

In Kaleidoscope, I choose to allow redefinitions of functions in two
cases: first, we want to allow 'extern'ing a function more than once, as
long as the prototypes for the externs match (since all arguments have
the same type, we just have to check that the number of arguments
matches). Second, we want to allow 'extern'ing a function and then
defining a body for it. This is useful when defining mutually recursive
functions.

In order to implement this, the code above first checks to see if there
is a collision on the name of the function. If so, it deletes the
function we just created (by calling ``eraseFromParent``) and then calls
``getFunction`` to get the existing function with the specified name.
Note that many APIs in LLVM have "erase" forms and "remove" forms. The
"remove" form unlinks the object from its parent (e.g. a Function from a
Module) and returns it. The "erase" form unlinks the object and then
deletes it.

.. code-block:: c++

        // If F already has a body, reject this.
        if (!F->empty()) {
          ErrorF("redefinition of function");
          return 0;
        }

        // If F took a different number of args, reject.
        if (F->arg_size() != Args.size()) {
          ErrorF("redefinition of function with different # args");
          return 0;
        }
      }

In order to verify the logic above, we first check to see if the
pre-existing function is "empty". In this case, empty means that it has
no basic blocks in it, which means it has no body. If it has no body, it
is a forward declaration. Since we don't allow anything after a full
definition of the function, the code rejects this case. If the previous
reference to a function was an 'extern', we simply verify that the
number of arguments for that definition and this one match up. If not,
we emit an error.

.. code-block:: c++

      // Set names for all arguments.
      unsigned Idx = 0;
      for (Function::arg_iterator AI = F->arg_begin(); Idx != Args.size();
           ++AI, ++Idx) {
        AI->setName(Args[Idx]);

        // Add arguments to variable symbol table.
        NamedValues[Args[Idx]] = AI;
      }
      return F;
    }

The last bit of code for prototypes loops over all of the arguments in
the function, setting the name of the LLVM Argument objects to match,
and registering the arguments in the ``NamedValues`` map for future use
by the ``VariableExprAST`` AST node. Once this is set up, it returns the
Function object to the caller. Note that we don't check for conflicting
argument names here (e.g. "extern foo(a b a)"). Doing so would be very
straightforward with the mechanics we have already used above.
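
As a sketch of what that check could look like (a hypothetical addition,
not part of the tutorial code, assuming ``<set>`` is included and using
the same ``ErrorF`` helper as above), the argument-naming loop already
has everything it needs; the only change is to refuse to name the same
argument twice:

.. code-block:: c++

      // Set names for all arguments, rejecting duplicates like "extern foo(a b a)".
      std::set<std::string> SeenNames;
      unsigned Idx = 0;
      for (Function::arg_iterator AI = F->arg_begin(); Idx != Args.size();
           ++AI, ++Idx) {
        if (!SeenNames.insert(Args[Idx]).second) {
          ErrorF("duplicate argument name in prototype");
          return 0;
        }
        AI->setName(Args[Idx]);
        NamedValues[Args[Idx]] = AI;
      }
      return F;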

.. code-block:: c++

    Function *FunctionAST::Codegen() {
      NamedValues.clear();

      Function *TheFunction = Proto->Codegen();
      if (TheFunction == 0)
        return 0;

Code generation for function definitions starts out simply enough: we
first clear out the ``NamedValues`` map to make sure that there isn't
anything in it from the last function we compiled, then we codegen the
prototype (Proto) and verify that it is ok. Code generation of the
prototype ensures that there is an LLVM Function object that is ready to
go for us.

.. code-block:: c++

      // Create a new basic block to start insertion into.
      BasicBlock *BB = BasicBlock::Create(getGlobalContext(), "entry", TheFunction);
      Builder.SetInsertPoint(BB);

      if (Value *RetVal = Body->Codegen()) {

Now we get to the point where the ``Builder`` is set up. The first line
creates a new `basic block <http://en.wikipedia.org/wiki/Basic_block>`_
(named "entry"), which is inserted into ``TheFunction``. The second line
then tells the builder that new instructions should be inserted into the
end of the new basic block. Basic blocks in LLVM are an important part
of functions that define the `Control Flow
Graph <http://en.wikipedia.org/wiki/Control_flow_graph>`_. Since we
don't have any control flow, our functions will only contain one block
at this point. We'll fix this in `Chapter 5 <LangImpl5.html>`_ :).

.. code-block:: c++

      if (Value *RetVal = Body->Codegen()) {
        // Finish off the function.
        Builder.CreateRet(RetVal);

        // Validate the generated code, checking for consistency.
        verifyFunction(*TheFunction);

        return TheFunction;
      }

Once the insertion point is set up, we call the ``Codegen()`` method for
the root expression of the function. If no error happens, this emits
code to compute the expression into the entry block and returns the
value that was computed. Assuming no error, we then create an LLVM `ret
instruction <../LangRef.html#i_ret>`_, which completes the function.
Once the function is built, we call ``verifyFunction``, which is
provided by LLVM. This function does a variety of consistency checks on
the generated code, to determine if our compiler is doing everything
right. Using this is important: it can catch a lot of bugs. Once the
function is finished and validated, we return it.

.. code-block:: c++

      // Error reading body, remove function.
      TheFunction->eraseFromParent();
      return 0;
    }

The only piece left here is handling of the error case. For simplicity,
we handle this by merely deleting the function we produced with the
``eraseFromParent`` method. This allows the user to redefine a function
that they incorrectly typed in before: if we didn't delete it, it would
live in the symbol table, with a body, preventing future redefinition.

This code does have a bug, though. Since ``PrototypeAST::Codegen`` can
return a previously defined forward declaration, our code can actually
delete a forward declaration. There are a number of ways to fix this
bug; see what you can come up with! Here is a testcase:

::

    extern foo(a b); # ok, defines foo.
    def foo(a b) c;  # error, 'c' is invalid.
    def bar() foo(1, 2); # error, unknown function "foo"

Driver Changes and Closing Thoughts
===================================

For now, code generation to LLVM doesn't really get us much, except that
we can look at the pretty IR calls. The sample code inserts calls to
Codegen into the "``HandleDefinition``", "``HandleExtern``" etc.
functions, and then dumps out the LLVM IR. This gives a nice way to look
at the LLVM IR for simple functions. For example:

::

    ready> 4+5;
    Read top-level expression:
    define double @0() {
    entry:
      ret double 9.000000e+00
    }

Note how the parser turns the top-level expression into anonymous
functions for us. This will be handy when we add `JIT
support <LangImpl4.html#jit>`_ in the next chapter. Also note that the
code is very literally transcribed; no optimizations are being performed
except simple constant folding done by IRBuilder. We will `add
optimizations <LangImpl4.html#trivialconstfold>`_ explicitly in the next
chapter.

::

    ready> def foo(a b) a*a + 2*a*b + b*b;
    Read function definition:
    define double @foo(double %a, double %b) {
    entry:
      %multmp = fmul double %a, %a
      %multmp1 = fmul double 2.000000e+00, %a
      %multmp2 = fmul double %multmp1, %b
      %addtmp = fadd double %multmp, %multmp2
      %multmp3 = fmul double %b, %b
      %addtmp4 = fadd double %addtmp, %multmp3
      ret double %addtmp4
    }

This shows some simple arithmetic. Notice the striking similarity to the
LLVM builder calls that we use to create the instructions.

::

    ready> def bar(a) foo(a, 4.0) + bar(31337);
    Read function definition:
    define double @bar(double %a) {
    entry:
      %calltmp = call double @foo(double %a, double 4.000000e+00)
      %calltmp1 = call double @bar(double 3.133700e+04)
      %addtmp = fadd double %calltmp, %calltmp1
      ret double %addtmp
    }

This shows some function calls. Note that this function will take a long
time to execute if you call it. In the future we'll add conditional
control flow to actually make recursion useful :).

::

    ready> extern cos(x);
    Read extern:
    declare double @cos(double)

    ready> cos(1.234);
    Read top-level expression:
    define double @1() {
    entry:
      %calltmp = call double @cos(double 1.234000e+00)
      ret double %calltmp
    }

This shows an extern for the libm "cos" function, and a call to it.

.. TODO:: Abandon Pygments' horrible `llvm` lexer. It just totally gives up
   on highlighting this due to the first line.

::

    ready> ^D
    ; ModuleID = 'my cool jit'

    define double @0() {
    entry:
      %addtmp = fadd double 4.000000e+00, 5.000000e+00
      ret double %addtmp
    }

    define double @foo(double %a, double %b) {
    entry:
      %multmp = fmul double %a, %a
      %multmp1 = fmul double 2.000000e+00, %a
      %multmp2 = fmul double %multmp1, %b
      %addtmp = fadd double %multmp, %multmp2
      %multmp3 = fmul double %b, %b
      %addtmp4 = fadd double %addtmp, %multmp3
      ret double %addtmp4
    }

    define double @bar(double %a) {
    entry:
      %calltmp = call double @foo(double %a, double 4.000000e+00)
      %calltmp1 = call double @bar(double 3.133700e+04)
      %addtmp = fadd double %calltmp, %calltmp1
      ret double %addtmp
    }

    declare double @cos(double)

    define double @1() {
    entry:
      %calltmp = call double @cos(double 1.234000e+00)
      ret double %calltmp
    }

When you quit the current demo, it dumps out the IR for the entire
module generated. Here you can see the big picture with all the
functions referencing each other.

This wraps up the third chapter of the Kaleidoscope tutorial. Up next,
we'll describe how to `add JIT codegen and optimizer
support <LangImpl4.html>`_ to this so we can actually start running
code!

Full Code Listing
=================

Here is the complete code listing for our running example, enhanced with
the LLVM code generator. Because this uses the LLVM libraries, we need
to link them in. To do this, we use the
`llvm-config <http://llvm.org/cmds/llvm-config.html>`_ tool to inform
our makefile/command line about which options to use:

.. code-block:: bash

    # Compile
    clang++ -g -O3 toy.cpp `llvm-config --cppflags --ldflags --libs core` -o toy
    # Run
    ./toy

Here is the code:

.. literalinclude:: ../../examples/Kaleidoscope/Chapter3/toy.cpp
   :language: c++

`Next: Adding JIT and Optimizer Support <LangImpl4.html>`_