==============================================
Kaleidoscope: Adding JIT and Optimizer Support
==============================================

.. contents::
   :local:

Chapter 4 Introduction
======================

Welcome to Chapter 4 of the "`Implementing a language with
LLVM <index.html>`_" tutorial. Chapters 1-3 described the implementation
of a simple language and added support for generating LLVM IR. This
chapter describes two new techniques: adding optimizer support to your
language, and adding JIT compiler support. These additions will
demonstrate how to get nice, efficient code for the Kaleidoscope
language.

Trivial Constant Folding
========================

Our demonstration for Chapter 3 is elegant and easy to extend.
Unfortunately, it does not produce wonderful code. The IRBuilder,
however, does give us obvious optimizations when compiling simple code:

::

    ready> def test(x) 1+2+x;
    Read function definition:
    define double @test(double %x) {
    entry:
            %addtmp = fadd double 3.000000e+00, %x
            ret double %addtmp
    }

This code is not a literal transcription of the AST built by parsing the
input. That would be:

::

    ready> def test(x) 1+2+x;
    Read function definition:
    define double @test(double %x) {
    entry:
            %addtmp = fadd double 2.000000e+00, 1.000000e+00
            %addtmp1 = fadd double %addtmp, %x
            ret double %addtmp1
    }
Constant folding, as seen above, is a very common and
very important optimization: so much so that many language implementors
implement constant folding support in their AST representation.

With LLVM, you don't need this support in the AST. Since all calls to
build LLVM IR go through the LLVM IR builder, the builder itself checks
to see if there is a constant folding opportunity when you call it. If
so, it just does the constant fold and returns the constant instead of
creating an instruction.
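
For example, in our ``BinaryExprAST::codegen()`` from Chapter 3, the '+'
case boils down to the single call sketched below. When ``L`` and ``R``
are both constants, the builder hands back a folded ``ConstantFP``
rather than emitting an instruction:

.. code-block:: c++

    // If L and R are both constants (e.g. the 1 and 2 in "1+2+x"), the
    // builder folds them and returns a ConstantFP directly; no fadd
    // instruction is ever created.
    Value *V = Builder.CreateFAdd(L, R, "addtmp");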

Well, that was easy :). In practice, we recommend always using
``IRBuilder`` when generating code like this. It has no "syntactic
overhead" for its use (you don't have to uglify your compiler with
constant checks everywhere) and it can dramatically reduce the amount of
LLVM IR that is generated in some cases (particularly for languages with
a macro preprocessor or that use a lot of constants).

On the other hand, the ``IRBuilder`` is limited by the fact that it does
all of its analysis inline with the code as it is built. If you take a
slightly more complex example:

::

    ready> def test(x) (1+2+x)*(x+(1+2));
    ready> Read function definition:
    define double @test(double %x) {
    entry:
            %addtmp = fadd double 3.000000e+00, %x
            %addtmp1 = fadd double %x, 3.000000e+00
            %multmp = fmul double %addtmp, %addtmp1
            ret double %multmp
    }

In this case, the LHS and RHS of the multiplication are the same value.
We'd really like to see this generate "``tmp = x+3; result = tmp*tmp;``"
instead of computing "``x+3``" twice.

Unfortunately, no amount of local analysis will be able to detect and
correct this. This requires two transformations: reassociation of
expressions (to make the adds lexically identical) and Common
Subexpression Elimination (CSE) to delete the redundant add instruction.
Fortunately, LLVM provides a broad range of optimizations that you can
use, in the form of "passes".

LLVM Optimization Passes
========================

LLVM provides many optimization passes, which do many different sorts of
things and have different tradeoffs. Unlike other systems, LLVM doesn't
hold to the mistaken notion that one set of optimizations is right for
all languages and for all situations. LLVM allows a compiler implementor
to make complete decisions about what optimizations to use, in which
order, and in what situation.

As a concrete example, LLVM supports both "whole module" passes, which
look across as large a body of code as they can (often a whole file,
but if run at link time, this can be a substantial portion of the whole
program), and "per-function" passes which just operate on a single
function at a time, without looking at other functions. For more
information on passes and how they are run, see the
`How to Write a Pass <../WritingAnLLVMPass.html>`_ document and the
`List of LLVM Passes <../Passes.html>`_.

For Kaleidoscope, we are currently generating functions on the fly, one
at a time, as the user types them in. We aren't shooting for the
ultimate optimization experience in this setting, but we also want to
catch the easy and quick stuff where possible. As such, we will choose
to run a few per-function optimizations as the user types the function
in. If we wanted to make a "static Kaleidoscope compiler", we would use
exactly the code we have now, except that we would defer running the
optimizer until the entire file has been parsed.

In order to get per-function optimizations going, we need to set up a
`FunctionPassManager <../WritingAnLLVMPass.html#what-passmanager-does>`_ to hold
and organize the LLVM optimizations that we want to run. Once we have
that, we can add a set of optimizations to run. We'll need a new
FunctionPassManager for each module that we want to optimize, so we'll
write a function to create and initialize both the module and pass manager
for us:

.. code-block:: c++

    void InitializeModuleAndPassManager(void) {
      // Open a new module.
      TheModule = llvm::make_unique<Module>("my cool jit", TheContext);

      // Create a new pass manager attached to it.
      TheFPM = llvm::make_unique<FunctionPassManager>(TheModule.get());

      // Do simple "peephole" optimizations and bit-twiddling optzns.
      TheFPM->add(createInstructionCombiningPass());
      // Reassociate expressions.
      TheFPM->add(createReassociatePass());
      // Eliminate Common SubExpressions.
      TheFPM->add(createGVNPass());
      // Simplify the control flow graph (deleting unreachable blocks, etc).
      TheFPM->add(createCFGSimplificationPass());

      TheFPM->doInitialization();
    }

This code initializes the global module ``TheModule`` and the function pass
manager ``TheFPM``, which is attached to ``TheModule``. Once the pass manager is
set up, we use a series of "add" calls to add a bunch of LLVM passes.

In this case, we choose to add four optimization passes. These are a pretty
standard set of "cleanup" optimizations that are useful for a wide variety
of code. I won't delve into what they do but, believe me, they are a good
starting place :).

Once the PassManager is set up, we need to make use of it. We do this by
running it after our newly created function is constructed (in
``FunctionAST::codegen()``), but before it is returned to the client:

.. code-block:: c++

      if (Value *RetVal = Body->codegen()) {
        // Finish off the function.
        Builder.CreateRet(RetVal);

        // Validate the generated code, checking for consistency.
        verifyFunction(*TheFunction);

        // Optimize the function.
        TheFPM->run(*TheFunction);

        return TheFunction;
      }

As you can see, this is pretty straightforward. The
``FunctionPassManager`` optimizes and updates the LLVM Function\* in
place, improving (hopefully) its body. With this in place, we can try
our test above again:

::

    ready> def test(x) (1+2+x)*(x+(1+2));
    ready> Read function definition:
    define double @test(double %x) {
    entry:
            %addtmp = fadd double %x, 3.000000e+00
            %multmp = fmul double %addtmp, %addtmp
            ret double %multmp
    }

As expected, we now get our nicely optimized code, saving a floating
point add instruction from every execution of this function.

LLVM provides a wide variety of optimizations that can be used in
certain circumstances. Some `documentation about the various
passes <../Passes.html>`_ is available, but it isn't very complete.
Another good source of ideas is to look at the passes that ``Clang``
runs to get started. The "``opt``" tool allows you to experiment with
passes from the command line, so you can see if they do anything.
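
For example, you might run the same four passes we added above over a dumped
textual IR file and inspect the result (the ``input.ll`` file name here is
hypothetical; the flags are ``opt``'s names for those passes):

.. code-block:: bash

    # Run instcombine, reassociate, GVN and CFG simplification over the
    # textual IR in input.ll, printing the optimized IR to stdout.
    opt -S -instcombine -reassociate -gvn -simplifycfg input.ll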

Now that we have reasonable code coming out of our front-end, let's talk
about executing it!

Adding a JIT Compiler
=====================

Code that is available in LLVM IR can have a wide variety of tools
applied to it. For example, you can run optimizations on it (as we did
above), you can dump it out in textual or binary forms, you can compile
the code to an assembly file (.s) for some target, or you can JIT
compile it. The nice thing about the LLVM IR representation is that it
is the "common currency" between many different parts of the compiler.

In this section, we'll add JIT compiler support to our interpreter. The
basic idea that we want for Kaleidoscope is to have the user enter
function bodies as they do now, but immediately evaluate the top-level
expressions they type in. For example, if they type in "1 + 2;", we
should evaluate and print out 3. If they define a function, they should
be able to call it from the command line.

In order to do this, we first prepare the environment to create code for
the current native target and declare and initialize the JIT. This is
done by calling some ``InitializeNativeTarget\*`` functions and
adding a global variable ``TheJIT``, and initializing it in
``main``:

.. code-block:: c++

    static std::unique_ptr<KaleidoscopeJIT> TheJIT;
    ...
    int main() {
      InitializeNativeTarget();
      InitializeNativeTargetAsmPrinter();
      InitializeNativeTargetAsmParser();

      // Install standard binary operators.
      // 1 is lowest precedence.
      BinopPrecedence['<'] = 10;
      BinopPrecedence['+'] = 20;
      BinopPrecedence['-'] = 20;
      BinopPrecedence['*'] = 40; // highest.

      // Prime the first token.
      fprintf(stderr, "ready> ");
      getNextToken();

      TheJIT = llvm::make_unique<KaleidoscopeJIT>();

      // Run the main "interpreter loop" now.
      MainLoop();

      return 0;
    }

We also need to set up the data layout for the JIT:

.. code-block:: c++

    void InitializeModuleAndPassManager(void) {
      // Open a new module.
      TheModule = llvm::make_unique<Module>("my cool jit", TheContext);
      TheModule->setDataLayout(TheJIT->getTargetMachine().createDataLayout());

      // Create a new pass manager attached to it.
      TheFPM = llvm::make_unique<FunctionPassManager>(TheModule.get());
      ...

The KaleidoscopeJIT class is a simple JIT built specifically for these
tutorials, available inside the LLVM source code
at llvm-src/examples/Kaleidoscope/include/KaleidoscopeJIT.h.
In later chapters we will look at how it works and extend it with
new features, but for now we will take it as given. Its API is very simple:
``addModule`` adds an LLVM IR module to the JIT, making its functions
available for execution; ``removeModule`` removes a module, freeing any
memory associated with the code in that module; and ``findSymbol`` allows us
to look up pointers to the compiled code.

We can take this simple API and change our code that parses top-level expressions to
look like this:

.. code-block:: c++

    static void HandleTopLevelExpression() {
      // Evaluate a top-level expression into an anonymous function.
      if (auto FnAST = ParseTopLevelExpr()) {
        if (FnAST->codegen()) {

          // JIT the module containing the anonymous expression, keeping a handle so
          // we can free it later.
          auto H = TheJIT->addModule(std::move(TheModule));
          InitializeModuleAndPassManager();

          // Search the JIT for the __anon_expr symbol.
          auto ExprSymbol = TheJIT->findSymbol("__anon_expr");
          assert(ExprSymbol && "Function not found");

          // Get the symbol's address and cast it to the right type (takes no
          // arguments, returns a double) so we can call it as a native function.
          double (*FP)() = (double (*)())(intptr_t)ExprSymbol.getAddress();
          fprintf(stderr, "Evaluated to %f\n", FP());

          // Delete the anonymous expression module from the JIT.
          TheJIT->removeModule(H);
        }

If parsing and codegen succeed, the next step is to add the module containing
the top-level expression to the JIT. We do this by calling addModule, which
triggers code generation for all the functions in the module, and returns a
handle that can be used to remove the module from the JIT later. Once the module
has been added to the JIT it can no longer be modified, so we also open a new
module to hold subsequent code by calling ``InitializeModuleAndPassManager()``.

Once we've added the module to the JIT we need to get a pointer to the final
generated code. We do this by calling the JIT's findSymbol method, and passing
the name of the top-level expression function: ``__anon_expr``. Since we just
added this function, we assert that findSymbol returned a result.

Next, we get the in-memory address of the ``__anon_expr`` function by calling
``getAddress()`` on the symbol. Recall that we compile top-level expressions
into a self-contained LLVM function that takes no arguments and returns the
computed double. Because the LLVM JIT compiler matches the native platform ABI,
this means that you can just cast the result pointer to a function pointer of
that type and call it directly. This means there is no difference between JIT
compiled code and native machine code that is statically linked into your
application.

Finally, since we don't support re-evaluation of top-level expressions, we
remove the module from the JIT when we're done to free the associated memory.
Recall, however, that the module we created a few lines earlier (via
``InitializeModuleAndPassManager``) is still open and waiting for new code to be
added.

With just these two changes, let's see how Kaleidoscope works now!

::

    ready> 4+5;
    Read top-level expression:
    define double @0() {
    entry:
      ret double 9.000000e+00
    }

    Evaluated to 9.000000

Well, this looks like it is basically working. The dump of the function
shows the "no argument function that always returns double" that we
synthesize for each top-level expression that is typed in. This
demonstrates very basic functionality, but can we do more?

::

    ready> def testfunc(x y) x + y*2;
    Read function definition:
    define double @testfunc(double %x, double %y) {
    entry:
      %multmp = fmul double %y, 2.000000e+00
      %addtmp = fadd double %multmp, %x
      ret double %addtmp
    }

    ready> testfunc(4, 10);
    Read top-level expression:
    define double @1() {
    entry:
      %calltmp = call double @testfunc(double 4.000000e+00, double 1.000000e+01)
      ret double %calltmp
    }

    Evaluated to 24.000000

    ready> testfunc(5, 10);
    ready> LLVM ERROR: Program used external function 'testfunc' which could not be resolved!

Function definitions and calls also work, but something went very wrong on that
last line. The call looks valid, so what happened? As you may have guessed from
the API, a Module is a unit of allocation for the JIT, and testfunc was part
of the same module that contained the anonymous expression. When we removed that
module from the JIT to free the memory for the anonymous expression, we deleted
the definition of ``testfunc`` along with it. Then, when we tried to call
testfunc a second time, the JIT could no longer find it.

The easiest way to fix this is to put the anonymous expression in a separate
module from the rest of the function definitions. The JIT will happily resolve
function calls across module boundaries, as long as each of the functions called
has a prototype, and is added to the JIT before it is called. By putting the
anonymous expression in a different module we can delete it without affecting
the rest of the functions.

In fact, we're going to go a step further and put every function in its own
module. Doing so allows us to exploit a useful property of the KaleidoscopeJIT
that will make our environment more REPL-like: Functions can be added to the
JIT more than once (unlike a module where every function must have a unique
definition). When you look up a symbol in KaleidoscopeJIT it will always return
the most recent definition:

::

    ready> def foo(x) x + 1;
    Read function definition:
    define double @foo(double %x) {
    entry:
      %addtmp = fadd double %x, 1.000000e+00
      ret double %addtmp
    }

    ready> foo(2);
    Evaluated to 3.000000

    ready> def foo(x) x + 2;
    define double @foo(double %x) {
    entry:
      %addtmp = fadd double %x, 2.000000e+00
      ret double %addtmp
    }

    ready> foo(2);
    Evaluated to 4.000000

To allow each function to live in its own module we'll need a way to
re-generate previous function declarations into each new module we open:

.. code-block:: c++

    static std::unique_ptr<KaleidoscopeJIT> TheJIT;

    ...

    Function *getFunction(std::string Name) {
      // First, see if the function has already been added to the current module.
      if (auto *F = TheModule->getFunction(Name))
        return F;

      // If not, check whether we can codegen the declaration from some existing
      // prototype.
      auto FI = FunctionProtos.find(Name);
      if (FI != FunctionProtos.end())
        return FI->second->codegen();

      // If no prototype exists, return null.
      return nullptr;
    }

    ...

    Value *CallExprAST::codegen() {
      // Look up the name in the global module table.
      Function *CalleeF = getFunction(Callee);

    ...

    Function *FunctionAST::codegen() {
      // Transfer ownership of the prototype to the FunctionProtos map, but keep a
      // reference to it for use below.
      auto &P = *Proto;
      FunctionProtos[Proto->getName()] = std::move(Proto);
      Function *TheFunction = getFunction(P.getName());
      if (!TheFunction)
        return nullptr;

To enable this, we'll start by adding a new global, ``FunctionProtos``, that
holds the most recent prototype for each function. We'll also add a convenience
method, ``getFunction()``, to replace calls to ``TheModule->getFunction()``.
Our convenience method searches ``TheModule`` for an existing function
declaration, falling back to generating a new declaration from FunctionProtos if
it doesn't find one. In ``CallExprAST::codegen()`` we just need to replace the
call to ``TheModule->getFunction()``. In ``FunctionAST::codegen()`` we need to
update the FunctionProtos map first, then call ``getFunction()``. With this
done, we can always obtain a function declaration in the current module for any
previously declared function.
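
The ``FunctionProtos`` map itself is just a global keyed by function name,
declared as in the full code listing at the end of this chapter:

.. code-block:: c++

    // Owns the most recent PrototypeAST for each function name, so we can
    // re-emit a declaration into whatever module is currently open.
    static std::map<std::string, std::unique_ptr<PrototypeAST>> FunctionProtos;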

We also need to update HandleDefinition and HandleExtern:

.. code-block:: c++

    static void HandleDefinition() {
      if (auto FnAST = ParseDefinition()) {
        if (auto *FnIR = FnAST->codegen()) {
          fprintf(stderr, "Read function definition:");
          FnIR->print(errs());
          fprintf(stderr, "\n");
          TheJIT->addModule(std::move(TheModule));
          InitializeModuleAndPassManager();
        }
      } else {
        // Skip token for error recovery.
        getNextToken();
      }
    }

    static void HandleExtern() {
      if (auto ProtoAST = ParseExtern()) {
        if (auto *FnIR = ProtoAST->codegen()) {
          fprintf(stderr, "Read extern: ");
          FnIR->print(errs());
          fprintf(stderr, "\n");
          FunctionProtos[ProtoAST->getName()] = std::move(ProtoAST);
        }
      } else {
        // Skip token for error recovery.
        getNextToken();
      }
    }

In HandleDefinition, we add two lines to transfer the newly defined function to
the JIT and open a new module. In HandleExtern, we just need to add one line to
add the prototype to FunctionProtos.

With these changes made, let's try our REPL again (I removed the dump of the
anonymous functions this time; you should get the idea by now :) :

::

    ready> def foo(x) x + 1;
    ready> foo(2);
    Evaluated to 3.000000

    ready> def foo(x) x + 2;
    ready> foo(2);
    Evaluated to 4.000000

It works!

Even with this simple code, we get some surprisingly powerful capabilities -
check this out:

::

    ready> extern sin(x);
    Read extern:
    declare double @sin(double)

    ready> extern cos(x);
    Read extern:
    declare double @cos(double)

    ready> sin(1.0);
    Read top-level expression:
    define double @2() {
    entry:
      ret double 0x3FEAED548F090CEE
    }

    Evaluated to 0.841471

    ready> def foo(x) sin(x)*sin(x) + cos(x)*cos(x);
    Read function definition:
    define double @foo(double %x) {
    entry:
      %calltmp = call double @sin(double %x)
      %multmp = fmul double %calltmp, %calltmp
      %calltmp2 = call double @cos(double %x)
      %multmp4 = fmul double %calltmp2, %calltmp2
      %addtmp = fadd double %multmp, %multmp4
      ret double %addtmp
    }

    ready> foo(4.0);
    Read top-level expression:
    define double @3() {
    entry:
      %calltmp = call double @foo(double 4.000000e+00)
      ret double %calltmp
    }

    Evaluated to 1.000000

Whoa, how does the JIT know about sin and cos? The answer is surprisingly
simple: The KaleidoscopeJIT has a straightforward symbol resolution rule that
it uses to find symbols that aren't available in any given module: First
it searches all the modules that have already been added to the JIT, from the
most recent to the oldest, to find the newest definition. If no definition is
found inside the JIT, it falls back to calling "``dlsym("sin")``" on the
Kaleidoscope process itself. Since "``sin``" is defined within the JIT's
address space, it simply patches up calls in the module to call the libm
version of ``sin`` directly. In some cases this even goes further: because
``sin`` and ``cos`` are names of standard math functions, the constant folder
will directly evaluate the function calls to the correct result when called
with constants, as in the "``sin(1.0)``" example above.
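
A rough sketch of that lookup, loosely following the KaleidoscopeJIT.h that
ships with the tutorial (simplified here; exact member names vary between
LLVM versions):

.. code-block:: c++

    JITSymbol findMangledSymbol(const std::string &Name) {
      // Walk the modules most-recently-added first, so the newest
      // definition of Name wins -- this is what makes redefinition work.
      for (auto H : make_range(ModuleKeys.rbegin(), ModuleKeys.rend()))
        if (auto Sym = CompileLayer.findSymbolIn(H, Name,
                                                 /*ExportedSymbolsOnly*/ true))
          return Sym;

      // Nothing in the JIT: fall back to the host process (dlsym-style),
      // which is how calls to libm's sin and cos get resolved.
      if (auto SymAddr = RTDyldMemoryManager::getSymbolAddressInProcess(Name))
        return JITSymbol(SymAddr, JITSymbolFlags::Exported);

      return nullptr;
    }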

In the future we'll see how tweaking this symbol resolution rule can be used to
enable all sorts of useful features, from security (restricting the set of
symbols available to JIT'd code), to dynamic code generation based on symbol
names, and even lazy compilation.

One immediate benefit of the symbol resolution rule is that we can now extend
the language by writing arbitrary C++ code to implement operations. For example,
if we add:

.. code-block:: c++

    #ifdef _WIN32
    #define DLLEXPORT __declspec(dllexport)
    #else
    #define DLLEXPORT
    #endif

    /// putchard - putchar that takes a double and returns 0.
    extern "C" DLLEXPORT double putchard(double X) {
      fputc((char)X, stderr);
      return 0;
    }

Note that for Windows we need to actually export the functions because
the dynamic symbol loader will use GetProcAddress to find the symbols.

Now we can produce simple output to the console by using things like:
"``extern putchard(x); putchard(120);``", which prints a lowercase 'x'
on the console (120 is the ASCII code for 'x'). Similar code could be
used to implement file I/O, console input, and many other capabilities
in Kaleidoscope.
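
Along the same lines, the full code listing at the end of this chapter defines
one more such helper, ``printd``, which prints a double followed by a newline:

.. code-block:: c++

    /// printd - printf that takes a double and prints it as "%f\n", returning 0.
    extern "C" DLLEXPORT double printd(double X) {
      fprintf(stderr, "%f\n", X);
      return 0;
    }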

This completes the JIT and optimizer chapter of the Kaleidoscope
tutorial. At this point, we can compile a non-Turing-complete
programming language, and optimize and JIT-compile it in a user-driven way.
Next up we'll look into `extending the language with control flow
constructs <LangImpl05.html>`_, tackling some interesting LLVM IR issues
along the way.

Full Code Listing
=================

Here is the complete code listing for our running example, enhanced with
the LLVM JIT and optimizer. To build this example, use:

.. code-block:: bash

    # Compile
    clang++ -g toy.cpp `llvm-config --cxxflags --ldflags --system-libs --libs core mcjit native` -O3 -o toy
    # Run
    ./toy

If you are compiling this on Linux, make sure to add the "-rdynamic"
option as well. This makes sure that the external functions are resolved
properly at runtime.
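
That is, on Linux the compile command becomes:

.. code-block:: bash

    # Compile (Linux): -rdynamic keeps the executable's own symbols visible,
    # so the JIT's dlsym-style fallback can find putchard and friends.
    clang++ -g toy.cpp `llvm-config --cxxflags --ldflags --system-libs --libs core mcjit native` -O3 -rdynamic -o toy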

Here is the code:

.. literalinclude:: ../../examples/Kaleidoscope/Chapter4/toy.cpp
   :language: c++

`Next: Extending the language: control flow <LangImpl05.html>`_